A TALE OF TWO SIGMOIDS

Guest essay by Todd “Ike” Kiefer, Captain, U.S. Navy (retired)


Figure 1 – Symmetric Sigmoid (red) v. Classic Sigmoid (blue)

This piece is a response to the excellent discussion begun by Andy May on 17 Feb. I am in firm agreement with the substance of his essay, but would like to expand and explore a bit further. Dr. Marion King Hubbert did not fit a generic or Gaussian bell curve to his data plot of oil production numbers. Rather, his eponymous curve is a specific mathematical construct that emerges from his mathematical assumptions. I will show below why I believe that those assumptions and the curve are both wrong, and suggest a substitute curve and worldview for crude oil production.

The Math of the Hubbert Curve

There are dozens of different mathematical constructions that yield bell-shaped curves. The “Hubbert” or “Peak Oil” curve is actually a special case of a class of s-shaped functions called sigmoids. While most sigmoid functions begin and end at different values, Hubbert’s curve is constrained by its formula and imposed boundary conditions to begin and end at zero, a constraint that is a direct mathematical translation of Hubbert’s worldview. The curve reflects a battle between two competing forces or trends – one for growth and one for contraction – where the balance shifts between the two along the way.

The curve is usually plotted as the annual quantity of oil produced on the vertical scale against the year of production on the horizontal scale. However, the math of the curve is best understood as a relationship between the rate of oil production (dQ/dt) and the cumulative quantity of oil so far produced (Q). This is because Hubbert derived the curve by assuming the forces that affect oil production were related to Q, not to the year of production. Three adjustable parameters shape the curve: first is Q0, which starts the curve and is usually set to zero in 1859, the year the first commercial oil was produced in the USA; second is a rate scalar r, which symmetrically adjusts the steepness of the rising and falling slopes; third is Qmax – the postulated maximum amount of oil which can ever be produced from the geographic area under consideration, and which corresponds to the area under the curve. By adjusting r and Qmax, Hubbert and others have been able to get a good fit to historical U.S. oil production through about 1990, with some significant caveats. Hubbert’s 1956 predicted production curve for the USA, based on his estimate of total recoverable reserves of 200 billion barrels, is shown below. It is followed by a plot of his curve overlaid with actual historical production through 2015.


Figure 2 – Excerpt from Hubbert’s 1956 Paper (annotated)


Figure 3 – Hubbert’s Curve v. Historical U.S. Crude Oil Production

The formula for the rate of production, dQ/dt = rQ(1 - Q/Qmax), shows the two trends that are competing with each other. First there is a term rQ that tries to increase production in linear proportion to how much oil has already been produced. This term essentially models a scenario where more oil production stimulates proportionally increased consumption, driving more producers to enter the oil business and drill more wells. Unchecked, this term would cause the curve to grow exponentially. The check comes in the second term, (1 - Q/Qmax), which applies the brakes on the rate of production as Q approaches a pre-determined maximum value. The second term essentially models a scenario where there is a fixed amount of a resource in the ground and it becomes harder to find and extract as the balance remaining decreases. Qmax is the key assumption and guiding worldview of Hubbert’s approach and curve. The two terms work together to produce a symmetric sigmoid, where unconstrained growth dominates initially but is eventually overtaken by insurmountable resistance, and production reaches zero as Q reaches Qmax. Hubbert’s curve is an elegantly simple model of more and more people looking for a scarcer and scarcer resource.
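The behavior of the two competing terms can be sketched numerically. The snippet below (a minimal sketch with illustrative parameter values, not Hubbert’s actual 1956 fit) integrates dQ/dt = rQ(1 - Q/Qmax) and exhibits the textbook properties: the rate curve is bell-shaped, peaks at rQmax/4 when cumulative production reaches Qmax/2, and winds down toward zero as Q approaches Qmax.

```python
# Numerical sketch of Hubbert's model, dQ/dt = r*Q*(1 - Q/Qmax).
# Parameter values are illustrative, not Hubbert's actual 1956 fit.

def hubbert_production(q0, r, qmax, years, dt=0.1):
    """Integrate dQ/dt = r*Q*(1 - Q/Qmax) with forward Euler.

    Returns parallel lists of cumulative production Q and the rate dQ/dt.
    """
    q = q0
    qs, rates = [], []
    for _ in range(int(years / dt)):
        rate = r * q * (1.0 - q / qmax)  # growth term rQ, braked by (1 - Q/Qmax)
        q += rate * dt
        qs.append(q)
        rates.append(rate)
    return qs, rates

# Qmax = 200 (billion bbl, Hubbert's 1956 U.S. estimate); tiny seed production.
qs, rates = hubbert_production(q0=0.01, r=0.07, qmax=200.0, years=300)

peak_rate = max(rates)
peak_q = qs[rates.index(peak_rate)]
# The bell-shaped rate peaks near r*Qmax/4 when cumulative Q reaches Qmax/2,
# and annual production approaches zero as Q approaches Qmax.
```

With these illustrative numbers the peak rate lands near 0.07 × 200 / 4 = 3.5 per year at the halfway point Q = 100, which is the symmetry Hubbert’s construction enforces.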

Limitations and Hidden Assumptions of Hubbert’s Worldview

The Hubbert curve is appealing because of its simple logic and because of its close apparent fit with the data through the U.S. production peak in 1970. But is the math too simplistic? Indeed, there are three principal weaknesses that flow from questionable assumptions. First is the assumption baked into Hubbert’s mathematics that the rules are largely fixed for the entire lifespan of production, particularly the rule that oil becomes monotonically more difficult to extract with every barrel. To be fair, Hubbert did allow for some minor growth in reserves over time due to continued exploration and improving technology, and this is seen in the fattened post-peak tail of his curves as plotted in his 1956 paper. But he did not allow for the possibility that technological progress and evolving geophysical understanding might be great enough to actually reverse the overall trend of slowing production, a trend that was supposed to be inexorable beginning with the 1952 inflection point he saw in U.S. production data and built into his curve. The second key assumption, made by Hubbert and continued by his disciples today, is that scarcity is the one and only dominant force shaping all actual oil production curves, both regional and global, with little credit given to other economic factors that are known to strongly affect other international commodities, including producer competition, shifting market share, and substitution. Third, the Hubbert approach simplistically focuses only on production, with no separate consideration for the demand side of the economic equation. Whatever is produced is assumed to be readily consumed, thereby maintaining a constant economic pressure favoring increased production (i.e., keeping the rate coefficient r positive and stable).

In the real world, each of these assumptions has been invalidated. A host of technological and scientific innovations has dramatically recalibrated reserves, costs, and efficiencies for both terrestrial and offshore oil. Whole new realms of reserves have become accessible and economic, including terrestrial source rock and offshore pre-salt oil, upending long-held geologic assumptions. As oil production has continued beyond the 1970 U.S. peak, neither the U.S. curve nor the global curve has cooperated in following the mathematical predictions. Instead, U.S. production has waxed and waned and waxed again dramatically, reflecting how, like all commodities, oil production remains responsive to factors which have always affected competitiveness and market share, such as government policy and regulation, capital investment cycles, and economic boom and bust cycles. Hubbert’s initial prediction in 1956 was that U.S. oil production would peak between 1962 and 1967 at no more than 3 billion barrels per year (8.2 Mbpd), based on 200 billion barrels of ultimately recoverable oil. His global prediction was for production to peak in 2000 at 12.5 billion barrels per year (34.2 Mbpd), based on 1.25 trillion barrels of ultimately recoverable oil. Instead, the USA has now twice peaked at 3.5 billion barrels per year (9.5 Mbpd). Global production has already exceeded Hubbert’s estimate of ultimately recoverable oil, and proved reserves have been growing faster than production since 1980. Global crude oil production is already 150% of his predicted peak production rate, yet refuses to peak, continuing to refute Peak Oil doomsayers. Apologists have tried to excuse Hubbert’s poor fit with U.S. production data after 1970 by saying he could not have anticipated Alaskan oil. But he probably also could not have anticipated that the oil-saturated California coast would soon be virtually barred from oil production, and this knowledge would have reduced his production estimates.
Another contrived effort to redeem Hubbert’s prediction consists of ignoring all production that falls outside a recently invented narrow categorization of “conventional oil,” comprising only heavy crude produced by terrestrial vertical rigs from classic geological traps. Peak Oil theory is thus supposedly excused from failing to address the flood of new light-sweet crude being produced by directional and horizontal wells from terrestrial source rock and ultra-deep offshore reservoirs. Earlier generations of oil producers would have categorized “conventional oil” just as narrowly for their own eras: only oil collected from surface ponds and pits, or only oil produced from hand-drilled wells less than 100 feet deep, or only oil from east of the Mississippi River, etc. Even using this specious category of “conventional oil,” there is no evidence of a peak or cliff in global crude production, but rather continued responsiveness to capital investment. So obvious has been the absence of the predicted scarcity that many governments and activist organizations are now frantically trying to figure out how to pile on new regulatory and tax burdens to keep oil production and consumption from accelerating further. Concern about scarcity has been replaced by concern about how to “keep it in the ground.”


Figure 4 – Global Peak Oil Predictions

A New Curve

Rather than trying to patch up the Hubbert theory, it is past time to reconsider the assumptions and choose a better curve. The better curve is the classic sigmoid, also known as the logistic curve. The logistic curve is one of the most ubiquitous naturally-occurring mathematical forms in science and nature, empirically emerging as titration curves in chemistry, population growth curves in biology and demographics, and market penetration curves in economics, to name but a few. The math of the logistic curve is very similar to the Hubbert curve, but it substitutes P (the rate of production) for Q (the cumulative quantity of production), where P = dQ/dt. In other words, where Hubbert’s curve capped the maximum cumulative quantity of production, the logistic curve caps the maximum rate of production. So the first term in the logistic equation produces exponential growth in the rate of production, and the second term sets a maximum boundary on the rate of production. Instead of total oil production being limited by Qmax, the oil production rate is limited to Pmax, which in logistic-growth terminology is known as a carrying capacity. Such natural limits to growth often appear in complex systems with many interdependent variables. Real-world systems tend to display self-constrained behavior like this from internal negative feedbacks that prevent disastrous overshoots, rather than running away like many overly simplistic human models (e.g., actual climate v. climate GCMs).
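The contrast with Hubbert’s model can be sketched by applying the same logistic form to the rate P instead of the quantity Q (illustrative numbers only, not a fit to EIA data): P saturates at the carrying capacity Pmax instead of collapsing, and cumulative Q keeps growing with no built-in ceiling.

```python
# Sketch of the rate-limited alternative: dP/dt = r*P*(1 - P/Pmax),
# with Q accumulated as the integral of P. Illustrative parameters only.

def rate_limited_production(p0, r, pmax, years, dt=0.1):
    """Integrate dP/dt = r*P*(1 - P/Pmax); accumulate Q as the integral of P."""
    p, q = p0, 0.0
    ps, qs = [], []
    for _ in range(int(years / dt)):
        p += r * p * (1.0 - p / pmax) * dt  # the *rate* follows the logistic
        q += p * dt                          # cumulative production, unbounded
        ps.append(p)
        qs.append(q)
    return ps, qs

# Pmax = 3.5 (billion bbl/yr, the plateau fitted for the USA in the essay).
ps, qs = rate_limited_production(p0=0.05, r=0.1, pmax=3.5, years=200)
# The rate rises monotonically to the Pmax plateau; once there, cumulative
# production simply grows by about Pmax per year, with no terminal Qmax.
```

The design difference is exactly the one the paragraph describes: the negative feedback acts on the rate of extraction, not on the amount left in the ground.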

Below is a logistic curve fit I did in 2012 for U.S. oil production data available at that time. I have since updated the data through 2015, but have had no need to adjust the curve. The logistic curve that best fit the data empirically revealed a natural plateau for U.S. production of about 3.5 billion barrels per year (9.5 Mbpd). A premature spike to that level in the early 1970s was explainable by a set of special circumstances, including the Vietnam War surge, the Apollo program, record cold winters, and the oil embargo. Alaska oil production, rather than being an anomaly, actually appeared to be a natural progression that fit the curve perfectly. A major break with the curve occurred in 1986, a year when the global oil market belatedly recognized a glut of overproduction, and prices collapsed for a period that would last 17 years, until 2003. U.S. oil, made uncompetitively expensive by the world’s most restrictive drilling policies and environmental regulations, could not compete, and market share quickly dwindled even as global production and consumption continued to rise. Another contributor was the fact that U.S. oil majors since 1950 had been making the bulk of their capital investment in exploration and production overseas in Saudi Arabia, Venezuela, Nigeria, etc., with most domestic spending limited to refineries. These E&P investments continued to pay off handsomely in increased oil production overseas, but this was accounted as imported crude oil when fed to domestic refineries. Thus the dip in the curve is more about a limitation in data accounting – there is no EIA data tag to demarcate oil production that is the fruit of U.S. foreign investment. However, a revolution in domestic production was coming that would indeed show up in the data.
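The fitting step described above can be sketched as follows. This snippet uses a synthetic annual-rate series generated from a known logistic (it is not the actual EIA data behind the essay’s fit, and the parameter grid is invented for illustration), then recovers the parameters by a coarse least-squares grid search.

```python
import math

# Hedged sketch: least-squares fit of a logistic to production-rate data.
# The "observed" series is synthetic, generated from known parameters;
# it stands in for, but is not, the actual EIA series.

def logistic(t, pmax, r, t_mid):
    """Logistic rate curve: plateau pmax, steepness r, midpoint year t_mid."""
    return pmax / (1.0 + math.exp(-r * (t - t_mid)))

# Synthetic "observations": true parameters pmax=3.5, r=0.08, t_mid=120.
years = list(range(0, 160, 5))
observed = [logistic(t, 3.5, 0.08, 120.0) for t in years]

def sse(pmax, r, t_mid):
    """Sum of squared errors of a candidate curve against the observations."""
    return sum((logistic(t, pmax, r, t_mid) - y) ** 2
               for t, y in zip(years, observed))

# Coarse grid search over plausible (invented) parameter ranges.
candidates = [(pmax, r, t_mid)
              for pmax in (3.0, 3.25, 3.5, 3.75, 4.0)
              for r in (0.06, 0.07, 0.08, 0.09)
              for t_mid in (100.0, 110.0, 120.0, 130.0)]
best = min(candidates, key=lambda params: sse(*params))
# best == (3.5, 0.08, 120.0): the true parameters minimize the error
```

A real fit would use noisy historical data and a proper optimizer rather than a grid, but the principle is the same: the fitted plateau parameter is the carrying capacity.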


Figure 5 – Kiefer Curve

Based on three previous boom-and-bust cycles in global production in the 20th century, it was clear that it was only a matter of time before the march of U.S. technology improved oil exploration and production efficiency enough to again make U.S. domestic production competitive and recapture market share. In fact, this was already well underway in the form of a massive wave of capital investment by the world’s remaining privately held oil and gas companies, which benefited from the rising prices accompanying another cycle of perceived scarcity that had arrived in 2003. As had happened many times before, panic about the end of oil helped create the profit margins that financed the investments that renewed the supply. By 2006, all of the technologies that enabled the fracking revolution were already fielded (3-D seismology, directional and horizontal drilling, sophisticated bore-head sensors and real-time telemetry, bore cementing and sequential perforation, hydraulic fracturing, advanced drilling fluids and proppants, etc.). Additionally, the first commercially successful ultra-deep “pre-salt” offshore well in the Gulf of Mexico was drilled in 1993, and by 2007 similar wells were being drilled off Brazil, ushering in another revolution of less notoriety but likely equal import to fracking that has yet to really make itself felt. Both of these revolutions depend upon specific technology and expertise in which the USA is unsurpassed. The stage was set by 2010 for U.S. oil production to come roaring back. The trend lines in 2012 indicated that U.S. production would reach the logistic curve carrying capacity of 9.5 Mbpd sometime before the summer of 2016. In January of 2014 I specifically predicted a price collapse to $50 to $60 per barrel as production approached this natural limit. According to EIA data after the fact, U.S. crude oil production hit 9.0 Mbpd in September 2014 and peaked at 9.6 Mbpd in April of 2015.
WTI Cushing spot price peaked at $108/bbl in June of 2014 before beginning the plunge that would see prices below $50/bbl by January 2016. Current U.S. production is stable at 8.8 Mbpd. I expect U.S. production to remain somewhat south of the 9.5 Mbpd limit, though not to dip as low as it did following the 1986 glut. This is because most global oil companies have now been nationalized, and foreign innovation and technology migration is thus slower today, allowing the USA to maintain a more enduring competitive advantage and preserve more market share. Private land and mineral rights ownership is also key to the economics of oil and gas, and this almost exclusively favors the USA as well. I don’t believe perceived scarcity will again come into play to significantly boost prices for well over a decade. Of course, the global market is always susceptible to short-term spikes from geopolitical crises.

The Global Carrying Capacity for Crude Oil

We have already seen that Hubbert’s prediction for U.S. oil production was pessimistic and completely failed to predict our current condition. His prediction for global production was equally flawed.


Figure 6 – Hubbert’s Global Oil Prediction

According to Hubbert’s prediction, 2017 global crude oil production should be 12 billion bbl/yr and falling irretrievably. Instead it is over 30 billion bbl/yr and climbing steadily. And while oil is now being more properly priced as a premium transportation fuel and industrial feedstock rather than as a bulk combustion fuel, there is still an unquenchable thirst for this commodity in the developing world, representing a huge latent demand. Applying the same logistic curve fit technique to global production data is illuminating.


Figure 7 – Kiefer’s Global Oil Prediction

If trustworthy, this logistic curve shows that global production and consumption are only halfway to the natural plateau. You can see in the figure some reasons why progress may have departed from the ideal curve to a more linear path of growth. I believe this is primarily due to a breakdown in international markets since the world transitioned to a pure fiat currency system, which governments have universally abused to incur crippling levels of debt. The fiscal economy of the entire globe is heavily over-leveraged and is operating with a huge drag on it. All ability to stimulate economies has been exhausted by the central banks. The only way out now is for a massive influx of cheap energy to cause a surge in the creation of goods and services that will elevate real GWP to catch up with the global money supply, and thereby reduce the leverage. Right now, the USA seems to be uniquely positioned to benefit from the cheap energy revolution of fracking, while pre-salt hydrocarbons may be more globally accessible.

Nevertheless, continued growth in global production toward the predicted carrying capacity is indeed my prediction – one which will not bring much comfort to those who demonize CO2 and think the Earth is on the knife edge of climate catastrophe. I’m not sure which nations will be contributing which fractions to this production peak, but I believe it will come to pass. Even if we built 2 MW of nuclear plant every week for the next 50 years, we would not displace the need for this transportation energy, particularly for air and sea travel, when the growing demand of developing nations is considered. Fortunately, fossil fuel energy has proven an excellent resource for helping civilizations cope with a host of threats. Hydrocarbons excel at reducing human exposure to and harm from severe weather, and in making crops much more fruitful and far less dependent upon the vagaries of nature. Climate change adaptation and mitigation would appear to be the only reasonable strategy going forward, as it has been for all of human history.

Finite v. Sustainable

If the logistic curve is indeed the better fit than the Hubbert curve, what does that tell us about the underlying commodity and the forces shaping its production? The essential difference between the two curves is that a Hubbert curve describes a finite resource whose production is being observably choked down by scarcity, while a logistic curve describes a sustainable resource whose production is stabilized by potentially a host of factors. The question of finite v. sustainable is really where the prevailing worldview is most challenged. For oil to appear to be a resource that can be sustainably consumed, there are two possibilities. First is the possibility that the amount of ultimately producible oil is very, very large compared to its stabilized consumption rates, and essentially dwarfs demand, so that true scarcity is not a factor. A second possibility is that oil is indeed a renewable resource, and that the geologic processes that created the oil already extracted are still at work creating more at a significant rate compared to consumption. A combination of these two is also possible. David Middleton recently submitted an excellent guest post on what is known and theorized about the thermogenic processes that produce oil. He makes a point about how much reservoir-quality sedimentary rock there is in the oil and gas zones of the Earth’s crust. The amount is so vast that every 10 ppm of its volume can hold a trillion barrels of oil, though we really don’t know how much of it is charged with oil. Pre-existing oil may be enough to sustain the logistic curve for generations, or for millennia. Additionally, there is every indication that kerogen continues to be cooked into crude oil and gas by ancient processes. He also acknowledges abiotic methane production, though he does not believe it oligomerizes into long-chain hydrocarbons and thus contributes to the oil supply. He holds firmly to the dominant view that crude oil is produced from biological feedstocks such as ancient buried bacteria and plankton. One need not take sides in the biogenic-abiotic debate to accept the evidence that oil is behaving more like a sustainable resource than a finite one.

There are other reasons why I hold to the sustainable oil view, including my own research and analysis of fossil fuel energy return on investment (EROI) and oil production versus drilling effort. An essential part of this analysis, and one that many get wrong, is accounting for the often lengthy delay between oil industry capital investment and ROI. During the crisis window of perceived scarcity, there is much capital investment and negative cashflow as a flurry of wildcatters chase prospects. Once the glut is recognized, the capital investment dries up and there begins a lean period of low prices, which includes a painful battle for market share and brutal consolidations, as most of the wildcatters fold up and are absorbed by larger companies with more fat to live on. Then finally comes a long period of steady, profitable production from reserves that seem to miraculously grow and grow without much further investment – this is the payback period that is usually ignored because the crisis is long past. Any ROI or EROI analysis that does not include the full boom-and-bust cycle will yield false results. When the accordion-effect lag between capital investment and ROI is properly considered, U.S. oil production EROI has remained above 10:1 for its entire commercial history. Oil yields today are still about 40 barrels per foot drilled, the same as in the mid-1980s. If scarcity starts to rear its head as an emerging force in shaping oil production, we should first see it in falling EROI and yields per foot. I don’t yet see that signature.
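The accounting point can be illustrated with a toy cashflow (all numbers invented for illustration, not actual industry figures): computing EROI over only the investment-heavy boom years gives a very different answer than computing it over the full field life.

```python
# Toy illustration (invented numbers): truncating the accounting window to
# the capex-heavy boom years understates lifetime EROI.

# Energy invested (drilling/capex) and energy returned (produced oil),
# in arbitrary energy-equivalent units per year of a stylized field life.
invested = [10, 10, 10, 5, 2, 1] + [1] * 14   # front-loaded investment
returned = [0, 2, 8, 15, 20, 20] + [20] * 14  # long, cheap payback tail

def eroi(inv, ret):
    """Energy returned on energy invested over the chosen accounting window."""
    return sum(ret) / sum(inv)

boom_window = eroi(invested[:4], returned[:4])  # drilling years only: < 1:1
full_cycle = eroi(invested, returned)           # whole field life: well above 1:1
```

The same wells look like an energy sink if the window closes before the payback tail, and a strong energy source over the full cycle, which is the bias the paragraph describes.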


Captain Todd “Ike” Kiefer, USN (ret.) is director of government relations and economic development for East Mississippi Electric Power Association and president of North Lauderdale Water Association. His career in public utilities follows 25 years as a naval officer and aviator. He has degrees in physics, strategy, and military history, and diverse military experience that spans airborne electronic warfare, nuclear submarines, operational flight test, particle accelerators, Pentagon Joint Staff strategic planning, and war college faculty. Deployed eight times to the Middle East and Southwest Asia, he spent 22 months on the ground in Iraq and commanded Al Asad Air Base and Training Squadron NINE. He is the author of several published papers on energy and energy security.

115 Comments
Moderately Cross of East Anglia
March 2, 2017 10:49 am

There is one form of energy which seems to be often overlooked and which can be viewed as sustainable almost infinitely from the present-day standpoint. We also know it works very well in the few places it is extensively exploited – I’m referring to geothermal energy, and of course Iceland is the example which springs to mind. While of course it can’t replace oil and gas to power transport, where it is available it saves the use of valuable fossil fuels for heating. Clearly many other places apart from Iceland could exploit geothermal, well pretty much anywhere along the Pacific Ring of Fire for starters. Presumably much of the money thrown at windmills would be better spent on geothermal?

Reply to  Moderately Cross of East Anglia
March 2, 2017 11:12 am

Geothermal is well used where it is viable. Like Japan and Iceland.
The problem is that it takes energy to pump water down and up again. And it costs energy to keep the hole open (rocks move).

So where the heat is near the surface it works.
And where it isn’t, it doesn’t.

We’ve been trying this for 40 years. And we are good at digging holes. But it never pays off unless the heat is near the surface.

Moderately Cross of East Anglia
March 2, 2017 11:02 am

Perhaps I should have made explicit that the point of more geothermal use would be to extend the availability of fossil fuels.

richardscourtney
Reply to  Moderately Cross of East Anglia
March 3, 2017 6:44 am

Moderately Cross of East Anglia:

You say

Perhaps I should have made explicit that the point of more geothermal use would be to extend the availability of fossil fuels.

Why?
There is sufficient coal to provide all our fossil fuel needs (including possible need to replace crude oil with synthetic crude oil) for at least the next 300 years (probably 1,000 years).

Hay to feed horses was the major transport fuel 300 years ago and ‘peak hay’ was feared in the nineteenth century, but availability of hay is not a significant consideration for transportation today. Nobody can know what – if any – demand for fossil fuels will exist 300 years in the future.

Richard

March 2, 2017 12:27 pm

One thing this post brought to mind, from an old chemist’s point of view, is that elution and other processes are among the many quantitative measurements one can fit with a logistic curve. See: https://en.wikipedia.org/wiki/Logistic_function .

Editor
March 2, 2017 1:20 pm

Captain Kiefer, Thanks for the great post. I enjoyed it and the discussion. Thanks also for the link to your “Twenty-First Century Snake Oil” paper, it is very good. Especially section 6, “Evaluating Biofuels.” A great read. You mentioned in it that KiOR opened a 10 million gallon-per-year commercial cellulosic biorefinery in 2012, which is true. Did you hear they are now under investigation by the SEC and have been sued for making false statements? The Mississippi plant is now shut down and the company filed for bankruptcy in 2014. What a mess. The Mississippi AG called the company “one of the largest frauds ever perpetuated on the state of Mississippi.” Anyway, great post and paper. Thanks.

Todd "Ike" Kiefer
March 2, 2017 2:17 pm

KiOR is a big scandal in MS because it folded with an unrepaid $75 million loan. They went so far in their deception as to film trucks being filled with non-existent “clean gasoline” while the stuff they were actually cooking was acidic black tar bio-crude. Along the way I’ve warned people about KL-Energy (now bankrupt), Range Fuels (now bankrupt), INEOS Bio (shut down and for sale with no commercial ethanol production), Codexis (CEO walked away from cellulosic ethanol after spending $375 million investment by Shell), and Cool Planet (no commercial activity for years after promising to save the planet with green gasoline and biochar and being funded by high-profile investors such as Google). I predicted KiOR’s demise in my paper, and also the same for the larger cellulosic ethanol projects that have since been built by DuPont, Poet, and Abengoa, but have not produced a profitable commercial gallon. Robert Rapier and I have been dogging Vinod Khosla for years for his biofuel frauds.

The technological feasibility of cellulosic ethanol is readily assessable for those willing to look and think. It is essentially the same challenge as a paper mill making paper from trees, except after extracting the cellulose, you are trying to make a very expensive additional conversion from solid to liquid that involves the steps of colonization and fertilization, fermentation, distillation, and dehydration. And the liquid product (ethanol) has a lower market value than the solid (paper). Pretty awful business model, especially when we know paper mills are barely hanging on. Also, when the EROI of corn ethanol is less than 2:1 after a century of perfecting its processing, and it is chemically 5 times harder to hydrolyze cellulose than corn starch, there is a pretty good chance EROI is going to be upside down for cellulosic ethanol. Don’t know where these operations get the PhD chemists they need to convince the investors to buy into such snake oil schemes, but probably the same place Sierra Club gets expert witnesses to testify in their bogus lawsuit consent decree criminal conspiracy with EPA.

Khwarizmi
March 2, 2017 5:29 pm

“So much needless human misery has been perpetuated by people opposing the world’s safest, greenest….”
========

Nuclear power isn’t green because it produces no CO2, Todd.

The earth is greening at a phenomenal pace because our emissions are bringing the CO2 famine to an end.

Nuclear doesn’t do that.

CO2 emitting energy is green.
Nuclear energy isn’t green. It’s our Orwellian culture that always wants black to be white.

Todd "Ike" Kiefer
Reply to  Khwarizmi
March 3, 2017 2:09 pm

“Green” is, unfortunately, a subjective term that many twist to perverse ends. I take it to mean “compatible with nature”. I have proposed that we should all formally define “green” using the exact language the U.S. Congress used in 1970 as the mission statement for the newly-formed EPA. It says the agency’s goal is “to create and maintain conditions, under which humans and nature can exist in productive harmony, that permit fulfilling the social, economic, and other requirements of present and future generations.”

By either of my favored definitions, nuclear is green because it has the smallest overall environmental footprint per total lifetime MWhs generated, particularly including land and habitat impacts and polluting emissions.

As to CO2, my research leads me to believe that the Earth is still recovering from a period of sub-optimally cold temperatures and CO2 starvation which is known to scientists as the last ice age of the Pleistocene Epoch. The indisputable observation that plants and animals today are far more active in the warmth of summer than the cold of winter, and the fact that 17 times more humans die of cold in winter than of heat in summer, are both to me compelling metrics that indicate the Earth has still not warmed to its optimum interglacial temperature most favorable for life. Humans have been adapting to sea level rising at its current stable pace of 3 mm/yr for all of our recorded history, and we are fortunate it has slowed from peak rates of 50 mm/yr ten thousand years ago. The 2,500 climate scientists who collaborated on the IPCC 2013 Working Group 1 acknowledged in their official report that the Earth’s green plant coverage had increased 6% since 1982, and they also reneged on all their catastrophic predictions from the 2007 report (increasing droughts, fires, floods, hurricanes, deforestation, glacial melting, rapid coastal inundation, disease) for lack of evidence. They also formally acknowledged the hiatus in warming beginning in 1998 and admitted that the entire suite of GCMs was so flawed in its warming predictions that they had to disregard its outputs. They also revised the ranges of both the ECS and TCR coefficients downward, and the best mathematical fit to actual observations of the past 20 years puts the likely value of each at the bottom of its respective range. In other words, CO2 is nearly beyond the point where it can warm surface temperatures any more, and we may never even reach optimum.
Even more menacing, it would only take one asteroid impact or supervolcano eruption or large-scale nuclear missile exchange to kick up enough dust into the stratosphere to tip temperatures instantly downward 5 deg C and give us 3 to 5 years without a growing season in the temperate latitudes, likely starving a large fraction of Earth’s population within months. So my concern is not that humans will ever suffer from too much CO2, but that we will suffer from having too little surplus energy capacity when the real crisis hits.

richardscourtney
March 3, 2017 7:08 am

Khwarizmi:

You assert

The earth is greening at a phenomenal pace because our emissions are bringing the CO2 famine to an end.

Please state your evidence for claiming that “our emissions are bringing the CO2 famine to an end”.
At issue is what atmospheric CO2 concentration would be in the absence of our trivially small emissions of CO2.

I refer you to findings of one of our 2005 papers
(ref. Rorsch, A; Courtney, RS; Thoenes, D; 2005: The Interaction of Climate Change and the Carbon Dioxide Cycle. E&E, V16, No2.)
that are supported by later work of Selby, later work of Berry, and indications from the OCO-2 satellite.

Our analyses indicate the atmospheric CO2 concentration would probably be the same if the anthropogenic CO2 emissions were absent.

Those analyses show the short term sequestration processes for CO2 can easily adapt to sequester “our” (i.e. the anthropogenic) CO2 emissions in a year. But, according to each of our six different models, the total emission of a year affects the equilibrium state of the entire carbon cycle system. Some processes of the system are very slow with rate constants of years and decades. Hence, the system takes decades to fully adjust to a new equilibrium. So, the atmospheric CO2 concentration slowly changes in response to any change in the equilibrium condition.

Importantly, each of our models demonstrates that the observed recent rise of atmospheric CO2 concentration may be solely a consequence of altered equilibrium of the carbon cycle system caused by, for example, the anthropogenic emission or may be solely a result of desorption from the oceans induced by the temperature rise that preceded it.

The most likely explanation for the continuing rise in atmospheric CO2 concentration is adjustment towards the altered equilibrium of the carbon cycle system provided by the temperature rise in previous decades during the centuries of recovery from the Little Ice Age.

This slow rise in response to the changing equilibrium condition also provides an explanation of why the accumulation of CO2 in the atmosphere continued when in two subsequent years the flux into the atmosphere decreased (the years 1973-1974, 1987-1988, and 1998-1999).

Richard