Guest essay by Professor Philip Lloyd, Cape Peninsula University of Technology
Daily we are told that we are wicked to burn fossil fuels. The carbon dioxide which is inevitably emitted accumulates in the atmosphere, and the result is “climate change.” If the stories are to be believed, disaster awaits us: crops will wither, rivers will dry up, polar bears will disappear and malaria will become rampant.
It is a very big “IF”. We could waste trillions for nothing. Indeed, Lord Stern has estimated that it would be worth spending a few trillion dollars each year to avoid a possible disaster in 200 years’ time. Because he is associated with the London School of Economics he is believed – by those whose experience of insurance is limited. Those who have experience know that it is not worth insuring against something that might happen in 200 years’ time – it is infinitely better to make certain your children can cope. With any luck, they will do the same for their children, and our great-great-great-grandchildren will be fine individuals more than able to deal with Lord Stern’s little problem.
So I decided to examine the hypothesis from first principles. There are five steps to the hypothesis:
1. The carbon dioxide (CO2) content of the atmosphere is rising.
2. The rise in CO2 in the atmosphere is largely paralleled by the increase in fossil fuel combustion. Combustion of fossil fuels results in emission of CO2, so it is eminently reasonable to link the two increases.
3. CO2 scatters infra-red at wavelengths centred on about 15 µm. Infra-red of that wavelength, which should be carrying energy away from the planet, is scattered back into the lower troposphere, where the added energy input should cause an increase in the temperature.
4. The expected increase in the energy of the lower troposphere may cause long-term changes in the climate, which could be characterized by increasing frequency and/or magnitude of extreme weather events, an increase in sea temperatures, a reduction in ice cover and many other changes.
5. The greatest threat is that sea levels may rise and flood large areas presently densely inhabited.
Are these hypotheses sustainable in the scientific sense? Is there a solid logic linking each step in this chain?
The increase in CO2 in the atmosphere is incontrovertible. Many measurements show this. For instance, since 1958 there have been continuous measurements at the Mauna Loa observatory in Hawaii:
The annual rise and fall is due to deciduous plants growing or resting, depending on the season. But the long-term trend is ever-increasing levels of CO2 in the atmosphere.
There were only sporadic readings of CO2 before 1958, no continuous measurements. Nevertheless, there is sufficient information to construct a view back to 1850:
There was a slight surge in atmospheric levels about 1900, then a period of near stasis until after 1950, when there was a strong and ongoing increase which has continued to this day. Remember this pattern – it will re-appear in a different guise.
The conclusion is clear – there has been an increase in the carbon dioxide in the atmosphere. What may have caused it?
Well, there is the same pattern in the CO2 emissions from the burning of fossil fuels and other industrial sources:
A similar pattern is no proof – correlation is not causation. But if you try to link the emissions directly to the growth in atmospheric CO2, you fail. There are many partly understood “sinks” which remove CO2 from the atmosphere. Trying to follow the dynamics of all the sinks has proved difficult, so we do not have a really good chemical balance between what is emitted and what turns up in the air.
Fortunately isotopes come to our aid. There are two primary plant chemistries, called C3 and C4. C3 plants are ancient, and they tend to prefer the 12C carbon isotope to the 13C. Plants with a C4 chemistry are comparatively recent arrivals, and they are not so picky about their isotopic diet. Fossil fuels primarily come from a time before C4 chemistry had evolved, so they are richer in 12C than today’s biomass. Injecting 12C-rich CO2 from fossil fuels into the air should therefore cause the proportion of 13C in the air to drop, which is precisely what is observed:
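The dilution argument can be put in numbers. The sketch below is my own illustration, not the published analysis; the δ13C values (−6.4 per mil for the pre-industrial atmosphere, roughly −28 per mil for fossil fuel) and the ~120 ppm fossil addition are round figures chosen for the example:

```python
# Sketch of the isotope-dilution argument: adding 13C-poor fossil CO2
# to the atmosphere should pull the bulk delta-13C value down.

def mix_delta13c(mass_air_ppm, delta_air, mass_added_ppm, delta_added):
    """Mass-weighted delta-13C of a two-component mixture (per mil)."""
    total = mass_air_ppm + mass_added_ppm
    return (mass_air_ppm * delta_air + mass_added_ppm * delta_added) / total

# Assumed round numbers: ~280 ppm pre-industrial air at -6.4 per mil,
# fossil-fuel CO2 at roughly -28 per mil, ~120 ppm added to date.
pre_industrial = mix_delta13c(280, -6.4, 0, -28.0)
with_fossil = mix_delta13c(280, -6.4, 120, -28.0)

print(f"pre-industrial: {pre_industrial:.2f} per mil")
print(f"after fossil addition: {with_fossil:.2f} per mil")

# Note: this naive two-box mixing overstates the observed drop of
# about 1.6 per mil, because exchange with the oceans and biosphere
# removes much of the isotopic signal.
```

The direction of the shift is the point: any 12C-rich source added to the air must lower the bulk δ13C value.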
So the evidence that fossil fuel burning is the underlying cause of the increase in the CO2 in the atmosphere is really conclusive. But does it have any effect?
Carbon dioxide scatters infra-red over a narrow range of energies. The infra-red photons, which should be carrying energy away from the planet, are scattered back into the lower troposphere. The retained energy should cause an increase in the temperature.
Viewing the planet from space is revealing:
The upper grey line shows the spectrum which approximates that of a planet of Earth’s average albedo at a temperature of 280 K. That is the temperature about 5 km above the surface, where incoming and outgoing radiation are in balance. The actual spectrum is shown by the blue line. The difference between the two is the energy lost by scattering processes caused by greenhouse gases. Water vapour has by far the largest effect. CO2 contributes to the loss between about 13 and 17 µm, and ozone contributes to the loss between about 9 and 10 µm.
The effect of carbon dioxide absorption grows only logarithmically with concentration, so doubling the concentration will not double any effect. Indeed, at present there is ~400 ppm in the atmosphere. We are unlikely to see a much different world at 800 ppm. It will be greener – plants grow better on a richer diet – and it may be slightly warmer and slightly wetter, but otherwise it would look very like our present world.
However, the other side of that logarithmic behaviour is that the effect of each added part per million was larger when concentrations were lower. If there are to be any observable effects, they should therefore already be visible in the historical records. Have we seen them?
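The logarithmic behaviour can be illustrated with the widely used simplified forcing expression of Myhre et al. (1998); the sketch is mine, added for illustration, not part of the original argument:

```python
import math

# Simplified expression for CO2 radiative forcing (Myhre et al., 1998):
#   delta_F = 5.35 * ln(C / C0)   [W/m^2]
# Each doubling adds the same ~3.7 W/m^2, whatever the starting level.

def co2_forcing(c_ppm, c0_ppm):
    """Forcing change (W/m^2) for a concentration change c0 -> c."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(f"280 -> 560 ppm: {co2_forcing(560, 280):.2f} W/m^2")
print(f"400 -> 800 ppm: {co2_forcing(800, 400):.2f} W/m^2")  # same ratio, same forcing
print(f"280 -> 400 ppm (rise to date): {co2_forcing(400, 280):.2f} W/m^2")
```

Because only the ratio of concentrations matters, the rise from 280 to 400 ppm has already delivered roughly half the forcing of a full doubling.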
There are “official” historical global temperature records. A recent version from the Hadley Climate Research unit is:
The vertical axis gives what is known as the “temperature anomaly”, the change from the average temperature over the period 1950-1980. Recall that carbon dioxide only became significant after 1950, so we can look at this figure with that fact in mind:
* From 1870 to 1910, temperatures dropped; there was no significant rise in carbon dioxide.
* From 1910 to 1950, temperatures rose; there was no significant rise in carbon dioxide.
* From 1950 to 1975, temperatures dropped while carbon dioxide increased.
* From 1975 to 2000, both temperature and carbon dioxide increased.
* From 2000 to 2015, temperatures rose slowly but carbon dioxide increased strongly.
Does carbon dioxide drive temperature changes? Looking at this evidence, one would have to say that, if there is any relationship, it must be a very weak one. In one study I made of the ice core record over 8 000 years, I found that there was a 95% chance that the temperature would change naturally by as much as ±2 °C during 100 years. During the 20th century, it changed by about 0.8 °C. The conclusion? If carbon dioxide in the atmosphere does indeed cause global warming, then the signal has yet to emerge from the natural noise.
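As a rough illustration of that conclusion (my own sketch, assuming the natural centennial change is normally distributed with the ±2 °C 95% range quoted above):

```python
import random

# Monte Carlo sketch: if natural century-scale temperature change has
# a 95% range of +/-2 C (sigma ~ 1 C, assuming normality), how often
# does chance alone produce a change of at least 0.8 C in a century?

random.seed(0)
sigma = 1.0        # implied by the +/-2 C 95% range
observed = 0.8     # the 20th-century change quoted above
trials = 100_000
exceed = sum(abs(random.gauss(0.0, sigma)) >= observed for _ in range(trials))
print(f"fraction of centuries with |change| >= {observed} C: {exceed / trials:.2f}")
```

On these assumptions, a change of the observed size or larger turns up in roughly four out of ten centuries purely by chance.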
One of the problems with the “official” temperature records such as the Hadley series shown above is that the official record has been the subject of “adjustments”. While some adjustment of the raw data is obviously needed, such as that for the altitude of the measuring site, the pattern of adjustments has been such as to cool the past and warm the present, making global warming seem more serious than the raw data warrants.
It may seem unreasonable to refer to the official data as “adjusted”. However, the basis for the official data is what is known as the Global Historical Climatology Network, or GHCN, and it has been arbitrarily adjusted. For example, it is possible to compare the raw data for Cape Town, 1880-2011, to the adjustments made to the data in developing GHCN series Ver. 3:
The Goddard Institute for Space Studies is responsible for the GHCN. The Institute was approached for the metadata underlying the adjustments. They provided a single line of data, giving the station’s geographical co-ordinates and height above mean sea-level, and a short string of meaningless data including the word “COOL”. The basis for the adjustments is therefore unknown, but the fact that about 40 successive years of data were “adjusted” by exactly 1.10 degrees C strongly suggests fingers rather than algorithms were involved.
There has been so much tampering with the “official” records of global warming that they have no credibility at all. That is not to say that the Earth has not warmed over the last couple of centuries. Glaciers have retreated, snow-lines risen. There has been warming, but we do not know by how much.
Interestingly, the observed temperatures are not unique. For instance, the melting of ice on Alpine passes in Europe has revealed paths that were in regular use a thousand years and more ago. They were then covered by ice which has only melted recently. The detritus cast away beside the paths by those ancient travellers is providing a rich vein of archaeological material.
So the world was at least as warm a millennium ago as it is today. It has warmed over the past few hundred years, but the warming is primarily natural in origin, and has nothing to do with human activities. We do not even have a firm idea as to whether there is any impact of human activities at all, and certainly cannot say whether any of the observed warming has an anthropogenic origin. The physics say we should have some effect; but we cannot yet distinguish it from the natural variation.
Those who seek to accuse us of carbon crime have therefore developed another tool – the general circulation model. This is a computer representation of the atmosphere, which calculates the conditions inside a slice of the atmosphere, typically 5 km x 5 km x 1 km, and links each to an adjacent slice (if you have a big enough computer – otherwise your slices have to be bigger).
The modellers typically start their calculations some years back, for which there is a known climate, and try to see whether they can predict the (known) climate from when they start up to today. There are many adjustable parameters in the models, and by twiddling enough of these digital knobs, they can “tune” the model to history.
Once the model seems to be able to reproduce historical data well enough, it is let rip on the future. There is a hope that, while the models may not be perfect, if different people run different tunings at different times, a reasonable range of predictions will emerge, from which some idea of the future may be gained.
Unfortunately the hopes have been dashed too often. The El Nino phenomenon is well understood, and it has a significant impact on the global climate, yet none of the models can cope with it. Similarly, the models cannot do hurricanes/typhoons – the 5 km x 5 km scale is just too coarse. They cannot do local climates: a test of two areas only 5 km apart, one of which receives at least 2 000 mm of rain annually while the other averages just on 250 mm, failed badly. Good wind and temperature data were available, as was the local topography. The problem was modelled with a very fine grid, but there were not enough tuning knobs to be able to match history.
Even the basic physics used in these models fails. It predicts that, between the two Tropics, the upper troposphere should warm faster than the surface. We regularly fly weather balloons carrying thermometers into this region. There are three separate balloon data sets, and they agree that there is no sign of extra warming:
The average of the three sets is given by the black squares. The altitude is given in terms of pressure, 100 000 Pa at ground level and 20 000 Pa at about 9 km above the surface. There are 22 different models, and their average is shown by the black line. At ground level, measurement shows warming by 0.1 °C per decade, but the models predict 0.2 °C per decade. At 9 km, measurement still shows close to 0.1 °C, but the models show an average of 0.4 °C and extreme values as high as 0.6 °C. Models that are wrong by a factor of 4 or more cannot be considered scientific. They should not even be accepted for publication – they are wrong.
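The factor-of-four claim is simple arithmetic on the decadal rates just quoted; a minimal check:

```python
# Observed vs modelled decadal warming rates, as quoted above.
obs = {"surface": 0.1, "9 km": 0.1}     # deg C per decade, balloon average
model = {"surface": 0.2, "9 km": 0.4}   # deg C per decade, 22-model mean

for level in obs:
    ratio = model[level] / obs[level]
    print(f"{level}: models run {ratio:.0f}x the observed warming rate")
```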
The hypothesis that we can predict future climate on the basis of models that are already known to fail is false. International agreements to control future temperature rises to X °C above pre-industrial global averages have more to do with the clothing of emperors than reality.
So, at the third step of our examination of the climate boondoggle, we can only conclude that yes, the world is warming, but by how much, and why, we really haven’t a clue.
What might the climate effects of a warmer world be? First, what is “climate”? It is the result of averaging a climatological variable, such as rainfall or atmospheric pressure, measured typically over a month or a season, where the average is taken over several years so as to give an indication of the weather that might be expected at that month or season.
Secondly, we need to understand the meaning of “change”. In this context it clearly means that the average of a climatological variable over X number of years will differ from the same variable averaged over a different set of X years. But it is readily observable that the weather changes from year to year, so there will be a natural variation in the climate from one period of X years to another period X years long. One therefore needs to know how long X must be to determine the natural variability and thus to detect reliably any change in the measured climate.
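The statistics behind this are elementary: the sampling error of an X-year average shrinks only as 1/√X, so X must be large relative to the year-to-year scatter before two such averages can be distinguished. A small sketch, with an arbitrary assumed scatter:

```python
import math

# The standard error of an X-year climate average falls only as
# 1/sqrt(X): quadrupling the averaging period merely halves the error.

sigma = 1.0  # assumed year-to-year standard deviation (arbitrary units)
for x in (10, 30, 100):
    se = sigma / math.sqrt(x)
    print(f"X = {x:3d} years: standard error of the mean = {se:.2f}")
```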
This aspect of “climate change” appears to have been overlooked in all the debate. It seems to be supposed that there was a “pre-industrial” climate, which was measured over a large number of years before industry became a significant factor in our existence, and that the climate we now observe is statistically different from that hypothetical climate.
The problem, of course, is that there is very little actual data from those pre-industrial days, so we have no means of knowing what the climate really was. There is no baseline from which we can measure change.
Faced by this difficulty, the proponents of climate change have modified the hypothesis. It is supposed that the observed warming of the earth will change the climate in such a way as to make extreme events more frequent. This does not alter the difficulty; in fact, it makes it worse.
To illustrate, assume that an extreme event is one that falls outside the 95% probability bounds. So in 100 years, one would expect 5 extreme events on average. Rather than taking 100 years of data to obtain the average climate, there are now only 5 years to obtain an estimate of the average extreme event, and the relative error in averaging 5 variable events is obviously much larger than the relative error in averaging 100 variable events.
The rainfall data for England and Wales demonstrates this quite convincingly:
The detrended data are close to normally distributed, so that it is quite reasonable to use normal statistics for this. The 5% limits are thus two standard deviations either side of the mean. In the 250-year record, 12.5 extreme events (those outside the 95% bounds) would be expected. In fact, there are 7 above the upper bound and 4 below the lower bound, or 11 in total. Thus it requires 250 years to get a reasonable estimate (within 12%) of only the frequency of extreme rainfall. There is no possibility of detecting any change in this frequency, as would be needed to demonstrate “climate change”.
Indeed, a human lifespan is insufficient even to detect the frequency of the extreme events. In successive 60-year periods, there are 2, 4, 2 and 2 events, an average of 2.5 events with a standard deviation of 1.0. There is thus a 95% chance of seeing between 0.5 and 4.5 extreme events in 60 years, where 3 (5% of 60) are expected. Several lifetimes are necessary to determine the frequency with any accuracy, and many more to determine any change in the frequency.
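The counting problem can be demonstrated with synthetic data. The sketch below uses made-up rainfall numbers (not the England and Wales record) but shows the same behaviour: roughly 5% of years fall outside ±2 standard deviations, yet the count in any single 60-year window varies widely:

```python
import random
import statistics

# Synthetic annual series: about 5% of years fall outside +/-2 sigma,
# but the count in any one 60-year window is highly variable.

random.seed(42)
years = 240
series = [random.gauss(900.0, 120.0) for _ in range(years)]  # made-up rainfall, mm
mu = statistics.mean(series)
sd = statistics.stdev(series)
extreme = [abs(x - mu) > 2 * sd for x in series]

print(f"extremes in {years} years: {sum(extreme)} (about 11 expected)")
for start in range(0, years, 60):
    print(f"years {start}-{start + 59}: {sum(extreme[start:start + 60])} extremes")
```

Re-running with different seeds shows the window-to-window counts scattering just as the observed 60-year counts do.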
It is known to have been warming for at least 150 years. If warming had resulted in more extreme weather, it might have been expected that there would be some evidence for an increase in extreme events over that period. The popular press certainly tries to be convincing when an apparently violent storm arises. But none of the climatological indicators that have data going back at least 100 years show any sign of an increase in the frequency of extreme events.
For instance, there have been many claims that tropical cyclones are increasing in their frequency and severity. The World Meteorological Organisation reports: “It remains uncertain whether past changes in tropical cyclone activity have exceeded the variability expected from natural causes.”
It is true that the damage from cyclones is increasing, but this is not due to more severe weather. It is the result of there being more dwellings, and each dwelling being more valuable, than was the case 20 or more years ago. Over a century of data was carefully analysed to reach this conclusion. The IPCC report on extreme events agrees with this finding.
Severe weather of any kind is most unlikely to make any part of our planet uninhabitable – that includes drought, severe storms and high winds. In fact, this is not too surprising – humanity has learned how to cope with extreme weather, and human beings occupy regions from the most frigid to the most scalding, from sea level to heights where sea-level-dwellers struggle for breath. Not only are we adaptable, but we have also learned how to build structures that will shield us from the forces of nature.
Of course, such protection comes at a cost. Not everyone can afford the structures needed for their preservation. Villages are regularly flattened by storms that would leave most modern cities undamaged. Flood control measures are designed for one-in-a-hundred-year events, and they generally work – whereas low-lying areas in poor nations are regularly inundated for want of suitable defences.
Indeed, it is a tribute to the ability of engineers to protect against all manner of natural forces. For instance, the magnitude 9 Tōhoku earthquake of 2011 (which caused the tsunami that destroyed the reactors at Fukushima) caused little physical damage to buildings, whereas earlier that year the “mere” magnitude 6.3 earthquake in Christchurch, New Zealand, toppled the cathedral, which was not designed to withstand earthquakes.
We should not fear extreme weather events. There is no evidence that they are any stronger than they were in the past, and most of us have adequate defences against them. Of course, somewhere our defences will fail, but that is usually because of a design fault by man, not an excessive force of Nature. Here, on the fourth step of our journey, we can clearly see the climate change hypothesis stumble and fall.
In the same way, most of the other scare stories about “climate change” fail when tested against real data. Polar bears are not vanishing from the face of the earth; indeed, the International Union for the Conservation of Nature can detect no change in the rate of loss of species over the past 400 years. Temperature has never been a strong determinant of the spread of malaria – lack of public health measures is a critical component in its propagation. Species are migrating, but whether temperature is the driver is doubtful – diurnal and seasonal temperature changes are so huge that a fractional change in the average temperature is unlikely to be the cause. Glaciers are melting, but the world is warmer, so you would expect them to melt.
There remains one last question – will the seas rise and submerge our coastlines?
First it needs to be recognized that the sea level is rising. It has been rising for about the past 25 000 years. However, for the past 7 millennia it has been rising more slowly than at any earlier time:
The critical question is whether the observed slow rate of rise has increased as a result of the warming climate. There are several lines of evidence that it has not. One is the long-term data from tide gauges. These have to be treated with caution because there are areas where the land is sinking (such as the Gulf of Mexico, where the silt carried down the Mississippi is weighing down the crust), and others where it is rising (such as much of Scandinavia, relieved of a burden of a few thousand metres of ice about 10 000 years ago). A typical long-term tide gauge record is New York:
The 1860-1950 trend was 2.47-3.17mm/a; the 1950-2014 trend was 2.80-3.42mm/a, both at a 95% confidence level. The two trends are statistically indistinguishable. There is <5% probability that they might show any acceleration after 1950.
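The comparison amounts to checking whether the two 95% confidence intervals overlap (a conservative check rather than a formal significance test); the figures are those quoted above:

```python
# Two trend estimates are statistically indistinguishable at the 95%
# level when their confidence intervals overlap (conservative check).
early = (2.47, 3.17)   # mm/a, 1860-1950, 95% CI
late = (2.80, 3.42)    # mm/a, 1950-2014, 95% CI

def midpoint(ci):
    """Central estimate of a (low, high) confidence interval."""
    return sum(ci) / 2

overlap = max(early[0], late[0]) <= min(early[1], late[1])
print(f"1860-1950: {midpoint(early):.2f} mm/a, 1950-2014: {midpoint(late):.2f} mm/a")
print(f"95% intervals overlap: {overlap}")
```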
Another line of evidence comes from satellite measurements of sea level. The figure below shows the latest available satellite information – it only extends back to 1993. Nevertheless, the 3.3±0.3 mm/a rise in sea level is entirely consistent with the tide gauge record:
Thus several lines of evidence point to the present rate of sea level rise being about 3 mm/a, or 30 cm per century. Our existing defences against the sea have to deal with diurnal tidal changes of several metres, and low-pressure-induced storm surges of several metres more. The average height of our defences above mean sea level is about 7 m, so a further 0.3 m of sea level over the next century would make little difference to the occasional wave that overtops the barrier.
The IPCC predicts that the sea level will rise by between 0.4 and 0.7m during this century. Given the wide range of the prediction, there is a possibility they could be right. Importantly, even a 0.7m rise is not likely to be a disaster, in the light of the fact that our defences are already metres high – adding 10% to them over the next 80 years would be costly, but we would have decades to make the change, and should have more than adequate warning of any significant increase in the rate of sea level rise.
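The arithmetic behind that judgement is straightforward; the 7 m defence height is the rough figure assumed above:

```python
# Current rate of rise projected over a century, against the ~7 m
# average defence height assumed in the text.
rate_mm_per_year = 3.3
century_rise_m = rate_mm_per_year * 100 / 1000
defence_height_m = 7.0

print(f"century rise at current rate: {century_rise_m:.2f} m")
for projected_m in (century_rise_m, 0.4, 0.7):  # current rate and IPCC range
    fraction = projected_m / defence_height_m
    print(f"a {projected_m:.2f} m rise is {fraction:.0%} of the defence height")
```

Even the top of the IPCC range amounts to one-tenth of the assumed defence height.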
To conclude, our five steps have shown:
· The combustion of ever-increasing quantities of fossil fuel has boosted the carbon dioxide concentration of the atmosphere.
· The physical impact of that increase is not demonstrable in a scientific way. There may be some warming of the atmosphere, but at present any warming is almost certainly hidden in the natural variation of temperatures.
· There is no significant evidence for any increase in the frequency or magnitude of extreme weather phenomena, or for climate-related changes in the biosphere.
· Any sea level rise over the coming century is unlikely to present an insuperable challenge.
Attempts to influence global temperatures by controlling carbon dioxide emissions are likely to be both futile and economically disastrous.





Excellent article.
Anthony / Philip … “Nevertheless, the 3.3±0.3mm/a rise in sea level is entirely consistent with the tide gauge record:”
Should that be ” 3.3±0.3mm/YEAR rise ” ???
3.3 +/- 0.3 mm/a – that a is annum, my interpretation.
Never seen it used that way, but, makes sense !! And no, I am not going to correct YOUR sticky fingers !! LOL
What an extremely concise and clear summary. Excellent.
The Goddard Institute for Space Studies is NOT responsible for the GHCN. GISS uses already adjusted GHCN data as its principal input. See GHCN to see more, and see who is actually responsible for GHCN.
I’m posting at the moment on GHCN and Gistemp adjustments. I’ve just made available the post I’m working on Self-adjusting the Adjustments although it is not yet completed, in order to refer to it here. You will also find numerous examples of GHCN and USHCN adjustment volatility in posts on my blog since last summer. The current incomplete post is intended to summarise this volatility and discuss its effect on the GHCN and Gistemp surface temperature records. I hope to complete the post in the next few days.
Wow, just WOW. A suggestion: why not make this excellent, excellent essay available to the global network of skeptic sites to ensure it has the greatest possible distribution?
That would assume they care a single smidgen about the truth.
Sorry, wrong place for this comment.
Actually, the models have an essential task in CAGW. If one applies the equations in a naive manner, one gets a temperature increase of a little more than one degree C. per doubling of CO2. That isn’t nearly scary enough and most people would even agree that it would be beneficial. To get a scary warming of 4 degrees C. it is necessary to have positive feedback and, if possible, a tipping point.
It is a ‘big deal’ that the models are basically incompetent. None of the data demonstrates positive feedback. Unless the alarmists can demonstrate that the models are competent, CAGW has no scientific credibility (not that it seems to matter).
The problem is that the modellers are barking up the wrong tree. Edward Lorenz was one of the fathers of climate modelling and arguably the most influential meteorologist in history, having laid the foundations of chaos theory. Here’s a quote:
In other words the model has to be based entirely on the physics. It can’t be based on curve fitting. Ah but the models are tuned to match the historical climate. Oops. link
This post makes a ton of errors, but the one that stands out the most to me is:
No details are given for this “study,” but being familiar with ice core data lets me know this isn’t a fair or accurate summary of things. That it says “the ice core record” suggests this is built upon one ice core, or maybe even a few ice cores, but not that it is representative of any sort of global or even hemispheric record. All sorts of problems commonly found in temperature reconstructions almost certainly exist in this “study” as well. There is simply no way to generate results like this post describes for the planet.
It seems to me people are simply accepting this result because they like it. Had this “study” found results they didn’t like, I wager both the author of this post and most of the commenters here would say it was garbage and completely invalid. The data simply doesn’t exist to support this sort of conclusion.
“Sources of uncertainty in ice core data” Eric J. Steig
No need for the inverted commas – see Lloyd, Philip J., “An estimate of the centennial variability of global temperatures”, Energy & Environment, 26(3), pp. 417–424, 2015. DOI: 10.1260/0958-305X.26.3.417
The not-so-hidden agenda underneath it all is the constant caterwauling about coal-fired power generation and the obvious indifference to all the other sources.
FTA:
“So the evidence that fossil fuel burning is the underlying cause of the increase in the CO2 in the atmosphere is really conclusive.”
Because it happens that the 13C drop is consistent with a rationalization of what should happen when fossil fuels are burned?
Hardly. If that were enough to go on, then the rationalization that temperatures rising is consistent with increasing atmospheric CO2 would be conclusive as well.
Neither are. In fact, the rate of change of CO2 is virtually a perfect match with temperatures, and this indicates that human inputs have little impact on overall atmospheric concentration.
That the CO2 concentration apparently began to rise a century or so before human emissions took off (and that data for the 12C to 13C ratio is confined to the satellite era) remains a bit of a puzzle but it’s a minor quibble to an overall incisive summary IMO.
Only consistent, Ferdinand. There are more things in heaven and Earth than are dreamt of in your philosophy.
“…which can be as good provided by 90% human, 10% temperature”
Not really as good, and only through a very contrived modeling effort, really little more than a complicated means of placing the data points where you want them. It’s like recutting the pieces of a jigsaw puzzle to make them fit, but the lines of the picture don’t mesh.
There is really no doubt about it, Ferdinand. Watch what happens when temperature anomaly starts to drop.
Bart,
A good correlation between temperature variability and CO2 variability does not mean that the slope in CO2 is caused by temperature. The slope comes from an entirely different process than the one causing the variability: the latter is proven to be the response of (tropical) vegetation, while the slope in CO2 from vegetation is negative – it is a small, but growing, sink for CO2.
My fit is entirely based on observations: the transient response of vegetation and oceans to short and longer term temperature changes and the net sink rate which is directly proportional to the extra CO2 pressure in the atmosphere above steady state per Henry’s law.
“The slope is entirely from a different process than what causes the variability…”
Nonsense. There is no indication of it whatsoever. You are just guessing.
Your fit is based on shoe-horning the data into your preconceived model. It is nothing but wishful thinking.
Bart,
If you don’t understand that the opposite CO2 and δ13C changes prove that the main reaction to temperature variability is from vegetation, nor that the oxygen balance and satellites prove that the earth is greening, then there is no discussion possible.
My model is based on observations, yours is just “curve” fitting of two straight slopes…
Bart,
Not only consistent. There are only two sources of low-13C on earth: fossil organics and recent organics (with one exception, inorganic methane may be formed in the deep earth crust, but that is an aside).
All inorganic CO2 has a 13C/12C ratio (δ13C as measurement standard) which is higher than in the atmosphere. That means that any extra release of CO2 (or even throughput or increase in cycle) from oceans, volcanoes, carbonate rock weathering,… will increase the δ13C level in the atmosphere. These are therefore not the main causes of the CO2 increase in the atmosphere, or the δ13C level would have increased, not decreased…
We see a firm drop of ~1.6 per mil in δ13C level in the atmosphere (and in the ocean surface and vegetation) in exact ratio with human emissions since ~1850. Not only in ice cores, but also in coralline sponges and fallen leaves over the past 160 years, while the previous levels in the whole Holocene didn’t vary with more than +/- 0.2 per mil (ice cores). Coralline sponges also show not more than +/- 0.2 natural variability between 1400 and 1850. These have a resolution of 2-4 years, far better than ice cores.
Thus the oceans can’t be the cause of the increase. Still, the biosphere could be the cause. But as the oxygen balance shows (and satellites confirm), the biosphere is a net source of oxygen, thus a net sink for CO2 – and preferentially 12CO2 – and so it is also not the cause of the 13C/12C ratio decline. The earth is greening…
You can wave this evidence away, but if you have one big, increasing, source of low-13C and the decrease in the atmosphere exactly follows the expected δ13C decline in ratio of that source and the CO2 increase in the atmosphere is in exact ratio with that source, then you need a lot of better arguments than the arbitrary fit of two trends (which can be as good provided by 90% human, 10% temperature) to prove that humans are not to blame…
“that humans are not to blame…”
If humans do a preferable thing why are they to blame?
Rainer,
You are right, no “blame” here, only in the heads of the climate activists…
Yes, Bart, this is embarrassingly shallow analysis on the part of the author. (right or wrong, he should have at least gone a little deeper into it…) Reading Ferdinand’s well written 2010 piece a while back, i came away with three points about the C13 argument that seemed weak. The first and most obvious was given by several commentors. The atmospheric decrease in the C13 ratio may well be an indicator of one thing only, that we are burning fossil fuels. The second point was also mentioned by a commentor or two. Natural warming before the industrial revolution produced lower C13 ratios, not higher. And lastly, one point that i figured out on my own. The oceans emit half of all co2 and there’s no reason why they shouldn’t be driving the C13 ratio higher as is…
Dr Spencer was always of the mindset that plankton could produce the same C13 ratio fingerprint as fossil fuels. (With a warming ocean there is less plankton, thus more C12…)
Fonzie,
The first point is already quite convincing: both CO2 increase and δ13C decrease are in exact ratio to human emissions over at least the past 55 years. There is very little variability over the whole Holocene in δ13C (+/- 0.2 per mil) and suddenly with the industrial revolution the δ13C levels start to drop, nowadays already 1.6 per mil below pre-industrial.
Indeed the δ13C drop can be simple dilution, but that implies that the natural addition must be about three times human emissions, or a total increase in the atmosphere of four times the human addition. What is measured is an increase of only half the human addition. Doesn’t sound good for the dilution theory…
It would take a lot of coincidence for some natural low-13C cycle to have increased in complete lockstep with human emissions, which increased fourfold in the past 55 years. There is zero evidence that the natural carbon cycle(s) substantially increased in the past 55 years: recent residence time estimates are even slightly longer than older ones…
The second point, that natural warming before the industrial revolution produced lower δ13C levels, may be true (reference?), but over the whole Holocene the variability was not more than +/- 0.2 per mil.
there’s no reason why they shouldn’t be driving the C13 ratio higher as is
Yes the (deep) oceans cycle will increase the current δ13C level of the atmosphere, as before the industrial revolution that was in equilibrium with plant growth at around -6.4 per mil. It removes about 2/3 of the human “fingerprint”, but not all of it. Still a firm drop visible…
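The “thinning” of the isotopic fingerprint described above can be sketched with a first-order, two-component mixing calculation. All inputs are round-number assumptions, not measured values from this thread: ~590 GtC of pre-industrial atmospheric carbon at -6.4 per mil, ~400 GtC of cumulative fossil emissions at about -24 per mil.

```python
# Two-component isotope mixing: what would pure, one-way dilution of the
# pre-industrial atmosphere with fossil carbon do to delta-13C?
# All inputs are round-number assumptions, not measured values.

M_atm = 590.0   # GtC, pre-industrial atmospheric carbon (assumed)
d_atm = -6.4    # per mil, pre-industrial atmospheric delta-13C
M_ff = 400.0    # GtC, cumulative fossil-fuel emissions (assumed)
d_ff = -24.0    # per mil, average fossil-fuel delta-13C (assumed)

# Mass-weighted mixing, as if every emitted molecule stayed in the air
d_mix = (M_atm * d_atm + M_ff * d_ff) / (M_atm + M_ff)
pure_dilution_drop = d_mix - d_atm      # about -7.1 per mil

observed_drop = -1.6                    # per mil, from the thread
fraction_retained = observed_drop / pure_dilution_drop

print(f"pure-dilution drop:   {pure_dilution_drop:.1f} per mil")
print(f"observed drop:        {observed_drop:.1f} per mil")
print(f"fingerprint retained: {fraction_retained:.2f}")
# Only roughly a quarter of the isotopic fingerprint remains in the air;
# the rest has been exchanged away into the oceans and biosphere,
# broadly in line with the "about 2/3 removed" figure quoted above.
```

With these assumed inputs, pure dilution over-predicts the δ13C drop by a factor of four or so, which is the sense in which deep-ocean exchange “removes” most of the human fingerprint without erasing it.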
Dr. Spencer is not completely right: plankton is somewhat in between the C3 and C4 cycles of CO2 uptake, thus its δ13C is less reduced than that of most land plants. Despite that, the ocean surface is around +1 per mil, going up to +5 per mil where abundant bio-life is present, while the deep oceans are around zero per mil. Thus the oceans can’t be the cause of the δ13C drop in the atmosphere…
“That is not to say that the Earth has not warmed over the last couple of centuries. Glaciers have retreated, snow-lines risen. There has been warming, but we do not know by how much.”
An ice cube on my kitchen counter continues to melt even if I turn the air conditioner all the way down.
Just sayin’.
Thanks Philip. Clear and concise.
Hey, what about Ernst Beck’s 30,000 chemical CO2 bottle readings? Direct chemical measurements of CO2 showed that CO2 was significantly higher than now during three periods of the last 200 years. It was over 550 ppm as recently as the 1940s. Using that misleading and data-deficient graph to plot anything earlier than 1950 is lazy.
H7, while you are correct about Higley’s data, it is unfortunately suspect in terms of being representative, especially the high measured values. CO2 can vary a great deal locally near the surface (where all those bottles were) and diurnally, thanks to things like photosynthesis, soil composition/dampness and microbial composition… Higher in the atmosphere, though, it is supposedly well mixed.
I have personally measured CO2 before 9 AM in the north end of Ulaanbaatar at 800 ppm, rising to 1100 briefly about 11 AM then dropping rapidly as the accumulated CO2 from the night’s burning rose up the hillsides in the faint morning breeze. We measure background CO2 to subtract it from combustion experiments. The city is in a valley and there is a nearly nightly inversion in winter. The source is domestic coal combustion (not power stations).
The cautions about the chemical measurements are not based on the idea that the measurements were in error, only that the location was influenced by local combustion.
higley7,
Sorry, see my response to Richard Greene
The historical measurements were fairly accurate (+/- 10 ppmv), but most were taken too near huge sources and sinks to give any insight into the real “background” CO2 levels of that time…
Ice cores are quite accurate in CO2 level, but these are averaged over 10-600 years, depending on local snow accumulation rates. Neither ice cores nor any CO2 or δ13C proxy like stomata data or coralline sponges show anything special around the 1942 “peak” in the late Beck’s data, only a steady increase in CO2 and a steady decline in δ13C. Which simply means that there was no such peak.
Sorry about the data deficiency – you can find the data in AR4
Solar basis for everything:
1) Since 1650 the Sun has essentially been on average increasing in output as indicated by the Sunspot record.
2) Solar Cycle 24 is the smallest in the recent record [last 100 years; 1913 was about equal].
3) The Earth has warmed for the last 300 years.
4) The oceans, as an integral of energy storage, are the warmest ever.
5) As solar output decreases, based on the Solar Flux Proxy [energy actually reaching the Earth], we will have warm oceans and cool land masses. This is due to the land masses not retaining as much heat as the oceans.
6) Since the El Nino/La Nina are both about stored ocean heat [cold], reduced Solar output means less Trade Winds and a reduced Hadley Cell.
7) Reduced Hadley Cell means that the Jet Streams will move further South in the North and further North in the South. Basically, the cold will arrive.
8) Calculations show a drop of about 0.1C per 2 years.
9) In 10 years, the Global temperature could be down 0.5C.
“… There is a hope that, while the models may not be perfect, if different people run different tunings at different times, a reasonable range of predictions will emerge, from which some idea of the future may be gained…”
This is a feeble emulation of a Monte Carlo method. You might call it the Mountebanko method.
Objective analysis (not funded by government grants or energy companies) reveals that change to the amount of CO2 in the atmosphere (and thus burning fossil fuels) has no significant effect on climate.
A conservation of energy equation, employing the time-integral of sunspot number anomalies (as a proxy) and a simple approximation of the net effect of all ocean cycles achieves a 97% match with measured average global temperatures since before 1900. Including the effects of CO2 improves the match by 0.1%. http://globalclimatedrivers.blogspot.com
Open your lens out to “when we live” and the actuarial calculus becomes even more interesting.
Let us assume that we now live in the Anthropocene. We will do this by acceding to the insulating capacity of greenhouse gases as described by the IPCC et al. We do this because it actually reverses the case for removing GHGs.
How?
Because if we now live in the Anthropocene, then wouldn’t that mean that the Holocene is over? In fact, it should be, given the Holocene’s present 11,719-year age. So we are living in the Anthropocene extension of Holocene interglacial warmth.
“We will illustrate our case with reference to a debate currently taking place in the circle of Quaternary climate scientists. The climate history of the past few million years is characterised by repeated transitions between `cold’ (glacial) and `warm’ (interglacial) climates. The first modern men were hunting mammoth during the last glacial era. This era culminated around 20,000 years ago [3] and then declined rapidly. By 9,000 years ago climate was close to the modern one. The current interglacial, called the Holocene, should now be coming to an end, when compared to previous interglacials, yet clearly it is not. The debate is about when to expect the next glacial inception, setting aside human activities, which may well have perturbed natural cycles.” http://arxiv.org/pdf/0906.3625.pdf
For brevity I include just one such reference. So, widening the lens: were it not for the avowed prowess of GHGs to sustain interglacial warmth, what climate state is left?
Why, the cold glacial state, of course.
So in terms of insurance, we may have taken out the only policy that can delay or offset glacial inception, GHGs.
Now, who wants to cancel that policy?
An excellent clear paper. Well done. And great comments above.
If we used normal graphs rather than anomaly graphs, we would highlight that world temperature has risen just 0.8 degrees C since 1880. It is not unusual to have 5 or 10 degrees of variation over a single day, or by moving a few hundred km away.
Most land-based measurements have been tampered with, and the only reliable temperature measure has been the satellite record since 1979, which agrees with balloon measurements. The past 18 years 8 months show no rise, despite humans emitting 25% of all CO2 emissions in that time. Weather stations are now located at airports near engines and air conditioners. Many of the old stations in remote cold areas have been closed. The warming measured has been mostly in warmer minimum (i.e. night) temperatures, which result from urban heat island effects.
The constant stream of claimed disasters is proved false again and again.
Governments get massive tax revenue from fuel levies on gasoline. Nobody has thought about how to replace this government revenue when we all change to electric cars. Currently this tax is about 40% of gasoline cost in Australia. Wait for the protests when electric cars have to pay for road costs!!
The warmists’ movement is overtly political. If they were serious they would stop mass air travel and adopt hydro and nuclear power.
Where is the problem?
The good professor had best be careful, the witch hunt is going full bore ahead. The socialists are seeking all dissenters so as to burn them at the stake.
Looks like Cape Town had a temperature highpoint in the 1930’s, according to the raw data.
We can include Cape Town in the “worldwide” heatwave of the 1930’s.
Some people claim the extreme weather in the 1930’s was not a worldwide phenomenon, but the data seems to say otherwise.
Fortunately isotopes come to our aid. […] Fossil fuels primarily come from a time before C4 chemistry had evolved, so they are richer in 12C than today’s biomass.
===========
Unfortunately, they don’t…
* * * * * * * * *
During the 1950’s, increasingly numerous measurements of the carbon isotope ratios of hydrocarbon gases were taken, particularly of methane; and too often assertions were made that such ratios could unambiguously determine the origin of the hydrocarbons. The validity of such assertions were tested, independently by Colombo, Gazzarini, and Gonfiantini in Italy and by Galimov in Russia. Both sets of workers established that the carbon isotope ratios cannot be used reliably to determine the origin of the carbon compound tested.
Colombo, Gazzarini, and Gonfiantini demonstrated conclusively, by a simple experiment the results of which admitted no ambiguity, that the carbon isotope ratios of methane change continuously along its transport path, becoming progressively lighter with distance traveled. Colombo et al. took a sample of natural gas and passed it through a column of crushed rock, chosen to resemble as closely as possible the terrestrial environment.27 Their results were definitive: The greater the distance of rock through which the sample of methane passes, the lighter becomes its carbon isotope ratio.
The reason for the result observed by Colombo et al. is straightforward: there is a slight preference for the heavier isotope of carbon to react chemically with the rock through which the gas passes. Therefore, the greater the transit distance through the rock, the lighter becomes the carbon isotope ratio, as the heavier is preferentially removed by chemical reaction along the transport path. This result is not surprising; contrarily, such is entirely consistent with the fundamental requirements of quantum mechanics and kinetic theory.
[…]
Galimov demonstrated that the carbon isotope ratio of methane can become progressively heavier while at rest in a reservoir in the crust of the Earth, through the action of methane-consuming microbes.28 The city of Moscow stores methane in water-wet reservoirs on the outskirts of that city, into which natural gas is injected throughout the year. During summers, the quantity of methane in the reservoirs increases because of less use (primarily by heating), and during winters the quantity is drawn down. By calibrating the reservoir volumes and the distance from the injection facilities, the residency time of the methane in the reservoir is determined. Galimov established that the longer the methane remains in the reservoir, the heavier becomes its carbon isotope ratio.
The reason for the result observed by Galimov is also straightforward: In the water of the reservoir, there live microbes of the common, methane-metabolizing type. There is a slight preference for the lighter isotope of carbon to enter the microbe cell and to be metabolized. The longer the methane remains in the reservoir, the more of it is consumed by the methane-metabolizing microbes, with the molecules possessing lighter isotope being consumed more. Therefore, the longer its residency time in the reservoir, the heavier becomes the carbon isotope ratio, as the lighter is preferentially removed by methane-metabolizing microbes. This result is entirely consistent with the fundamental requirements of kinetic theory.
Furthermore, the carbon isotope ratios in hydrocarbon systems are also strongly influenced by the temperature of reaction. For hydrocarbons produced by the Fischer-Tropsch process, the δ13C varies from -65‰ at 127 C to -20‰ at 177 C.29, 30 No material parameter, the measurement of which varies by almost 70% with a variation of temperature of only approximately 10%, can be used as a reliable determinant of any property of that material.
The δ13C carbon isotope ratio cannot be considered to determine reliably the origin of a sample of methane, or any other compound.
http://www.gasresources.net/disposal-bio-claims.htm
* * * * * * * * *
Khwarizmi,
You are right about methane and hydrocarbons, but emissions inventories take into account the different δ13C levels of the sources. That gives an average of about -25 per mil.
Not extremely important, it may be average -20 or -30 per mil, but what is important is that it is the only known huge source of low-13C. Oceans and the bio-sphere as a whole are both sources of high-13C…
Quote *7 earthquake in Wellington, New Zealand, toppled the cathedral,*
It was Christchurch, not Wellington.
Re: Ferdinand Engelbeen, 4/8/16 at 1:56 pm.
Ferd,
¶¶1, 3, & 5) You said, Quite a lot of remarks, but with very little basis…, but everything I wrote is supported. I decided to show only about as much support as the author did. The same is true of your response where you show an irrelevant, unsourced graph of atmospheric CO2 and temperature anomaly on two different ordinates under the response That simply is not true:. What I was talking about specifically was IPCC’s Fingerprint chart, AR4, Figure 2.3.
http://www.rocketscientistsjournal.com/2010/03/_res/AR4_F2_3_CO2p138.jpg
Here IPCC demonstrates its fingerprint conjecture by graphing CO2 mixing ratio antiparallel to O2 depletion and Global emissions parallel to the C13 mixing ratio. Like scientist Henry Lee said when he drew a size 14 outline around some blood spatter, Sumpin’ wrong here. The so-called fingerprints exist because the graphs are scaled and adjusted to give the desired appearance.
You say,
there are more than 70 CO2 monitoring stations all over the oceans, of which 10 are run by NOAA; the rest are maintained by different people from independent organizations in different countries. The only calibration done (nowadays by NOAA) is of the calibration gases, used to calibrate all measurement devices all over the world. But even so, other organizations like Scripps still make their own calibration gases…
As IPCC readily admits, it reviews and assesses the most recent scientific, technical and socio-economic information produced worldwide relevant to the understanding of climate change. It does not conduct any research nor does it monitor climate related data or parameters.
Furthermore, it says MLO data constitute the master time series documenting the changing composition of the atmosphere. … Later observations of parallel trends in the atmospheric abundances of the 13CO2 isotope and molecular oxygen (O2) uniquely identified this rise in CO2 with fossil fuel burning. References deleted (everywhere), AR4, ¶1.3.1.
It says, Careful calibration and cross-validation procedures are necessary … . TAR, ¶2.3.2. And
CO2 has been measured at the Mauna Loa and South Pole stations since 1957, and through a global surface sampling network developed in the 1970s that is becoming progressively more extensive and better inter-calibrated. Bold added, TAR, ¶3.5.1, p. 205. In case anyone found intercalibration unclear, it further refers to calibration procedures within and between monitoring networks. TAR, ¶3.5.3.
You claim that the only calibration done … nowadays is intrastation. Without some evidence that IPCC sources abandoned the necessary and progressively more extensive station inter-calibrations, your claim is most dubious.
¶2. The Revelle Factor was a failure — twice. It failed first in 1957 when Revelle and Suess introduced their fantastic parameter, but they admit in that same report that they could not make the data fit their desired result.
It might be tempting to assume that the effects found in the samples investigated and their individual variations are due to local contamination of air masses by industrial CO2 and that the world-wide decrease in the C14 activity of wood is practically zero. This, however, implies a too fast exchange rate and is inconsistent with the lower limit of τ(atm) given above. … An exchange time of 7 years, however, makes it necessary to assume unexpectedly short mixing times for the ocean. Revelle & Suess (1957) p. 23.
So in conclusion, they say
Present data on the total amount of CO2 in the atmosphere, on the rates and mechanisms of CO2 exchange between the sea and the air and between the air and the soils, and on possible fluctuations in marine organic carbon, are insufficient to give an accurate base line for measurement of future changes in atmospheric CO2. Revelle & Suess (1957), p. 26.
Once armed with gigatons of data, IPCC tried to rehabilitate the Revelle Factor; it uncovered Henry’s Law instead. This is revealed in IPCC’s Figure 7.3.10 of the AR4 Second-Order Draft. Inset (a) is the alleged temperature dependence of the Revelle Factor, but it is readily recognized as a scaling of Henry’s Coefficient for CO2 in water:
http://www.rocketscientistsjournal.com/2007/06/_res/F7-3-10.jpg
What IPCC published in order not to confuse the reader was a version of that figure with the temperature dependence, part (a), deleted. AR4, Figure 7.11:
http://www.rocketscientistsjournal.com/2007/06/_res/F7-11.jpg
The Revelle Factor is a failed conjecture, off the bottom of the scale of scientific models. For a full discussion, see
http://www.rocketscientistsjournal.com/2007/06/on_why_co2_is_known_not_to_hav.html
¶4. You say, Henry’s law predicts not more than 16 ppmv/°C for the change in steady state between ocean surface and atmosphere, with neither a calculation nor a citation. Clearly IPCC didn’t say that, because it relies on Henry’s law for nothing. And what is claimed by other sources cannot substitute for what the owner says about AGW. Furthermore, should the concentration of atmospheric CO2 not correspond to Henry’s Law, then climatology would have managed to disprove a law of physics.
What Henry’s Law teaches is the great capacity of the ocean to regulate atmospheric CO2. The ocean is a massive reservoir for CO2, about 6,000 times as large as the annual CO2 emissions attributed to man. Try applying Henry’s Law to the MOC/THC circulation of 15 to 50 Sv, from bottom water at 0 to 4C, saturated with CO2, brought to the surface to be warmed to as much as 35C. You should find a huge flux potential for CO2 outgassing, one several times as large as the 92.8 GtC/yr IPCC credits to ocean outgassing. And following that, the CO2-depleted surface ocean recharges to capacity as the current moves slowly toward one pole or the other according to the season.
¶6. You say, Human emissions precede its effects in the atmosphere for every observation:, citing your own blog, but that is not a conclusion in any way recognized by IPCC.
The Revelle Factor was supposed to settle questions about Callendar’s conjecture:
CALLENDAR (1938, 1940, 1949) believed that nearly all the carbon dioxide produced by fossil fuel combustion has remained in the atmosphere, and he suggested that the increase in atmospheric carbon dioxide may account for the observed slight rise of average temperature in northern latitudes during recent decades. He thus revived the hypothesis of T. C. CHAMBERLIN (1899) and S. ARRHENIUS (1903) that climatic changes may be related to fluctuations in the carbon dioxide content of the air. … [¶]Subsequently, other authors have questioned Callendar’s conclusions… . R&S p. 18.
But that Callendar Effect, as the Greenhouse Effect was once called, also failed at launch:
Sir George Simpson expressed his admiration of the amount of work which Mr. Callendar had put into this paper. It was excellent work. It was difficult to criticise it, but he would like to mention a few points which Mr. Callendar might wish to reconsider. In the first place he thought it was not sufficiently realised by non-meteorologists who came for the first time to help the Society in its study, that it was impossible to solve the problem of the temperature distribution in the atmosphere by working out the radiation. The atmosphere was not in a state of radiative equilibrium, … . Callendar (1938) p. 237.
None of that mattered. Revelle & Suess plowed ahead, kicking off the First International Geophysical Year with a pitch for funds. Their single contribution was a catchy slogan:
Thus human beings are now carrying out a large scale geophysical experiment of a kind that could not have happened in the past nor be reproduced in the future. Id., p. 19.
Nothing in climate is in thermodynamic equilibrium, the only kind of equilibrium worth having. Nevertheless, discussions on climate persist like chaotic reverberations from some gigantic cosmic event. AGW, once a conjecture, belongs in the dustbin of pseudoscientific speculations as the single most expensive example.
Jeff,
About the graphs: I know, you are very aware of scales. In this case, the scales are not even important; the important point is that the oxygen and CO2 trends are antiparallel, and so are the CO2 and δ13C trends. That can be calculated too, and it proves that fossil fuels are at the base. The graphs are only an illustration of the observations.
Careful calibration and cross-validation procedures are necessary
I don’t think anyone would trust blood samples analyzed by a lab that is NOT calibrated and cross-validated with standards for any such lab. The calibration and cross-validation is for the calibration gases and equipment only, not for the data, except for any corrections needed when problems with the calibration gases or equipment are found.
Even so Scripps (and the Japanese) have their own calibration gases, equipment and flask samples independent of NOAA and find the same CO2 levels +/- 0.2 ppmv at Mauna Loa.
The Revelle Factor was a failure
Not that I know of… It is basic ocean chemistry, though in Revelle’s time the observation period was too short to give the necessary data to confirm the chemistry.
Since then over 3 million ocean samples have confirmed the existence of the Revelle factor.
It is simple to show that it exists: The increase in DIC (total dissolved inorganic carbon) in the ocean surface of longer time series is only ~10% of the increase in the atmosphere. Here for Bermuda:
http://www.biogeosciences.net/9/2509/2012/bg-9-2509-2012.pdf
DIC increased ~1.7% since 1984, while CO2 in the atmosphere increased ~14% in the same period…
Henry’s law predicts not more than 16 ppmv/°C for the change in steady state between ocean surface and atmosphere, with neither a calculation nor a citation.
The literature gives 4-17 ppmv/°C (not my own search).
Here is the influence of temperature differences:
http://www.ldeo.columbia.edu/res/pi/CO2/carbondioxide/text/LMG06_8_data_report.doc
with the formula to convert the pCO2 at the measurement temperature to the in-situ temperature:
(pCO2)sw @ Tin-situ = (pCO2)sw @ Teq × EXP[0.0423 × (Tin-situ − Teq)]
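The quoted correction can be applied numerically as a quick sketch; the 400 µatm and 20 °C inputs below are illustrative assumptions, not values from the linked report.

```python
import math

# Temperature correction for seawater pCO2 as quoted from the data
# report: pCO2 scales as exp(0.0423 per deg C), i.e. ~4.23% per °C.
def pco2_at_temperature(pco2_eq, t_eq, t_insitu):
    """Convert a pCO2 (microatm) measured at t_eq to in-situ temperature."""
    return pco2_eq * math.exp(0.0423 * (t_insitu - t_eq))

# Illustrative: 400 microatm measured at 20 C, water actually at 21 C
corrected = pco2_at_temperature(400.0, 20.0, 21.0)
print(f"{corrected:.1f} microatm")   # about 4.3% higher than measured
```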
Moreover, the long-term historical T/CO2 ratio in ice cores is not more than 8 ppmv/°C. As that is based on the temperature change in polar air (via δ18O and δD in the ice), the global change is about 16 ppmv/°C.
Try applying Henry’s Law to the MOC/THC circulation of 15 to 50 Sv
The problem is not the massive quantities of water or Henry’s law, the problem is the extremely slow diffusion of CO2 in water. Even with the massive water flow, only 40 GtC/year is circulating between sources at the equator and polar sinks, the latter still extremely undersaturated in CO2. The difference is about 3 GtC more CO2 sink than source.
The 40 GtC/year deep ocean – atmosphere cycle is based on both the thinning of the human δ13C “fingerprint” and the decay of the 14C spike from the atomic bomb tests.
but that is not a conclusion in any way recognized by IPCC
The IPCC doesn’t need to explicitly recognize that conclusion, as they simply assume that humans are the cause of the CO2 increase, which is confirmed by every single available observation…
All alternative explanations I have heard of fail one or more observations, and thus should be discarded as not true. Some skeptics shoot themselves in the foot by insisting that humans are not the cause of the CO2 increase in the atmosphere, despite all evidence…
Ferdinand Engelbeen, 4/9/16 at 2:46 am.
Ferd,
You write,
In this case, the scales are not even important, the important note is that oxygen and CO2 are antiparallel and so are CO2 and δ13C trends. Which can be calculated too and that proves that FF are at the base.
First, a point of order. Oxygen and CO2 are antiparallel, but CO2 and δ13C are parallel. Parallel and antiparallel are terms from geometry, manifest on the charts. Because each of those four records has a substantial trend, they can be arbitrarily made parallel or antiparallel by the drafter’s choice of scales. IPCC made them so. You say they can be calculated, too. But how do you show the parallelism you claim with parameter values of differing dimensions? And why didn’t IPCC do that instead of relying on chartjunk?
You say, I don’t think anyone would trust blood samples analyzed by a lab that is NOT calibrated and cross-validated with standards for any such lab. But that is not analogous to the problem at hand. IPCC is the physician saying you and I have identical blood panels, and that this is because the good doctor calibrated them to be the same.
IPCC’s AGW narrative is based on manufactured data. IPCC inter-calibrates, not intra-calibrates, CO2 records from different stations into agreement and then asserts that atmospheric CO2 concentration is well-mixed and global, represented by the Keeling Curve.
IPCC is playing the same game today with temperature data. It’s a variation of Mann’s game with his tree rings. Their brand of science is to bring the so-called data into agreement with their narrative.
You say,
The problem is not the massive quantities of water or Henry’s law, the problem is the extremely slow diffusion of CO2 in water. Even with the massive water flow, only 40 GtC/year is circulating between sources at the equator and polar sinks, the latter still extremely undersaturated in CO2.
Henry’s Law, even with Henry’s Coefficients, doesn’t provide the rate at which diffusion occurs. You claim it is extremely slow. I claim the contrary. It is instantaneous on climate scales, which are 30 years minimum. Neither your claim nor mine is based on data, but mine jibes with observations on how long it takes to carbonate or decarbonate bottled drinks.
And why would you say polar sinks are extremely undersaturated in CO2? Oceanographers put the period of the MOC/THC at 1 millennium, and the temperature at sinking around 0C. The time is ample for Henry’s Law to take effect, which would bring the water to its maximum concentration of CO2 just as it descends to the bottom.
Re the failure of the Revelle Factor, you said, Not that I know, but then explain why it failed in Revelle’s time! What are your observations about the fact that on measurement, the Factor turned out to be Henry’s Coefficient for CO2 in water, and that IPCC concealed that discovery?
AR4 mentions the Revelle factor five times, all on one page of analysis. AR4, ¶7.3.4.2 Carbon Cycle Feedbacks to Changes in Atmospheric Carbon Dioxide, p. 531. Its source is Revelle & Suess, 1957, except that ¶7.3.4.2 is where IPCC concealed the contemporary measurements showing the Henry’s Law dependence. Now in AR5, IPCC says only this:
The capacity of the ocean to take up additional CO2 for a given alkalinity decreases at higher temperature (4.23% per degree warming; Takahashi et al., 1993) and at elevated CO2 concentrations (about 15% per 100 ppm, computed from the so called Revelle factor; Revelle and Suess, 1957). Bold added, AR5, ¶6.3.2.5.5 Processes driving variability and trends in air-sea carbon dioxide fluxes, p. 498.
IPCC has retreated back to R&S (1957), omitting the contemporary measurements and demoting the Revelle Factor with the label so-called. This is similar to IPCC’s retreat from Mann’s Hockey Stick and its obfuscation in a spaghetti graph:
https://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-6-10.html
Love is never having to say you’re sorry.
Jeff,
CO2 and δ13C are anti-parallel. The CO2 levels in the atmosphere are going up; the δ13C levels in the atmosphere (and the ocean surface) are going down. The δ13C scale of the IPCC’s graph runs negative upward…
In my opinion the graphs don’t matter at all, what matters is that the observations show that 1) human emissions are twice the increase in the atmosphere, 2) the increase is from a low-13C source, which excludes the oceans, volcanoes,… or any other inorganic CO2 source. 3) the oxygen balance shows that the low-13C source is not the biosphere, the only other main low-13C source.
These three points together show that human emissions are the main cause of the CO2 increase and δ13C decrease…
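Point (1) above is simple bookkeeping and can be sketched in a few lines. The 9.5 GtC/yr emission figure and the 2.12 GtC-per-ppmv conversion are round assumptions; the 2.15 ppmv/yr rise matches the 2012 figure quoted later in the thread.

```python
# Minimal annual carbon budget behind point (1): if human emissions are
# about twice the observed atmospheric rise, nature as a whole must be
# a net sink. Round, roughly 2012-era numbers (assumptions).
PPMV_TO_GTC = 2.12            # ~2.12 GtC per ppmv of atmospheric CO2

emissions = 9.5                         # GtC/year, fossil fuel + cement
atm_rise_ppm = 2.15                     # ppmv/year observed increase
atm_rise_gtc = atm_rise_ppm * PPMV_TO_GTC

natural_net = atm_rise_gtc - emissions  # negative = net natural uptake
print(f"atmospheric rise: {atm_rise_gtc:.1f} GtC/year")
print(f"natural net flux: {natural_net:+.1f} GtC/year (net sink)")
```

With these numbers, nature absorbs roughly half of what is emitted each year, which is the sense in which the natural cycle cannot simultaneously be the source of the increase.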
IPCC inter-calibrates, not intra-calibrates, CO2 records from different stations into agreement
Jeff, that is not true at all. NOAA (not the IPCC) calibrates the calibration gases used worldwide. That is all they do. They may manipulate their own 10 baseline stations (for what purpose?), but they have zero influence on the measurements of other organizations. Of course, if there were large discrepancies, discussions would follow about why, which is normal for any lab worldwide for any type of test. In any case, I am pretty sure that Scripps would be very happy to catch NOAA in any error or manipulation, as they were responsible for the calibration gases until NOAA took over…
One can only hope that one day temperature data are as rigorously controlled as CO2 data…
BTW, the official “global” CO2 level is the average of near ground level stations, excluding Mauna Loa.
Henry’s Law, even with Henry’s Coefficients, doesn’t provide the rate at which diffusion occurs.
The transfer rate of CO2 into seawater was measured in tanks and lakes with different wind speeds, waves, etc. Without wind, it is near zero. It is directly proportional to the pCO2 difference between atmosphere and ocean surface per Henry’s law on one side, and to the square of the wind speed on the other. See:
http://www.pmel.noaa.gov/pubs/outstand/feel2331/maps.shtml
It is instantaneous on climate scales, which are 30 years minimum
Not really: the overall sink rate of CO2 (mostly into the oceans) is quite linearly proportional to the pCO2 difference between the atmosphere and the ocean surface at the weighted-average seawater surface temperature. That gives the following e-fold decay rates for any extra CO2 in the atmosphere:
Extra pressure / net sink rate = e-fold decay rate:
2012: 110 ppmv / 2.15 ppmv/year = 51.2 years
1988 (figures from Peter Dietze): 60 ppmv / 1.13 ppmv/year = 53 years
1959: 25 ppmv / 0.5 ppmv/year = 50 years
Surprisingly linear and not that fast: some 200 years (roughly four e-folding times) to remove most of any pulse of CO2, whatever its source, and reach (un)steady state per Henry’s law again.
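As a quick check, the three decay times quoted above follow directly from dividing the extra pressure by the net sink rate (the figures are the commenter’s, taken at face value, not independently verified):

```python
# e-fold decay time = extra CO2 pressure above equilibrium / net sink rate.
# Year, extra pressure (ppmv), net sink rate (ppmv/year), as quoted above.
data = [
    (2012, 110.0, 2.15),
    (1988, 60.0, 1.13),
    (1959, 25.0, 0.5),
]

for year, extra_ppmv, sink_rate in data:
    tau = extra_ppmv / sink_rate
    print(f"{year}: e-fold decay time = {tau:.1f} years")
# 2012 -> 51.2 years, 1988 -> 53.1 years, 1959 -> 50.0 years
```

The near-constant ~50-year result across five decades is what is meant by “surprisingly linear”: the sink rate grew in step with the excess pressure.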
The time is ample for Henry’s Law to take effect
The measured pCO2 at the polar sink regions is down to 150 μatm; at the equatorial sources, up to 750 μatm. In the atmosphere it is ~400 μatm. If the exchanges were fast, the surface waters everywhere on earth would all be near 400 μatm…
Factor turned out to be Henry’s Coefficient for CO2 in water
You are completely lost on that one…
Henry’s law gives a 100% change in free CO2 in seawater for a 100 % change in the atmosphere.
The Revelle factor is quite different:
Revelle factor = (Δ[CO2] / [CO2]) / (Δ[DIC] / [DIC])
That is the instantaneous change in CO2 in the atmosphere over the instantaneous change in DIC in the ocean surface. While still a 100% change in the atmosphere gives a 100% change of free CO2 per Henry’s law, free CO2 is only 1% of all carbon species in seawater. The other 99% are bicarbonates and carbonates, which are affected too by the increase in the atmosphere via the dissociation constants, but these don’t double for a double CO2 in the atmosphere, as also H+ increases and pushes the equilibriums back to free CO2. The net result is an about 10% change in DIC for a 100% change in the atmosphere. As is proven by observations…
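A minimal numeric sketch of what a Revelle factor of ~10 implies, using the illustrative 1%/99% split of seawater carbon described above (round numbers for illustration, not measured values):

```python
# Illustrative surface-seawater carbon pools (relative units):
free_co2 = 1.0        # ~1% of DIC is free dissolved CO2
bicarb_carb = 99.0    # ~99% is bicarbonate + carbonate
dic = free_co2 + bicarb_carb

# Double atmospheric pCO2: free CO2 doubles per Henry's law...
free_co2_new = 2.0 * free_co2
# ...but total DIC rises only ~10% (the Revelle/buffer factor at work):
dic_new = 1.10 * dic

# Revelle factor = (dpCO2/pCO2) / (dDIC/DIC)
revelle = 1.0 / ((dic_new - dic) / dic)
share = free_co2_new / dic_new   # free CO2 share of the new DIC
print(f"Revelle factor: {revelle:.0f}")
print(f"Free-CO2 share of DIC after doubling: {share:.1%}")
```

The free-CO2 share moves from ~1% to ~1.8% of DIC: Henry’s law is obeyed for the gas fraction while total DIC changes much less, which is exactly the distinction being argued here.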
Ferdinand wrote: “You are completely lost on that one. Henry’s law gives a 100% change in free CO2 in seawater for a 100 % change in the atmosphere. The Revelle factor is quite different: Revelle factor = (Δ[CO2] / [CO2]) / (Δ[DIC] / [DIC]) That is the instantaneous change in CO2 in the atmosphere over the instantaneous change in DIC in the ocean surface. While still a 100% change in the atmosphere gives a 100% change of free CO2 per Henry’s law, free CO2 is only 1% of all carbon species in seawater. The other 99% are bicarbonates and carbonates, which are affected too by the increase in the atmosphere via the dissociation constants, but these don’t double for a double CO2 in the atmosphere, as also H+ increases and pushes the equilibriums back to free CO2. The net result is an about 10% change in DIC for a 100% change in the atmosphere. As is proven by observations”.
Please note that the proportionality constant for CO2 includes all DIC, not just CO2. If the solubility of CO2 changed as pH changed and as the relative concentrations of DIC shifted (which occurs naturally as the concentration of CO2 changes) then the solubility of CO2 would be dependent on the concentration, and this is obviously not the case. kH in Henry’s law remains unchanged by the concentration. It is a linear law. No matter how much CO2 you add to water the proportionality constant remains the same (of course at very high concentrations Henry’s law generally doesn’t hold up as gases start to behave non-ideally). You only need to look at a Bjerrum plot to see that the total amount of CO2 (as DIC) in water is unchanged by changes to pH and the relative concentrations of DIC. Glassman is correct; there is no Revelle Factor, and Revelle himself later basically acknowledged this, stating: “It seems therefore quite improbable that an increase in the atmospheric CO2 concentration of as much as 10% could have been caused by industrial fuel combustion during the past century, as Callendar’s statistical analyses indicate”.
Richard:
Please note that the proportionality constant for CO2 includes all DIC, not just CO2.
The proportionality by Henry’s law is for CO2 (gas) in air and water only, not for bicarbonates and carbonates. (Bi)carbonate concentrations are influenced on one side by a CO2 increase in the atmosphere and thus in solution, and on the other side by the increasing H+ from the increasing dissociation of CO2/H2CO3 into bicarbonates and carbonates. The net result is a 10% increase in DIC, even including a 100% increase in free CO2 for a 100% increase of CO2 in the atmosphere.
You only need to look at a Bjerrum plot to see that the total amount of CO2 (as DIC) in water is unchanged by changes to pH and the relative concentrations of DIC.
The Bjerrum plot only shows relative quantities of CO2/bicarbonate/carbonate for any pH level. It doesn’t show absolute levels.
At 1 bar pure CO2 the solubility in fresh water is 3.3 g/kg at 0°C which gives a pH of ~3.9 with about 99% free CO2 in solution.
http://www.engineeringtoolbox.com/gases-solubility-water-d_1148.html
If you make a saturated solution of sodium bicarbonate (baking soda) in water, that can have some 70 g/kg at the same temperature:
http://www.tatachemicals.com/europe/products/pdf/sodium_bicarbonate/technical_solubility.pdf
As CO2 equivalents, that is 37 g/kg, ten times more than for pure CO2 in water, only a matter of pH (> pH 8). That contains less than 1% free CO2, the rest is bicarbonate and a little carbonate.
Add a strong acid to the bicarbonate solution until you have the same pH as for pure CO2 and lots of CO2 bubble up, back to 3.3 g/kg remaining in solution…
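The “37 g/kg as CO2 equivalents” figure can be reproduced from molar masses alone, assuming the quoted 70 g/kg bicarbonate solubility (a back-of-envelope sketch):

```python
# Convert the saturated NaHCO3 solubility to CO2 equivalents by molar mass:
# each mole of NaHCO3 carries one mole of CO2.
M_NAHCO3 = 84.01   # g/mol, sodium bicarbonate
M_CO2 = 44.01      # g/mol, carbon dioxide

nahco3_g_per_kg = 70.0                          # quoted solubility at 0°C
co2_equiv = nahco3_g_per_kg * M_CO2 / M_NAHCO3
print(f"{co2_equiv:.1f} g/kg as CO2")           # ~36.7 g/kg, roughly ten
                                                # times the 3.3 g/kg for
                                                # pure CO2 in fresh water
```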
In every longer term series of DIC measurements the increase is around 10% of the atmospheric increase, thus proving the Revelle factor.
Revelle was not sure about his factor and had not enough data to prove it. He didn’t want to go against the “consensus” of that time that almost all human CO2 disappeared in the oceans. Nowadays we know better…
@ferdinand meeus
“If you make a saturated solution of sodium bicarbonate (baking soda) in water, that can have some 70 g/kg at the same temperature. As CO2 equivalents, that is 37 g/kg, ten times more than for pure CO2 in water, only a matter of pH (pH 8). That contains less than 1% free CO2, the rest is bicarbonate and a little carbonate. Add a strong acid to the bicarbonate solution until you have the same pH as for pure CO2 and lots of CO2 bubble up, back to 3.3 g/kg remaining in solution”
What you are describing here, I believe, is commonly referred to as a “bubble-bomb”: the dissolved bicarbonate dissociates into bicarbonate ions (HCO3-); an added acid such as vinegar dissociates into hydrogen ions (H+); and the hydrogen ions then react with the bicarbonate ions to form carbonic acid (H2CO3), which immediately decomposes into CO2 and bubbles out of the water. A bubble-bomb is not evidence that the solubility of CO2 is dependent on pH.
“The proportionality by Henry’s law is for CO2(gas) in air and water only, not for bicarbonates and carbonates. Carbonates concentrations are influenced at one side by a CO2 increase in the atmosphere and thus solution, but on the other side by the increasing H+ by the increasing dissociation of CO2/H2CO3 into bicarbonates and carbonates. The net result is a 10% increase in DIC, even including a 100% increase in free CO2 for a 100% increase of CO2 in the atmosphere”.
The 1:50 proportionality constant for CO2 given by Henry’s law (which is what I was referring to) includes all DIC and is fixed at a given temperature. Take note there is no time-variable in the Revelle Factor formula (ΔpCO2/pCO2)/(ΔDIC/DIC), and the total amount of CO2 water can absorb based on that formula remains eternally constant over time until the relative concentrations of DIC change. Hence if the deep ocean had the same DIC ratio as the surface ocean, the total amount of anthropogenic CO2 the whole ocean would absorb at equilibrium would only be 10%, in violation of Henry’s law.
Henry’s law governs the solubility of gases in water and states that at a given temperature the amount of a gas dissolved in water is directly proportional to its partial pressure in the air adjacent to the solvent at equilibrium. The law can be described mathematically as p = kH·c, where p is the partial pressure of the gas above the solute, kH is the proportionality constant (i.e. Henry’s constant) and c is the concentration of dissolved gas in the liquid. The constant of proportionality for CO2 at the average surface temperature of 15°C gives us a partitioning ratio between the atmosphere and the oceans of 1:50 respectively.
If the Revelle Factor were correct and the solubility of CO2 changed as the relative concentrations of DIC shifted (which occurs when the partial pressure of CO2 changes), then kH in Henry’s law (and thus CO2’s partitioning ratio) would not be a constant for a given temperature. Note that Henry’s constant (in the equilibrium state of the law) is the ratio of the partial pressure of a gas at the liquid interface to the concentration of that gas dissolved in the liquid. Hence the constant does not change with concentration. It is a linear law. This means that the partitioning ratio of a gas (including that of CO2) is unchanged by changes to the atmospheric mass and can be multiplied up proportionally for any specified concentration in ppmv.
Obviously this is in conflict with the Revelle Factor which suggests that the solubility of CO2 is affected by the relative concentrations of DIC as the partial pressure of CO2 changes. As the Handbook of Chemistry points out: “Solubilities for gases which react with water, namely ozone, nitrogen, oxides, chlorine and its oxides, carbon dioxide, hydrogen sulfide, hydrogen selenide and sulfur dioxide, are recorded as bulk solubilities; i.e. all chemical species of the gas and its reaction products with water are included”.
“The Bjerrum plot only shows relative quantities of CO2/bicarbonate/carbonate for any pH level. It doesn’t show absolute levels.”
No, but you may have noticed that the Bjerrum plot is a mirror image of itself: no matter the pH, the total combined concentration of DIC remains unchanged.
Richard,
A bubble-bomb is not evidence that the solubility of CO2 is dependent on pH.
Sorry Richard, it doesn’t make any difference whether you approach the solubility of CO2 in seawater (which is 90% (Ca/Mg) bicarbonate, 9% (Ca/Mg) carbonate and only 1% free CO2) from the CO2-addition side, or start from the other side by adding an acid to a (Na or Ca/Mg) solution. The pH (and temperature) is what counts for the solubility.
In fresh water, near all CO2 is free CO2. Thus free CO2 and DIC are near equal. A doubling of CO2 in the atmosphere gives a doubling of free CO2 in water and a doubling of DIC.
In seawater free CO2 is only 1% of DIC. A doubling of CO2 in the atmosphere initially gives a doubling of free CO2 from 1% to 2% of DIC, that is all. Thanks to the chemical equilibria, the ultimate increase in DIC is ~10%, which means that ~10 times more CO2 is dissolved in seawater (at pH ~8) than in fresh water for the same change in the atmosphere. See the graph at:
http://ion.chem.usu.edu/~sbialkow/Classes/3650/CO2%20Solubility/DissolvedCO2.html
The 1:50 proportionality constant for CO2 given by Henry’s law (which is what I was referring to) includes all DIC and is fixed at a given temperature.
Again Richard, you are completely mistaken on this. Henry’s law is about the partial pressure of any gas in the atmosphere vs. the pressure of the same gas in a liquid in direct contact with the atmosphere. It doesn’t talk about concentrations or pCO2 in the deep oceans or their enormous quantities (as you do with the 1:50 ratio), nor about time constants.
For 15°C and 1 bar CO2 in the atmosphere, that gives 2 g/kg. At 0.0004 bar, that is 0.8 mg/kg. For the total ocean surface layer, that would be about 10 GtC if it were fresh water. Yet it is 1000 GtC, as it is seawater with a pH slightly over 8. The ocean surface layer is the only part of the ocean in direct, fast contact with the atmosphere.
The deep oceans are a much slower player and need centuries to equilibrate with the atmosphere. Thus your 1:50 has nothing to do with Henry’s law; it is in fact 10:1 for the ocean surface per the Revelle/buffer factor, while the 1:50 is the ultimate mass distribution of the human contribution between atmosphere and deep oceans, and that takes a lot of time.
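The linearity of Henry’s law is easy to illustrate: dissolved free CO2 scales in direct proportion to partial pressure at fixed temperature. Taking the quoted reference point (2 g/kg at 1 bar CO2 and 15°C in fresh water) at face value:

```python
# Henry's law is linear: dissolved free-CO2 concentration c = p / kH,
# so c scales directly with the partial pressure p at fixed temperature.
c_ref_g_per_kg = 2.0    # quoted: dissolved CO2 at 1 bar CO2, 15°C, fresh water
p_ref_bar = 1.0

p_atm_bar = 0.0004      # ~400 ppmv CO2 expressed as partial pressure
c_atm = c_ref_g_per_kg * p_atm_bar / p_ref_bar
print(f"{c_atm * 1000:.1f} mg/kg")   # 0.8 mg/kg, matching the comment
```

Note this covers only free CO2 gas in solution; the disputed bicarbonate and carbonate pools sit outside this proportionality.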
Note that Henry’s constant (in the equilibrium state of the law) is the ratio of the partial pressure of a gas at the liquid interface with the concentration of that gas dissolved in the liquid.
Richard, read that definition again word by word and think deeply over the last part: with the concentration of that gas dissolved in the liquid
“That gas” is not “that gas and all derivatives of that gas”. The definition of Henry’s constant is only about free CO2 as gas in the liquid, not about bicarbonates or carbonates.
Obviously this is in conflict with the Revelle Factor
Not at all: Henry’s law is for free CO2 in water only (no matter if that is fresh water or seawater). The Revelle factor is a measure of how much more CO2 can be dissolved in seawater than in fresh water.
Solubilities for gases which react with water, … , are recorded as bulk solubilities
Of course, as that is a practical solution for such gases. That has nothing to do with Henry’s law for such gases, as their solubility may be (much) larger than what Henry’s constant says…
no matter the pH the total combined concentration of DIC remains unchanged
Which is proven wrong, see the above link…
“Sorry Richard, it doesn’t make any difference if you approach the solubility of CO2 in seawater, which is 90% (Ca/Mg) bicarbonate, 9% (Ca/Mg) carbonate and only 1% free CO2 from the CO2 addition side or start from the other side by adding an acid to a (Na or Ca/Mg) solution. The pH (and temperature) is what counts for the solubility. In fresh water, near all CO2 is free CO2. Thus free CO2 and DIC are near equal. A doubling of CO2 in the atmosphere gives a doubling of free CO2 in water and a doubling of DIC. In seawater free CO2 is only 1% of DIC”.
How is that in any way a logical response to what I wrote? I pointed out that you cannot use a bubble-bomb to demonstrate that the solubility of CO2 is pH-dependent, because the bicarbonate and acid you are adding to the water create a surplus of CO2, and that is why CO2 bubbles out from the water. You then reply by saying “Sorry Richard, it doesn’t make any difference”, as if to imply I am the one misunderstanding things. But why am I surprised? It doesn’t matter how many percentages you put in your reply – you’re wrong. Henry’s constant (which gives a fixed partitioning ratio for CO2 of 1:50 between air and water respectively at 288K) does not change with pH. It cannot change with pH. That would be impossible, because the pH of the water is dependent on the partial pressure of CO2, and Henry’s constant is unaffected by changes to the partial pressure. It’s that simple.
“Henry’s law is about the partial pressure of any gas in the atmosphere vs. the pressure of the same gas in a liquid in direct contact with the atmosphere. It doesn’t talk about concentrations or pCO2 in the deep oceans or its enormous quantities (as you do with the 1:50 ratio), neither of time constants”.
Henry’s law determines a specific fixed partitioning ratio between the amount of CO2 residing in air and the amount that will be dissolved in water at a given temperature at equilibrium. At the current mean ocean surface temperature of ~15°C, that partitioning ratio comes out to be ~1:50. However, this result assumes that the average temperature of the oceans is 15°C, and I think that figure is too high since the deep oceans (which comprise the bulk of the oceanic mass) are generally much cooler than this and approach zero near the bottom.
“While the 1:50 is the ultimate distribution between atmosphere and deep oceans in mass of the human contribution, but that takes a lot of time”.
OK, I read you saying that that is what they claim. But claiming is not proof.
“Henry’s constant is only about free CO2 as gas in the liquid, not about bicarbonates or carbonates”.
Nonsense. See the quote from the Handbook of Chemistry above. If the solubility of CO2 were dependent on the relative concentrations of DIC (which change with partial pressure), then the solubility of CO2 would change as the partial pressure of CO2 changed, and this is obviously not the case. Henry’s law constant kH for CO2 in water at room temperature is around 3.1*10^-2 M/atm, and kH is unaffected by changes to partial pressure. It therefore follows that the solubility of CO2 as given by kH is unaffected by changes to partial pressure. For the last time: kH is unaffected by changes to partial pressure. You seem to want to deny this. That’s fine.
“Some skeptics shoot in their own foot by insisting that humans are not the cause of the CO2 increase in the atmosphere, despite all evidence”.
All the evidence? I don’t think so. Since AGW-advocates did not have enough real evidence ready to hand to make their case compelling, they have set about manufacturing it with speculative models, even to the extent of fabricating basic data in this way too. (I know you always have to have the last reply on these boards, so I’ll leave you to your own devices now).
Richard,
Henry’s law is for the solubility of CO2 as gas from atmosphere in water and back. It is in equilibrium when the pCO2 of atmosphere and water are equal. Bicarbonate and carbonate ions have zero pCO2. All pCO2 is from free CO2 in the water, not from bicarbonates and carbonates, these play no (direct) role in Henry’s constant.
You don’t need to take my word for it; simply ask anyone with some more knowledge of the solubility of reactive gases in (sea)water.
Henry’s constant (which gives a fixed partitioning ratio for CO2 of 1:50 between air and water respectively at 288K) does not change with pH.
pH doesn’t change Henry’s constant, but neither does it give a fixed partitioning of 1:50 between air and oceans. The 1:50 is the mass partitioning between atmosphere and deep oceans when the extra CO2 in the atmosphere is redistributed between the two. That has nothing to do with Henry’s constant and everything to do with mass distribution. Henry’s constant only works for the air-ocean surface exchanges, not for the air-deep ocean exchanges.
It cannot change with pH. That would be impossible, because the pH of the water is dependent on the partial pressure of CO2
And conversely: the partial pressure of CO2 in the water depends on the pH, whether that is changed by adding an acid (as in the CO2 “bubble bomb”) or by adding a base/buffer, as is the case in seawater. Its pH is above 8, not the less than 4 of a saturated solution of CO2 in fresh water per Henry’s law. That makes the ocean surface contain 100 times more CO2 (and derivatives) than fresh water at equilibrium for the same pressure of CO2 in the atmosphere, still with exactly the same amount of free CO2 in both cases per Henry’s constant.
Again, the difference is in the derivatives: bicarbonates and carbonates, which play no (direct) role in Henry’s constant or the pCO2 of the solution.
The 100 times more CO2 in seawater than in fresh water is what is measured. Time to change your opinion…
the solubility of CO2 would change as the partial pressure of CO2 changed and this is obviously not the case
It certainly is the case: it is measured! The solubility of CO2 as gas doesn’t change with partial pressure: it remains in ratio. But there is no reason at all that the ratios between CO2 as gas in solution and bicarbonate/carbonate ions remain the same: the Bjerrum plot shows the changes in ratio with changing pH. As the latter changes with the change in pCO2 of the atmosphere, the ratio between the different derivatives changes too. For a 100% change in pCO2 in the atmosphere, there is a 100% change in free CO2 in seawater, still obeying Henry’s law and the fixed kH, but only a ~10% change in bicarbonates and carbonates. The latter are then 98% of all CO2 in seawater instead of 99% before the pCO2 increase…
————
Richard, this all is basic buffer chemistry where Henry’s law is for CO2 as gas in solution only, not for bicarbonates and carbonates. That is clearly explained in many textbooks of chemistry. Here specific for CO2 in seawater:
http://www.soest.hawaii.edu/oceanography/faculty/zeebe_files/Publications/ZeebeWolfEnclp07.pdf
K0 (= kH) is only for the first step: the ratio between CO2(atm) and [CO2], the concentration of pure CO2 as gas in seawater. K1 and K2 are the dissociation constants for the next steps: the formation of bicarbonates and carbonates, and at the same time H+. The latter pushes the equilibria back toward [CO2] until equilibrium is reached.
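The speciation behind the Bjerrum plot can be sketched from K1 and K2 alone. The pK values below (pK1 ≈ 5.9, pK2 ≈ 9.0) are representative surface-seawater figures assumed here for illustration; exact values depend on temperature, salinity and pressure:

```python
# Fractions of DIC present as CO2(aq), HCO3- and CO3-- at a given pH,
# from the carbonate dissociation constants K1 and K2 (Bjerrum relations).
def speciation(pH, pK1=5.9, pK2=9.0):
    h = 10.0 ** (-pH)            # [H+]
    K1 = 10.0 ** (-pK1)
    K2 = 10.0 ** (-pK2)
    denom = h * h + K1 * h + K1 * K2
    co2 = h * h / denom          # free CO2 (plus H2CO3)
    hco3 = K1 * h / denom        # bicarbonate
    co3 = K1 * K2 / denom        # carbonate
    return co2, hco3, co3

co2, hco3, co3 = speciation(8.1)  # a typical ocean-surface pH
# Roughly 1% CO2, ~88% bicarbonate, ~11% carbonate with these pK values:
# free CO2 is only a small fraction of DIC, as stated in the thread above.
print(f"CO2: {co2:.1%}, HCO3-: {hco3:.1%}, CO3--: {co3:.1%}")
```

Running the same function at pH ~4 puts nearly all DIC into free CO2, which is the fresh-water case discussed earlier in the thread.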
Hey…where are Toneb, Mosher, Finn, Binidon, seaice1? Shouldn’t essays like this one bring them comfort? Yet no sign of……
Please note that Cape Town temperatures were adjusted upward 1950’s through 1980’s.
Professor;
My understanding is that CO2 doesn’t so much scatter upwelling IR in the vicinity of 15 µm as absorb it, preventing that energy from escaping to space. The absorbed photon energy is redistributed among the various translational (q, heat), rotational, and vibrational states available to the CO2 molecule. This process leads to local warming and increased IR radiation from CO2, also emitted locally through 4π steradians, allowing a portion of the absorbed 15 µm radiation to proceed towards space and escape earth. This doesn’t negate your argument at all; it’s just that I am an insufferable pedant and had to get this off my chest. Cheers
TG (erstwhile photochemist)