Five points about climate change

Guest essay by Professor Philip Lloyd, Cape Peninsula University of Technology

Daily we are told that we are wicked to burn fossil fuels.  The carbon dioxide which is inevitably emitted accumulates in the atmosphere, and the result is “climate change.” If the stories are to be believed, disaster awaits us. Crops will wither, rivers will dry up, polar bears will disappear and malaria will become rampant.

It is a very big “IF”. We could waste trillions for nothing.  Indeed, Lord Stern has estimated that it would be worth spending a few trillion dollars each year to avoid a possible disaster in 200 years’ time. Because he is associated with the London School of Economics, he is believed – by those whose experience of insurance is limited. Those with experience know that it is not worth insuring against something that might happen in 200 years’ time – it is infinitely better to make certain your children can cope. With any luck, they will do the same for their children, and our great-great-great-grandchildren will be fine individuals more than able to deal with Lord Stern’s little problem.

So I decided to examine the hypothesis from first principles. There are five steps to the hypothesis:

1. The carbon dioxide (CO2) content of the atmosphere is rising.

2. The rise in CO2 in the atmosphere is largely paralleled by the increase in fossil fuel combustion. Combustion of fossil fuels results in emission of CO2, so it is eminently reasonable to link the two increases.

3. CO2 scatters infra-red primarily at wavelengths around 15 µm. Infra-red of that wavelength, which should be carrying energy away from the planet, is instead scattered back into the lower troposphere, where the added energy input should cause an increase in temperature.

4. The expected increase in the energy of the lower troposphere may cause long-term changes in the climate, which could be characterized by an increasing frequency and/or magnitude of extreme weather events, a rise in sea temperatures, a reduction in ice cover and many other changes.

5. The greatest threat is that sea levels may rise and flood large areas presently densely inhabited.

Are these hypotheses sustainable in the scientific sense? Is there a solid logic linking each step in this chain?

The increase in CO2 in the atmosphere is incontrovertible. Many measurements show this. For instance, since 1958 there have been continuous measurements at the Mauna Loa observatory in Hawaii:

[Figure: Mauna Loa atmospheric CO2 record, 1958 to the present]

The annual rise and fall is due to deciduous plants growing or resting, depending on the season.  But the long-term trend is ever-increasing levels of CO2 in the atmosphere.

There were only sporadic readings of CO2 before 1958, no continuous measurements. Nevertheless, there is sufficient information to construct a view back to 1850:

[Figure: Reconstruction of atmospheric CO2 levels, 1850 to the present]

There was a slight surge in atmospheric levels about 1900, then a period of near stasis until after 1950, when there was a strong and ongoing increase which has continued to this day. Remember this pattern – it will re-appear in a different guise.

The conclusion is clear – there has been an increase in the carbon dioxide in the atmosphere. What may have caused it?

Well, the same pattern appears in the CO2 emissions from the burning of fossil fuels and other industrial sources:

[Figure: Annual CO2 emissions from fossil fuel combustion and other industrial sources]

A similar pattern is no proof – correlation is not causation. But if you try to link the emissions directly to the growth in atmospheric CO2, you fail. There are many partly understood “sinks” which remove CO2 from the atmosphere. Trying to follow the dynamics of all the sinks has proved difficult, so we do not have a really good chemical balance between what is emitted and what turns up in the air.

Fortunately isotopes come to our aid. There are two primary plant chemistries, called C3 and C4. C3 plants are ancient, and they tend to prefer the 12C carbon isotope to the 13C. Plants with a C4 chemistry are comparatively recent arrivals, and they are not so picky about their isotopic diet. Fossil fuels primarily come from a time before C4 chemistry had evolved, so they are richer in 12C than today’s biomass. Injecting into the air 12C-rich CO2 from fossil fuels should therefore cause the 13C in the air to drop, which is precisely what is observed:

[Figure: Declining 13C content of atmospheric CO2]

So the evidence that fossil fuel burning is the underlying cause of the increase in the CO2 in the atmosphere is really conclusive.  But does it have any effect?
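The mass balance behind this isotopic argument can be sketched numerically. The δ13C values below (about −7‰ for pre-industrial air, about −28‰ for fossil carbon) are typical literature figures assumed for illustration, and the sketch ignores exchange with the oceans and biosphere, which in reality damps the shift considerably:

```python
# Hypothetical mass-balance sketch of the 13C dilution from fossil fuel CO2.
# The delta values are assumed literature figures, not taken from the essay,
# and exchange with oceans and biosphere is deliberately ignored.

def mix_delta13c(ppm_old, delta_old, ppm_added, delta_added):
    """Concentration-weighted mixing of two CO2 pools (delta13C in permil)."""
    total = ppm_old + ppm_added
    return (ppm_old * delta_old + ppm_added * delta_added) / total

# Pre-industrial air (~280 ppm at about -7 permil) plus fossil-derived CO2
# (~120 ppm at about -28 permil, rich in 12C):
delta_now = mix_delta13c(280.0, -7.0, 120.0, -28.0)
print(round(delta_now, 2))  # -13.3 permil for these assumed inputs
```

The computed mixture is markedly more negative than the starting air, which is the direction of change the observations show; the real atmospheric shift is much smaller precisely because the oceans and biosphere continually exchange carbon with the air.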

Carbon dioxide scatters infra-red over a narrow range of energies.  The infra-red photons, which should be carrying energy away from the planet, are scattered back into the lower troposphere. The retained energy should cause an increase in the temperature.

Viewing the planet from space is revealing:

[Figure: Outgoing infra-red spectrum of the Earth viewed from space]

The upper grey line shows the spectrum approximating that of a planet of Earth’s average albedo at a temperature of 280 K. That is the temperature about 5 km above the surface, where incoming and outgoing radiation are in balance. The actual spectrum is shown by the blue line. The difference between the two is the energy lost by scattering processes caused by greenhouse gases. Water vapour has by far the largest effect. CO2 contributes to the loss between about 13 and 17 µm, and ozone contributes to the loss between about 9 and 10 µm.
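As a quick consistency check on why these particular bands matter, Wien’s displacement law (a standard textbook result, not a figure from the essay) puts the emission peak of a 280 K body near 10 µm, so the ozone and CO2 bands both sit in the heart of the outgoing thermal spectrum:

```python
# Wien's displacement law: peak emission wavelength of a blackbody.
WIEN_B = 2.898e-3  # m*K, Wien's displacement constant

def peak_wavelength_um(temp_k):
    """Wavelength of peak blackbody emission, in micrometres."""
    return WIEN_B / temp_k * 1e6

peak = peak_wavelength_um(280.0)
print(round(peak, 2))  # 10.35 um, between the ozone and CO2 bands
```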

The effect of carbon dioxide absorption grows only logarithmically with concentration, so doubling the concentration will not double the effect. At present there is ~400 ppm in the atmosphere.  We are unlikely to see a much different world at 800 ppm. It will be greener – plants grow better on a richer diet – and it may be slightly warmer and slightly wetter, but otherwise it would look very like our present world.
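The logarithmic dependence can be illustrated with a commonly used approximation (the Myhre et al. 1998 expression ΔF = 5.35 ln(C/C₀) W/m²; the coefficient is an assumption adopted for illustration, not a figure from the essay):

```python
import math

# A common logarithmic approximation for CO2 radiative forcing, used here
# only to illustrate the diminishing-returns behaviour described in the text.
# The 5.35 W/m^2 coefficient is an assumed literature value.
def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing in W/m^2 relative to a reference concentration."""
    return 5.35 * math.log(c_ppm / c0_ppm)

step1 = co2_forcing(400.0) - co2_forcing(280.0)   # 280 -> 400 ppm
step2 = co2_forcing(800.0) - co2_forcing(400.0)   # 400 -> 800 ppm
print(round(step2, 2))  # 3.71 W/m^2 - the same increment for any doubling
```

On this formula every doubling contributes the same increment, which is why the step from 400 to 800 ppm adds no more forcing than any earlier doubling did.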

However, the logarithmic relationship cuts both ways: at lower concentrations, each added increment of CO2 has a proportionately larger effect.  If there are to be any observable effects, they should therefore be visible in the historical records.  Have we seen them?

There are “official” historical global temperature records. A recent version from the Hadley Centre/Climatic Research Unit is:

[Figure: Global temperature anomaly record (HadCRUT)]

The vertical axis gives what is known as the “temperature anomaly”, the change from the average temperature over the period 1950-1980. Recall that carbon dioxide only became significant after 1950, so we can look at this figure with that fact in mind:

* From 1870 to 1910, temperatures dropped while there was no significant rise in carbon dioxide.

* From 1910 to 1950, temperatures rose, again with no significant rise in carbon dioxide.

* From 1950 to 1975, temperatures dropped while carbon dioxide increased.

* From 1975 to 2000, both temperature and carbon dioxide increased.

* From 2000 to 2015, temperatures rose slowly while carbon dioxide increased strongly.

Does carbon dioxide drive temperature changes? Looking at this evidence, one would have to say that, if there is any relationship, it must be a very weak one. In one study I made of the ice core record over 8 000 years, I found that there was a 95% chance that the temperature would change naturally by as much as ±2 °C during 100 years. During the 20th century, it changed by about 0.8 °C. The conclusion? If carbon dioxide in the atmosphere does indeed cause global warming, then the signal has yet to emerge from the natural noise.
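Taking these figures at face value, the signal-to-noise arithmetic is easy to check: a ±2 °C range at 95% confidence implies, for a normal distribution, a standard deviation of about 1 °C per century, so a 0.8 °C change sits well inside the noise band:

```python
# Signal-to-noise check using the essay's own numbers: +/-2 C natural range
# at 95% confidence, versus a 0.8 C observed 20th-century change.
# Normality of the natural variation is an assumption for this sketch.
Z_95 = 1.96  # two-sided 95% quantile of the standard normal

natural_sigma = 2.0 / Z_95          # ~1.02 C per century
z_observed = 0.8 / natural_sigma    # ~0.78 standard deviations
print(round(z_observed, 2))         # 0.78, well below the 1.96 threshold
```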

One of the problems with the “official” temperature records such as the Hadley series shown above is that the official record has been the subject of “adjustments”. While some adjustment of the raw data is obviously needed, such as that for the altitude of the measuring site, the pattern of adjustments has been such as to cool the past and warm the present, making global warming seem more serious than the raw data warrants.

It may seem unreasonable to refer to the official data as “adjusted”. However, the basis for the official data is what is known as the Global Historical Climatology Network, or GHCN, and it has been arbitrarily adjusted. For example, it is possible to compare the raw data for Cape Town, 1880-2011, to the adjustments made to the data in developing GHCN series Ver. 3:

[Figure: Adjustments to the raw GHCN v3 temperature data for Cape Town, 1880–2011]

The Goddard Institute for Space Studies is responsible for the GHCN. The Institute was approached for the metadata underlying the adjustments. They provided a single line of data, giving the station’s geographical co-ordinates and height above mean sea-level, and a short string of meaningless data including the word “COOL”. The basis for the adjustments is therefore unknown, but the fact that about 40 successive years of data were “adjusted” by exactly 1.10 °C strongly suggests fingers rather than algorithms were involved.

There has been so much tampering with the “official” records of global warming that they have no credibility at all. That is not to say that the Earth has not warmed over the last couple of centuries.  Glaciers have retreated, snow-lines risen. There has been warming, but we do not know by how much.

Interestingly, the observed temperatures are not unique. For instance, the melting of ice on Alpine passes in Europe has revealed paths that were in regular use a thousand years and more ago. They were then covered by ice which has only melted recently. The detritus cast away beside the paths by those ancient travellers is providing a rich vein of archaeological material.

So the world was at least as warm a millennium ago as it is today. It has warmed over the past few hundred years, but the warming is primarily natural in origin, and has nothing to do with human activities.  We do not even have a firm idea as to whether there is any impact of human activities at all, and certainly cannot say whether any of the observed warming has an anthropogenic origin. The physics says we should have some effect, but we cannot yet distinguish it from the natural variation.

Those who seek to accuse us of carbon crime have therefore developed another tool – the global circulation model. This is a computer representation of the atmosphere, which calculates the conditions inside a slice of the atmosphere, typically 5km x 5km x 1km, and links each to an adjacent slice (if you have a big enough computer – otherwise your slices have to be bigger).

The modellers typically start their calculations some years back, at a date for which the climate is known, and try to see whether they can reproduce the (known) climate from that starting point up to today.  There are many adjustable parameters in the models, and by twiddling enough of these digital knobs, they can “tune” the model to history.

Once the model seems to be able to reproduce historical data well enough, it is let rip on the future. There is a hope that, while the models may not be perfect, if different people run different tunings at different times, a reasonable range of predictions will emerge, from which some idea of the future may be gained.

Unfortunately those hopes have been dashed too often. The El Niño phenomenon is well understood and has a significant impact on the global climate, yet none of the models can cope with it. Similarly, the models cannot do hurricanes/typhoons – the 5 km x 5 km scale is just too coarse. They cannot do local climates: a test of two areas only 5 km apart, one of which receives at least 2 000 mm of rain annually while the other averages just 250 mm, failed badly.  Good wind and temperature data were available, as was the local topography. The problem was modelled with a very fine grid, but there were not enough tuning knobs to match history.

Even the basic physics used in these models fails. The basic physics predicts that, between the two Tropics, the upper atmosphere should warm faster than the surface. We regularly fly weather balloons carrying thermometers into this region. There are three separate balloon data sets, and they agree that there is no sign of extra warming:

[Figure: Modelled versus balloon-measured warming trends in the tropical troposphere]

The average of the three sets is given by the black squares. The altitude is given in terms of pressure: 100 000 Pa at ground level and 20 000 Pa at about 9 km above the surface. There are 22 different models, and their average is shown by the black line.  At ground level, measurement shows warming of 0.1 °C per decade, but the models predict 0.2 °C per decade.  At 9 km, measurement still shows close to 0.1 °C per decade, but the models show an average of 0.4 °C and extreme values as high as 0.6 °C. Models that are wrong by a factor of 4 or more cannot be considered scientific. They should not even be accepted for publication – they are wrong.

The hypothesis that we can predict future climate on the basis of models that are already known to fail is false. International agreements to control future temperature rises to X degrees C above pre-industrial global averages have more to do with the clothing of emperors than reality.

So the third step in our understanding of the climate boondoggle can only conclude that yes, the world is warming, but by how much and why, we really haven’t a clue.

What might the climate effects of a warmer world be? What is “climate”? It is the result of averaging a climatological variable, such as rainfall or atmospheric pressure, measured typically over a month or a season, where the average is taken over several years so as to give an indication of the weather that might be expected at that month or season.

Secondly, we need to understand the meaning of “change”. In this context it clearly means that the average of a climatological variable over X number of years will differ from the same variable averaged over a different set of X years.  But it is readily observable that the weather changes from year to year, so there will be a natural variation in the climate from one period of X years to another period X years long.  One therefore needs to know how long X must be to determine the natural variability and thus to detect reliably any change in the measured climate.

This aspect of “climate change” appears to have been overlooked in all the debate.  It seems to be supposed that there was a “pre-industrial” climate, which was measured over a large number of years before industry became a significant factor in our existence, and that the climate we now observe is statistically different from that hypothetical climate.

The problem, of course, is that there is very little actual data from those pre-industrial days, so we have no means of knowing what the climate really was.  There is no baseline from which we can measure change.

Faced by this difficulty, the proponents of climate change have modified the hypothesis. It is supposed that the observed warming of the earth will change the climate in such a way as to make extreme events more frequent. This does not alter the difficulty; in fact, it makes it worse.

To illustrate, assume that an extreme event is one that falls outside the 95% probability bounds. So in 100 years, one would expect 5 extreme events on average.  Rather than taking 100 years of data to obtain the average climate, there are now only 5 years to obtain an estimate of the average extreme event, and the relative error in averaging 5 variable events is obviously much larger than the relative error in averaging 100 variable events.
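The scaling of that error can be made explicit: the standard error of a mean of n independent observations falls as 1/√n, so an average built from 5 events is several times noisier than one built from 100 (the event counts follow the essay’s illustration; independence is an assumption of the sketch):

```python
import math

# Sampling-error sketch: relative error of a mean of n independent
# observations scales as 1/sqrt(n), so averaging 5 extreme events is far
# noisier than averaging 100 ordinary observations.
def relative_error_scale(n):
    return 1.0 / math.sqrt(n)

ratio = relative_error_scale(5) / relative_error_scale(100)
print(round(ratio, 2))  # 4.47 - the 5-event average is ~4.5x noisier
```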

The rainfall data for England and Wales demonstrate this quite convincingly:

[Figure: Detrended annual rainfall for England and Wales, with 95% bounds]

The detrended data are close to normally distributed, so that it is quite reasonable to use normal statistics for this. The 5% limits are thus two standard deviations either side of the mean.  In the 250-year record, 12.5 extreme events (those outside the 95% bounds) would be expected.  In fact, there are 7 above the upper bound and 4 below the lower bound, or 11 in total. Thus it requires 250 years to get a reasonable estimate (within 12%) of only the frequency of extreme rainfall.  There is no possibility of detecting any change in this frequency, as would be needed to demonstrate “climate change”.

Indeed, a human lifespan is insufficient even to detect the frequency of the extreme events.  In successive 60-year periods, there are 2, 4, 2 and 2 events, an average of 2.5 events with a standard deviation of 1.0. There is a 95% chance of seeing between 0.5 and 5.5 extreme events in 60 years, where 3 (5% of 60) are expected. Several lifetimes are necessary to determine the frequency with any accuracy, and many more to determine any change in that frequency.
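A cross-check with binomial counting statistics (an independent calculation, not the 2, 4, 2, 2 counts from the record) gives much the same picture: with a 5% annual chance of an extreme year, a 60-year window has an expected count of 3 with a standard deviation of nearly 1.7, so individual windows scatter widely around that mean:

```python
import math

# Binomial counting statistics for "extreme" years: n trials, probability p
# per year of falling outside the 95% bounds. Independence of years is an
# assumption of this sketch.
n_years, p_extreme = 60, 0.05
expected = n_years * p_extreme                            # 3.0 events
sigma = math.sqrt(n_years * p_extreme * (1 - p_extreme))  # ~1.69 events
print(round(expected, 1), round(sigma, 2))                # 3.0 1.69
```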

The world is known to have been warming for at least 150 years. If warming had resulted in more extreme weather, some evidence for an increase in extreme events over that period might have been expected. The popular press certainly tries to be convincing whenever an apparently violent storm arises. But none of the climatological indicators with data going back at least 100 years shows any sign of an increase in the frequency of extreme events.

For instance, there have been many claims that tropical cyclones are increasing in their frequency and severity.  The World Meteorological Organisation reports:  “It remains uncertain whether past changes in tropical cyclone activity have exceeded the variability expected from natural causes.”

It is true that the damage from cyclones is increasing, but this is not due to more severe weather.  It is the result of there being more dwellings, and each dwelling being more valuable, than was the case 20 or more years ago.  Over a century of data was carefully analysed to reach this conclusion.  The IPCC report on extreme events agrees with this finding.

Severe weather of any kind is most unlikely to make any part of our planet uninhabitable – that includes drought, severe storms and high winds. In fact, this is not too surprising – humanity has learned how to cope with extreme weather, and human beings occupy regions from the most frigid to the most scalding, from sea level to heights where sea-level-dwellers struggle for breath. Not only are we adaptable, but we have also learned how to build structures that will shield us from the forces of nature.

Of course, such protection comes at a cost. Not everyone can afford the structures needed for their preservation.  Villages are regularly flattened by storms that would leave most modern cities undamaged. Flood control measures are designed for one-in-a-hundred-year events, and they generally work – whereas low-lying areas in poor nations are regularly inundated for want of suitable defences.

Indeed, it is a tribute to the ability of engineers to protect against all manner of natural forces.  For instance, the magnitude 9 Tōhoku earthquake of 2011 (which caused the tsunami that destroyed the reactors at Fukushima) caused little physical damage to buildings, whereas earlier that year a far smaller earthquake in Christchurch, New Zealand, toppled the cathedral, which was not designed to withstand earthquakes.

We should not fear extreme weather events. There is no evidence that they are any stronger than they were in the past, and most of us have adequate defences against them.  Of course, somewhere our defences will fail, but that is usually because of a design fault by man, not an excessive force of Nature. Here, on the fourth step of our journey, we can clearly see the climate change hypothesis stumble and fall.

In the same way, most of the other scare stories about “climate change” fail when tested against real data. Polar bears are not vanishing from the face of the earth; indeed, the International Union for the Conservation of Nature can detect no change in the rate of loss of species over the past 400 years. Temperature has never been a strong determinant of the spread of malaria – lack of public health measures is a critical component in its propagation. Species are migrating, but whether temperature is the driver is doubtful – diurnal and seasonal temperature changes are so huge that a fractional change in the average temperature is unlikely to be the cause. Glaciers are melting, but the world is warmer, so you would expect them to melt.

There remains one last question – will the seas rise and submerge our coastlines?

First it needs to be recognized that the sea level is rising, and has been for about the past 25 000 years. However, for the past 7 millennia it has been rising more slowly than ever before:

[Figure: Sea level rise over the past 25 000 years]

The critical question is whether the observed slow rate of rise has increased as a result of the warming climate. There are several lines of evidence that it has not.  One is the long-term data from tide gauges. These have to be treated with caution because there are areas where the land is sinking (such as the Gulf of Mexico, where the silt carried down the Mississippi is weighing down the crust), and others where it is rising (such as much of Scandinavia, relieved of a burden of a few thousand metres of ice about 10 000 years ago). A typical long-term tide gauge record is New York:

[Figure: New York tide gauge record]

The 1860–1950 trend was 2.47–3.17 mm/a; the 1950–2014 trend was 2.80–3.42 mm/a, both at a 95% confidence level. The two trends are statistically indistinguishable. There is <5% probability that they might show any acceleration after 1950.
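The statistical claim here reduces to a simple check: the two quoted 95% confidence intervals overlap, so neither trend can be distinguished from the other at that confidence level:

```python
# Overlap check on the two quoted 95% confidence intervals for the
# New York tide gauge trends (values taken from the text above).
def intervals_overlap(lo1, hi1, lo2, hi2):
    """True if two closed intervals share at least one point."""
    return lo1 <= hi2 and lo2 <= hi1

early = (2.47, 3.17)   # mm/a, 1860-1950
late = (2.80, 3.42)    # mm/a, 1950-2014
print(intervals_overlap(*early, *late))  # True: no detectable acceleration
```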

Another line of evidence comes from satellite measurements of sea level.  The figure below shows the latest available satellite information – it extends back only to 1993. Nevertheless, the 3.3 ± 0.3 mm/a rise in sea level is entirely consistent with the tide gauge record:

[Figure: Satellite record of global mean sea level since 1993]

Thus several lines of evidence point to the present rate of sea level rise being about 3 mm/a, or 30 cm per century. Our existing defences against the sea have to deal with diurnal tidal changes of several metres, and low-pressure-induced storm surges of several metres more.  The average height of our defences above mean sea level is about 7 m, so adding 0.3 m over the next century would do little more than slightly increase the number of waves that occasionally overtop the barrier.

The IPCC predicts that the sea level will rise by between 0.4 and 0.7m during this century. Given the wide range of the prediction, there is a possibility they could be right. Importantly, even a 0.7m rise is not likely to be a disaster, in the light of the fact that our defences are already metres high – adding 10% to them over the next 80 years would be costly, but we would have decades to make the change, and should have more than adequate warning of any significant increase in the rate of sea level rise.
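The arithmetic behind this comparison is worth making explicit. Spreading the IPCC range over the remainder of the century (taken here as 2016–2100, i.e. 84 years – an assumption for illustration) implies average rates well above the ~3.3 mm/a currently measured:

```python
# Average sea level rise rates implied by the quoted IPCC range, if the
# rise is spread over the rest of the century. The 2016-2100 span is an
# assumption made for this illustration.
years_left = 2100 - 2016

def required_rate_mm_per_year(total_rise_m, years):
    return total_rise_m * 1000.0 / years

low = required_rate_mm_per_year(0.4, years_left)    # ~4.8 mm/a
high = required_rate_mm_per_year(0.7, years_left)   # ~8.3 mm/a
print(round(low, 1), round(high, 1))                # 4.8 8.3
```

On these numbers, the lower IPCC bound requires roughly a 50% acceleration of the present rate, and the upper bound roughly a 2.5-fold acceleration, starting immediately.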

To conclude, our five steps have shown:

· The combustion of ever-increasing quantities of fossil fuel has boosted the carbon dioxide concentration of the atmosphere.

· The physical impact of that increase is not demonstrable in a scientific way. There may be some warming of the atmosphere, but at present any warming is almost certainly hidden in the natural variation of temperatures.

· There is no significant evidence for any increase in the frequency or magnitude of extreme weather phenomena, or for climate-related changes in the biosphere.

· Any sea level rise over the coming century is unlikely to present an insuperable challenge.

Attempts to influence global temperatures by controlling carbon dioxide emissions are likely to be both futile and economically disastrous.

175 Comments
Nicholas Schroeder
April 8, 2016 8:08 am

Per IPCC AR5 Table 6.1, the increase in CO2 is 2/3 fossil fuels and 1/3 land use, which rarely gets mentioned.

ossqss
Reply to  Nicholas Schroeder
April 8, 2016 10:06 am

Courtesy of R.P. Sr. via twitter yesterday relating to such.
http://onlinelibrary.wiley.com/doi/10.1002/2015JD024102/full

RWturner
Reply to  Nicholas Schroeder
April 8, 2016 11:12 am

Also, CO2 is always shown to be a normal greenhouse gas along with the others, but it’s not. CO2 is a non-polar molecule, unlike the others, and has only a temporary induced dipole. I think this may be why the GCMs fail so miserably.
CO2 is not the greenhouse gas they make it out to be; it absorbs no infrared radiation when not in its temporary dipole state, and the wavelengths it does absorb as a dipole were already opaque in the pre-industrial atmosphere.
https://stevengoddard.wordpress.com/2014/01/25/ir-expert-speaks-out-after-40-years-of-silence-its-the-water-vapor-stupid-and-not-the-co2/

mark4asp
Reply to  RWturner
April 8, 2016 2:53 pm

Methane is a polar molecule? It does not look like it. It’s a symmetric tetrahedron and the C–H dipole is not very large at all. I don’t see much of a dipole for methane and notice no electron asymmetry. At least with O=C=O the oxygen has significant electronegativity. I guess methane must be “special” in some way, with its GHG effect due to some other cause?

Reply to  RWturner
April 9, 2016 9:48 am

Not true: the bending mode of CO2, which is the wavelength we are concerned with in this case, is perpetually bending even in its ground state. Consequently its dipole is constantly changing and is nonzero for the vast majority of the time; only very briefly is it straight during this vibration. Think of it like a pendulum, which is only vertical for a very small part of its period (the period of the CO2 vibration is of order 10^-14 s). In the case of CO2 it is further complicated by the fact that there are two orthogonal bending modes.
The post on Goddard’s site is a joke: an astronomer cites the fact that the ‘window’ region in which he was constrained to make his observations didn’t show absorption by CO2, and concludes that CO2 doesn’t absorb IR. The whole reason he was only using that ‘window’ was that CO2 doesn’t emit there! Had he tried to make observations at 14–15 microns he would only have seen CO2 emissions.
Mark also comments on the fact that methane is a symmetrical molecule; the same logic applies there – the individual bonds are vibrating independently, so the molecule is virtually never symmetrical!
http://www2.ess.ucla.edu/~schauble/MoleculeHTML/CH4_html/CH4_page.html

Ron Clutz
April 8, 2016 8:10 am

Excellent, concise argument against global warming alarms. Why are such articles not widely available for the public to appreciate and understand?
You are doing science. But “Climate Science” these days is something else altogether. For those who have forgotten how we got here, Richard Lindzen remembers:
https://rclutz.wordpress.com/2016/04/08/climate-science-was-broken/

Reply to  Ron Clutz
April 8, 2016 8:18 am

+10

Barbara
Reply to  Ron Clutz
April 8, 2016 8:22 pm

Can’t make money off what Prof. Lloyd has written!

Ron Clutz
Reply to  Barbara
April 9, 2016 5:24 am

That’s the bottom line, all right.

Ian Magness
April 8, 2016 8:14 am

Quite brilliant argument. This should be widely circulated in government and education, to name but two spheres.

Reply to  Ian Magness
April 8, 2016 2:03 pm

…and the MSM, but their “investigative journalists” would not understand even the simple parts of this reasoning. Our daily newspaper in Auckland is an extreme example, regularly printing such ridiculous garbage that one has to wonder what financial inducements are involved. Similarly, the most unscientific young reporters are permitted to spout ignorant personal opinions with no balancing fact-based comment.

Firey
Reply to  mikelowe2013
April 8, 2016 6:01 pm

Exactly the same in Sydney: one of the major daily newspapers prints any “wild” climate change claim. No investigation required – just print it. It seems some reporters are “green” activists first and investigative journalists second.
As our former PM said, “beware of socialism masquerading as environmentalism”.

Tom Yoke
Reply to  Ian Magness
April 8, 2016 6:58 pm

mikelowe, I yield to no one in cynicism towards the MSM, but I don’t really agree with your analysis of the problem.
The problem is that any reporter who went against the “Green” platitudes that pervade such places, would very likely find his/her own career in jeopardy. They would most certainly be accused of being a “denier”, with overtones of the Holocaust. They are also certain to be accused of being in the pay of the fossil fuel companies.
Furthermore, and very much to the point, these accusations would be made by people who seriously matter in determining the reporter’s rise or fall in status. This is not a mild effect. Conformity to the dictates of political correctness is an important part of the power dynamic faced by an MSM reporter.
The root problem then is political correctness. That is why the MSM is uninterested in articles like the above. Political correctness is the contemporary secular version of “praying in public”. It is how one announces to the world one’s own virtue and moral righteousness. By comparison, scientific arguments are hard, uncomfortable, and uncertain, and will always face serious difficulties in prevailing against this powerful force.

Gerard Flood
Reply to  Tom Yoke
April 8, 2016 8:09 pm

Tom Yoke, Thank you for identifying the ‘force’ which mandates that MSM ‘group speak’! However, this raises the question of what explains the overwhelming level of that illegitimate force in a supposedly open, educated, egalitarian, democratic and self-respecting society. [Incidentally, from my world view, it reminds me of the support by Western opinion leaders for Communism, from blinded English fools in the times of Lenin, to the on-going blindness to mass murderers and torturers as are Castro and [were] Che Guevara.] What marks these campaigns is the eternal war against our search for truth. The ordinary voter knows the MSM has zero credibility. Be a ‘backbone’, not a ‘jawbone’. Support your Proven Alternative Media with your public actions!!

Richard Petschauer
Reply to  Ian Magness
April 8, 2016 8:35 pm

Agree. Great paper!! I would add another point:
Was the preindustrial temperature, that we use as a baseline for warming, the ideal temperature?

April 8, 2016 8:18 am

The IPCC predicts that the sea level will rise by between 0.4 and 0.7m during this century. Given the wide range of the prediction, there is a possibility they could be right.
Simple arithmetic says that in order for that to happen the seas need to rise at an average rate of 4.8 or 8.3 millimeters per year respectively starting right now. Satellites say the rate today is around 3 mm/yr and tide gauges a good deal less than that. Satellite data shows no acceleration. Tide gauges show so much variation between locations it is difficult to say.

Reply to  Steve Case
April 8, 2016 1:08 pm

Also helpful to keep in mind a few other salient points regarding damage from rising sea level:
– Structures do not last forever, particularly housing built to modern US standards.
– Coastal storms already inundate areas of our coastlines every year, and there is much evidence that we are in a very weak point in natural hurricane cycles. In decades past, destructive coastal storms were more common. When I was a kid, the coastlines of the US were sparsely built up. Today, expensive homes line the coast nearly everywhere.
– Given the above two points, how many places will be flooded by rising seas before they are destroyed by a coastal storm? I think that number is very low.
- Despite the near-constant barrage of warnings about the dangers of climate change, and the alarmist fairy tales of accelerating sea level rise, homes and businesses and entire cities and stretches of coastline are rebuilt as quickly as humanly possible after storms like Katrina and Sandy. If an acceleration of rising seas is thought to be a reality by so many, why is this the case? Why are the politicians who issue dire warnings about climate change and rising seas also the same people who cannot throw money fast enough to the people who choose to build in risky coastal and low-lying locations?
Actions speak louder than words, and the actions of politicians and filthy rich entertainers do not match up with the words that spew from their mouths.

Reply to  Menicholas
April 10, 2016 7:16 pm

Excellent point Menicholas…. It never fails to amaze me that riverside city suburbs in my home town of Brisbane, QLD, Australia are “allowed” (if not encouraged) to rebuild in the same area with the same chance of 1/20 year (or whatever) floods…. These suburbs should be reserved for parks or other such zoning where the financial consequences of floods are minimal… Of course I am sure that subsidizing relocation of such numbers of residents/businesses would be regarded as prohibitively expensive, and in our culture I doubt very much if the people could be persuaded (without force (:-)) to do so, but the role of government should be to make it at least harder to rebuild + give incentives to move to “safer” areas….

Tom Halla
April 8, 2016 8:37 am

The post is a good overview of “global warming”.

birdynumnum
April 8, 2016 8:37 am

Just a point. The earthquake that toppled the cathedral in 2011 was in Christchurch NZ not Wellington.

HAS
Reply to  birdynumnum
April 8, 2016 12:53 pm

Ironically, if it had been in Wgtn it would have been built or strengthened to a code that would have meant it was still standing.

The_Iceman_Cometh
Reply to  birdynumnum
April 9, 2016 1:39 am

Thanks – quite correct!

April 8, 2016 8:45 am

Nice concise summary of the situation as it stands today. Here’s one observation that might reflect on the merits of the global warming story.
Climate models needed documented volcanic eruptions to put “aerosols” into the atmosphere to cause short-term cooling to bring the model predictions into approximate conformity with historical reality. Without those volcanoes, their modelled rate of warming would have been greater than the observed rate of warming.
As far as I can tell, climate models have no volcanoes in the future. This is totally unrealistic and renders the models invalid, BASED ON THEIR OWN INTERNAL LOGIC.
If anyone knows differently, please say so. I’m quite prepared to be corrected. I’ve been wrong before.

old engineer
Reply to  Smart Rock
April 8, 2016 11:56 am

Smart Rock –
I don’t know about all climate models, but Dr. Hansen’s GISS model in his 1988 paper did include future volcanoes in two of his scenarios. Quote from his paper:
“4.2 Stratospheric Aerosols
….. In scenarios B and C additional large volcanoes are inserted in the year 1995 (identical in properties to EL Chichon), in the 2015 (identical to Agung) and in the year 2025 (identical to El Chichon) ….”
Note that scenario A in Hansen’s paper did not include volcanoes. It would indeed be interesting to know how many models do include future volcanoes.

Reply to  Smart Rock
April 8, 2016 1:11 pm

Terrestrial volcanoes only versus ocean floor volcanic & geothermal heat flux and gases of which very little is actually known.

April 8, 2016 8:54 am

Excellent summary. While most was probably known to most regulars here, it’s a substantial service to rehearse the evidence as the author did in a clear and organized manner.
Moreover, at least the following observation had previously escaped my attention:

In one study I made of the ice core record over 8,000 years, I found that there was a 95% chance that the temperature would change naturally by as much as +/- 2 degrees C during 100 years.

This provides yet another basis for stating the conclusion that many had come to:

During the 20th century, it changed by about 0.8 degrees C. The conclusion? If carbon dioxide in the atmosphere does indeed cause global warming, then the signal has yet to emerge from the natural noise.

Great job.

Reply to  Joe Born
April 8, 2016 9:52 am

The lack of an observed tropical mid-to-upper tropospheric warm spot, as predicted by CO2 GHG strong-warming theory (the 8th figure in the above essay), should be seen as strong evidence to reject the theory. That mainstream climate science can’t find it within itself to do that (probably for reputational, monetary and political reasons) identifies today’s state of climate science as junk science.

Alastair Brickell
Reply to  Joe Born
April 9, 2016 7:37 am

This is a very important point…perhaps the author of the article could provide more detail of his study.

The_Iceman_Cometh
Reply to  Alastair Brickell
April 9, 2016 12:29 pm

The ice core study was: Lloyd, Philip J., “An estimate of the centennial variability of global temperatures,” Energy & Environment, 26(3), pp. 417–424, 2015. DOI: 10.1260/0958-305X.26.3.417
The upper troposphere story came from the American Physical Society Climate Change Statement Review Workshop in January 2014, the transcript of which is required reading, if only for the struggle Ben Santer had to stop the other participants from taking him seriously.

Reply to  Alastair Brickell
April 10, 2016 6:39 am

The_Iceman_Cometh:
Thank you for the cite.
I must confess, though, that after reviewing it I now find Dr. Lloyd’s (to me) new argument less compelling than it first appeared, for the reason Brandon Shollenberger sets forth nearby (although I see no basis for his claim that the head post makes a “ton of errors”).
It turns out that what Dr. Lloyd studied was the records of four isolated locations. It seems clear that, at least as far as annual-temperature anomalies go, we would expect a single location’s variance greatly to exceed that of the average surface temperature. I haven’t thought through whether that should be as true of centennial trends, but at first blush it does seem to me that even for centennial durations a global trend is likely to be amplified in the local trends of high-latitude locations such as Vostok. That is, the fact that a local trend has varied by a certain amount in pre-industrial periods may not tell us much about how unusual a lesser global trend since then is.
Dr. Lloyd addressed a similar objection elsewhere, but, perhaps because of my statistics limitations or failure to understand the data, I was unable to see how what he said was an adequate answer.
So, while I continue to be impressed by Dr. Lloyd’s post in general, I will put off for now adding that new argument to my arsenal.

Peter Sable
April 8, 2016 8:57 am

Well done. Here’s a way to think of these steps – a Markov chain of events, where each event depends on the previous one happening.
The Markov chain going from “CO2 is going up” to “let’s do some effective measures to counter it” is lost on the public. Each of these steps has a probability of being right, but you have to multiply the probabilities together to get the probability of the entire sequence being correct.
Sample calculation for a positive outcome for reducing CO2 emissions in order to avoid dangerous climate changes:
(1) emissions are causing CO2 to go up – I think the author demonstrates p = 1.0
(2) Physical impact of increase in CO2 is measurable in dangerous temperature changes, extreme events, or dangerous acceleration in sea level rise – I think the author demonstrates p = 0.01 of this being true.
(3) probability of success of centralized command economy efforts to mitigate CO2 rise sufficient to cap CO2 below, say, 500 ppm: p = 0.01. These types of efforts rarely succeed, and the side effects are usually far more costly than the benefit. Source: history of command economies.
The total probability of this chain is 1.0 * 0.01 * 0.01 = 0.0001. You can interpret this to mean that the CAGW meme has a 0.01% chance of a net positive outcome if carried out in its entire detail of attempting to mitigate CO2 emissions. The chance of a negative outcome is 1 minus this, or 99.99%.
You can compare this to the alternative proposed by the author – mitigate if the worst happens. The probability of a net positive outcome is far higher, because mitigation is local, i.e. distributed, and there’s no chance of bad command economy effects in the common case where nothing happens.
You have to calculate two Markov chains here: the probability of a positive outcome for “nothing happens” and for “something happens, mitigate:”
(1) emissions cause CO2 to go up: p = 1.0
(2) physical impact requiring mitigation: p = 1 – 0.01 = 0.99; note this is the inverse of the above, as we are calculating positive outcomes here.
Do-nothing positive outcome: 0.99
For the mitigation chain:
(1) emissions cause CO2 to go up: p = 1.0
(2) physical impact requires mitigation, p = 0.01
(3) mitigation effective: guess, but let’s say it’s p = 0.5
Probability of a positive outcome is 1 * 0.01 * 0.5 = 0.005. You add this to the “do nothing” outcome to get a 99.5% chance of a positive outcome.
A more detailed analysis would assign $ or some other metric of value to these rather than comparing probabilities.
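The two chains above can be sketched in a few lines (a toy calculation; the p values are the guesses stated in the comment, not measured quantities):

```python
from functools import reduce
from operator import mul

def chain_probability(steps):
    """Probability that every step in a chain of independent steps comes true."""
    return reduce(mul, steps, 1.0)

# Mitigate-now chain: CO2 rises, impact is dangerous, command-economy fix works.
mitigate_now = chain_probability([1.0, 0.01, 0.01])   # ~0.0001

# Adapt-later alternative: either nothing bad happens (p = 0.99),
# or it does (p = 0.01) and local mitigation succeeds (p = 0.5).
do_nothing = chain_probability([1.0, 0.99])           # ~0.99
adapt_later = chain_probability([1.0, 0.01, 0.5])     # ~0.005
print(mitigate_now, do_nothing + adapt_later)
```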
Peter

Marcus
April 8, 2016 9:02 am

..WOW, this is definitely the best and most informative analysis of the whole Climate Change / Glo.Bull Warming picture !! Thank You, I will spread it around !!

April 8, 2016 9:15 am

the only empirical evidence that relates warming to fossil fuel emissions is a correlation between cumulative values. this correlation is spurious.
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2725743
also no correlation between the rate of change of atmos co2 and the rate of fossil fuel emissions
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2642639
i think the fundamental issue is that we don’t know natural flows well enough to measure the effect of fossil fuel emissions
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2654191

Reply to  chaamjamal
April 8, 2016 11:40 am

Chaamjamal,
“also no correlation between the rate of change of atmos co2 and the rate of fossil fuel emissions
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2642639
This paper is superb. I think it will make Engelbeen’s head explode…

Reply to  Michael Moon
April 8, 2016 1:09 pm

Michael,
No reason at all to explode, only to shake my head for such stupidity…
Take the second paper:
“…is likely to be spurious because it vanishes when the two series are detrended.”
What happens if you detrend the emissions and the increase in the atmosphere?
You remove the correlation and the causation in this case, which is in the double trend of the emissions compared to the increase in the atmosphere…
All that remains is a huge correlation between temperature variability and CO2 rate-of-change variability, which is a correlation of only the noise around the trend: +/- 1.5 ppmv around a trend of 80+ ppmv over the past 55 years, a variability which zeroes out after a few years…

CaligulaJones
April 8, 2016 9:19 am

Thank you for a brilliantly clear view of why I call myself a “lukewarmer”.

Reply to  CaligulaJones
April 10, 2016 7:27 pm

!(:-)!

Jpatrick
April 8, 2016 9:25 am

“The annual rise and fall [of CO2 within a year] is due to deciduous plants growing or resting, depending on the season. ”
I used to presume this was true, but really? For this to happen, deciduous plant activity would have to be considerably greater in either the Northern Hemisphere or the Southern Hemisphere. Is that the case?

ECB
Reply to  Jpatrick
April 8, 2016 9:39 am

Yes, the Arctic tundra is not duplicated in the southern hemisphere.

Reply to  ECB
April 8, 2016 10:00 am

Neither is the boreal forest. I’m looking at it now. It’s quite extensive, and it’s tough as heck to walk through.

Reply to  ECB
April 8, 2016 10:22 am

Indeed, the forests of Siberia, Canada, Alaska, and the northern US are not duplicated in the Southern Hemisphere. And the tropical forests of Amazonia, the Congo, Malaysia-PNG, and Indonesia exhibit a photosynthesis cycle not linked to the seasonal component.
A recent Science article on Amazonia photosynthesis describes this well:

Abstract:
In evergreen tropical forests, the extent, magnitude, and controls on photosynthetic seasonality are poorly resolved and inadequately represented in Earth system models. Combining camera observations with ecosystem carbon dioxide fluxes at forests across rainfall gradients in Amazônia, we show that aggregate canopy phenology, not seasonality of climate drivers, is the primary cause of photosynthetic seasonality in these forests. Specifically, synchronization of new leaf growth with dry season litterfall shifts canopy composition toward younger, more light-use efficient leaves, explaining large seasonal increases (~27%) in ecosystem photosynthesis. Coordinated leaf development and demography thus reconcile seemingly disparate observations at different scales and indicate that accounting for leaf-level phenology is critical for accurately simulating ecosystem-scale responses to climate change.
http://science.sciencemag.org/content/351/6276/972

ExNOAAman
Reply to  Jpatrick
April 8, 2016 10:43 am

That’s right. The NH is largely land mass (with plants), and the SH is mostly water (ocean), so you’ve figured it out.

CaligulaJones
Reply to  ExNOAAman
April 8, 2016 10:51 am

“That’s right. The NH is largely land mass (with plants), and the SH is mostly water (ocean), so you’ve figured it out.”
Which is why I get a bit ornery when the warmists try to raise the old “the Medieval Warming Period and Little Ice Age were only local/regional/NH” crap.

Jpatrick
Reply to  Jpatrick
April 8, 2016 10:47 am

Joel. Many thanks for that.

Reply to  Jpatrick
April 8, 2016 2:18 pm

Look at a globe, and note whether there is a big difference between the land area in the Northern vs. Southern Hemisphere. Subtract out the completely barren continent of Antarctica, and there is very little land surface outside of the tropics in the S.H.

Reply to  Menicholas
April 10, 2016 7:31 pm

& additionally, one of the largest SH land masses (Australia) is pretty “plant poor” vs. your average NH surface….

Philip Lloyd
Reply to  Jpatrick
April 9, 2016 3:56 am

There is far more land in the Northern hemisphere.

Reply to  Jpatrick
April 10, 2016 7:50 pm

Is it more due to the temperature change influence on the Oceans (the biggest sink/source of CO2, correct?)? How much CO2 is gained/released by the Ocean per degree Temperature difference between Summer/Winter does anyone know? That should cause some seasonal fluctuation in CO2 (& in both hemispheres). Also, presume NH forests (at least above the Tropics) are net sinks/absorbers of CO2 in Summer (depending on growth rates, rainfall events, other seasonal patterns as per another post here…) but more neutral in Winter… Then factor in more emissions from eg. Air Conditioners in Summer but presumably a lot more fires burning in NH winter… And all with extreme differences between NH and SH, between Ocean vs. Land Uses and a plethora of other factors… Another extremely complex system…. How the hell could you model that with any hope of success?

Reply to  llydon2015
April 12, 2016 7:09 am

llydon2015,
The seasonal in/out fluxes are reasonably well known, based on O2 and δ13C changes: changes in CO2 uptake/release by vegetation make a lot of difference, as O2 is released with CO2 uptake, and as that uptake is preferentially of 12CO2, the δ13C level increases at the same time. For the oceans, these changes are much smaller. Over a full seasonal cycle, the quantities involved are:
~50 GtC out of and into the ocean surface
~60 GtC into and out of the biosphere.
As these fluxes are countercurrent over the seasons, and the seasons are opposite between NH and SH, the remaining changes in CO2 and δ13C are rather modest and mainly in the NH, where the changes in the NH biosphere are dominant. The changes in the SH are far smaller:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/month_2002_2004_4s.jpg

Trebla
April 8, 2016 9:34 am

A beautifully written and well reasoned article. We are an adaptive species. Far better to cope with any change that might come our way than to try to reverse a century of human progress and well being.

markl
April 8, 2016 9:40 am

Well said, especially for scientifically challenged people such as I. The last statement …”Attempts to influence global temperatures by controlling carbon dioxide emissions are likely to be both futile and economically disastrous.”…..brings us to the reason why AGW is being pushed. Economics, not temperature, is the reason.

Robert Barry
April 8, 2016 9:41 am

Most compelling and uplifting! Sadly, egos are winning the day . . .

ossqss
April 8, 2016 9:48 am

Thanks Professor Lloyd, nicely done.
I am hopeful this is viewed by some who have never gotten past the headlines.
Well worth the read for those activist types to better understand the strength of their arguments.
Where are all the resident naysayers trying to poke holes in this?

April 8, 2016 9:56 am

Excellent discussion!
The increase in our global temperature has affected the highest latitudes, the coldest times of year and the coldest times of day the most (you can see this in several metrics, like record high minimums far exceeding record high maximums).
Meteorology 101 tells us that warming that is greater at higher latitudes decreases the meridional (north-to-south) temperature gradient, and that a weaker gradient means less of many types of extreme weather.
The pressure gradient is reduced, along with wind. Jet streams are weaker. Energy for mid-latitude cyclones is reduced. The number of violent tornadoes (and of the most damaging severe thunderstorms) is reduced.
Observations confirm this.
Increasing the global temperature has, however, allowed the atmosphere to hold more moisture. This HAS increased extremely heavy rain amounts and flooding for many events with a synoptic-scale setup similar to those of the past.
With additional low-level moisture and precipitable water, many rain events, even smaller-scale ones, have the potential to dump more rain, and faster. Again, this is meteorology 101.
Overall, the effects of increasing CO2 on our atmosphere, the weather and life on this planet, viewed objectively, are a net positive. In fact, the past 3 decades have featured the best weather/climate and growing conditions on this planet in almost 1,000 years (since the Medieval Warm Period, which was warmer than this in many places).
If you factor in the contribution from photosynthesis, the current atmosphere is benefiting life on this greening planet more than the Medieval Warm Period’s atmosphere did, by a wide margin.

John W. Garrett
April 8, 2016 10:05 am

Fantastic!
Well done !!

D Lake
April 8, 2016 10:14 am

Good article. I have a question. Is it possible that an increase in CO2 in the upper atmosphere creates a slight cooling by re-radiating the sun’s long-wave radiation back into space and allowing less to reach the surface of the earth?

Reply to  D Lake
April 8, 2016 1:18 pm

A basic application of the S-B equation shows that at -40C (lower troposphere) the theoretical emission of a black body is a fraction of that of a body at 15C (surface). That T^4 makes a big difference. And applying it to a grey body, with emissivity included and a CO2 emissivity of essentially zero, turns the entire re-radiation story into nonsense. It’s the water vapor that runs the greenhouse, not the pitiful ppm of GHGs.
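The T^4 point can be illustrated with a quick Stefan-Boltzmann calculation (a black-body sketch at the two quoted temperatures; the claim about CO2 emissivity is the commenter’s, not computed here):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_exitance(t_celsius):
    """Black-body radiant exitance (W/m^2) at a Celsius temperature."""
    return SIGMA * (t_celsius + 273.15) ** 4

# A layer at -40 C radiates well under half as much per unit area
# as a surface at 15 C, because emission scales as T^4.
ratio = blackbody_exitance(-40.0) / blackbody_exitance(15.0)
print(round(ratio, 2))  # ~0.43
```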

Reply to  D Lake
April 8, 2016 1:21 pm

D Lake,
The sun’s IR is a lot nearer the visible spectrum than the IR emitted at the earth’s temperature. That means CO2 is transparent to the sun’s (IR) spectrum, while opaque (in some narrow bands) to the earth’s radiation… Water, on the other hand, may absorb and re-emit to space some of the incoming near-IR from the sun. See the spectra here:
http://earthguide.ucsd.edu/eoc/special_topics/teach/sp_climate_change/p_solar_radiation.html

jorgekafkazar
Reply to  Ferdinand Engelbeen
April 8, 2016 2:05 pm

Thanks, F.E.

April 8, 2016 10:15 am

Re: Five points about climate change, 4/8/16, says,
The carbon dioxide which is inevitably emitted accumulates in the atmosphere and the result is “climate change.”
The assumption that CO2 accumulates in the atmosphere is essential to the AGW story, but it is invalid. It is essential in order that the Keeling Curve, which is not CO2 measurements at MLO but a doubly smoothed representation of those measurements, be attributed to a global phenomenon. For the story, CO2 must be a long-lived greenhouse gas, and be well-mixed in the atmosphere, two assumptions seriously challenged, if not contradicted, by satellite images of atmospheric CO2.
IPCC termed the Keeling Curve the master time series, which it then used to calibrate its global network of CO2 measuring stations into agreement. However, MLO measurements are local. The Observatory sits in the seasonal wind-modulated plume of Eastern Equatorial Pacific outgassing from ancient bottom waters, a terminus of MOC/THC, the carbon pump. The flow is massive, estimated between 15 and 50 Sv, where the sum flow of all rivers and streams on Earth is 1 Sv.
The conjecture that CO2 accumulates in the atmosphere is based on the false presumption that the surface of the ocean is in thermodynamic equilibrium so that the surface layer has the corresponding thermodynamic equilibrium distribution of carbon molecules. Instead, the surface layer comprises bubbles of gaseous CO2 and aqueous CO2, forms not found in TE. The surface layer is the buffer accumulator for CO2, not the atmosphere. The ocean is not a bottleneck to the dissolution of atmospheric CO2.
“The rise in CO2 in the atmosphere is largely paralleled by the increase in fossil fuel combustion. Combustion of fossil fuels results in emission of CO2, so it is eminently reasonable to link the two increases.”
This combustion conjecture is one of IPCC’s two human fingerprints on atmospheric CO2. IPCC’s combustion curve parallels the Keeling Curve because IPCC graphed the two records on separate ordinates, then adjusted the scales of the ordinates to make the records appear parallel. This is chartjunk, designed to deceive, and the result cannot be honestly reproduced.
“The increase in CO2 in the atmosphere is incontrovertible.”
Henry’s Law of Solubility predicts the rise in global atmospheric CO2 based on an equivalent global average surface ocean temperature. Global warming increases atmospheric CO2, not the reverse. Global warming leads atmospheric CO2, not the reverse. The ocean regulates the concentration of CO2 in the atmosphere.
“Injecting into the air 12C-rich CO2 from fossil fuels should therefore cause the 13C in the air to drop, which is precisely what is observed: [Chart]. So the evidence that fossil fuel burning is the underlying cause of the increase in the CO2 in the atmosphere is really conclusive.”
The parallel between CO2 emissions and isotopic concentration is IPCC’s second human fingerprint on the climate. It, too, is false, created again by chartjunk.
“To conclude, our five steps have shown: [¶] • the combustion of ever increasing quantities of fossil fuel has boosted the carbon dioxide concentration of the atmosphere.”
The principle that every effect has a cause is causation, and that every cause must precede all its effects is causality. No evidence exists that the AGW theoreticians have even attempted to show causality, an essential in demonstrating science literacy.
The temperature side of the debunked Climate Change conjecture fares no better than the CO2 side. Just as the ocean regulates atmospheric CO2, cloud cover is a negative feedback to global warming that mitigates warming from any cause.
At the same time, cloud cover is a positive feedback to solar radiation, amplifying TSI by the burn-off effect. The AGW models don’t have dynamic cloud cover, so they miss its two feedbacks, the largest feedbacks in all of climate. The Global Average Surface Temperature (GAST) follows solar radiation, well-represented by a simple, two-pole transfer function. The accuracy of that model is quite close to the accuracy of IPCC’s 22-year smoothed representation of GAST. And of course, there is no human fingerprint on the Sun (either).

Bartemis
Reply to  Jeff Glassman
April 8, 2016 12:40 pm

Agree with ‘most everything you say here.
“The Global Average Surface Temperature (GAST) follows solar radiation, well-represented by a simple, two-pole transfer function.”
Do you have a link to share?

Pauly
Reply to  Jeff Glassman
April 8, 2016 1:12 pm

Jeff Glassman,
The following link is to a paper that specifically investigated causality between atmospheric CO2 concentrations and temperature:
http://dx.doi.org/10.4236/ijg.2010.13014
Its conclusion was that increasing temperature always precedes increasing atmospheric CO2 concentrations, but that increasing levels of atmospheric CO2 did not lead to increasing temperature.

Reply to  Pauly
April 8, 2016 2:03 pm

Pauly,
That paper says nothing about the cause of the increase of CO2 in the atmosphere, besides pointing to human CO2. They talk about temperature change preceding CO2 change, not about temperature preceding CO2 increase…

Reply to  Jeff Glassman
April 8, 2016 1:27 pm

Per IPCC AR5 Table 6.1 anthropogenic CO2 output between 1750 and 2011 (this number takes some major WAGing) is twice the 133 ppm concentration increase. Uh-oh, big problem. So IPCC AR5 created entirely new spontaneous magic unicorn sinks (Table 6.1) under which to sequester/sweep 57% of the anthro contribution. That leaves 43% or 4 Gt/y residual. Figure 6.1 goes from a net 0.4 Gt/y of sinks before 1750 to 2011 ocean & vegetation sinks of 2.8 Gt/y. That’s a yuuuuge change. Even a trivial change in the 45,000 Gt of reservoirs could easily account for the change.

Reply to  Jeff Glassman
April 8, 2016 1:56 pm

Jeff,
Quite a lot of remarks, but with very little basis…
1. “Keeling Curve the master time series, which it then used to calibrate its global network of CO2 measuring stations into agreement.”
This is not based on reality: there are more than 70 CO2 monitoring stations all over the oceans, of which 10 are run by NOAA; the rest are maintained by different people from independent organizations in different countries. The only calibration done (nowadays by NOAA) is of the calibration gases used to calibrate all measurement devices all over the world. But even so, other organizations like Scripps still make their own calibration gases…
2. “The surface layer is the buffer accumulator for CO2, not the atmosphere.”
Never heard of the Revelle factor? The chemistry of the ocean surface means that any change in the atmosphere is followed by only a ~10% change in the ocean surface. As the respective quantities are 1000 GtC (mixed layer) and 800 GtC (atmosphere), the ocean surface is a small sink (~0.5 GtC/year) for the atmospheric increase. The main sink is in the deep oceans, but these exchanges are limited (~40 GtC/year via the MOC/THC, ~3 GtC more sink than source).
3. “IPCC graphed the two records on separate ordinates, then adjusted the scales of the ordinates to make the records appear parallel.”
That simply is not true:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/temp_emiss_increase.jpg
4. “Henry’s Law of Solubility predicts the rise in global atmospheric CO2 based on an equivalent global average surface ocean temperature.”
Henry’s law predicts no more than 16 ppmv/°C for the change in steady state between ocean surface and atmosphere. That is all. It cannot explain the 110 ppmv increase above pre-industrial for a 0.8°C temperature increase. The net CO2 flux is from the atmosphere into the oceans, not the reverse, as DIC and pH measurements also show.
5. “The parallel between CO2 emissions and isotopic concentration is … false, created again by chartjunk.”
Sorry, fully based on measurements.
6. “every cause must precede all its effects is causality.”
Human emissions precede its effects in the atmosphere for every observation:
http://www.ferdinand-engelbeen.be/klimaat/co2_origin.html
Thus, Jeff, your story not only lacks evidence on about every point, it is a disservice to the many skeptics who want to tackle the AGW story where the real battle should be: the lack of a strong response of temperature to the increased CO2 levels…

jorgekafkazar
Reply to  Ferdinand Engelbeen
April 8, 2016 2:11 pm

And thanks again.

bw
Reply to  Jeff Glassman
April 8, 2016 3:27 pm

Correct. CO2 never “accumulates” in Earth’s atmosphere. All atmospheric CO2 is a small part of a global biogeochemical carbon cycle. That is, a river that flows from sources to sinks. Carbon dioxide seeping from deep ocean geology is grossly underestimated by the IPCC. The amount of anthropogenic CO2 in the global carbon cycle is only about 3 percent of the annual stream. The deep ocean CO2 sinks are also underestimated. Most of the increased CO2 in the atmosphere reflects shifts in the deep long term exchanges.
https://chiefio.wordpress.com/2009/02/25/the-trouble-with-c12-c13-ratios/
Changes in atmosphere CO2 13C/12C ratio are mostly natural. Only about 20ppm of atmospheric CO2 is due to fossil fuels, and that addition is entirely beneficial. The water cycle accounts for 95 percent of the so-called greenhouse effect. CO2 is about 4 percent according to Lindzen.

Reply to  bw
April 9, 2016 12:42 am

bw,
1. Human CO2 is not part of the cycle. It is additional. It is like adding an extra flow to a lake where a rather constant river flow goes in and out. Besides the small natural variability in the river flow, the extra flow, however small it is, will increase the lake’s level, no matter how high the river’s flow is.
2. Most of the deep vents CO2 remains in the deep oceans, not even measurable in the huge mass there.
3. Adding 3% to a rather constant cycle increases the amounts in the cycle, which disturbs it; some of the additional CO2 will be absorbed, but not all. Even at only 3%, human emissions are responsible for nearly the full increase, as the average sink rate is only ~1.5%.
4. The deep oceans are a net sink of ~3 GtC/year. Total sinks ~4.5 GtC/year. Human emissions are ~9 GtC/year…
5. The huge downward change in 13C/12C ratio in the atmosphere is all human: oceans releases, volcanoes, rock weathering,… all increase the ratio in the atmosphere and the biosphere as a whole is a net sink for CO2, thus also increasing the 13C/12C ratio by its preference for 12C. The only known huge source of low-13C is fossil fuels, besides a very small addition of natural methane.
6. About 8% of the CO2 in the atmosphere is of direct human origin, but that says next to nothing about the cause of the increase, as 20% of all CO2 in the atmosphere is exchanged with CO2 from other reservoirs, thus diluting the human “fingerprint”.
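The bookkeeping behind points 3 and 4 above can be sketched with the commenter’s round numbers (the ~2.13 GtC-per-ppmv conversion for the bulk atmosphere is an added assumption):

```python
GTC_PER_PPMV = 2.13   # approximate GtC per ppmv of CO2 in the atmosphere

emissions = 9.0       # human emissions, GtC/yr (commenter's figure)
net_sinks = 4.5       # ocean + biosphere net uptake, GtC/yr (commenter's figure)

increase_gtc = emissions - net_sinks        # carbon left in the air each year
increase_ppmv = increase_gtc / GTC_PER_PPMV
airborne_fraction = increase_gtc / emissions
print(round(increase_ppmv, 1), round(airborne_fraction, 2))  # ~2.1 ppmv/yr, ~0.5
```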

Bartemis
Reply to  bw
April 9, 2016 1:11 pm

“1. Human CO2 is not part of the cycle. It is additional. It is like adding an extra flow to a lake where a rather constant river flow goes in and out.”
It is peeing in the Meuse. Your suggestion that it is “rather constant” is pure speculation. You base it on ice core proxies for the past but: A) the ice core measurements cannot be independently verified B) as they always warn regarding mutual funds, past performance is not indicative of future results.
“4. The deep oceans are a net sink of ~3 GtC/year. Total sinks ~4.5 GtC/year. Human emissions are ~9 GtC/year…”
Ridiculous pseudo-mass balance argument again. Your persistent failure to understand why it is so abjectly flawed reveals that you do not understand the nature of dynamic systems, and your static accounting is not up to the task.

Reply to  bw
April 9, 2016 3:14 pm

Bart,
It is peeing in the Meuse. Your suggestion that it is “rather constant” is pure speculation.
It is ~6% of the natural cycle, or the whole town of Liège peeing in the Meuse…
That it is rather constant is proven by the small variability (maximum +/- 1.5 ppmv) around the trend, and by a rather constant seasonal cycle both in CO2 changes and in δ13C changes over the past 55 years. That is the period with the largest increase in CO2 amount and rate of increase of the past 800,000 years.
You base it on ice core proxies
No, it is based on current information; ice cores can’t give any information about the carbon cycle over the seasons, which involves the largest fluxes within a year or even inter-annually.
Indeed there is little possibility to confirm the ice core data, except for a 20 year overlap with direct measurements: they are much better than any proxy…
Ridiculous pseudo-mass balance argument again
As long as you can’t produce any evidence that the natural carbon cycle increased a fourfold over the past 55 years in lockstep with human emissions, increase in the atmosphere and net sink rate, your objections against the mass balance argument are of no value…

Reply to  Jeff Glassman
April 10, 2016 8:33 pm

“Henry’s Law of Solubility predicts the rise in global atmospheric CO2 based on an equivalent global average surface ocean temperature. Global warming increases atmospheric CO2, not the reverse. Global warming leads atmospheric CO2, not the reverse. The ocean regulates the concentration of CO2 in the atmosphere.”
Thanks for this Jeff Glassman. Would you have a reference or two that clearly demonstrates (1) that it is global temperature that drives atmospheric CO2 levels, and (2) that the oceans are the dominant driver (both as source and sink)? It would be much appreciated…

GTL
April 8, 2016 10:24 am

“Fortunately isotopes come to our aid. There are two primary plant chemistries, called C3 and C4. C3 plants are ancient, and they tend to prefer the 12C carbon isotope to the 13C. Plants with a C4 chemistry are comparatively recent arrivals, and they are not so picky about their isotopic diet. Fossil fuels primarily come from a time before C4 chemistry had evolved, so they are richer in 12C than today’s biomass. Injecting into the air 12C-rich CO2 from fossil fuels should therefore cause the 13C in the air to drop, which is precisely what is observed:”
So C-3 uses 12C, C-4 uses 12C and 13C, ergo emitting 12C reduces 13C. Can someone connect the dots for me? A great article but I do not grasp this piece of the puzzle.

GTL
Reply to  GTL
April 8, 2016 10:25 am

First paragraph should be in quotes.

GTL
Reply to  GTL
April 8, 2016 10:26 am

do not grasp

Reply to  GTL
April 8, 2016 10:51 am

GTL, let me try. It is the ratio of 12C to 13C. Fossil fuels produce mostly 12C because that’s what C3 photosynthesis plants consumed to produce the fossil fuels in the first place. 15% of terrestrial plant biomass is C4 (e.g. grasses and derivatives like corn and sugarcane). Dunno about aquatic plants. Those C4 grasses take up 12C and 13C equally. So over time, biological sinks reduce 13C relative to 12C. Only 12C is added by FF; 12C and 13C are both being biologically sinked.
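The ratio argument can be put in numbers with a simple two-pool mixing calculation. The figures below are illustrative assumptions only (roughly textbook values, not taken from this thread):

```python
# Mixing 13C-depleted fossil CO2 into the atmosphere lowers the bulk
# delta-13C ratio, even though the absolute amounts of both 12C and 13C rise.
def mix_delta13c(c_atm, d13c_atm, c_added, d13c_added):
    """Concentration-weighted delta-13C of the mixture (per mil),
    a good approximation for small isotopic contrasts."""
    return (c_atm * d13c_atm + c_added * d13c_added) / (c_atm + c_added)

# Illustrative numbers: ~280 ppmv of air at about -6.4 per mil,
# plus 120 ppmv of fossil-fuel CO2 at about -28 per mil.
mixed = mix_delta13c(280.0, -6.4, 120.0, -28.0)
print(round(mixed, 2))  # -12.88: more negative than the starting -6.4
```

The real observed decline is far smaller (~1.6 per mil, per later comments in this thread) because ocean and biosphere exchange continually replaces part of the depleted carbon; the sketch shows only the direction of the effect.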

GTL
Reply to  ristvan
April 8, 2016 11:15 am

So it is the ratio of 12C to 13C, not an absolute reduction of 13C. Thank you, that makes perfect sense.

Bartemis
Reply to  ristvan
April 8, 2016 12:34 pm

It’s just a rationalization, on which I will expand below.

Reply to  ristvan
April 8, 2016 3:36 pm

It’s a ratio change and an absolute reduction. The oceans take C-12 and C-13 out of the atmosphere.

April 8, 2016 10:29 am

Great article. One minor correction. The finest grid in CMIP5 is 110×110 km at the equator. Typical is 250×250 km. Regional weather models are run at 5 km typical; the finest are now 1.5 to 2 km grids. That is because they do not model the planet, only regions. And they only run out typically 7 days, not 100 years.
Doubling resolution (halving grid sides) increases computation by an order of magnitude, since time steps need to more than halve per the CFL constraint. Climate models have a 6-7 orders of magnitude computational limitation.
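The scaling above can be sketched as follows, assuming a 2-D horizontal grid, so each halving of the grid side quadruples the number of columns and, per CFL, at least doubles the number of time steps:

```python
# Rough cost scaling for refining a model grid under the CFL constraint.
def cost_factor(halvings):
    """Each halving of the grid spacing: 4x more columns (two horizontal
    dimensions) times 2x more time steps = ~8x the computation."""
    return 8 ** halvings

print(cost_factor(1))  # 8: close to one order of magnitude per halving
print(cost_factor(7))  # 2097152: ~2e6, e.g. going from ~250 km down to ~2 km
```

Seven halvings (roughly 250 km down to 2 km) gives a factor of ~2 million, consistent with the "6-7 orders of magnitude" quoted above.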

April 8, 2016 10:37 am

Yes, excellent article; however, considering IPCC AR5 Figure 6.1 & Table 6.1, I am of the opinion that the magnitudes of the reservoirs and sinks and the uncertainty bands are such that anthropogenic contributions are insignificant. Same with the power flux: 2 W/m^2 out of 340, which ebbs and flows with +/- 20 uncertainties. And the C13/14 concentrations are too small to “prove” much of anything. Volcanic activity is another possible source and could easily originate from ocean floor geothermal heat flow and volcanic vents, of which almost nothing is known.

April 8, 2016 10:49 am

I read your report and it is one of the best on this website so far this year.
I think your conclusion should have been first, used as an executive summary, but that’s a minor point
I do think you didn’t directly make these points, and should have:
(1) Anyone who thinks 1750 was the “ideal climate” , and considers any changes from 1750 to be bad news, is a “warmunist” or an imbecile (I repeat myself sometimes!).
(2) The average temperature in 1750 was near the coldest since the glaciation peak about 20,000 years ago, and well below average for our planet’s entire history (assuming estimates of the prior 4.5 billion years are reasonable)
(3) CO2 levels in the 1750 atmosphere may have been near the lowest ever (assuming estimates of the prior 4.5 billion years are reasonable)
(4) Humans generally did not like the cool centuries from 1300 to 1800, and left much anecdotal evidence describing their unhappiness about the cool weather.
(5) Green plants certainly did not prefer 250 ppmv airborne plant food in 1750 (I speak for them)
(6) In summary, we have warmunists looking at unpleasant, bad news climate in 1750, declaring that climate to be normal, and getting hysterical about any changes since 1750.
In fact, the warming and additional CO2 in the air since 1750 are GREAT NEWS for people and plants.
I do not subscribe to the left-wing belief that CO2 caused the warming since the late 1600 Maunder Minimum trough, in the absence of any scientific proof — I think it is more likely that natural causes of warming led to the oceans releasing more CO2 into the air.
Hating the warming and more CO2 in the air since 1750 has nothing to do with common sense, or science — but there has to be some reason(s) for the false demonization of CO2, and the false glorification of the climate in 1750!
In my opinion, the reasons are getting attention, money, and an excuse to expand the government and more tightly control the general public, in the name of “fighting” CAGW to save the planet — the imaginary left-wing crisis!
As an aside, warmunists completely ignore real time CO2 measurements, and use ice core proxies for CO2 levels from 1750 to 1958, which may not be as accurate as the old chemical measurement methodologies.
I think you ignored those measurements too.
YOU WROTE:
“There were only sporadic readings of CO2 before 1958, no continuous measurements. Nevertheless, there is sufficient information to construct a view back to 1850:”
MY COMMENTS:
Not correct.
More than 90,000 individual chemical measurements of CO2 levels were recorded between 1812 and 1961.
Warmunists ignore all of them.
For one example:
– Wilhelm Kreutz, working at a Giessen (Germany) meteorological station, used a closed, volumetric, automatic system designed by Paul Schuftan, the father of modern gas chromatography.
Kreutz compiled over 64,000 single measurements — about 120 samples per day — in an 18-month period during 1939–1941.
Kreutz’s measurements were precise enough to capture the seasonal CO2 cycle, and weather events around the city of Giessen.
Kreutz’s real-time measurements found persistent CO2 levels above 400 ppm over most of a 2-year period … while warmunists claim 300 ppm in that two-year period per ice core air bubble proxies.
Above sentences from my climate blog post — full post located here
http://elonionbloggle.blogspot.com/2016/04/historical-or-hysterical-co2-levels.html
Source data from here:
http://www.friendsofscience.org/assets/files/documents/CO2%20Gas%20Analysis-Ernst-Georg%20Beck.pdf

CaligulaJones
Reply to  Richard Greene
April 8, 2016 10:57 am

“(2) The average temperature in 1750 was near the coldest since the glaciation peak about 20,000 years ago, and well below average for our planet’s entire history (assuming estimates of the prior 4.5 billion years are reasonable)

(4) Humans generally did not like the cool centuries from 1300 to 1800, and left much anecdotal evidence describing their unhappiness about the cool weather.”
Exactly. There is a reason why so many Christmas carols have to do with the warmth of Christmas cheer compared to the bloody cold and snowy outdoors… they were written when it was “nice” and cheerful inside and life-threateningly dangerous out.
But seriously, when you ask any warmunist when was a better time than now, it’s exceedingly easy to find a dozen or so indicators that they wouldn’t want to live under. It would suck to be a vegan jonesing for a soy latte in Dickens’ London during Yule.

Reply to  CaligulaJones
April 8, 2016 2:23 pm

Yeah, but them half rotten and moldy veggies were all ORGANIC!

Reply to  Richard Greene
April 8, 2016 2:30 pm

Richard Greene,
Most of the historical wet chemical CO2 measurements are of no value at all, because they were taken in places (towns, fields, forests,…) too close to huge sources and sinks. They show local values which could change by hundreds of ppmv within a full 24 h day. That includes the Giessen data. There is a modern station, a few km from the historical one, which shows the problem:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/giessen_background.jpg
Further, the sampling in Giessen was not automatic; it was 3 times a day at 4 heights, or 12 samples a day, manually measured. The other measurements were temperature, humidity, wind speed and direction.
Only those samples taken over the oceans, or coastal with wind from the sea, are of value. These show observations around ice core CO2 levels for the same period.
See further my comment on the work of the late Ernst Beck:
http://www.ferdinand-engelbeen.be/klimaat/beck_data.html

Reply to  Ferdinand Engelbeen
April 9, 2016 12:28 pm

I don’t see why a measurement is automatically bad because it varies too much during a day.
That’s what averages are for.
Is it not true that a large majority — perhaps 75% — of Mauna Loa raw measurements are discarded because they “vary too much”?
There is more CO2 in the air than in the 1700s.
I think that’s great news for green plants.
But how accurate are the 1700s estimates based on climate proxies?
After all, we have a large number of people claiming 1750 climate was “normal”, and getting hysterical about any changes since then — so claims about 1750 CO2 levels have become very important (at least to them).
When throwing out real time measurements to rely on climate proxies (ice core air bubbles), the next question is whether the proxy data are better than the real time measurements — you say so — you are a bright man — so I will agree with you … but the more important question is whether the proxy data are good enough as a substitute for accurate real-time measurements?
In my opinion, climate proxy data are NEVER good enough to be a substitute for accurate real time measurements of average CO2 levels and the average temperature.
They are rough estimates.
How rough is the relevant question.
Warmunists using ice core data for historical CO2 “measurements” seem to reject any climate proxy data that do not benefit their CAGW theory, so I’m always suspicious.
Sometimes they twist and splice dubious climate proxy data to support CAGW (Mann Hockey Stick).

Reply to  Ferdinand Engelbeen
April 9, 2016 2:44 pm

Richard,
The main point is what you want to measure. In the case of CO2 one is mainly interested in the levels in the bulk of the atmosphere, as that influences the capture of IR in the atmosphere. Biologists may be interested in the diurnal CO2 changes and a lot of tall towers try to register the CO2 fluxes in and out vegetation to detail that carbon cycle.
In 95% of the atmospheric mass the CO2 levels are within +/- 2% of full scale, despite the fact that about 20% of all CO2 in the atmosphere is exchanged each year with other reservoirs over the seasons. That holds over the oceans and above a few hundred meters over land. In 5% of the atmosphere, that is in the first few hundred meters over land, CO2 is not well mixed. That is where most of the historical measurements were taken. The modern Giessen data show an average bias of +40 ppmv compared to background; the historical data show a variability of 68 ppmv (one sigma). Compare that to Mauna Loa: 4 ppmv for all raw data.
Indeed Mauna Loa has a luxury problem: they sample so many data that they can throw out any suspect data when the wind is downwind from the volcanic vents or upwind from the valleys. Still more than enough data are left to give the nice Keeling curve. It doesn’t matter whether you keep all the data or use only “clean” data; the difference is less than 0.1 ppmv in trend. Here for Mauna Loa and the South Pole in 2008 (mind the scales!):
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_mlo_spo_raw_select_2008.jpg
Ice core CO2 is not a proxy. It is CO2 as it was in the air at the bottom of the firn at the moment the bubble closed. That is not the composition of one year, but the average of 10-600 years, depending on how fast the bubbles were closed, which in turn depends on the snow accumulation rate. For the period around 1750 we have ice cores with a resolution of 20 years, sharp enough to detect a global change of 2 ppmv sustained over the full 20 years or a two-year peak of 40 ppmv.
Moreover there is a direct overlap of three ice cores (Law Dome) with the direct measurements at the South Pole (1960-1980) which were within the variability of 1.2 ppmv (1 sigma) of the ice core CO2 measurements.
Compare that to the wild variability of the historical data: 250 ppmv measured in the US in the same year that 450 ppmv was measured in Giessen. Beck’s compilation shows an increase of +80 ppmv over a period of 7 years and down again in 7 years. That should be clearly visible in all high to medium resolution ice cores and in about every available proxy, but it is not.
Thus only a rigorous selection of the best situated, high quality historical data may be of help. That is what Callendar had done… His average value (around 310 ppmv) was years later confirmed by ice core CO2 data…

Reply to  Ferdinand Engelbeen
April 9, 2016 2:46 pm

Of course, the second:
In 95% of the atmosphere, that is in the first few hundred meters over land
must be
In 5% of the atmosphere, that is in the first few hundred meters over land

Reply to  Richard Greene
April 8, 2016 2:51 pm

Agree Richard,
We have rescued life on this planet from dangerously low levels of atmospheric CO2.
Incredible that a group of humans would define these low levels of CO2 and the harshness of a cooler planet as the “ideal conditions” before humans began burning fossil fuels, and declare that the changes since then from burning fossil fuels (an increase in beneficial CO2 and modest, beneficial global warming) are detrimental.
Then they generate global climate models based on theoretical equations that project catastrophic results, which have failed to verify for two decades, and use them to hijack climate science to accomplish political objectives.

Reply to  Mike Maguire
April 9, 2016 5:28 pm

“before humans began burning fossil fuels” — interesting that the new CO2 sensor sats see huge flows from vegetated areas, and NOTHING from urban and industrial ones.

afonzarelli
Reply to  Mike Maguire
April 9, 2016 5:48 pm

Brian, the reason that CO2 flows from vegetated areas is that trees sequester carbon and then, when they die, release that carbon from those areas. So it looks as though trees produce carbon, but in reality they are just redistributing carbon…

Reply to  Mike Maguire
April 11, 2016 10:18 am

Reply to Ferdinand’s post above:
YOU WROTE:
“Ice cores CO2 is not a proxy.”
MY COMMENT:
I strongly disagree:
A global CO2 level measured real time with accurate instruments in 1750 is what we really want.
Since those data do not exist, something else must be used.
A “proxy” is something else (used as a substitute).
The air bubbles are a climate proxy for several reasons:
– The assumption must be made that CO2 in ice cores measured centuries later is still representative of the average global level of CO2 in 1750.
The air bubbles may have changed over time, and under pressure, or from the process of drilling ice cores, moving them from the field to a laboratory, and then making measurements in a laboratory.
The reasons I am concerned about trashing these historical real time measurements:
(1) They seem to have been dismissed very quickly, as if they had no value at all,
(2) I’ve found almost no discussion of them in the past ten years — your page from 2008 is a rare exception,
(3) I’ve found too little skepticism of the accuracy of ice core data, and
(4) Warmunists have a history of ignoring data that do not support their beliefs … and a history of ignoring or altering climate proxy data — with CO2 data from ice cores being a rare exception where they do use proxy data.
I still find it hard to believe 100% of Pettenkofer CO2 data are completely worthless … while ice core proxy data are treated as 100% accurate, with possible margins of error rarely discussed.
In my climate change reading since 1997, I have been skeptical the whole time, but usually not skeptical enough!
I agree there is more CO2 in the air since 1750, and I’m glad there is.
I wonder how far 280 ppm in 1750, from ice cores, is from what the CO2 level would have been (higher or lower) if accurate real time measurements could have been made in 1750?

Reply to  Mike Maguire
April 11, 2016 3:32 pm

Richard Greene,
One needs to make a distinction between a proxy, which is a measurement of some variable that is correlated with another variable one is interested in, and a direct measurement of the same variable, sampled in a locked but smoothed form.
Take e.g. stomata index data: that is a proxy for CO2 levels, as the density of stomata is inversely correlated with the local CO2 levels of the previous growing season, with all the problems that may give of local bias and other influences (like extra drought).
Take ice core CO2 data: those are direct CO2 measurements with the same equipment as is used for atmospheric measurements, with one main problem: the CO2 levels are not from one moment in time, but are the average over 10 to 600 years of sampling.
The assumption must be made that CO2 in ice cores measured centuries later is still representative of the average global level of CO2 in 1750.
Based on several tested and proven assumptions (like firn densification models), CO2 in several ice cores is representative for the average CO2 level over 1740-1760 with an accuracy of +/- 5 ppmv.
The air bubbles may have changed over time, and under pressure, or from the process of drilling ice cores, moving them from the field to a laboratory, and then making measurements in a laboratory.
Many of these objections were answered by the work of Etheridge et al. (1996) on three ice cores at Law Dome: three different drilling techniques were used (wet and dry), with the same results. Different flask types were used for local firn CO2 sampling: one type was rejected.
CO2 was measured in firn, sampled in flasks on site and in ice after transport by the same apparatus (GC), which shows that in the transition zone with part firn and part ice, CO2 levels were equal in both.
CO2 levels in ice cores with extreme differences in accumulation rates and temperature, thus extreme differences in depth / pressure for the same average gas age differ not more than +/- 5 ppmv.
There is a theoretical possibility that CO2 migrates through remaining water at the inter-crystalline surfaces of the ice. That was -theoretically- derived from the increased CO2 levels near melt layers of -relative- “warm” (-23°C) coastal ice cores. The only result was a broadening in resolution from 20 to 22 years at medium depth and to 40 years at full depth. No such migration is possible in the inland ice cores at -40°C.
(1) They seem to have been dismissed very quickly, as if they had no value at all
The main problem still is where the measurements were taken. If you measure in or near a forest, you can have levels of 250 ppmv on a sunny day in daylight and 600 ppmv at night of the same day… Many times there were no series, only sporadic samples once in a while. Most series were with one sample a day; only one had three samples a day at fixed times (but even 15 minutes earlier or later could give a difference of 40-50 ppmv!).
Thus really none of the measurements taken over land give you any clue about the real “background” CO2 levels of that time…
See it as the equivalent of taking temperature measurements from a thermometer hut above an asphalted parking lot. No “correction” can change that into a valuable series showing that the earth is warming -or not-.
Only those taken at sea, or coastal with wind from the seaside, are of value, and these show values around the ice core levels.
I’ve found almost no discussion of them in the past ten years
That is because most scientists, warmistas and skeptics alike, accept that the ice core data are far more accurate than the historical measurements… As for the recent past (the past ~10,000 years), the resolution of the ice cores is 10-40 years, so any deviation of CO2 from the average temperature-CO2 ratio like the current one would be noticed in every ice core…
I’ve found too little skepticism of the accuracy of ice core data
The only objections I have read were from 1992, by the late Dr. Jaworowski, who was a specialist in the radioactive fallout of the Chernobyl accident, including in ice cores, with no experience – as far as I know – of CO2 in ice cores. Most of his objections were refuted by the 1996 work of Etheridge. And Dr. Jaworowski made some remarks which were, to put it nicely, quite remarkable: the opposite of physical laws…
I wonder how far 280 ppm in 1750, from ice cores, is from what the CO2 level would have been (higher or lower) if accurate real time measurements could have been made in 1750?
Taking into account the ~20 year smoothing of the data and the 1.2 ppmv (1 sigma) repeatability of samples at the same depth within the same ice core, the CO2 level sampled in 1750 at suitable locations (like Mauna Loa) could be +/- 50 ppmv around the ice core value without being noticed in the ice core: +/- 40 ppmv for a one-year peak and +/- 10 ppmv for the best available historical technique. If they took four years of samples (even only 1 sample per year*), that should already be better: +/- 20 ppmv around the ice core, of which +/- 10 ppmv for a 4-year peak and 10 ppmv for the method. A sustained + or – 2 ppmv deviation over the full 20 years would be noticed in the ice core, where historical measurements should be +/- 10 ppmv around the + or – 2 ppmv.
The point is that the ice cores best resolution in that period is ~20 years, that gives averaged values, but that doesn’t change the average ppmv measured in the ice core over that period.
Compare that to current times where the year by year global variability is not more than +/- 1.5 ppmv around the trend. I see no direct reason that the historical variability was much larger.
(*) Ignoring the average +/- 4 ppmv seasonal variation and the +/- 4 ppmv local variability.
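The resolution numbers above follow from simple averaging: air that mixes over ~20 years before the bubbles close turns a short CO2 peak into a small sustained offset. A minimal sketch:

```python
# A bubble that averages `window_years` of air records only the mean
# contribution of a short-lived CO2 peak.
def smoothed_peak(peak_ppmv, peak_years, window_years=20):
    """Apparent offset in the ice core from a transient peak."""
    return peak_ppmv * peak_years / window_years

print(smoothed_peak(40, 1))  # 2.0: a 40 ppmv one-year spike looks like 2 ppmv
print(smoothed_peak(10, 4))  # 2.0: a 10 ppmv four-year peak, likewise
```

Either transient would sit right at the 2 ppmv detection threshold quoted above, which is why only sustained deviations can be ruled out with confidence.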

Jim G1
April 8, 2016 10:49 am

Excellent article.

Marcus
April 8, 2016 10:54 am

Anthony / Phillip … “Nevertheless, the 3.3±0.3mm/a rise in sea level is entirely consistent with the tide gauge record:”
Should that be ” 3.3±0.3mm/YEAR rise ” ???

Reply to  Marcus
April 8, 2016 11:55 am

3.3 +/- 0.3 mm/a – that “a” is annum, is my interpretation.

Marcus
Reply to  Nicholas Schroeder
April 8, 2016 12:47 pm

Never seen it used that way, but, makes sense !! And no, I am not going to correct YOUR sticky fingers !! LOL

pbweather
April 8, 2016 11:08 am

What an extremely concise and clear summary. Excellent.

April 8, 2016 11:35 am

The Goddard Institute for Space Studies is responsible for the GHCN. The Institute was approached for the metadata underlying the adjustments. They provided a single line of data, giving the station’s geographical co-ordinates and height above mean sea-level, and a short string of meaningless data including the word “COOL”.

The Goddard Institute for Space Studies is NOT responsible for the GHCN. GISS uses already adjusted GHCN data as its principal input. See GHCN to see more, and see who is actually responsible for GHCN.
I’m posting at the moment on GHCN and Gistemp adjustments. I’ve just made available the post I’m working on, Self-adjusting the Adjustments, although it is not yet completed, in order to refer to it here. You will also find numerous examples of GHCN and USHCN adjustment volatility in posts on my blog since last summer. The current incomplete post is intended to summarise this volatility and discuss its effect on the GHCN and Gistemp surface temperature records. I hope to complete the post in the next few days.

stan stendera
April 8, 2016 11:38 am

Wow, just WOW. A suggestion: why not make this excellent, excellent essay available to the global network of skeptic sites, to ensure it has the greatest possible distribution?

Reply to  stan stendera
April 8, 2016 2:25 pm

That would assume they care a single smidgen about the truth.

Reply to  Menicholas
April 8, 2016 2:26 pm

Sorry, wrong place for this comment.

commieBob
April 8, 2016 11:48 am

Those who seek to accuse us of carbon crime have therefore developed another tool – the global circulation model.

Actually, the models have an essential task in CAGW. If one applies the equations in a naive manner, one gets a temperature increase of a little more than one degree C. per doubling of CO2. That isn’t nearly scary enough and most people would even agree that it would be beneficial. To get a scary warming of 4 degrees C. it is necessary to have positive feedback and, if possible, a tipping point.
It is a ‘big deal’ that the models are basically incompetent. None of the data demonstrates positive feedback. Unless the alarmists can demonstrate that the models are competent, CAGW has no scientific credibility (not that it seems to matter).
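The “little more than one degree C” figure cited above is the standard no-feedback estimate; a sketch using textbook values (not derived in this thread):

```python
import math

# No-feedback warming per CO2 doubling: forcing dF = 5.35 * ln(C/C0) W/m^2
# (a standard approximation), times the Planck response of ~0.3 K per W/m^2.
forcing = 5.35 * math.log(2.0)   # ~3.7 W/m^2 for a doubling of CO2
planck_response = 0.3            # K per (W/m^2), no feedbacks included
print(round(planck_response * forcing, 1))  # 1.1 K per doubling
```

Getting from ~1.1 K to the scary 3-4 K requires the positive feedbacks supplied by the models, which is exactly the point being made here.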
The problem is that the modellers are barking up the wrong tree. Edward Lorenz was one of the fathers of climate modelling and arguably the most influential meteorologist in history, having laid the foundations of chaos theory. Here’s a quote:

Provided, however, that the observed trend has in no way entered the construction or operation of the models, the procedure would appear to be sound.

In other words the model has to be based entirely on the physics. It can’t be based on curve fitting. Ah but the models are tuned to match the historical climate. Oops. link

April 8, 2016 11:56 am

This post makes a ton of errors, but the one that stands out the most to me is:

Does carbon dioxide drive temperature changes? Looking at this evidence, one would have to say that, if there is any relationship, it must be a very weak one. In one study I made of the ice core record over 8,000 years, I found that there was a 95% chance that the temperature would change naturally by as much as +/- 2 degrees C during 100 years. During the 20th century, it changed by about 0.8 degrees C. The conclusion? If carbon dioxide in the atmosphere does indeed cause global warming, then the signal has yet to emerge from the natural noise.

No details are given for this “study,” but being familiar with ice core data lets me know this isn’t a fair or accurate summary of things. That it says “the ice core record” suggests this is built upon one ice core, or maybe even a few ice cores, but not that it is representative of any sort of global or even hemispheric record. All sorts of problems commonly found in temperature reconstructions almost certainly exist in this “study” as well. There is simply no way to generate results like this post describes for the planet.
It seems to me people are simply accepting this result because they like it. Had this “study” found results they didn’t like, I wager both the author of this post and most of the commenters here would say it was garbage and completely invalid. The data simply doesn’t exist to support this sort of conclusion.

Reply to  Brandon Shollenberger
April 8, 2016 1:07 pm

“Sources of uncertainty in ice core data” Eric J. Steig

The_Iceman_Cometh
Reply to  Brandon Shollenberger
April 9, 2016 12:47 pm

No need for the inverted commas – see Lloyd, Philip J. An estimate of the centennial variability of global temperatures. Energy & Environment, 26(3), pp. 417–424 2015. DOI: 10.1260/0958-305X.26.3.417

April 8, 2016 11:59 am

The undercurrent not-so-hidden agenda is the constant caterwauling about coal fired power generation and the obvious indifference to all the other sources.

Bartemis
April 8, 2016 12:35 pm

FTA:
“So the evidence that fossil fuel burning is the underlying cause of the increase in the CO2 in the atmosphere is really conclusive.”
Because it happens that the 13C drop is consistent with a rationalization of what should happen when fossil fuels are burned?
Hardly. If that were enough to go on, then the rationalization that temperatures rising is consistent with increasing atmospheric CO2 would be conclusive as well.
Neither are. In fact, the rate of change of CO2 is virtually a perfect match with temperatures, and this indicates that human inputs have little impact on overall atmospheric concentration.

Chris Hanley
Reply to  Bartemis
April 8, 2016 2:23 pm

That the CO2 concentration apparently began to rise a century or so before human emissions took off (and that data for the 12C to 13C ratio is confined to the satellite era) remains a bit of a puzzle but it’s a minor quibble to an overall incisive summary IMO.

Bartemis
Reply to  Chris Hanley
April 8, 2016 4:20 pm

Only consistent, Ferdinand. There are more things in heaven and Earth than are dreamt of in your philosophy.
“…which can be as good provided by 90% human, 10% temperature”
Not really as good, and only through a very contrived modeling effort, really little more than a complicated means of placing the data points where you want them. It’s like recutting the pieces of a jigsaw puzzle to make them fit, but the lines of the picture don’t mesh.
There is really no doubt about it, Ferdinand. Watch what happens when temperature anomaly starts to drop.

Reply to  Chris Hanley
April 9, 2016 12:54 am

Bart,
That your “fit” shows a good correlation between temperature variability and CO2 variability does not mean that the slope in CO2 is caused by temperature. The slope comes from an entirely different process than what causes the variability, as the latter is proven to be the response of (tropical) vegetation, while the slope in CO2 caused by vegetation is negative: it is a small, but growing, sink for CO2.
My fit is entirely based on observations: the transient response of vegetation and oceans to short and longer term temperature changes, and the net sink rate, which is directly proportional to the extra CO2 pressure in the atmosphere above steady state per Henry’s law.

Bartemis
Reply to  Chris Hanley
April 9, 2016 1:03 pm

“The slope is entirely from a different process than what causes the variability…”
Nonsense. There is no indication of it whatsoever. You are just guessing.
Your fit is based on shoe-horning the data into your preconceived model. It is nothing but wishful thinking.

Reply to  Chris Hanley
April 9, 2016 3:24 pm

Bart,
If you don’t understand that the opposite CO2 and δ13C changes prove that the main reaction to temperature variability is from vegetation, nor that the oxygen balance and satellites prove that the earth is greening, then there is no discussion possible.
My model is based on observations, yours is just “curve” fitting of two straight slopes…

Reply to  Bartemis
April 8, 2016 3:14 pm

Bart,
Not only consistent. There are only two sources of low-13C carbon on earth: fossil organics and recent organics (with one exception: inorganic methane may be formed in the deep earth crust, but that is an aside).
All inorganic CO2 has a 13C/12C ratio (δ13C as the measurement standard) which is higher than in the atmosphere. That means that any extra release of CO2 (or even throughput, or increase in cycle) from oceans, volcanoes, carbonate rock weathering,… will increase the δ13C level in the atmosphere. So these are not the main causes of the CO2 increase in the atmosphere, or the δ13C level would have increased, not decreased…
We see a firm drop of ~1.6 per mil in the δ13C level in the atmosphere (and in the ocean surface and vegetation) in exact ratio with human emissions since ~1850. Not only in ice cores, but also in coralline sponges and fallen leaves over the past 160 years, while the previous levels in the whole Holocene didn’t vary by more than +/- 0.2 per mil (ice cores). Coralline sponges also show no more than +/- 0.2 per mil natural variability between 1400 and 1850. These have a resolution of 2-4 years, far better than ice cores.
Thus the oceans can’t be the cause of the increase. The biosphere could still be the cause. But as the oxygen balance (and satellites) confirms, the biosphere is a net source of oxygen, thus a net sink for CO2, and preferentially of 12CO2, thus also not the cause of the 13C/12C ratio decline. The earth is greening…
You can wave this evidence away, but if you have one big, increasing source of low-13C carbon, and the decrease in the atmosphere exactly follows the expected δ13C decline in ratio to that source, and the CO2 increase in the atmosphere is in exact ratio with that source, then you need much better arguments than the arbitrary fit of two trends (which can be provided just as well by 90% human, 10% temperature) to prove that humans are not to blame…

Reply to  Ferdinand Engelbeen
April 9, 2016 8:24 am

“that humans are not to blame…”
If humans do a preferable thing why are they to blame?

Reply to  Ferdinand Engelbeen
April 9, 2016 1:39 pm

Rainer,
You are right, no “blame” here, only in the heads of the climate activists…

afonzarelli
Reply to  Bartemis
April 8, 2016 6:03 pm

Yes, Bart, this is embarrassingly shallow analysis on the part of the author. (right or wrong, he should have at least gone a little deeper into it…) Reading Ferdinand’s well-written 2010 piece a while back, i came away with three points about the C13 argument that seemed weak. The first and most obvious was given by several commenters: the atmospheric decrease in the C13 ratio may well be an indicator of one thing only, that we are burning fossil fuels. The second point was also mentioned by a commenter or two: natural warming before the industrial revolution produced lower C13 ratios, not higher. And lastly, one point that i figured out on my own: the oceans emit half of all co2 and there’s no reason why they shouldn’t be driving the C13 ratio higher as is…
Dr Spencer was always of the mindset that plankton could produce the same C13 ratio fingerprint as fossil fuels. (with a warming ocean there is less plankton thus more C12…)

Reply to  afonzarelli
April 9, 2016 1:28 am

Fonzie,
The first point is already quite convincing: both CO2 increase and δ13C decrease are in exact ratio to human emissions over at least the past 55 years. There is very little variability over the whole Holocene in δ13C (+/- 0.2 per mil) and suddenly with the industrial revolution the δ13C levels start to drop, nowadays already 1.6 per mil below pre-industrial.
Indeed the δ13C drop can be simple dilution, but that implies that the natural addition must be about three times human emissions, or a total increase in the atmosphere of four times the human addition. What is measured is an increase of only half the human addition. Doesn’t sound good for the dilution theory…
It would need a lot of coincidence for some natural low-13C cycle to have increased in complete lockstep with human emissions, which increased fourfold in the past 55 years. There is zero evidence that the natural carbon cycle(s) substantially increased in the past 55 years: recent residence time estimates are even slightly longer than older ones…
The second point, that natural warming before the industrial revolution produced lower δ13C levels, may be true (reference?), but over the whole Holocene the variability was not more than +/- 0.2 per mil.
“there’s no reason why they shouldn’t be driving the C13 ratio higher as is”
Yes, the (deep) ocean cycle will increase the current δ13C level of the atmosphere, as before the industrial revolution it was in equilibrium with plant growth at around -6.4 per mil. It removes about 2/3 of the human “fingerprint”, but not all of it. A firm drop is still visible…
Dr. Spencer is not completely right: plankton is somewhat in between the C3 and C4 cycles of CO2 uptake, thus its δ13C is less reduced than that of most land plants. Despite that, the ocean surface is around +1 per mil, going up to +5 per mil where abundant bio-life is present, while the deep oceans are around zero per mil. Thus the oceans can’t be the cause of the δ13C drop in the atmosphere…
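The dilution arithmetic in this exchange can be sketched as a simple two-pool isotope mass balance. The pool sizes and δ13C values below are round illustrative assumptions (≈590 GtC pre-industrial atmosphere at -6.4 per mil, fossil carbon at -24 per mil), not measured inventories:

```python
def d13c_after_mixing(mass_air, d13c_air, mass_add, d13c_add):
    """Linear isotope mass balance for mixing two carbon pools.

    Delta values (per mil) are close enough to zero that a linear
    mixing rule is a good approximation.
    """
    return (mass_air * d13c_air + mass_add * d13c_add) / (mass_air + mass_add)

# A net fossil addition (-24 per mil) equal to 10% of a ~590 GtC atmosphere
# starting at -6.4 per mil reproduces the observed ~1.6 per mil drop:
print(d13c_after_mixing(590.0, -6.4, 59.0, -24.0))  # ≈ -8.0 per mil
```

Since cumulative human emissions are far more than 10% of the pre-industrial atmosphere, a drop of only 1.6 per mil is consistent with the point above that ocean exchange has thinned roughly 2/3 of the fingerprint.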

April 8, 2016 12:50 pm

“That is not to say that the Earth has not warmed over the last couple of centuries. Glaciers have retreated, snow-lines risen. There has been warming, but we do not know by how much.”
An ice cube on my kitchen counter continues to melt even if I turn the air conditioner all the way down.
Just sayin’.

Crispin in Waterloo but really in Riverhead
April 8, 2016 1:06 pm

Thanks Philip. Clear and concise.

higley7
April 8, 2016 1:35 pm

Hey, what about Ernst Beck’s 30,000 chemical CO2 bottle readings? Direct chemical evaluations of CO2 showed that CO2 was significantly higher than now during three periods of the last 200 years. It was over 550 ppm as recently as the 1940s. Using that misleading and data-deficient graph to plot anything earlier than 1950 is lazy.

Reply to  higley7
April 8, 2016 2:27 pm

H7, while you are correct about Beck’s data, it is unfortunately suspect in terms of being representative, especially the high readings. CO2 can vary locally a great deal near the surface (where all those bottles were sampled) and diurnally, thanks to things like photosynthesis, soil composition/dampness and microbial composition. Higher in the atmosphere, though, it is supposedly well mixed.

Crispin in Waterloo but really in Riverhead
Reply to  ristvan
April 8, 2016 4:33 pm

I have personally measured CO2 before 9 AM in the north end of Ulaanbaatar at 800 ppm, rising to 1100 briefly about 11 AM then dropping rapidly as the accumulated CO2 from the night’s burning rose up the hillsides in the faint morning breeze. We measure background CO2 to subtract it from combustion experiments. The city is in a valley and there is a nearly nightly inversion in winter. The source is domestic coal combustion (not power stations).
The cautions about the chemical measurements are not based on the idea that the measurements were in error, only that the location was influenced by local combustion.

Reply to  higley7
April 8, 2016 3:27 pm

higley7,
Sorry, see my response to Richard Greene
The historical measurements were fairly accurate (+/- 10 ppmv), but most were taken too near huge sources and sinks to give any insight into the real “background” CO2 levels of that time…
Ice cores are quite accurate in CO2 level, but they are averaged over 10-600 years, depending on local snow accumulation rates. Neither ice cores nor any CO2 or δ13C proxy (stomata data, coralline sponges) shows anything special around the 1942 “peak” in the late Beck’s data, only a steady increase in CO2 and a steady decline in δ13C. Which simply means that there was no such peak.

The_Iceman_Cometh
Reply to  higley7
April 9, 2016 12:52 pm

Sorry about the data deficiency – you can find the data in AR4

jlurtz
April 8, 2016 2:14 pm

Solar basis for everything:
1) Since 1650 the Sun has essentially been on average increasing in output as indicated by the Sunspot record.
2) The Solar Cycle 24 is the smallest in recent record [last 100 years][1913 was about equal].
3) The Earth has warmed for the last 300 years.
4) The Oceans, as an integral of energy storage, are the warmest ever.
5) As Solar output decreases, based on the Solar Flux proxy [energy actually reaching the Earth], we will have warm oceans and cool land masses. This is due to the land masses not retaining as much heat as the oceans.
6) Since El Nino/La Nina are both about stored ocean heat [cold], reduced Solar output means weaker Trade Winds and a reduced Hadley Cell.
7) Reduced Hadley Cell means that the Jet Streams will move further South in the North and further North in the South. Basically, the cold will arrive.
8) Calculations show about 0.1C/2 years.
9) In 10 years, the Global temperature could be down 0.5C.

jorgekafkazar
April 8, 2016 2:19 pm

“… There is a hope that, while the models may not be perfect, if different people run different tunings at different times, a reasonable range of predictions will emerge, from which some idea of the future may be gained…”
This is a feeble emulation of a Monte Carlo method. You might call it the Mountebanko method.

April 8, 2016 2:33 pm

Objective analysis (not funded by government grants or energy companies) reveals that change to the amount of CO2 in the atmosphere (and thus burning fossil fuels) has no significant effect on climate.
A conservation of energy equation, employing the time-integral of sunspot number anomalies (as a proxy) and a simple approximation of the net effect of all ocean cycles achieves a 97% match with measured average global temperatures since before 1900. Including the effects of CO2 improves the match by 0.1%. http://globalclimatedrivers.blogspot.com

April 8, 2016 2:58 pm

Open your lens out to “when we live” and the actuarial calculus becomes even more interesting.
Let us assume that we now live in the Anthropocene. We will do this by acceding to the insulating capacity of greenhouse gases as described by the IPCC et al. We do this because it actually reverses the case for removing GHGs.
How?
Because if we now live in the Anthropocene, then wouldn’t that mean that the Holocene is over? In fact, it should be, given the Holocene’s present 11,719-year age. So we are living in the Anthropocene extension of Holocene interglacial warmth.
“We will illustrate our case with reference to a debate currently taking place in the circle of Quaternary climate scientists. The climate history of the past few million years is characterised by repeated transitions between `cold’ (glacial) and `warm’ (interglacial) climates. The first modern men were hunting mammoth during the last glacial era. This era culminated around 20,000 years ago [3] and then declined rapidly. By 9,000 years ago climate was close to the modern one. The current interglacial, called the Holocene, should now be coming to an end, when compared to previous interglacials, yet clearly it is not. The debate is about when to expect the next glacial inception, setting aside human activities, which may well have perturbed natural cycles.” http://arxiv.org/pdf/0906.3625.pdf
For brevity I include just one such reference. So, widening the lens: were it not for the avowed prowess of GHGs to sustain interglacial warmth, what climate state is left?
Why, the cold glacial state, of course.
So in terms of insurance, we may have taken out the only policy that can delay or offset glacial inception, GHGs.
Now, who wants to cancel that policy?

littleoil
April 8, 2016 3:05 pm

An excellent clear paper. Well done. And great comments above.
If we used normal graphs rather than anomaly graphs, we would highlight that world temperature has risen just 0.8 degrees C since 1880. It is not unusual to have 5 or 10 degrees of variation over a single day, or by moving a few hundred km away.
Most land-based measurements have been tampered with, and the only reliable temperature measure has been the satellite record since 1979, which agrees with balloon measurements. The past 18 years 8 months show no rise despite humans emitting 25% of all CO2 emissions in that time. Weather stations are now located at airports near engines and air conditioners. Many of the old stations in remote cold areas have been closed. The warming measured has been mostly in warmer minimum (i.e. night) temperatures, which result from urban heat island effects.
The constant stream of claimed disasters is proved false again and again.
Governments get massive tax revenue from fuel levies on gasoline. Nobody has thought about how to replace this government revenue when we all change to electric cars. Currently this tax is about 40% of gasoline cost in Australia. Wait for the protests when electric cars have to pay for road costs!!
The warmists’ movement is overtly political. If they were serious they would stop mass air travel and adopt hydro and nuclear power.
Where is the problem?

TonyP
April 8, 2016 4:29 pm

The good professor had best be careful, the witch hunt is going full bore ahead. The socialists are seeking all dissenters so as to burn them at the stake.

TA
April 8, 2016 4:57 pm

Looks like Cape Town had a temperature highpoint in the 1930’s, according to the raw data.
We can include Cape Town in the “worldwide” heatwave of the 1930’s.
Some people claim the extreme weather in the 1930’s was not a worldwide phenomenon, but the data seems to say otherwise.

Khwarizmi
April 8, 2016 7:00 pm

Fortunately isotopes come to our aid. […] Fossil fuels primarily come from a time before C4 chemistry had evolved, so they are richer in 12C than today’s biomass.
===========
Unfortunately, they don’t…
* * * * * * * * *
During the 1950’s, increasingly numerous measurements of the carbon isotope ratios of hydrocarbon gases were taken, particularly of methane; and too often assertions were made that such ratios could unambiguously determine the origin of the hydrocarbons. The validity of such assertions was tested, independently by Colombo, Gazzarini, and Gonfiantini in Italy and by Galimov in Russia. Both sets of workers established that the carbon isotope ratios cannot be used reliably to determine the origin of the carbon compound tested.
Colombo, Gazzarini, and Gonfiantini demonstrated conclusively, by a simple experiment the results of which admitted no ambiguity, that the carbon isotope ratios of methane change continuously along its transport path, becoming progressively lighter with distance traveled. Colombo et al. took a sample of natural gas and passed it through a column of crushed rock, chosen to resemble as closely as possible the terrestrial environment.27 Their results were definitive: The greater the distance of rock through which the sample of methane passes, the lighter becomes its carbon isotope ratio.
The reason for the result observed by Colombo et al. is straightforward: there is a slight preference for the heavier isotope of carbon to react chemically with the rock through which the gas passes. Therefore, the greater the transit distance through the rock, the lighter becomes the carbon isotope ratio, as the heavier is preferentially removed by chemical reaction along the transport path. This result is not surprising; contrarily, such is entirely consistent with the fundamental requirements of quantum mechanics and kinetic theory.
[…]
Galimov demonstrated that the carbon isotope ratio of methane can become progressively heavier while at rest in a reservoir in the crust of the Earth, through the action of methane-consuming microbes.28 The city of Moscow stores methane in water-wet reservoirs on the outskirts of that city, into which natural gas is injected throughout the year. During summers, the quantity of methane in the reservoirs increases because of less use (primarily by heating), and during winters the quantity is drawn down. By calibrating the reservoir volumes and the distance from the injection facilities, the residency time of the methane in the reservoir is determined. Galimov established that the longer the methane remains in the reservoir, the heavier becomes its carbon isotope ratio.
The reason for the result observed by Galimov is also straightforward: In the water of the reservoir, there live microbes of the common, methane-metabolizing type. There is a slight preference for the lighter isotope of carbon to enter the microbe cell and to be metabolized. The longer the methane remains in the reservoir, the more of it is consumed by the methane-metabolizing microbes, with the molecules possessing lighter isotope being consumed more. Therefore, the longer its residency time in the reservoir, the heavier becomes the carbon isotope ratio, as the lighter is preferentially removed by methane-metabolizing microbes. This result is entirely consistent with the fundamental requirements of kinetic theory.
Furthermore, the carbon isotope ratios in hydrocarbon systems are also strongly influenced by the temperature of reaction. For hydrocarbons produced by the Fischer-Tropsch process, the δ13C varies from -65‰ at 127 C to -20‰ at 177 C.29, 30 No material parameter, the measurement of which varies by almost 70% with a variation of temperature of only approximately 10%, can be used as a reliable determinant of any property of that material.
The δ13C carbon isotope ratio cannot be considered to determine reliably the origin of a sample of methane, – or any other compound.
http://www.gasresources.net/disposal-bio-claims.htm
* * * * * * * * *

Reply to  Khwarizmi
April 9, 2016 1:35 am

Khwarizmi,
You are right about methane and hydrocarbons, but emissions inventories take into account the different δ13C levels of the sources. That gives an average of about -25 per mil.
Not extremely important, it may average -20 or -30 per mil, but what is important is that it is the only known huge source of low-13C. Oceans and the biosphere as a whole are both sources of high-13C…

mikebartnz
April 8, 2016 7:01 pm

Quote *7 earthquake in Wellington, New Zealand, toppled the cathedral,*
It was Christchurch not Wellington

April 8, 2016 7:16 pm

Re: Ferdinand Engelbeen, 4/8/16 @ 1:56 pm.
Ferd,
¶¶1, 3, & 5) You said, Quite a lot of remarks, but with very little basis…, but everything I wrote is supported. I decided to show only about as much support as the author did. The same is true of your response where you show an irrelevant, unsourced graph of atmospheric CO2 and temperature anomaly on two different ordinates under the response That simply is not true:. What I was talking about specifically was IPCC’s Fingerprint chart, AR4, Figure 2.3.
http://www.rocketscientistsjournal.com/2010/03/_res/AR4_F2_3_CO2p138.jpg
Here IPCC demonstrates its fingerprint conjecture by graphing CO2 mixing ratio antiparallel to O2 depletion and Global emissions parallel to the C13 mixing ratio. Like scientist Henry Lee said when he drew a size 14 outline around some blood spatter, Sumpin’ wrong here. The so-called fingerprints exist because the graphs are scaled and adjusted to give the desired appearance.
You say,
there are more than 70 CO2 monitoring stations all over the oceans, of which 10 by NOAA, the rest are maintained by different people of independent different organizations and different countries. The only calibration done (nowadays by NOAA) is of the calibration gases, used to calibrate all measurement devices all over the world. But even so, other organizations like Scripps still make their own calibration gases…
As IPCC readily admits, it reviews and assesses the most recent scientific, technical and socio-economic information produced worldwide relevant to the understanding of climate change. It does not conduct any research nor does it monitor climate related data or parameters.
Furthermore, it says MLO data constitute the master time series documenting the changing composition of the atmosphere. … Later observations of parallel trends in the atmospheric abundances of the 13CO2 isotope and molecular oxygen (O2) uniquely identified this rise in CO2 with fossil fuel burning. References deleted (everywhere), AR4, ¶1.3.1.
It says, Careful calibration and cross-validation procedures are necessary … . TAR, ¶2.3.2. And
CO2 has been measured at the Mauna Loa and South Pole stations since 1957, and through a global surface sampling network developed in the 1970s that is becoming progressively more extensive and better inter-calibrated. Bold added, TAR, ¶3.5.1, p. 205. In case anyone found intercalibration unclear, it further refers to calibration procedures within and between monitoring networks. TAR, ¶3.5.3.
You claim that the only calibration done … nowadays is intrastation. Without some evidence that IPCC sources abandoned the necessary and progressively more extensive station inter-calibrations, your claim is most dubious.
¶2. The Revelle Factor was a failure — twice. It failed first in 1957 when Revelle and Suess introduced their fantastic parameter, but they admit in that same report that they could not make the data fit their desired result.
It might be tempting to assume that the effects found in the samples investigated and their individual variations are due to local contamination of air masses by industrial CO2 and that the world-wide decrease in the C14 activity of wood is practically zero. This, however, implies a too fast exchange rate and is inconsistent with the lower limit of τ(atm) given above. … An exchange time of 7 years, however, makes it necessary to assume unexpectedly short mixing times for the ocean. Revelle & Suess (1957) p. 23.
So in conclusion, they say
Present data on the total amount of CO2 in the atmosphere, on the rates and mechanisms of CO2 exchange between the sea and the air and between the air and the soils, and on possible fluctuations in marine organic carbon, are insufficient to give an accurate base line for measurement of future changes in atmospheric CO2. Revelle & Suess (1957), p. 26.
Once armed with gigatons of data, IPCC tried to rehabilitate the Revelle Factor, but uncovered Henry’s Law instead. This is revealed in IPCC’s Figure 7.3.10 of the AR4 Second-Order Draft. Inset (a) is the alleged temperature dependence of the Revelle Factor, but it is readily recognized as a scaling of Henry’s Coefficient for CO2 in water:
http://www.rocketscientistsjournal.com/2007/06/_res/F7-3-10.jpg
What IPCC published in order not to confuse the reader was a version of that figure with the temperature dependence, part (a), deleted. AR4, Figure 7.11:
http://www.rocketscientistsjournal.com/2007/06/_res/F7-11.jpg
The Revelle Factor is a failed conjecture, off the bottom of the scale of scientific models. For a full discussion, see
http://www.rocketscientistsjournal.com/2007/06/on_why_co2_is_known_not_to_hav.html
¶4. You say, Henry’s law predicts not more than 16 ppmv/°C for the change in steady state between ocean surface and atmosphere, with neither a calculation nor a citation. Clearly IPCC didn’t say that, because it relies on Henry’s law for nothing. And what is claimed by other sources cannot substitute for what the owner says about AGW. Furthermore, should the concentration of atmospheric CO2 not correspond to Henry’s Law, then climatology would have managed to disprove a law of physics.
What Henry’s Law teaches is the great capacity that exists in the ocean to regulate atmospheric CO2. The ocean is a massive reservoir for CO2, about 6,000 times as large as the annual CO2 emissions attributed to man. Try applying Henry’s Law to the MOC/THC circulation of 15 to 50 Sv, from bottom water at 0 to 4C, saturated with CO2, brought to the surface to be warmed to as much as 35C. You should find a huge flux potential for CO2 outgassing, one several times as large as the 92.8 GtC/yr IPCC credits to ocean outgassing. Afterward, the CO2-depleted surface ocean recharges to capacity as the current moves slowly toward one pole or the other according to the season.
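The Henry’s Law behaviour both sides of this exchange appeal to is easy to sketch numerically. The constants below are rough fresh-water values from standard compilations (seawater differs because of salinity and carbonate chemistry), so this is an illustration of the temperature dependence of solubility, not an ocean model:

```python
import math

KH_298 = 0.034        # mol/(L*atm) for CO2 in water at 25 degC (approximate)
VANT_HOFF_K = 2400.0  # temperature-dependence parameter for CO2, in kelvin

def henry_kh(temp_c):
    """Henry's constant at temp_c via the van 't Hoff relation."""
    t = temp_c + 273.15
    return KH_298 * math.exp(VANT_HOFF_K * (1.0 / t - 1.0 / 298.15))

def dissolved_co2(pco2_atm, temp_c):
    """Equilibrium dissolved CO2 (mol/L) at a given partial pressure (atm)."""
    return henry_kh(temp_c) * pco2_atm

# Cold polar water holds roughly twice the CO2 of warm tropical water:
print(dissolved_co2(400e-6, 0.0) / dissolved_co2(400e-6, 25.0))  # ≈ 2.1
```

This quantifies why cold sinking water carries more CO2 than warm upwelling water, which is the solubility contrast the MOC/THC argument above rests on.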
¶6. You say, Human emissions precede its effects in the atmosphere for every observation:, citing your own blog, but that is not a conclusion in any way recognized by IPCC.
The Revelle Factor was supposed to settle questions about Callendar’s conjecture:
CALLENDAR (1938, 1940, 1949) believed that nearly all the carbon dioxide produced by fossil fuel combustion has remained in the atmosphere, and he suggested that the increase in atmospheric carbon dioxide may account for the observed slight rise of average temperature in northern latitudes during recent decades. He thus revived the hypothesis of T. C. CHAMBERLIN (1899) and S. ARRHENIUS (1903) that climatic changes may be related to fluctuations in the carbon dioxide content of the air. … [¶]Subsequently, other authors have questioned Callendar’s conclusions… . R&S p. 18.
But that Callendar Effect, as the Greenhouse Effect was once called, also failed at launch:
Sir George Simpson expressed his admiration of the amount of work which Mr. Callendar had put into this paper. It was excellent work. It was difficult to criticise it, but he would like to mention a few points which Mr. Callendar might wish to reconsider. In the first place he thought it was not sufficiently realised by non-meteorologists who came for the first time to help the Society in its study, that it was impossible to solve the problem of the temperature distribution in the atmosphere by working out the radiation. The atmosphere was not in a state of radiative equilibrium, … . Callendar (1938) p. 237.
None of that mattered. Revelle & Suess plowed ahead, kicking off the First International Geophysical Year with a pitch for funds. Their single contribution was a catchy slogan:
Thus human beings are now carrying out a large scale geophysical experiment of a kind that could not have happened in the past nor be reproduced in the future. Id., p. 19.
Nothing in climate is in thermodynamic equilibrium, the only kind of equilibrium worth having. Nevertheless, discussions on climate persist like chaotic reverberations from some gigantic cosmic event. AGW, once a conjecture, belongs in the dustbin of pseudoscientific speculations as the single most expensive example.

Reply to  Jeff Glassman
April 9, 2016 2:46 am

Jeff,
About the graphs: I know, you are very aware of scales. In this case, the scales are not even important; what matters is that oxygen and CO2 are antiparallel, and so are the CO2 and δ13C trends. This can be calculated too, and it proves that fossil fuels are at the base. The graphs are only an illustration of the observations.
Careful calibration and cross-validation procedures are necessary
I don’t think anyone would trust blood samples analyzed by a lab that is NOT calibrated and cross-validated against standards for any such lab. The calibration and cross-validation are for the calibration gases and equipment only, not for the data, except for any corrections necessary if problems with the calibration gases or equipment are found.
Even so Scripps (and the Japanese) have their own calibration gases, equipment and flask samples independent of NOAA and find the same CO2 levels +/- 0.2 ppmv at Mauna Loa.
The Revelle Factor was a failure
Not that I know of… It is basic ocean chemistry, be it that in Revelle’s time the observation period was too short to provide the data needed to confirm the chemistry.
Since then over 3 million ocean samples have confirmed the existence of the Revelle factor.
It is simple to show that it exists: the increase in DIC (total dissolved inorganic carbon) in the ocean surface over longer time series is only ~10% of the increase in the atmosphere. Here for Bermuda:
http://www.biogeosciences.net/9/2509/2012/bg-9-2509-2012.pdf
DIC increased ~1.7% since 1984, while CO2 in the atmosphere increased ~14% in the same period…
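The Bermuda figures just cited imply a buffer factor in the canonical range; a minimal sketch of the ratio:

```python
def revelle_factor(rel_pco2_change, rel_dic_change):
    """Revelle (buffer) factor: relative change in seawater pCO2
    divided by the relative change in dissolved inorganic carbon (DIC)."""
    return rel_pco2_change / rel_dic_change

# ~14% atmospheric CO2 rise vs ~1.7% surface DIC rise since 1984:
print(revelle_factor(0.14, 0.017))  # ≈ 8.2
```

A value near 8 sits at the low end of the commonly quoted 8-15 range for the modern ocean.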
Henry’s law predicts not more than 16 ppmv/°C for the change in steady state between ocean surface and atmosphere, with neither a calculation nor a citation.
The literature gives 4-17 ppmv/°C (not my own search).
Here the influence of temperature differences:
http://www.ldeo.columbia.edu/res/pi/CO2/carbondioxide/text/LMG06_8_data_report.doc
with the formula to convert the pCO2 at the measurement temperature to the in situ temperature:
(pCO2)sw @ T(in situ) = (pCO2)sw @ T(eq) x EXP[0.0423 x (T(in situ) - T(eq))]
Moreover, the long-term historical T/CO2 ratio in ice cores is not more than 8 ppmv/°C. As that is based on the temperature change in polar air (via δ18O and δD in the ice), the global change is about 16 ppmv/°C.
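The quoted formula can be applied directly. A sketch (not the data report’s own code) showing that 1 °C of warming raises seawater pCO2 by ~4.3%, i.e. roughly 12 ppmv at a pre-industrial 280 ppmv and ~17 ppmv at today’s 400 ppmv, consistent with the 4-17 ppmv/°C literature range:

```python
import math

def pco2_at_temperature(pco2_eq, t_eq_c, t_insitu_c):
    """Seawater pCO2 corrected from the equilibration temperature to the
    in situ temperature, per the ~4.23%/degC exponential quoted above."""
    return pco2_eq * math.exp(0.0423 * (t_insitu_c - t_eq_c))

# One degree of warming at ~400 ppmv adds ~17 ppmv at steady state:
print(pco2_at_temperature(400.0, 20.0, 21.0) - 400.0)  # ≈ 17.3 ppmv
```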
Try applying Henry’s Law to the MOC/THC circulation of 15 to 50 Sv
The problem is not the massive quantities of water or Henry’s law, the problem is the extremely slow diffusion of CO2 in water. Even with the massive water flow, only 40 GtC/year is circulating between sources at the equator and polar sinks, the latter still extremely undersaturated in CO2. The difference is about 3 GtC more CO2 sink than source.
The 40 GtC/year deep ocean – atmosphere cycle is based on both the thinning of the human δ13C “fingerprint” and the decay of the 14C spike from the atomic bomb tests.
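The residence-time reasoning behind the 40 GtC/year figure can be illustrated with a back-of-envelope first-order decay; the ~800 GtC atmospheric inventory is a round assumption, not a measured value:

```python
import math

def anomaly_remaining(years, reservoir_gtc=800.0, exchange_gtc_yr=40.0):
    """Fraction of an isotopic anomaly (e.g. the bomb 14C spike) remaining
    after `years`, treating deep-ocean exchange as first-order removal
    with e-folding time reservoir / flux (~20 yr for these numbers)."""
    tau = reservoir_gtc / exchange_gtc_yr
    return math.exp(-years / tau)

# After one e-folding time (~20 years) about 37% of the spike remains:
print(anomaly_remaining(20.0))  # ≈ 0.368
```

A faster or slower exchange flux would shorten or lengthen the observed decay, which is how the spike constrains the cycle size.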
but that is not a conclusion in any way recognized by IPCC
The IPCC doesn’t need to explicitly recognize that conclusion, as they simply assume that humans are the cause of the CO2 increase, which is confirmed by every single available observation…
All alternative explanations I have heard of fail one or more observations, and thus should be discarded as untrue. Some skeptics shoot themselves in the foot by insisting that humans are not the cause of the CO2 increase in the atmosphere, despite all the evidence…

Reply to  Ferdinand Engelbeen
April 9, 2016 6:06 am

Ferdinand Engelbeen, 4/9/16 @ 2:46 am.
Ferd,
You write,
In this case, the scales are not even important, the important note is that oxygen and CO2 are antiparallel and so are CO2 and δ13C trends. Which can be calculated too and that proves that FF are at the base.
First, a point of order. Oxygen and CO2 are antiparallel, but CO2 and δ13C are parallel. Parallel and antiparallel are terms from geometry, manifest on the charts. Because each of those four records has a substantial trend, they can be arbitrarily made parallel or antiparallel by the drafter’s choice of scales. IPCC made them so. You say they can be calculated, too. But how do you show the parallelism you claim with parameter values of differing dimensions? And why didn’t IPCC do that instead of relying on chartjunk?
You say, I don’t think anyone would trust blood samples analyzed by a lab that is NOT calibrated and cross-validated with standards for any such lab. But that is not analogous to the problem at hand. IPCC is the physician saying you and I have identical blood panels. And that that is because the good doctor calibrated them to be the same.
IPCC’s AGW narrative is based on manufactured data. IPCC inter-calibrates, not intra-calibrates, CO2 records from different stations into agreement and then asserts that atmospheric CO2 concentration is well-mixed and global, represented by the Keeling Curve.
IPCC is playing the same game today with temperature data. It’s a variation of Mann’s game with his tree rings. Their brand of science is to bring the so-called data into agreement with their narrative.
You say,
The problem is not the massive quantities of water or Henry’s law, the problem is the extremely slow diffusion of CO2 in water. Even with the massive water flow, only 40 GtC/year is circulating between sources at the equator and polar sinks, the latter still extremely undersaturated in CO2.
Henry’s Law, even with Henry’s Coefficients, doesn’t provide the rate at which diffusion occurs. You claim it is extremely slow. I claim the contrary. It is instantaneous on climate scales, which are 30 years minimum. Neither your claim nor mine is based on data, but mine jibes with observations on how long it takes to carbonate or decarbonate bottled drinks.
And why would you say polar sinks are extremely undersaturated in CO2? Oceanographers put the period of the MOC/THC at 1 millennium, and the temperature at sinking around 0C. The time is ample for Henry’s Law to take effect, which would bring the water to its maximum concentration of CO2 just as it descends to the bottom.
Re the failure of the Revelle Factor, you said, Not that I know, but then explain why it failed in Revelle’s time! What are your observations about the fact that on measurement, the Factor turned out to be Henry’s Coefficient for CO2 in water, and that IPCC concealed that discovery?
AR4 mentions the Revelle factor five times, all on one page of analysis. AR4, ¶7.3.4.2 Carbon Cycle Feedbacks to Changes in Atmospheric Carbon Dioxide, p. 531. Its source is Revelle & Suess, 1957, except that ¶7.3.4.2 is where IPCC concealed the contemporary measurements showing the Henry’s Law dependence. Now in AR5, IPCC says only this:
The capacity of the ocean to take up additional CO2 for a given alkalinity decreases at higher temperature (4.23% per degree warming; Takahashi et al., 1993) and at elevated CO2 concentrations (about 15% per 100 ppm, computed from the so called Revelle factor; Revelle and Suess, 1957). Bold added, AR5, ¶6.3.2.5.5 Processes driving variability and trends in air-sea carbon dioxide fluxes, p. 498.
IPCC has retreated back to R&S (1957), omitting the contemporary measurements, and demoting the Revelle Factor with the label so-called. This is similar to IPCC’s retreat from Mann’s Hockey Stick, and its obfuscation in a spaghetti graph:
https://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-6-10.html
Love is never having to say you’re sorry.

Reply to  Ferdinand Engelbeen
April 9, 2016 8:37 am

Jeff,
CO2 and δ13C are anti-parallel. The CO2 levels in the atmosphere are going up, the δ13C levels in the atmosphere (and the ocean surface) are going down. The δ13C scale on the IPCC’s graph runs with more negative values upward…
In my opinion the graphs don’t matter at all, what matters is that the observations show that 1) human emissions are twice the increase in the atmosphere, 2) the increase is from a low-13C source, which excludes the oceans, volcanoes,… or any other inorganic CO2 source. 3) the oxygen balance shows that the low-13C source is not the biosphere, the only other main low-13C source.
These three points together show that human emissions are the main cause of the CO2 increase and δ13C decrease…
IPCC inter-calibrates, not intra-calibrates, CO2 records from different stations into agreement
Jeff, that is not true at all. NOAA (not the IPCC) calibrates the calibration gases used worldwide. That is all they do. They may manipulate their own 10 baseline stations (for what purpose?), but they have zero influence on the measurements of other organizations. Of course, if there were large discrepancies, there would follow discussions about why the discrepancies, which is normal for any lab worldwide for any type of tests. Whatever the case, I am pretty sure that Scripps would be very happy to catch NOAA on any error or manipulation, as they were responsible for the calibration gases until NOAA took over…
One can only hope that one day temperature data are as rigorously controlled as CO2 data…
BTW, the official “global” CO2 level is the average of near ground level stations, excluding Mauna Loa.
Henry’s Law, even with Henry’s Coefficients, doesn’t provide the rate at which diffusion occurs.
The transfer rate of CO2 into seawater was measured in tanks and lakes with different wind speeds, waves, etc… Without wind, it is near zero. It is directly proportional to the pCO2 difference between atmosphere and ocean surface per Henry’s law on one side, and to the square of the wind speed on the other. See:
http://www.pmel.noaa.gov/pubs/outstand/feel2331/maps.shtml
It is instantaneous on climate scales, which are 30 years minimum
Not really: the overall sink rate of CO2 (mostly into the oceans) is quite linearly proportional to the pCO2 difference between atmosphere and ocean surface for the weighted average seawater surface temperature. That gives the following e-fold decay rates for any extra CO2 in the atmosphere:
Extra pressure / net sink rate = e-fold decay rate:
In 2012: 110 ppmv / 2.15 ppmv/year = 51.2 years.
In 1988 (figures from Peter Dietze): 60 ppmv / 1.13 ppmv/year = 53 years.
In 1959: 25 ppmv / 0.5 ppmv/year = 50 years.
Surprisingly linear and not that fast: some 200 years to remove any pulse of CO2, whatever its source, to reach (un)steady state per Henry’s law again.
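The e-fold arithmetic above can be checked in a few lines (a minimal Python sketch; the figures are the ones quoted in the comment, and the loop labels are only for presentation):

```python
# For each year, divide the extra CO2 pressure above equilibrium by the
# observed net sink rate; the quotient is the e-fold decay time in years.
# All three (extra ppmv, sink rate) pairs are the ones quoted above.
data = {
    1959: (25.0, 0.5),
    1988: (60.0, 1.13),
    2012: (110.0, 2.15),
}

for year, (extra_ppmv, sink_rate) in sorted(data.items()):
    e_fold = extra_ppmv / sink_rate  # years
    print(f"{year}: {extra_ppmv:.0f} ppmv / {sink_rate} ppmv/yr = {e_fold:.1f} yr")
```

The three quotients all land near 50 years, which is the “surprisingly linear” point being made.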
The time is ample for Henry’s Law to take effect
The measured pCO2 at the polar sink places is down to 150 μatm. At the equatorial sources up to 750 μatm. In the atmosphere it is ~400 μatm. If the exchanges were fast, the surface waters everywhere on earth would all be near 400 μatm…
Factor turned out to be Henry’s Coefficient for CO2 in water
You are completely lost on that one…
Henry’s law gives a 100% change in free CO2 in seawater for a 100% change in the atmosphere.
The Revelle factor is quite different:
Revelle factor = (Δ[CO2] / [CO2]) / (Δ[DIC] / [DIC])
That is the instantaneous change in CO2 in the atmosphere over the instantaneous change in DIC in the ocean surface. While a 100% change in the atmosphere still gives a 100% change in free CO2 per Henry’s law, free CO2 is only 1% of all carbon species in seawater. The other 99% are bicarbonates and carbonates, which are also affected by the increase in the atmosphere via the dissociation constants, but these don’t double for a doubling of CO2 in the atmosphere, as H+ also increases and pushes the equilibria back toward free CO2. The net result is an about 10% change in DIC for a 100% change in the atmosphere. As is proven by observations…
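A minimal numerical reading of that definition (a sketch only; the 100% atmospheric change and ~10% DIC response are the figures quoted in this thread):

```python
# Revelle factor = (relative change in atmospheric/free CO2)
#                / (relative change in DIC in the surface ocean).
def revelle_factor(dco2_rel, ddic_rel):
    return dco2_rel / ddic_rel

# A 100% change in the atmosphere against the quoted ~10% DIC response:
rf = revelle_factor(1.00, 0.10)
print(rf)  # ~10, the commonly quoted magnitude of the factor
```

With the quoted ~10% DIC response, the factor comes out near 10, i.e. the surface ocean takes up far less extra carbon, proportionally, than the free-CO2 step alone would suggest.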

Richard
Reply to  Ferdinand Engelbeen
April 9, 2016 1:51 pm

Ferdinand wrote: “You are completely lost on that one. Henry’s law gives a 100% change in free CO2 in seawater for a 100 % change in the atmosphere. The Revelle factor is quite different: Revelle factor = (Δ[CO2] / [CO2]) / (Δ[DIC] / [DIC]) That is the instantaneous change in CO2 in the atmosphere over the instantaneous change in DIC in the ocean surface. While still a 100% change in the atmosphere gives a 100% change of free CO2 per Henry’s law, free CO2 is only 1% of all carbon species in seawater. The other 99% are bicarbonates and carbonates, which are affected too by the increase in the atmosphere via the dissociation constants, but these don’t double for a double CO2 in the atmosphere, as also H+ increases and pushes the equilibriums back to free CO2. The net result is an about 10% change in DIC for a 100% change in the atmosphere. As is proven by observations”.
Please note that the proportionality constant for CO2 includes all DIC, not just CO2. If the solubility of CO2 changed as pH changed and as the relative concentrations of DIC shifted (which occurs naturally as the concentration of CO2 changes) then the solubility of CO2 would be dependent on the concentration, and this is obviously not the case. kH in Henry’s law remains unchanged by the concentration. It is a linear law. No matter how much CO2 you add to water the proportionality constant remains the same (of course at very high concentrations Henry’s law generally doesn’t hold up as gases start to behave non-ideally). You only need to look at a Bjerrum plot to see that the total amount of CO2 (as DIC) in water is unchanged by changes to pH and the relative concentrations of DIC. Glassman is correct; there is no Revelle Factor, and Revelle himself later basically acknowledged this, stating: “It seems therefore quite improbable that an increase in the atmospheric CO2 concentration of as much as 10% could have been caused by industrial fuel combustion during the past century, as Callendar’s statistical analyses indicate”.

Reply to  Ferdinand Engelbeen
April 10, 2016 2:01 am

Richard:
Please note that the proportionality constant for CO2 includes all DIC, not just CO2.
The proportionality by Henry’s law is for CO2 (gas) in air and water only, not for bicarbonates and carbonates. (Bi)carbonate concentrations are influenced on one side by a CO2 increase in the atmosphere and thus in solution, but on the other side by the increasing H+ from the increasing dissociation of CO2/H2CO3 into bicarbonates and carbonates. The net result is a 10% increase in DIC, even including a 100% increase in free CO2 for a 100% increase of CO2 in the atmosphere.
You only need to look at a Bjerrum plot to see that the total amount of CO2 (as DIC) in water is unchanged by changes to pH and the relative concentrations of DIC.
The Bjerrum plot only shows relative quantities of CO2/bicarbonate/carbonate for any pH level. It doesn’t show absolute levels.
At 1 bar pure CO2 the solubility in fresh water is 3.3 g/kg at 0°C which gives a pH of ~3.9 with about 99% free CO2 in solution.
http://www.engineeringtoolbox.com/gases-solubility-water-d_1148.html
If you make a saturated solution of sodium bicarbonate (baking soda) in water, that can have some 70 g/kg at the same temperature:
http://www.tatachemicals.com/europe/products/pdf/sodium_bicarbonate/technical_solubility.pdf
As CO2 equivalents, that is 37 g/kg, ten times more than for pure CO2 in water, only a matter of pH (> pH 8). That contains less than 1% free CO2, the rest is bicarbonate and a little carbonate.
Add a strong acid to the bicarbonate solution until you have the same pH as for pure CO2 and lots of CO2 bubble up, back to 3.3 g/kg remaining in solution…
In every longer term series of DIC measurements the increase is around 10% of the atmospheric increase, thus proving the Revelle factor.
Revelle was not sure about his factor and had not enough data to prove it. He didn’t want to go against the “consensus” of that time that almost all human CO2 disappeared in the oceans. Nowadays we know better…

Richard
Reply to  Ferdinand Engelbeen
April 10, 2016 6:33 am


“If you make a saturated solution of sodium bicarbonate (baking soda) in water, that can have some 70 g/kg at the same temperature. As CO2 equivalents, that is 37 g/kg, ten times more than for pure CO2 in water, only a matter of pH (pH 8). That contains less than 1% free CO2, the rest is bicarbonate and a little carbonate. Add a strong acid to the bicarbonate solution until you have the same pH as for pure CO2 and lots of CO2 bubble up, back to 3.3 g/kg remaining in solution”
What you are describing here is, I believe, commonly referred to as a “bubble-bomb”: the dissolved bicarbonate dissociates into bicarbonate ions (HCO3-); an added acid such as vinegar dissociates into hydrogen ions (H+); the hydrogen ions then react with the bicarbonate ions to form carbonic acid (H2CO3), which immediately decomposes into CO2 and bubbles out of the water. A bubble-bomb is not evidence that the solubility of CO2 is dependent on pH.
“The proportionality by Henry’s law is for CO2(gas) in air and water only, not for bicarbonates and carbonates. Carbonates concentrations are influenced at one side by a CO2 increase in the atmosphere and thus solution, but on the other side by the increasing H+ by the increasing dissociation of CO2/H2CO3 into bicarbonates and carbonates. The net result is a 10% increase in DIC, even including a 100% increase in free CO2 for a 100% increase of CO2 in the atmosphere”.
The 1:50 proportionality constant for CO2 given by Henry’s law (which is what I was referring to) includes all DIC and is fixed at a given temperature. Take note there is no time-variable in the Revelle Factor formula (ΔPCO2ml/PCO2ml)/(ΔDIC/DIC) and the total amount of CO2 water can absorb based on that formula remains eternally constant over time until the relative concentrations of DIC change. Hence if the deep-ocean had the same DIC ratio as the surface-ocean the total amount of anthropogenic CO2 the whole ocean would absorb at equilibrium would only be 10%, in violation of Henry’s law. Henry’s law governs the solubility of gases in water and states that at a given temperature the amount of a gas dissolved in water is directly proportional to its partial pressure in the air adjacent to the solvent at equilibrium. The law can be described mathematically as: p = kHc. Where p is the partial pressure of the gas above the solute, kH is the proportionality constant (i.e. Henry’s constant) and c is the concentration of dissolved gas in the liquid. The constant of proportionality for CO2 at the average surface temperature of 15°C gives us a partitioning ratio between the atmosphere and the oceans of 1:50 respectively. If the Revelle Factor were correct and the solubility of CO2 changed as the relative concentrations of DIC shifted (which occurs when the partial pressure of CO2 changes) then kH in Henry’s law (and thus CO2’s partitioning ratio) would not be a constant for a given temperature. Note that Henry’s constant (in the equilibrium state of the law) is the ratio of the partial pressure of a gas at the liquid interface with the concentration of that gas dissolved in the liquid. Hence the constant does not change with concentration. It is a linear law. This means that the partitioning ratio of a gas (including that of CO2) is unchanged by changes to the atmospheric mass and can be multiplied up proportionally for any specified concentration in ppmv. 
Obviously this is in conflict with the Revelle Factor which suggests that the solubility of CO2 is affected by the relative concentrations of DIC as the partial pressure of CO2 changes. As the Handbook of Chemistry points out: “Solubilities for gases which react with water, namely ozone, nitrogen, oxides, chlorine and its oxides, carbon dioxide, hydrogen sulfide, hydrogen selenide and sulfur dioxide, are recorded as bulk solubilities; i.e. all chemical species of the gas and its reaction products with water are included”.
“The Bjerrum plot only shows relative quantities of CO2/bicarbonate/carbonate for any pH level. It doesn’t show absolute levels.”
No, but you may have noticed that the Bjerrum plot is a mirror-image of itself and no matter the pH the total combined concentration of DIC remains unchanged.

Reply to  Ferdinand Engelbeen
April 10, 2016 12:56 pm

Richard,
A bubble-bomb is not evidence that the solubility of CO2 is dependent on pH.
Sorry Richard, it doesn’t make any difference if you approach the solubility of CO2 in seawater, which is 90% (Ca/Mg) bicarbonate, 9% (Ca/Mg) carbonate and only 1% free CO2 from the CO2 addition side or start from the other side by adding an acid to a (Na or Ca/Mg) solution. The pH (and temperature) is what counts for the solubility.
In fresh water, near all CO2 is free CO2. Thus free CO2 and DIC are near equal. A doubling of CO2 in the atmosphere gives a doubling of free CO2 in water and a doubling of DIC.
In seawater free CO2 is only 1% of DIC. A doubling of CO2 in the atmosphere gives initially a doubling of free CO2 from 1% to 2% in DIC, that is all. Thanks to the chemical equilibriums, the ultimate increase in DIC is ~10%, that means that ~10 times more CO2 is dissolved in seawater (at pH ~8) than in fresh water for the same change in the atmosphere. See the graph at:
http://ion.chem.usu.edu/~sbialkow/Classes/3650/CO2%20Solubility/DissolvedCO2.html
The 1:50 proportionality constant for CO2 given by Henry’s law (which is what I was referring to) includes all DIC and is fixed at a given temperature.
Again Richard, you are completely mistaken on this. Henry’s law is about the partial pressure of any gas in the atmosphere vs. the pressure of the same gas in a liquid in direct contact with the atmosphere. It doesn’t talk about concentrations or pCO2 in the deep oceans or their enormous quantities (as you do with the 1:50 ratio), nor about time constants.
For 15°C and 1 bar CO2 in the atmosphere, that gives 2 g/kg. At 0.0004 bar, that is 0.8 mg/kg. For the total ocean surface layer, that is about 10 GtC, if that was fresh water. Yet it is 1000 GtC as it is seawater with a pH slightly over 8. The ocean surface layer is the only part of the ocean in direct fast contact with the atmosphere.
The deep oceans are a much slower player and need centuries to equilibrate with the atmosphere. Thus your 1:50 has nothing to do with Henry’s law: it is in fact 10:1 for the ocean surface per the Revelle/buffer factor, while the 1:50 is the ultimate distribution in mass of the human contribution between atmosphere and deep oceans, and that takes a lot of time.
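The ~10 GtC fresh-water figure can be roughly reproduced from the numbers given (a sketch: the ocean area and the ~100 m mixed-layer depth are my assumptions, while the 0.8 mg CO2/kg at 0.0004 bar comes from the comment above):

```python
# Back-of-envelope check of the "~10 GtC if it were fresh water" figure.
OCEAN_AREA   = 3.6e14       # m^2, assumed global ocean area
MIXED_DEPTH  = 100.0        # m, assumed mixed-layer depth
WATER_DENS   = 1.0e3        # kg/m^3, fresh water
CO2_PER_KG   = 0.8e-6       # kg CO2 per kg water (0.8 mg/kg, from the comment)
C_FRACTION   = 12.0 / 44.0  # mass fraction of carbon in CO2

water_kg  = OCEAN_AREA * MIXED_DEPTH * WATER_DENS
carbon_kg = water_kg * CO2_PER_KG * C_FRACTION
print(carbon_kg / 1e12)  # GtC (1 Gt = 1e12 kg): roughly 8, i.e. "about 10 GtC"
```

Against the ~1000 GtC actually held in the seawater surface layer, the two-orders-of-magnitude gap is exactly the buffered (bi)carbonate contribution being discussed.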
Note that Henry’s constant (in the equilibrium state of the law) is the ratio of the partial pressure of a gas at the liquid interface with the concentration of that gas dissolved in the liquid.
Richard, read that definition again word by word and think deeply over the last part: with the concentration of that gas dissolved in the liquid
“That gas” is not “that gas and all derivatives of that gas”. The definition of Henry’s constant is only about free CO2 as gas in the liquid, not about bicarbonates or carbonates.
Obviously this is in conflict with the Revelle Factor
Not at all: Henry’s law is for free CO2 in water only (no matter if that is fresh water or seawater). The Revelle factor is a measure of how much more CO2 can be dissolved in seawater than in fresh water.
Solubilities for gases which react with water, … , are recorded as bulk solubilities
Of course, as that is a practical solution for such gases. That has nothing to do with Henry’s law for such gases, as their solubility may be (much) larger than what Henry’s constant says…
no matter the pH the total combined concentration of DIC remains unchanged
Which is proven wrong, see the above link…

Richard
Reply to  Ferdinand Engelbeen
April 11, 2016 8:29 am

“Sorry Richard, it doesn’t make any difference if you approach the solubility of CO2 in seawater, which is 90% (Ca/Mg) bicarbonate, 9% (Ca/Mg) carbonate and only 1% free CO2 from the CO2 addition side or start from the other side by adding an acid to a (Na or Ca/Mg) solution. The pH (and temperature) is what counts for the solubility. In fresh water, near all CO2 is free CO2. Thus free CO2 and DIC are near equal. A doubling of CO2 in the atmosphere gives a doubling of free CO2 in water and a doubling of DIC. In seawater free CO2 is only 1% of DIC”.
How is that in any way a logical response to what I wrote? I pointed out that you cannot use a bubble-bomb to demonstrate that the solubility of CO2 is pH-dependent, because the bicarbonate and acid you are adding to the water create a surplus of CO2, and that is why CO2 bubbles out from the water. You then reply by saying “Sorry Richard, it doesn’t make any difference”, as if to imply I am the one misunderstanding things. But why am I surprised? It doesn’t matter how many percentages you put in your reply – you’re wrong. Henry’s constant (which gives a fixed partitioning ratio for CO2 of 1:50 between air and water respectively at 288 K) does not change with pH. It cannot change with pH. That would be impossible, because the pH of the water is dependent on the partial pressure of CO2, and Henry’s constant is unaffected by changes to the partial pressure. It’s that simple.
“Henry’s law is about the partial pressure of any gas in the atmosphere vs. the pressure of the same gas in a liquid in direct contact with the atmosphere. It doesn’t talk about concentrations or pCO2 in the deep oceans or its enormous quantities (as you do with the 1:50 ratio), neither of time constants”.
Henry’s law determines a specific fixed partitioning ratio between the amount of CO2 residing in air and the amount that will be dissolved in water at a given temperature at equilibrium. At the current mean ocean temperature of ~15C (at the surface), that partitioning ratio comes out to be ~1:50. However, this result assumes that the average temperature of the oceans is 15C and I think that figure is too high since the deep oceans (which comprise the bulk of the oceanic mass) are generally much cooler than this and approach zero near the bottom.
“While the 1:50 is the ultimate distribution between atmosphere and deep oceans in mass of the human contribution, but that takes a lot of time”.
OK, I read you as saying that that is what they claim. But claiming is not proof.
“Henry’s constant is only about free CO2 as gas in the liquid, not about bicarbonates or carbonates”.
Nonsense. See the quote from the Handbook of Chemistry above. If the solubility of CO2 was dependent on the relative concentrations of DIC (which change with partial pressure) then the solubility of CO2 would change as the partial pressure of CO2 changed, and this is obviously not the case. Henry’s law constant kH for CO2 in water at room temperature is around 3.1*10^-2 M/atm, and kH is unaffected by changes to partial pressure. It therefore follows that the solubility of CO2 as given by kH is unaffected by changes to partial pressure. For the last time: kH is unaffected by changes to partial pressure. You seem to want to deny this. That’s fine.
“Some skeptics shoot in their own foot by insisting that humans are not the cause of the CO2 increase in the atmosphere, despite all evidence”.
All the evidence? I don’t think so. Since AGW-advocates did not have enough real evidence ready to hand to make their case compelling they have set about manufacturing it with speculative models, even to the extent of fabricating basic data in this way too. (I know you always have to have the last-reply on these boards, so I’ll leave you to your own devices now).

Reply to  Ferdinand Engelbeen
April 11, 2016 1:45 pm

Richard,
Henry’s law is for the solubility of CO2 as gas from atmosphere in water and back. It is in equilibrium when the pCO2 of atmosphere and water are equal. Bicarbonate and carbonate ions have zero pCO2. All pCO2 is from free CO2 in the water, not from bicarbonates and carbonates, these play no (direct) role in Henry’s constant.
You don’t need to take my word for it; simply ask anyone with more knowledge of the solubility of reactive gases in (sea)water.
Henry’s constant (which gives a fixed partitioning ratio for CO2 of 1:50 between air and water respectively at 288K) does not change with pH.
pH doesn’t change Henry’s constant, but neither does that constant give a fixed partitioning of 1:50 between air and oceans. The 1:50 is the mass partitioning between atmosphere and deep oceans once the extra CO2 in the atmosphere is redistributed between the two. That has nothing to do with Henry’s constant and everything to do with mass distribution. Henry’s constant only works for the exchanges between air and ocean surface, not between air and deep ocean.
It cannot change with pH. That would be impossible, because the pH of the water is dependent on the partial pressure of CO2
And the reverse: the partial pressure of CO2 in the water depends on the pH, whether that is changed by adding an acid (as in the CO2 “bubble bomb”) or by adding a base/buffer as is the case in seawater: its pH is above 8, not the less-than-4 of a saturated solution of CO2 in fresh water per Henry’s law. That means the ocean surface contains 100 times more CO2 (and derivatives) than fresh water at equilibrium for the same pressure of CO2 in the atmosphere, still with exactly the same amount of free CO2 in both cases per Henry’s constant.
Again, the difference is in the derivatives: bicarbonates and carbonates, which play no (direct) role in Henry’s constant or the pCO2 of the solution.
The 100 times more CO2 in seawater than in fresh water is what is measured. Time to change your opinion…
the solubility of CO2 would change as the partial pressure of CO2 changed and this is obviously not the case
It certainly is the case: it is measured! The solubility of CO2 as gas doesn’t change with partial pressure: it remains in ratio. But there is no reason at all that the ratios between CO2 as gas in solution and bicarbonate/carbonate ions remain the same: the Bjerrum plot shows the changes in ratio with changing pH. As the latter changes with the change in pCO2 of the atmosphere, the ratio between the different derivatives changes too. For a 100% change in pCO2 in the atmosphere, there is a 100% change in free CO2 in seawater, still obeying Henry’s law and the fixed kH, but only a ~10% change in bicarbonates and carbonates. The latter are then 98% of all CO2 in seawater instead of 99% before the pCO2 increase…
————
Richard, this all is basic buffer chemistry where Henry’s law is for CO2 as gas in solution only, not for bicarbonates and carbonates. That is clearly explained in many textbooks of chemistry. Here specific for CO2 in seawater:
http://www.soest.hawaii.edu/oceanography/faculty/zeebe_files/Publications/ZeebeWolfEnclp07.pdf
K0 (= kH) is only for the first step: the ratio between CO2(atm) and [CO2], the concentration of pure CO2 as gas in seawater. K1 and K2 are the dissociation constants for the next steps: the formation of bicarbonates and carbonates and, at the same time, H+. The latter pushes the equilibria back toward [CO2] until equilibrium is reached.
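The speciation fractions that K1 and K2 imply can be sketched numerically (a sketch only; the pK* values of ~5.86 and ~8.92 are assumed seawater figures near 25°C, used for illustration rather than taken from the linked paper):

```python
# Bjerrum-style speciation of DIC at a given pH from the dissociation
# constants K1 and K2. The pK* values below are assumed seawater figures
# (~25°C), used only to illustrate the ~1% free-CO2 split discussed above.
K1 = 10 ** -5.86
K2 = 10 ** -8.92

def dic_fractions(pH):
    """Fractions of DIC present as free CO2, bicarbonate and carbonate."""
    h = 10 ** -pH
    denom = 1 + K1 / h + K1 * K2 / h**2
    f_co2  = 1 / denom                 # free CO2 (plus H2CO3)
    f_hco3 = (K1 / h) / denom          # bicarbonate
    f_co3  = (K1 * K2 / h**2) / denom  # carbonate
    return f_co2, f_hco3, f_co3

f_co2, f_hco3, f_co3 = dic_fractions(8.1)  # typical surface-seawater pH
print(f"CO2 {f_co2:.1%}, HCO3- {f_hco3:.1%}, CO3-- {f_co3:.1%}")
# Free CO2 comes out well under 1% of DIC at seawater pH.
```

Running this at pH 8.1 puts free CO2 below 1% of DIC, with bicarbonate dominant, which is the split both commenters quote.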

April 8, 2016 7:39 pm

Hey…where are Toneb, Mosher, Finn, Binidon, seaice1? Shouldn’t essays like this one bring them comfort? Yet no sign of……

April 8, 2016 8:25 pm

Please note that Cape Town temperatures were adjusted upward 1950’s through 1980’s.

TPG
April 8, 2016 9:04 pm

Professor;
My understanding is that CO2 doesn’t so much scatter upwelling IR in the vicinity of 15 µm as absorb it, preventing that energy from escaping to space. The absorbed photic energy is redistributed among the various translational (q, heat), rotational, and vibrational states available to the CO2 molecule. This process leads to local warming and increased IR radiation from CO2, also locally, through 4π steradians, allowing a portion of the absorbed 15 µm radiation to proceed towards space and escape Earth. This doesn’t negate your argument at all; it’s just that I am an insufferable pedant and had to get this off my chest. Cheers
TG (erstwhile photochemist)

downzunder
April 9, 2016 12:05 am

One of the more reasoned and reasonable essays on climate change written by a scientist that I’ve read in recent times. I was hoping to send links to many of my associates but unfortunately I can’t because I’m a New Zealander and the essay contains mistakes regarding the earthquakes in New Zealand. As stated by other posters, the 2011 earthquake that levelled the cathedral was in Christchurch not Wellington.
But what is worse is the use of the word ‘mere’ in association with any earthquake that resulted in the loss of life and near complete devastation that occurred in the central CBD of Christchurch City. The Aussies across ‘the ditch’ don’t call us the ‘Shaky Isles’ for nothing. We know a thing or two about earthquakes. We’ve had many earthquakes in New Zealand much larger on the Richter scale than the one that devastated Christchurch. However, we know the damage resulting from an earthquake is not merely a function of its size; it also depends upon where the quake is centred and the type of motion that the energy of the quake generates.
Professor, could you please correct these aspects of your excellent essay so there is no impediment to me sending links to my New Zealand associates and spreading the good word. Thanks.

Pauly
April 9, 2016 12:56 am

Re: Ferdinand Engelbeen April 8, 2016 at 2:03 pm
Ferdinand, perhaps you refuse to accept information that doesn’t fit into your world view. The paper below provided very detailed information on the causes of increased CO2 in the atmosphere, cited sources, and provided detailed charts at figures 14, 15 and 16. Here is the direct quote:
“The correlation of average temperature with CO2 flux is similar for ocean and continent surfaces, contradicting the expected negative flow due to larger absorption of CO2 by plants during warmer times; this means that during warming phases, ocean liberation of CO2 is much more powerful than the continental biosphere uptake.”
So this paper identifies that the predominant source of additional atmospheric CO2 is the oceans, which release it after their temperature increases. The paper also explicitly states that human CO2 has no impact on temperature increases:
“When evaluating the changes in added CO2 from fuel burning with global average temperature change, at interannual scale, the absence of relationship by cross plotting becomes clear: the more or less wasted CO2 did not imply warming or cooling (Figure 18).”
http://file.scirp.org/pdf/IJG20100300002_69193660.pdf

Reply to  Pauly
April 9, 2016 3:20 am

Pauly,
I accept all information which is based on verifiable facts, no matter if that confirms or refutes my own opinion…
The link you gave is talking about CO2 changes vs. temperature changes, not about the cause of the total increase of CO2 in the atmosphere, although human CO2 is mentioned.
Temperature-caused CO2 changes over the seasons are dominated by (NH) vegetation, as can be seen in the opposite CO2 and δ13C changes over the seasons:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/seasonal_CO2_d13C_MLO_BRW.jpg
As you can see, CO2 goes down, δ13C up with higher temperatures due to increased photosynthesis.
Temperature-caused interannual CO2 changes over the short term (1-3 years) are dominated by (tropical) vegetation, as can be seen in the opposite CO2 and δ13C changes:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/temp_dco2_d13C_mlo.jpg
In this case the reaction is the opposite of the seasonal one: temperature goes up, CO2 goes up and δ13C goes down. Thus while ocean temperatures (El Niño) go up, changes in rain patterns and temperature also cause more CO2 release from the tropical forests.
Over (very) long periods, the oceans are dominant, but that is not more than ~16 ppmv/°C. That is maximum 16 ppmv since the LIA, the rest of the 110 ppmv increase is from humans…
But of course I do agree that the influence of CO2 on temperature – even over very long periods – is minimal.

Reply to  Pauly
April 9, 2016 4:30 am

Pauly,
If you take a closer look at Fig. 15, you can see that nature is a net sink in all years, except in 1988 and 1998 where it is near zero, due to the El Niños. Thus it is not the cause of the increase in total CO2 in the atmosphere…

oppti
April 9, 2016 2:12 am

Thanks!
But I have one remark. The Battery in New York is analysed in this graph and it shows NO acceleration but a form of periodic movement. Sea level rise was higher during the 1950s!
http://tidesandcurrents.noaa.gov/sltrends/50yr.htm?stnid=8518750

Alex
April 9, 2016 2:18 am

The only conclusion that you can draw from that graph is that the photons were not directed to the detector.
You mentioned scattering.
A 1 degree deviation of the photon would miss the detector completely. The photon could spear off to space and you have no way of knowing that.
The graph doesn’t mean what you think it means

ECB
Reply to  Alex
April 9, 2016 5:49 am

Alex
The graph is valid. The scattering occurs in all directions. The only issue is the NET outward IR, as this is the only way the earth can cool itself, other than variations in reflected short wavelengths from cloud tops, ice, etc.

Reply to  ECB
April 9, 2016 9:56 am

Also, the comment that the temperature at the effective emission height of 5 km is 280 K is wrong: the 280 K corresponds to the surface, and from that graph it’s less than 280 K at 5 km. As a result he assumes that the H2O absorption is far higher than it actually is.

Reply to  Alex
April 9, 2016 3:18 pm

What the instrument sees from 700 km in space is not photons from the surface.
Photons in the fundamental bending bands are gobbled up by CO2 (and casually absorbed by water as well) within a hundred meters of the surface. This great height is reached only by the weaker constructive and destructive rotational bands on either side of the fundamental 15 µm/667.4 cm-1 bending. The fundamental band is 90% absorbed within one meter.
Photons in these fundamental bending bands are extinct in the atmosphere from about 100 meters to about 5 km, where they start to register again, probably kinetically energized by near IR excited water.
The lapse rate stalls out and goes straight up between 10 and 20 km as UV-activated ozone kicks in, and the satellites see strong CO2 radiance at 220 K blackbody temperature, corresponding to about 18 km altitude. This radiance is kinetically induced by ozone.
The reason all this is important is that all this radiance the satellites see is NOT surface energy. No “area under the curve” W/m2 derivation is valid. They think they are seeing apples, but they are seeing oranges.
The fundamental bands produce no more warming today than they did in 1850. What human CO2 has done is to concentrate that warming in a shorter altitude profile, causing surface thermometers to rise.

Alex
Reply to  Alex
April 10, 2016 6:01 am

It’s also not an actual, real graph. The general form is very close to a grey body. Not possible, because the Earth is not a grey body. The graph is some sort of composite, modelled thing. I am only criticizing the image and not the thread.

Russ Wood
April 9, 2016 6:56 am

As a South African, may I request that Professor Lloyd tries to get as many of the SA “news blogs” as possible to reprint this article, since it’s one of the clearest definitions of the “Great Global Warming Swindle” that I have read. Thanks, Professor, for your clarity – but I fear that so many of the modern generation just don’t read, and need to be told stuff.

Reply to  Russ Wood
April 9, 2016 9:58 am

I would suggest that he address the several errors first, otherwise he may end up being extremely embarrassed!

Gamecock
April 9, 2016 7:54 am

4. The expected increase in the energy of the lower troposphere may cause long-term changes in the climate . . . which could be characterized by increasing frequency . . . of extreme weather events’
I respectfully disagree.
Coastal Carolina is subject to landfalling hurricanes. That is its climate. Whether it gets one on average every 7 years or every 21 years (a tripling of frequency), that DOES NOT change the climate.

Uncle Gus
April 9, 2016 10:06 am

“So the evidence that fossil fuel burning is the underlying cause of the increase in the CO2 in the atmosphere is really conclusive.”
Sorry, but no.
Suppose CO2 was being removed from the atmosphere at exactly the same rate that human industry was putting it there. The proportion of C13 would still decrease. Same thing if it was being removed *faster* than we put it there.
So the increase in CO2, parallel to the increase in industrial emissions, proves… that CO2 is increasing. Only if the increase could be shown to consist of carbon isotopes in exactly the same proportions as human industrial emissions could it definitely be blamed on human industry. Unless someone’s done that bit of hard maths, it’s still only suggestive.
I know it’s considered cool and groovy these days to concede the warmists the argument on this one, but do please let’s maintain a bit of rigour, even when it’s unfashionable.
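Gus’s hypothetical is easy to check numerically. A toy one-box sketch (all numbers are illustrative assumptions, not measurements: ~600 GtC of airborne carbon, 9 GtC/yr of fossil input at δ13C ≈ −28‰, and natural sinks removing an equal mass of well-mixed air each year) shows δ13C falling even though total CO2 never rises:

```python
# Toy box model: fossil CO2 is added, and an equal mass of *well-mixed* air CO2
# is removed, so the total airborne mass never changes. Illustrative numbers only.
M = 600.0          # GtC in the atmosphere (held constant by assumption)
delta = -6.4       # atmospheric delta-13C, permil (assumed pre-industrial value)
D_FOSSIL = -28.0   # fossil-fuel delta-13C, permil (assumed)
E = 9.0            # GtC/yr added by industry, and removed again by natural sinks

for year in range(50):
    # Mix in fossil carbon; removing the same mass at the *mixed* isotope ratio
    # leaves delta unchanged, so only the mixing step matters for delta.
    delta = (M * delta + E * D_FOSSIL) / (M + E)

print(f"after 50 yr: mass = {M:.0f} GtC, delta-13C = {delta:.1f} permil")
# mass stays 600 GtC; delta-13C ends near -17.7 permil
```

After 50 model years the mass is unchanged while δ13C has drifted from −6.4‰ to roughly −18‰; a falling 13C share by itself therefore does not prove the CO2 level is rising because of industry.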

Reply to  Uncle Gus
April 10, 2016 11:24 am

Uncle Gus,
As said somewhere up thread, indeed it may be simple dilution of the human “fingerprint”, but as the observed decline in δ13C is only about 1/3 of what it would be if all human emissions had stayed in the atmosphere, simple dilution needs a threefold extra amount of natural (oceanic) CO2, thus altogether an increase of four times the human contribution. What is observed is that the increase in the atmosphere is only half the human contribution. That rejects simple dilution by some extra natural CO2.
In fact one can estimate the deep ocean – atmosphere carbon cycle based on the dilution of the δ13C level caused by human emissions:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/deep_ocean_air_zero.jpg
That gives around 40 GtC/year, the same value as found from the fast decay of the 14C bomb-test spike.
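For what it’s worth, the “about 1/3” figure above can be reproduced with a simple two-end-member mixing sketch (round, illustrative numbers, not Engelbeen’s exact inputs: ~600 GtC of pre-industrial airborne carbon at −6.4‰, ~200 GtC of cumulative fossil carbon at −28‰, and modern air near −8.2‰):

```python
# Two-end-member mixing: what delta-13C would be if ALL fossil carbon stayed airborne.
M_AIR = 600.0     # GtC, pre-industrial atmosphere (illustrative)
D_AIR = -6.4      # permil, pre-industrial delta-13C (illustrative)
M_FOSSIL = 200.0  # GtC, cumulative fossil emissions (illustrative)
D_FOSSIL = -28.0  # permil, fossil-fuel delta-13C (illustrative)

d_expected = (M_AIR * D_AIR + M_FOSSIL * D_FOSSIL) / (M_AIR + M_FOSSIL)
expected_drop = D_AIR - d_expected     # ~5.4 permil if nothing were removed

D_OBSERVED = -8.2                      # permil, modern atmosphere (approximate)
observed_drop = D_AIR - D_OBSERVED     # ~1.8 permil actually seen

print(f"expected drop {abs(expected_drop):.1f} permil, "
      f"observed {abs(observed_drop):.1f} permil, "
      f"ratio {observed_drop / expected_drop:.2f}")
# ratio ~ 0.33, i.e. the observed decline is about 1/3 of the all-airborne case
```

The shortfall is what the dilution by isotopically heavier deep-ocean CO2 has to explain, which is where the deep-ocean exchange estimate comes from.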

Gary Pearse
April 9, 2016 11:38 am

ristvan
April 8, 2016 at 10:51 am
“It is the ratio of 12C to 13C. Fossil fuels produce 12C cause thats what C3 photosynthesis plants consumed to produce the fossil fuels in the first place. 15% of terrestrial plant biomass is C4 (e.g grasses and derivatives like corn and sugarcane). Dunno about aquatic plants. Those C4 grasses take up 12C and 13C equally.”
However, plankton preferentially take up C12, and recent papers tell us that plankton blooms are increasing. Can I suggest that the human-added CO2 therefore has a preferentially increasing sink in phytoplankton? Moreover, along with the noticeable greening of the planet, abundant new C4 vegetation is also taking up more C12 (plus C13). It is clear to me that this is nowhere near a steady-state situation in terms of sinks. The sinks are increasing exponentially, certainly on land but possibly also in the oceans. Do a thought experiment: Y-1, a fringe of new green plants around arid areas; Y-2, a new fringe within the first plus further growth of the first; Y-3… This is exponential. Hey, it’s also fattening earth’s trees and plants globally, too.
https://www.bigelow.org/index.php/news/current-news/increased-co2-enhances-plankton-growth
https://www.sciencedaily.com/releases/2013/07/130708103521.htm
The common argument against such exponential growth in the sinks is the limitation on available soluble iron, because of the low solubility of iron species in the oceans. But the earth’s crust (and surface rocks) averages 15% iron, and ocean basalt, the principal rock type in the ocean basins, runs 5 to 10% iron or more. Here (from Wiki) is where the iron comes from in the sea; the rest is brought in by rivers from the land:
“The common corrosion features of underwater volcanic basalt suggest that microbial activity may play a significant role in the chemical exchange between basaltic rocks and seawater. The significant amounts of reduced iron, Fe(II), and manganese, Mn(II), present in basaltic rocks provide potential energy sources for bacteria. Some Fe(II)-oxidizing bacteria cultured from iron-sulfide surfaces are also able to grow with basaltic rock as a source of Fe(II)”
All you ‘limited iron’ folks need a re-education from geological science, not physics or sociology. Plankton taking up iron will even help the bacteria release more. There is no limit. (A picture of the Cliffs of Dover, coccolithophores all, on your wall would help you see that there is no limit on iron or on calcium, which also has a low solubility in sea water but makes up the ocean’s shells and fishes’ bones, etc.)
We will reach a point, well before a CO2 doubling in the atmosphere, where the sinks will be in near equilibrium with CO2 in the atmosphere, and we will have a vibrant ocean and land ecology. We are already at 80% of the peak population expected in mid-century, and it will be a new Eden if we can somehow solve the psychological problems of the destroyers and their useful idiots.

Reply to  Gary Pearse
April 10, 2016 11:33 am

Gary Pearse,
The growth in sinks is surprisingly linear in ratio to the increase of CO2 in the atmosphere. That is for the sum of all sinks: oceans (surface and deep) and the biosphere as a whole.
There is only a known limit in the ocean surface (the “mixed layer”) due to ocean chemistry (~0.5 GtC/year sink rate), but so far no measurable limit in the uptake of the deep oceans (~3 GtC/year) or vegetation (~1 GtC/year). The latter is a rather small sink, as the coccoliths and vegetation don’t produce that much permanent storage of carbon. Most is short- to medium-term storage, recycled by fish, bacteria, molds, insects, animals,…
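The claimed linearity amounts to a first-order relaxation: the net sink each year is proportional to the CO2 pressure above some equilibrium. A minimal sketch (the 280 ppmv equilibrium and the ~51-year e-fold time are assumed round values consistent with figures quoted in this thread, not fitted data):

```python
# First-order net-sink model: uptake proportional to CO2 above equilibrium.
C_EQ = 280.0   # ppmv, assumed pre-industrial equilibrium level
TAU = 51.0     # years, assumed e-fold decay time of the excess

def net_sink(c_atm):
    """Net natural uptake (ppmv/yr) for a given atmospheric CO2 level."""
    return (c_atm - C_EQ) / TAU

# At ~400 ppmv this gives ~2.35 ppmv/yr of net uptake; doubling the excess
# above 280 ppmv exactly doubles the sink rate, i.e. a linear response.
print(net_sink(400.0))   # ~2.35
```

The point of the model is only that a linear sink needs no increase in the gross natural carbon cycle; the same turnover removes more simply because the excess pressure is larger.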

tom
April 9, 2016 3:21 pm

No one ever seems to address why the last ice age ended. We didn’t have cars back then, but the glaciers melted anyway.

BigWaveDave
April 9, 2016 4:47 pm

Nobody breathes without CO2.

April 9, 2016 8:07 pm

Climate change is the madness of social activism. Instill fear by false catastrophes. The damage done to institutions especially academic ones is immense because it turns science into ideology and corrupts rationality. Study Nazism and its use of propaganda and you will recognize what’s being perpetrated today by the likes of Gore, Obama and Clinton.

Dodgy Geezer
April 10, 2016 1:12 pm

…Injecting into the air 12C-rich CO2 from fossil fuels should therefore cause the 13C in the air to drop, which is precisely what is observed:
So the evidence that fossil fuel burning is the underlying cause of the increase in the CO2 in the atmosphere is really conclusive….

I cannot see that this follows. Injecting 12C-rich CO2 into the atmosphere’s CO2 will certainly drop the 13C/12C ratio. But that is ALL you can say. You can’t say anything about the total increase in CO2 volume, which depends on the balance between sinks and sources, and ABOUT WHICH WE KNOW LITTLE. You can guess that we have increased total CO2, but the change in ratio does not prove that one way or another…

Reply to  Dodgy Geezer
April 11, 2016 4:30 am

Dodgy Geezer,
See my response here
You can’t say anything about the total increase in CO2 volume, which depends on the balance between sinks and sources
That balance is quite well known: on one side fossil fuel burning (from sales inventories), and on the other accurate measurements of the increase in the atmosphere. Human emissions are (at least) twice the observed increase in the atmosphere… No need at all to know any individual sink or source in nature.
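The bookkeeping is short enough to write out as a sketch (round illustrative numbers; the actual values vary year to year):

```python
# Mass balance: nature's net contribution = observed increase - human emissions.
GTC_PER_PPMV = 2.13   # GtC per ppmv of atmospheric CO2 (standard conversion)
emissions = 9.0       # GtC/yr from fossil-fuel sales inventories (approximate)
rise_ppmv = 2.1       # ppmv/yr measured increase in the atmosphere (approximate)

rise_gtc = rise_ppmv * GTC_PER_PPMV   # ~4.5 GtC/yr stays in the air
nature_net = rise_gtc - emissions     # negative => nature is a net sink

print(f"increase {rise_gtc:.1f} GtC/yr, nature net {nature_net:.1f} GtC/yr")
# nature_net ~ -4.5: whatever the individual fluxes, nature as a whole
# removes roughly half of what humans emit.
```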

afonzarelli
Reply to  Ferdinand Engelbeen
April 11, 2016 7:21 am

The mass balance argument does not preclude the possibility that a natural imbalance could be producing a 2 ppm (or 3 ppm…) net addition to the atmosphere and that the anthropogenic equilibrium sink rate could be near 100%. Were that the case, the natural imbalance would be the reason (or cause…) that the rise is happening even though “nature is a net sink for carbon dioxide”…

Reply to  Ferdinand Engelbeen
April 11, 2016 11:54 am

Fonzie,
That is only possible if some natural sink had a preference for human CO2, which – as far as I know – is not the case. Almost all human emissions reach the bulk of the atmosphere, directly or indirectly: a tree or lake that captures a human CO2 molecule will capture one less natural molecule.
Capture by trees, lakes or oceans is in ratio to the total extra CO2 pressure in the atmosphere, locally (for trees) and/or globally (for oceans). As the extra human CO2 contribution globally is about 0.05 ppmv/day, it hardly plays a role in the 110 ppmv of extra CO2 pressure in the atmosphere, except in highly concentrated urban and industrial areas.
Except for the small difference in 13C/12C ratio, plants and oceans have no preferences and simply capture CO2 from any source alike. That means that if the natural carbon cycle dwarfs the human emissions, it must have increased fourfold in the past 55 years: human emissions, the increase in the atmosphere and the net sink rate all increased fourfold, and you can’t get a fourfold increase in the net sink rate from a fourfold increase in human emissions alone unless the total natural cycle scaled up fourfold with it.
There is no observation on earth that shows a substantial increase in the natural carbon cycle. To the contrary: the latest estimates of the residence time show a small increase compared to older estimates, which points to a rather stable throughput through an increasing mass of CO2 in the atmosphere…

Scottar
April 14, 2016 12:23 am

My own research has led me to these 2 papers:
http://l4patterns.com/uploads/virtual_vs_reality_report.pdf
Why CO2 has Nothing to Do with Temperatures
By Dr Darko Butina
http://jvarekamp.web.wesleyan.edu/CO2/FP-1.pdf
Carbon Dioxide Absorption in the Near Infrared
The first one shows that the atmosphere acts like a blanket and that the principal warming factor is related to specific heat. Water vapor primarily acts like a moderator, and the oceans act as heat reservoirs.
The second paper shows that CO2 does not trap IR radiation in the 15 µm band but absorbs it, either by conventional molecular agitation or by energy-band excitation. It absorbs in the 15 µm energy band and re-emits at a lower energy band in accordance with the conservation of energy. Since both water vapor and CO2 absorb in the 15 µm band, satellites see it as a gray blank; it is being re-emitted at lower energy levels.
Also realize that the solar surface emits at a much higher energy level, about 63×10^6 W m^-2, while the earth’s effective black-body level is about 240 W m^-2 (area under the curve).
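Those two flux figures are just the Stefan–Boltzmann law evaluated at two temperatures, which is easy to check (assuming ~5772 K for the solar surface and ~255 K for the earth’s effective emission temperature):

```python
# Stefan-Boltzmann check of the two flux figures quoted above.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def bb_flux(t_kelvin):
    """Blackbody emitted flux in W/m^2 at the given temperature."""
    return SIGMA * t_kelvin ** 4

print(f"solar surface (5772 K): {bb_flux(5772):.3g} W/m^2")   # ~6.3e7
print(f"earth effective (255 K): {bb_flux(255):.0f} W/m^2")   # ~240
```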
