Guest Post by Willis Eschenbach
I greatly enjoy reading old science. Fifty or more years ago, they actually did real science, not the “my model says it must be true” kind of thing that we are treated to today. In that regard, I’ve been fortunate to stumble on one of the earliest papers on the greenhouse effect, “The Artificial Production of Carbon Dioxide and its Influence on Temperature” by G. S. Callendar. There were a lot of curious and interesting things in the paper, which I’d heard of but never read; I’ll touch on them in no particular order.
I was greatly encouraged by the description of Callendar in the header of the paper, where he is listed as the “Steam technologist to the British Electrical and Allied Industries Research Association”. I liked the guy already: a hands-on man, someone who describes himself as a “technologist” and works in industry. What’s not to like? Plus, he wrote the article by himself, with no team of 24 “co-authors”.
One of the first things I noticed was that although I’ve at times complained of the long lag time between submission to a journal and eventual publication, this one says:
Manuscript received May 19, 1937; read February 16, 1938
Nearly nine months passed before it was “read”, and the paper was eventually published in April of 1938.
Moving on, here is his abstract, or “Summary” as it was called in that time and place:
SUMMARY
By fuel combustion man has added about 150,000 million tons of carbon dioxide to the air during the past half century. The author estimates from the best available data that approximately three quarters of this has remained in the atmosphere.
The radiation absorption coefficients of carbon dioxide and water vapour are used to show the effect of carbon dioxide on sky radiation. From this the increase in mean temperature, due to the artificial production of carbon dioxide, is estimated to be at the rate of 0.005°C per year at the present time.
The temperature observations at 200 meteorological stations are used to show that world temperatures have actually increased at an average rate of 0.005°C. per year during the past half century.
Being a numbers man, this interested me because as early as 1938 he’d estimated the total emissions, estimated the airborne fraction, and calculated the global temperature. So of course I had to go check it out, to see how his estimates compare to modern estimates.
The CDIAC has the carbon emissions data. The “past half century” from 1937 would have been 1887 to 1937. The CDIAC data puts the emissions during that time at 38,201 million tonnes of carbon. To convert to tonnes of carbon dioxide, we need to add the weight of the oxygen. The atomic weight of carbon is 12, and the atomic weight of oxygen is 16. The atomic weight of CO2 is 12 + 2 * 16 = 44. So we need to multiply 38,201 million tonnes of carbon times 44/12, which gives us 140,000 million tonnes of CO2, compared to Callendar’s estimate of 150,000 million tonnes … not bad, not bad at all.
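As a quick check on that arithmetic (the 38,201 figure is the CDIAC total quoted above):

```python
# Convert cumulative carbon emissions (1887-1937, CDIAC) into tonnes of
# CO2 by scaling by the ratio of molecular weights: CO2 is 44, carbon 12.
ATOMIC_WT_C = 12.0
ATOMIC_WT_O = 16.0
MW_CO2 = ATOMIC_WT_C + 2 * ATOMIC_WT_O  # 44

carbon_mt = 38_201                      # million tonnes of carbon (CDIAC)
co2_mt = carbon_mt * MW_CO2 / ATOMIC_WT_C

print(round(co2_mt))                    # -> 140070 (million tonnes of CO2)
print(f"{co2_mt / 150_000:.0%} of Callendar's 150,000 million ton figure")
```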
As to the “best available data” estimate of the airborne fraction, Callendar says:
I have examined 21 very accurate set of observations (Brown and Escombe, 1905), taken about the year 1900, on the amount of carbon dioxide in the free air, in relation to the weather maps of the period. From them I concluded that the amount of carbon dioxide in the free air of the North Atlantic region, at the beginning of this century, was 2.74 ± 0.05 parts in 10,000 by volume of dry air.
This translates to 274 ppmv in the year 1900. I note that this is significantly less than the value given by the ice core data, which is about 295 ppmv.
The “pre-industrial” value in 1750 is usually set at 274 ppmv. This difference raises lots of interesting questions I won’t go into here. Unfortunately, although the Brown and Escombe 1905 paper is online here, it makes no mention of the “21 very accurate sets of observations”. I wish I had the data, particularly since his error estimate is ±5 ppmv.
I did like his method, though, which appears to consist of looking at the observations and the weather maps at the time of the observations. This would allow him to infer the source of the air being sampled at a given time, and to choose samples from, say, off the ocean rather than from the town. Clever. From this he calculates a 6% increase in CO2 by 1937. Curiously, he had no actual figures for the CO2 in 1937; he estimated it. What do the modern ice core records say the increase in CO2 was from 1900 to 1937?
6% …
He then goes on to say:
Since calculating the figures in Table I, I have seen a report of a great number of observations on atmospheric CO2 , taken recently in the eastern U.S.A. The mean of 1,156 “free air” readings taken in the years 1930 to 1936 was 3.10 parts in 10,000 by volume. For the measurements at Kew in 1898 to 1901 the mean of 92 free air values was 2.92, including a number of rather high values effected by local combustion, etc.; and assuming that a similar proportion of the American readings are affected in the same way, the difference is equal to an increase of 6 per cent.
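Callendar’s “increase of 6 per cent” is easy to check from the two means he quotes (2.92 vs. 3.10 parts in 10,000):

```python
# Callendar's own check: 1,156 US "free air" readings (1930-36) averaged
# 3.10 parts in 10,000 by volume, vs. 2.92 for 92 Kew readings (1898-1901).
kew_1900 = 2.92
usa_1930s = 3.10

increase = (usa_1930s - kew_1900) / kew_1900
print(f"{increase:.1%}")  # -> 6.2%, matching his "increase of 6 per cent"
```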
What truly impressed me, though, was the final sentence of that paragraph, which reads:
Such close agreement with the calculated increase is, of course, partly accidental.
Gotta love a scientist as honest as that.
From there he goes into a fascinating discussion of the physics of the absorption of upwelling longwave radiation, and the characteristics of downwelling longwave radiation. This is followed by another most interesting description of how he estimated the temperature changes since 1900. Not having the HadCRUT, Berkeley Earth, or GISTEMP datasets, of course, he had to go out, find the station data, and analyze it himself.
Surprisingly, he goes on to discuss the “urban heat island” (UHI) effect, saying:
It is well known that temperatures, especially the night minimum, are a little higher near the centre of a large town than they are in the surrounding country districts; if, therefore, a large number of buildings have accumulated in the vicinity of a station during the period under consideration, the departures at that station would be influenced thereby and a rising trend would be expected.
Clearly a man ahead of his time.
How well did he do? Here’s the comparison of his results with those of the Berkeley Earth Surface Temperature dataset.
Comparison, global temperature anomaly estimates of Callendar (1938) and Berkeley Earth Surface Temperature (2014)
Now, I gotta give Callendar full marks for that one. Despite the difference in the linear trends, which may be due to his reducing the trend to adjust for the UHI effect, his results correlate very well (0.84) with the modern estimate.
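For anyone who wants to reproduce that correlation against their own digitized copy of Callendar’s series, Pearson’s r takes only a few lines. Note that the two short anomaly series below are invented placeholders for illustration, not the actual Callendar or Berkeley data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Placeholder anomaly series (NOT the real data), just to show usage:
callendar = [-0.30, -0.25, -0.10, 0.00, 0.10, 0.15]
berkeley  = [-0.35, -0.20, -0.15, 0.05, 0.08, 0.20]
print(round(pearson_r(callendar, berkeley), 2))
```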
Then, another surprise. He talks about how the climate system is not static, but instead it responds to changing temperature, saying (emphasis mine):
On the earth the supply of water vapour is unlimited over the greater part of the surface, and the actual mean temperature results from a balance reached between the solar “constant” and the properties of water and air. Thus a change of water vapour, sky radiation and temperature is corrected by a change of cloudiness and atmospheric circulation, the former increasing the reflection loss and thus reducing the effective sun heat.
This is the earliest of the very few examples I’ve found of people expounding the concept that the temperature of the planet is self-correcting, that is to say that the Earth has inherent temperature-regulating mechanisms, that it naturally balances at a certain temperature, and that it corrects itself when it departs from that balance. As I have spent some years investigating, measuring, and writing about just exactly how that system works in practice, I tip my hat to him. In fact, I’m in the middle of writing yet another post about how the clouds and the temperature interact to establish that balance.
From there, he segues into a speculation on whether changes in carbon dioxide levels could have caused the ice ages. He states that he doubts CO2 could have done it, saying:
I find it almost impossible to account for movements of the gas of the required order because of the almost inexhaustible supply from the oceans, when its pressure in the air becomes low enough to give a fall of 5 to 8°C in mean temperatures.
Now, here’s the beauty part. I’m so indoctrinated by decades of being inundated with alarmism that I fully expected Callendar to conclude by warning of the dangers of rising CO2, impending Thermageddon, plagues, famines, rains of frogs, and the like. But to my great surprise and pleasure, here’s what he actually wrote:
In conclusion it may be said that the combustion of fossil fuel, whether it be peat from the surface or oil from 10,000 feet below, is likely to prove beneficial to mankind in several ways, besides the provision of heat and power. For instance the above mentioned small increases of mean temperature would be important at the northern margin of cultivation, and the growth of favourably situated plants is directly proportional to the carbon dioxide pressure (Brown and Escombe, 1905): In any case the return of the deadly glaciers should be delayed indefinitely.
You can’t say fairer than that.
My best to all,
w.
PS—A final thought. I was most impressed by a practice which I don’t see in the modern scientific journals. The journal invited comments and questions on the paper from no less than six other people knowledgeable in the field. Then the journal published their comments and questions along with Callendar’s answers to them, not three issues down the line, but at the bottom of Callendar’s study itself.
When I saw that, I had to laugh. Why? Because it’s identical to the format of a blog post. Someone puts up a head post, you read it, and at the bottom of the head post you read other people asking questions and raising issues, and the author of the head post responding to them right there.
How fascinating. The journals have abandoned that format of publishing the article along with the questions and responses at the same time … and instead, it’s become the format of the web.
DATA: Callendar’s paper, THE ARTIFICIAL PRODUCTION OF CARBON DIOXIDE AND ITS INFLUENCE ON TEMPERATURE, is here. When I said above that I “stumbled across” the paper, to be clear I came across it doing what I do from time to time. I go to the AGWObserver and do a search for the words “FULL TEXT”. His content changes, he’s always adding new stuff, and best of all, he tags everything that’s not paywalled. As a working man with no university library to call on, that’s invaluable to me … or if not invaluable, at least valued at the usual price of $39.50 per paper, which adds up very fast. So I was cruising along at the Observer looking at “FULL TEXT” items when I came to Callendar … my great fortune.
AS ALWAYS: If you disagree with someone, please QUOTE THE EXACT WORDS YOU DISAGREE WITH. It’s the easiest and most accurate way for us all to be clear about exactly what you are objecting to. I can defend my words. I cannot defend your paraphrase of my words. If you disagree, I implore you, QUOTE.



The urban heat island effect was never controversial and always accepted. I am sure it was common knowledge. (So obvious in London!) Tyndall discussed it as though it was.
The discussion at the end of Callendar’s paper (as I recall) is due to an account of the discussion following the presentation at the R Met Soc. This was a common format for such accounts of ‘transactions.’ Otherwise, what Willis is describing was famously practiced by Descartes much earlier with the 6 objections and replies. Very useful!
This R Met Soc discussion demolished Callendar’s paper. But Callendar persisted. In a paper not published until 1961 he makes the AGW case by the pattern of the warming — early fingerprinting. By this time there was some interest from the USA (first Plass, then Revelle, etc.).
In his biography, Fleming has a great pic of Callendar shovelling snow during one of the winters that brought a crisis to his thinking near the end of his life. In his memoirs Lamb says Callendar contacted both him and George Manley to express his concerns about the pause in the warming.
Callendar is an embarrassment to recent AGW because, if they grant AGW before the mid-century pause, then they must account for it.
And how!
Nice Willis,
but Callendar estimated 150,000 million tons, not tonnes
Yes, but proper (long) tons, not “short tons”, so it is only about 1.6% different.
Thanks, Willis. Great find and wonderful post about it.
Cheers
“Curiously, he had no actual figures for the CO2 in 1937, he estimated it.” Maybe, but the actual CO2 level was widely known in 1937, especially by respiratory physiologists…
Refreshing! Great post Willis. Clearly Callendar was a “true” scientist. It’s nice to see him say that about “accidentally” arriving at a matching figure, unlike today’s climate scientists who pronounce on the alleged great accuracy of their models. Refreshing indeed!
Is this the same G.S. Callendar who, when writing a summary of the atmospheric CO2 experiments of the late 1800’s, IGNORED all readings above 285ppmv? He then claimed that the 285ppmv figure was the ”correct” atmospheric value for CO2. So totally driven by opinion not data.
I suspect it is. He was wrong in that paper and he is wrong in this. A poor example of ”old science”
John, it is the same Callendar, but he didn’t “ignore” the high readings, he pre-defined that any values which differed more than 10% from the “baseline”, that is CO2 levels measured at the best places by the best methods, must be in error.
Which was completely right to do: many of the historical measurements were taken at places near huge sources and sinks (the middle of Paris e.g.), which have not the slightest connection with the CO2 levels in the bulk of the atmosphere. One can have hundreds of ppmv more at night and 300 ppmv during the day in the middle of a forest. That is what Callendar expected and what C.D. Keeling measured in the middle of Big Sur state park in California in the early 50’s:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/diurnal.jpg
50 years after Callendar’s paper, his deduced average historical CO2 levels were confirmed by the first ice core measurements…
Since all the CO2 data was derived from the same chemical method, the same used today, I submit he was still wrong in his assumption that the ”correct” CO2 level must be 285ppmv. The average CO2 over the past 500 million years is 2500ppmv, so 285ppmv is far too low and just above the plant survival level of 200ppmv.
What were his estimates for the water vapor feedback and the cloud feedback?
The current theory is based on a positive water vapor feedback of +2.0 W/m2/C or a 7.0% increase in water vapor per 1.0C and the positive cloud feedback is +0.7 W/m2/C or a 3.3% decrease in net cloud cover forcing (letting in more solar insolation) per 1.0C.
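The “7.0% increase in water vapor per 1.0C” quoted above is essentially the Clausius-Clapeyron scaling of saturation vapour pressure, d(ln e_s)/dT = L/(R_v T²). A quick sketch of that number (the constants are standard textbook values, not figures from Callendar):

```python
# Clausius-Clapeyron: fractional change of saturation vapour pressure
# per kelvin is L / (R_v * T^2). Standard textbook constants below.
L_VAP = 2.5e6    # latent heat of vaporization of water, J/kg
R_V = 461.5      # specific gas constant for water vapour, J/(kg K)
T_SURF = 288.0   # rough global-mean surface temperature, K

frac_per_K = L_VAP / (R_V * T_SURF**2)
print(f"{frac_per_K:.1%} per K")  # -> 6.5% per K, close to the 7% quoted
```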
“The course of world temperature during the next twenty years should afford valuable evidence as to the accuracy of the calculated effect of atmospheric carbon dioxide.” – G.S. Callendar, 1937
The valuable evidence (long pause…..):
http://www.ncdc.noaa.gov/sotc/service/global/global-land-ocean-mntp-anom/201301-201312.png
==================
TonyB
In the 1962/3 harsh winter he came to believe his Greenhouse gas theory was incorrect.
&
Leo G
Also interesting to note that before his death in the 1960s Callendar accepted that the multi-decadal pause in warming had effectively falsified his carbon dioxide theory of global warming. That is possibly why you hear very little about him from warmists.
==================
Much as I would like this story to be true, my search produced nothing of value.
Any references?
Of course he didn’t predict the volcanic activity of the 1960s, but he correctly predicted the global temperature rise (air temps and ocean heat content) over the next 30 years. If only he had lived a little longer…
Volcanic activity was not responsible for the drop in GASTA from the late 1940s to ’70s. That was a natural cycle, which the biggest eruptions can affect for a year or so, but volcanoes both cool & warm, at different times, depending upon size & latitude.
Just as the warming of the early 20th century matches the warming of the late 20th century, the decline in the early 21st century will mirror the drop in the mid-20th century. Callendar made the same mistake amid the early 20th century warming that CACA advocates made during the late 20th century warming, when rising CO2 happened accidentally to coincide with a natural fluctuation.
Its rise didn’t coincide with the natural fluctuation down during the late ’40s-70s & doesn’t coincide with the current flat temperatures since the late ’90s. The inescapable conclusions are that the CACA hypothesis is falsified & that CO2 isn’t a pimple on the ass of climate change.
Barry:
It is my understanding that ordinary volcanism has no effect on temperatures. Only the extraordinary explosive types which put aerosols into the stratosphere have such an effect and then, only temporarily. I am aware of no such volcano, in the past sixty years, before ’83.
I have his archives on CD. I do not have his biography which I borrowed from the library some years ago.
I am pretty sure it was in the biography (almost at the end) that he made his comment following the 1962/3 winter. As far as I remember the archives do not cover this sort of comment but if I get the chance I will have a quick look this afternoon on the cd archives.
Here is his biography but its highly truncated
http://www.amazon.co.uk/The-Callendar-Effect-Established-Historical/dp/1878220764
tonyb
Well I looked through the cd archives. They are often a very difficult read as much of it is in the form of hand written notes and letters and data entered into notebooks.
Some very interesting exchanges with the great and good of the day, including Lamb, Manley and Keeling. Interesting letter from Lamb to the Guardian in 1963 commenting about the decade-long downturn in temperatures, and also from the Met Office acknowledging Callendar’s point that SSTs in 1890 were substantially warmer than in 1910.
I will have to re-read the written biography again sometime but for those interested in the intense period of scientific endeavour from the 1930’s to the 1960’s you could do worse than buy the archives and have a browse through. A little at a time to spare your eyesight.
tonyb
That’s obviously a snippet of a somewhat noisy but distinct sine wave, and we are past the last positive crest, I predict.
You are right K. There is no evidence in the public sphere to support this extreme claim. There is some evidence that he was having doubts.
The photomontage heading this thread was from James Fleming’s bio of Callendar published by the American Meteorological Society. I came across one of Callendar’s papers about 25 years ago while involved in an engineering investigation re constraints on infrared absorption/ emission by unsaturated air, and was interested to preorder a copy of Fleming’s book when it was published in 2007.
On page 31 Fleming writes that “his confidence in the theory of climate warming, however, was shaken by the downturn in global temperature in the 1950s and 1960s”. In chapter 5 Fleming discussed Callendar’s puzzlement that the climate did not continue to warm monotonically and his hope that improved measurements of the dispersal of CO2 and more comprehensive temperature measurements would resolve the issue.
Shortly before his death, Callendar speculated in his notes about the reasons for the growing non-acceptance of his theory by his peers.
In 1964 he had an exchange with birdwatcher G Harris (see Weather 19, 264-265 March 1964) which appeared to end with Callendar conceding “a general decline of (European) temperature in recent years remains unaffected by considerations” of author bias, computational errors, and changes in the location of some stations as reasons for the cooling trend of up to 10degC reported by Harris.
Some other refs: Handel M & Risbey J, Climatic Change 21 (1992) 97-255
Weart S, Bulletin Atomic Sci June 1992, 19-27
Now here’s a comment worth repeating:-
Made by G. S. Callendar (Jan. 1961) Temperature fluctuations and trends over the earth. Q. J. Royal Met. Soc. Vol. 87, No. 371, p.2
So, another person who saw a brief correlation between atmospheric CO2 and temperature, tried to establish causation, and realized there was none. The same has happened to many, and in the 70s they were busy correlating things to account for the cooling.
It is sad to witness what has happened to Science. It is even more sad to realize that the Internet has almost completely destroyed objective thinking and real Science.
he didn’t use correlation.
it’s basic physics.
The rise in temp due to doubling CO2 WITHOUT FEEDBACKS is close to 1.5C
No. It’s NOT “basic physics” – It is ONLY “assumed physics” valid only in a classroom lecture hall.
The AVERAGE rise in an ASSUMED flat-plate earth uniformly radiating as a Perfect Black Body Spherical Object through a Perfect Atmosphere using ASSUMED average whole-planet albedos into space with no feedbacks or ASSUMED amplifications is 1.5 C.
Change any one of those “assumed” theoretical classroom “physics thought experiment” conditions into the real world, and you MUST change the output. 1.5 degree C is an ASSUMED result to make the CAGW scenario “visible” to the politicians who want to believe the simple results so they can write the laws so they can collect their new taxes and use their new power.
sorry RA.
it’s basic physics with reasonable justifiable assumptions.
in short. Using first principles ( no climate models, no paleo,) and using a few simplifying assumptions
a first order approximation of 1.5C is fully justified.
Given that you cannot do controlled experiments on the planet, the approach used by Guy and others is fully justified, properly scientific, and rational.
If you want to do a different first order estimate, then knock yourself out.
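For what it’s worth, the usual no-feedback first-order estimate comes from linearizing the Stefan-Boltzmann law around the effective emission temperature: ΔT ≈ ΔF / (4σT³). A sketch of that arithmetic (the 3.7 W/m² forcing and 255 K emission temperature are conventional textbook values, not figures from this thread):

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
T_EMIT = 255.0     # effective emission temperature of Earth, K
DF_2XCO2 = 3.7     # canonical radiative forcing for doubled CO2, W/m^2

# Linearize F = sigma*T^4 around T_EMIT: dF/dT = 4*sigma*T^3
planck_response = 4 * SIGMA * T_EMIT**3   # W/(m^2 K)
dT = DF_2XCO2 / planck_response
print(f"{dT:.2f} K")  # -> about 1 K before any feedbacks
```

Using the 288 K surface temperature instead gives about 0.7 K; how one gets from this Planck-only figure to the 1.5C quoted in the thread depends on what else is held fixed, which is exactly where the argument lies.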
Steven Mosher: it’s basic physics with reasonable justifiable assumptions.
in short. Using first principles ( no climate models, no paleo,) and using a few simplifying assumptions
a first order approximation of 1.5C is fully justified.
Please tell us again the basic physics of increased evaporation with increased surface temp or increased downwelling LWIR; or how a “simplifying” neglect of evaporation is justifiable.
Don’t forget the study by Romps et al in Science Magazine: http://www.sciencemag.org/content/346/6211/851. For readers who have not seen it yet, Romps et al calculate that a 1C increase in surface temperature will produce enough of an increase in the evaporation rate to produce an 11% (+/- 5%) increase in lightning flashes. The paper requires thorough debate in public and replication before it can be believed, but it is at least as credible as any calculation of climate sensitivity that ignores non-radiative heat transfer from the surface.
And, it is Agree with Steven Mosher Day as well!
And if one examines the full climate record, there is very little reason to think that there is much more than warming caused by CO_2 WITHOUT FEEDBACKS, because this assumption fits the data remarkably well all across HADCRUT4.
rgb
Steven Mosher
November 14, 2014 at 10:31 am
////////////////////////////////////////////////
”it’s basic physics with reasonable justifiable assumptions.”
No Steven, the assumptions are not reasonable or justified. The very foundation assumption is a critical error. The surface would not be at 255K without a radiative atmosphere, as it is not a near-blackbody; it is a SW selective surface.
Here are five simple rules from empirical experiment showing why the near blackbody assumption for the surface of our ocean planet was so incredibly wrong –
http://i59.tinypic.com/10pdqur.jpg
On top of that the IR emissivity of water is lower than its SW / UV absorptivity.
Our radiatively cooled atmosphere is not raising surface temps from 255K to 288K, it is lowering them from around 312K to 288K.
Quite simply, climastrologists got the most “basic physics” of the “settled science” wrong.
rgbatduke, what do you think of this paper?
Don’t forget the study by Romps et al in Science Magazine: http://www.sciencemag.org/content/346/6211/851.
Matthew,
I have no strong feelings about it. Their entire argument boils down to this. Lightning flashes mostly happen during rainstorms (although there are exceptions). One whole class of rainstorm, in fact, produces nearly all lightning flashes — ones with a strong vertical convection and turbulence, a.k.a. “a thunderstorm”. One expects a correlation between rainfall, especially rainfall rate, and lightning frequency. One expects the correlation to be even stronger if one selected the “kind” of rainfall or added other selectors — some parts of the NC piedmont consistently produce more thunderstorms than other parts because of changes in the kind of soil and vegetation, even though the local temperatures don’t vary by that much. Still, one expects a positive correlation — more rain = more lightning, on average.
One of many selectors for thunderstorms (as opposed to “just rain”) is convective available potential energy (CAPE) — this is basically related to the thermodynamic energy available for driving rapid turbulent updraft and hence lightning. When large CAPE occurs with rain, one is a lot more likely to have a thunderstorm than with CAPE alone or rain alone. Hence they use CAPE*Rain as a proxy for lightning rate, generate a scatter plot, and fit it with a linear trend. CAPE alone is predictive for small CAPE but less so at large values. Rainfall is more predictive at large rainfall. The product gets some fitting-fu from both, and a reasonable linear trend is indeed visible.
The one worrisome aspect of this trend is that the scatter of the scatter plot is rapidly increasing at the high end of the proxy scale. This translates to increasing uncertainty in the fit — it is entirely possible that the fit is not really linear out there, and it is certain that lightning’s distribution isn’t just linear even where it has a linear trend — lots of noise and lots of packing of events into comparatively weak storms. It might be the case that the underlying trend is no longer linear and things are saturating. It is also very much the case that this is only two dimensions of a much higher dimensional dependence, and it is not clear that what one is seeing is a truly separable linear trend at all.
But it is plausible, so let’s grant it. Then in order to extrapolate it we have to do several things:
a) look not at the centroid of the linear trend, but at the approximately gaussian distribution of strikes around the different numbers. Note that the linear trend is not at all reliably predictive — one can have an entire range of values for lightning strike frequency for any given value of CAPE*Rain. At low values this range is small, at large values it is large.
b) Second, note the distribution of strikes, period. It is highly biased towards low values of CAPE*Rain. Nearly all of the samples in the scatter plot come in the first third of the fit, if not the first quarter. Lots of small storms with a bit of lightning, comparatively few large storms, and those that there are vary substantially with respect to how much lightning they produce.
c) Third, form the cumulative distribution function. This is the integral (really a sum over the discrete data) of the number of lightning strikes that happen in storms of CAPE*Rain less than some value. We have to do this because there is an unwritten assumption that increasing CO_2 will increase rainfall, increase CAPE, or both, and hence will shift the underlying distribution of rainstorms so that there are more storms with more rainfall and higher CAPE (and hence more lightning). We have to use the CDF of the joint distribution to compute the change in total lightning strikes, because the latter is the integral of the entire function! We can’t just add a few more violent storms on at the end; we have to consider what happens to the fraction of rainstorms with low CAPE*Rain (it might go up, might go down) and if it does either one, what that will contribute to the total number of lightning strikes (could increase it, could decrease it).
d) Figure out what the GCMs say will happen. This is VERY difficult, because they won’t all say the same thing and some will actually give opposite results to others, regionally. Does global warming cause more droughts? Some models think so. Droughts = less lightning! But say, maybe they increase the hell out of CAPE! Does global warming cause more rain and floods, but more rain associated with non-turbulent fronts (not much increase in CAPE)? Could be more rain but not much more lightning.
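Point (c) can be made concrete with a toy example: with the same growth in the violent tail, the total strike count can go either way depending on what the low end of the storm distribution does. A sketch (every number below is invented purely for illustration):

```python
# Toy illustration: total strikes are an integral over the WHOLE storm
# distribution, so the low end matters as much as the violent tail.
def total_strikes(storm_counts, rates):
    return sum(n * r for n, r in zip(storm_counts, rates))

rates = [1, 5, 50]          # strikes per storm: weak, moderate, violent
baseline = [1000, 200, 10]  # storm counts per class -> 2500 strikes total

more_everywhere = [1100, 220, 15]  # low end grows along with the tail
tail_only = [700, 200, 15]         # same tail increase, weak storms down

base = total_strikes(baseline, rates)
print(total_strikes(more_everywhere, rates) / base)  # -> 1.18
print(total_strikes(tail_only, rates) / base)        # -> 0.98
```

Same 50% jump in violent storms in both scenarios, yet the total goes up 18% in one and down 2% in the other.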
Oops. At that point we see the first flaw. The author of the study obviously possesses the data with both Rain and CAPE values for many storms together with their lightning count. Yet instead of just fitting the two dimensional distribution function so that he can optimally project, he uses the product. That product basically assumes that the two phenomena are effectively statistically uncorrelated, but they almost certainly are not! Remember, thunderstorms happen when there is a lot of rain and a lot of CAPE. A different kind of “heat lightning” can happen when there is a lot of CAPE and not much rain. Then, lightning can happen even if there isn’t a lot of CAPE but there is a fair bit of rain (perhaps when there is a wind and lots of lateral but not much vertical turbulence). The distributions are almost certainly not symmetric and the linear trend against the product will almost certainly have less predictive value than the actual 2D joint probability distribution, especially when accounting for the broadening of the distributions. Perhaps the width of the distributions is much narrower relative to a 2D hypersurface, but we are only seeing the hyperbolic projections of that hypersurface formed by CAPE*Rain = constant.
To be very specific, the actual distribution of CAPE=2, Rain = 1/2 may look very different from the distribution of CAPE=1/2, RAIN=2 and both may differ substantially from CAPE=1, RAIN = 1. Yet the model being fit treats all three equally, at the cost of a very wide distribution that looks reasonably symmetric but which might not be at all truly symmetric.
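That asymmetry is easy to make concrete: the product proxy collapses every (CAPE, Rain) pair on the hyperbola CAPE*Rain = constant to a single number, so any asymmetric dependence is invisible to the fit. A toy sketch (the rate function below is invented purely for demonstration):

```python
# Invented lightning-rate model, deliberately asymmetric: CAPE matters
# more than rain. Any such asymmetry is erased by the CAPE*Rain proxy.
def toy_lightning_rate(cape, rain):
    return cape**1.5 * rain**0.5

pairs = [(2.0, 0.5), (0.5, 2.0), (1.0, 1.0)]  # every pair has CAPE*Rain == 1.0
for cape, rain in pairs:
    print(cape * rain, round(toy_lightning_rate(cape, rain), 3))
# All three products equal 1.0, yet the rates span a factor of 4
# (2.0 vs 0.5), so a fit against CAPE*Rain alone cannot tell them apart.
```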
Even without doing this, the authors have to decide on what “the models” say. Since there are a lot of models, they have to either pick the models they are going to listen to or pick all the models. If they pick the ones they are going to listen to, they have to a) say how they are going to select them; and b) say how they are going to figure out what they collectively say at all, since they individually will say different things (unless the selection criterion is “pick only models that say the same thing”).
This is then the point of the second flaw. There is no good way to do either one. I’ve written extensively about the lack of meaning in a superaverage of averages in the context of climate science. Averaging possibly broken models is not guaranteed to give you a good model. It isn’t even likely to give you a better model without some very specific and rather unlikely assumptions about “likely distributions of results subject to non-random errors in physics and programming”. Not averaging means that you can’t reduce your result to “11% per degree C”. Selecting the models by heuristic criteria guarantees heuristic bias. Selecting them randomly means that you have to include models that produce diametrically opposing predictions (flood from one, drought from another). In the end you can’t do better than analyze what each model predicts and present the list of predictions without any bias or attempt to superaverage at all. Some models may predict a reduction in lightning as the warming poles reduce mean CAPE. Others may predict an increase. We cannot possibly say that the average of the two is better than either one, as one of them could be dead right and the other wrong, so that the average is dead wrong.
Finally, the third flaw assumes that the data they have — which is based on samples drawn from many different land surface types operating at their “typical” ranges — can be extrapolated for those regions by using a linear trend obtained from all regions!
That is, there is another critical dimension! Take Florida — lots of thunderstorms. Take New Mexico — far fewer thunderstorms. It is very, very unlikely that increasing the mean temperature of the globe by 1 C is going to affect the frequency of thunderstorms in New Mexico and Florida the same way at all, nor is there any good reason to think that the variations per site are even linear in CAPE*Rain with the same slope!
It could be, in other words, that New Mexico is very insensitive to that increase of a degree, and further the case that the soil type is not conducive to thunderstorms period so that little change occurs there. It may have been represented by several points in the scatter plot, but they were nearly all concentrated in the high CAPE, low rainfall region. It could be that they are very sensitive to that increase of a degree — it might change New Mexico from semi-arid near desert to tropical rain forest! The response might be highly nonlinear! But the data fit to form the model preclude regional nonlinearity or regional variation in sensitivity generally.
Putting it all together, I don’t think that the result they end up with is implausible, as in obviously wrong. It could be right. I just don’t think they’ve done a good job of showing that it is more than plausibly right, and I don’t take the “11%” figure at all seriously. I can’t even tell from the paper if they handled the PDF and CDF correctly even before they connected it to the (somehow) selected GCMs and the (somehow) averaged inputs. I suspect that they did something as simple and naive as taking the mean rainfall and the mean CAPE, multiplying them together, multiplying by the slope of their linear fit, and saying “look, 11% per degree”, which is wrong (or at least makes unstated and possibly indefensible assumptions) in so many ways.
To make a metaphor: There is a very clear connection between the size of a human body and the number of calories taken in by that body. In fact, one can plot it out and for a decent range of sizes I’m certain it will look linear. One can without doubt ascertain the slope of this linear relationship by generating a scatter plot of e.g. height and calorie intake, and I’d expect to be able to input this data into R and fit a linear trend with a decent R^2.
Along comes a climate scientist who wants to see what effect global warming will have on human height. “Aha!” they say. “There is a well known linear trend between calorie intake and height — they are correlated linearly with a slope of (say) 0.5! Increasing carbon dioxide and warmth and water will increase crop yields.” (Which is true, they are increasing crop yields fairly substantially so far!) “Every 70% increase in CO_2 will increase temperature by a degree and will result in a 30% increase in the food supply. I therefore conclude that human height will increase by 0.30*0.5 = 15% as this happens!”
Understanding why this confounds their assertions is key. They’ve established at best a static linear relationship, not partial derivatives at any site, let alone average partial derivatives in some sense that can be extrapolated globally. They haven’t even tried to krige their result or consider whether it holds over seawater in the 70% of the Earth covered by oceans, in the Sahara desert, or in the tropical rainforest. It is quite possible that what they are observing is not a causal connection between CAPE*Rain and lightning but a connection through something else that causes all three: details in e.g. the distribution and flow patterns of the jet stream, modulation of cosmic rays, whatever. Increasing the food supply might WELL increase average height, but probably not at all in the way the static linear trend “predicts” when using a joint distribution function as a conditional distribution function.
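The confounding in the height/calorie metaphor can be sketched numerically. In this purely hypothetical setup (all numbers invented for illustration) a lurking variable drives both calorie intake and height; the cross-sectional fit yields a solid-looking slope, yet intervening on calories changes nothing, because calories play no causal role here.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical confounder (e.g. genetics / overall prosperity) drives both:
z = rng.normal(size=n)
calories = 2000 + 300 * z + rng.normal(scale=50, size=n)
height = 170 + 10 * z + rng.normal(scale=2, size=n)  # calories not causal here

# Cross-sectional fit: height vs calories shows a clear linear trend...
slope = np.polyfit(calories, height, 1)[0]
print(f"observed slope: {slope:.4f} cm per calorie")

# ...but intervening on calories (add 30% food) changes nothing, because
# the relationship runs through z, not through calories:
predicted_gain = slope * 0.30 * calories.mean()  # what the static trend "predicts"
actual_gain = 0.0                                # what actually happens in this setup
print(f"trend-predicted gain: {predicted_gain:.1f} cm; actual gain: {actual_gain} cm")
```

The static fit is real and repeatable, but treating it as a partial derivative — "add food, get height" — gives a confidently wrong extrapolation, which is the commenter's point.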
Great entertaining post Willis. It looks like they did better science back before they turned it over to politicians and computer models . . . and it was cheaper.
Willis, did he mention in this or any other paper how high the temperature could go before it stopped being beneficial, or did he expect it to stop at a certain level?
I don’t consider 274 ppmv significantly less than 295 ppmv. It’s about 10%. What’s the uncertainty here? And anomalies of .x C, trends of .0x C/100 yrs. That’s cutting it pretty fine. On a graph of mins to maxes these anomalies wouldn’t even appear. Focused on the size of a pimple on the elephant’s butt.
“…Callendar estimated 150,000 million tons, not tonnes.” 2,204/2,000 is about a 10% difference – no biggie in that cloud of uncertainty.
How come Mauna Loa is the only source of atmospheric CO2 data? What about all of NOAA’s tall towers and the Arctic and Antarctic data? Some of the tall towers were back and forth across 400 ppm years before Mauna Loa. Maybe that’s why IPCC AR5 TS.6 admits uncertainty about CO2 forcing over land.
As pointed out above, he would be using proper tons, not ‘short tons’, in a presentation to British industry, so the difference is only 1.6%.
Also, Mauna Loa is not the only source of atmospheric CO2 data, just the longest record; others include South Pole (1957-), Baring Head (1974-), Point Barrow (1977-), Alert (1984-), etc.
What? A tonne or metric ton is 1,000 kg or 2,204 lbs. That’s 110% of an English ton of 2,000 lbs.
Mauna Loa might be the oldest, but I understand the data is “adjusted” to account for local volcanic activity. Comparison to the tall towers suggests that a single data point at ML is not representative of the global atmosphere.
Short ton
Long ton
Tonne
Some confusion over metrics.
The long ton is British Imperial and is 2,240 lb; the short ton is the 2,000 lb standard; the tonne is 1,000 kg.
Nope, an Imperial ton as used in the UK is 2240 lbs unlike the US ton, otherwise known as the ‘Short’ ton for obvious reasons.
The Mauna Loa data is not adjusted; when the flow is from the direction of the volcano, the data is simply not used. Comparison to the South Pole data etc. suggests that the ML data is representative; there is much less annual fluctuation in the SH due to the dominance of oceans.
http://www.esrl.noaa.gov/gmd/webdata/ccgg/trends/co2_trend_mlo.png
http://www.esrl.noaa.gov/gmd/webdata/ccgg/trends/co2_trend_gl.png
Before Keeling at the end of the 1950’s, all CO2 measurements were handmade with wet chemicals. Keeling saw the possibilities of a quite new (very expensive) NDIR device, if it was regularly calibrated with accurately known gas mixtures. The main advantage was that it needed no handling and little maintenance. Its accuracy was better than 0.2 ppmv, while the usual wet methods were +/- 10 ppmv, with some better than that but even more time consuming. Most historical data were not even good enough to be sure that there was a seasonal variation of CO2 levels in the NH; only after two years of Mauna Loa data was that confirmed.
Keeling’s new measurements started at the new South Pole station and a year later on Mauna Loa, but the South Pole continuous measurements were stopped after a few years and replaced by two-week flask sampling. A few years later the continuous measurements started again. That is why Mauna Loa has the longest continuously measured trend. Meanwhile a lot more stations are in use. See:
http://www.esrl.noaa.gov/gmd/dv/iadv/
Tall towers over land are used to measure fluxes into and out of vegetation, urbanization, etc. They don’t reflect CO2 levels in the bulk of the atmosphere, which is everything above a few hundred meters over land and everywhere over the oceans (over 95% of the atmosphere). The variable amounts of CO2 near the ground over land don’t have much influence: according to Modtran, if you increase CO2 to 1000 ppmv over the first 1000 meters, the effect of the increased radiation absorption is less than 0.1°C warming at ground level, all other influences remaining the same… Thus there is little effect from near-ground elevated CO2 levels.
FE: Seeing as variously located stations have annual CO2 peaks offset by several months, and long term latitudinal lags of several years, how is CO2 transport to be interpreted: as by air or by sea? –AGF
agfosterjr,
The largest changes in CO2 are seasonal, where the NH extratropical forests are the dominant cause. These act as net sinks in spring-summer and net sources in fall-winter. For the same latitude and altitude band, the mixing of CO2 changes into the rest of the band takes a few days to a few weeks. For different latitude and altitude bands, it needs weeks to months, and between the hemispheres it needs 6-24 months… Here is the lag for the NH altitudes:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/seasonal_height.jpg
The SH follows the NH with a lag, as the ITCZ allows no more than about a 10% air exchange between the hemispheres per year and the bulk of the increase is in the NH. Here is the trend over time for different stations in the NH/SH, where Mauna Loa and South Pole are at over 3,000 meters and Barrow and Samoa are near sea level:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_trends_1995_2004.jpg
Thus it is a matter of mixing speed by wind and air circulation which gives the lags…
@Ferdinand Engelbeen
November 16, 2014 at 6:24 am
Thanks for that! After I asked I read this:
http://www.gfdl.noaa.gov/bibliography/related_files/rjm9901.pdf
–and quickly jumped to the wrong conclusion. –AGF
You all may be interested that there was a paper published last year which celebrated the 75th anniversary of Callendar’s paper, and does a comparison of his temperature estimates with CRUTEM4:
Blog post: http://www.climate-lab-book.ac.uk/2013/75-years-after-callendar/
And, the paper is open access here: http://onlinelibrary.wiley.com/doi/10.1002/qj.2178/full
Regards,
Ed Hawkins
And, Callendar even has an account on twitter: @guycallendar
Right up to your usual standard Willis.
I noted your point about ‘unworthy’ amateur scientists having the temerity to comment on the pseudo-science of climate.
My own Damascene moment came when I discovered the late great John Daly
Together with your good self and now ‘Professor’ Callendar, this makes a Holy Trinity of realistic commentators on the Great Scientific Fraud.
My own background was being raised on a small farm in the west of Ireland, where we were immersed in rural weather lore. Back in the fifties and sixties we had to do our own forecasting. One of the most reliable methods was to look out the thirty-odd miles into Galway Bay. If we could see the Aran Islands it was going to rain. If they were invisible then it was raining. We had no multi-million pound supercomputers in those days.
Later I went to sea as a Radio Officer in the Merchant Navy, just like John Daly above. As a “Sparkie”, one of my tasks was to help collect, collate, and promulgate weather details – one of the much-maligned bucket sea water gatherers.
The coded details were duly despatched to the Met Office at Bracknell by Morse code every 6 hours. This sparked a keen interest in meteorological matters in my youngish mind.
Then about 5 years ago I discovered the late John Daly and his outstanding work in opposing the fraudulent disciples of the global warming religion. Then WUWT and yourself.
It is hard to put into words my appreciation of the necessary efforts of the great and the good on web sites like this. Their vast store of knowledge imparted with good grace and good humour. Contrasting that with the Soviet-like dictatorial religious beliefs of the opposition is really an unfair contest.
I have no doubt that history will smile graciously on the majority of commentators on sites like this.
Please keep fighting for truth.
Loved the comment about UHI. Last night’s NWS forecast showed the temperature in Dallas, TX to be a minimum of three degrees warmer than all the areas encircling it.
My remote thermometer on the south patio reads 17 F. The thermometer suction-cupped to the north kitchen window reads 18.9 F. Kitchen window heat effect.
A man with two watches doesn’t know what time it is and a climatologist with one thousand temperature data points can’t use more than two significant figures. An anomaly of 0.2 C is a statistical construct, not a physical measurement.
An anomaly of 0.2 C is a statistical construct, not a physical measurement.
Actually, anomalies are predictions.
Using the data we have, we create a prediction of unobserved temperatures.
These predictions are of course expressed with many digits of precision, not because the data is that precise, but because the goal is to minimize the error of prediction.
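A toy calculation (all numbers assumed for illustration) shows the mechanics behind this: the mean of many noisy readings has a standard error that shrinks like 1/sqrt(n), so the estimate legitimately carries more decimals than any single thermometer — whether those extra digits are *physically meaningful* is exactly the point under dispute above.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1,000 hypothetical thermometer readings, each only good to about +/- 0.5 C:
true_anomaly = 0.2
readings = true_anomaly + rng.normal(scale=0.5, size=1000)

# The mean is a statistical construct, but its standard error is far
# smaller than the error of any individual reading:
estimate = readings.mean()
stderr = readings.std(ddof=1) / np.sqrt(len(readings))
print(f"estimate: {estimate:.3f} +/- {stderr:.3f} C")
```

With these assumed numbers the standard error comes out near 0.016 C, so quoting an anomaly of "0.2 C" to a tenth or hundredth is statistically defensible — provided the errors really are independent, which is the assumption the skeptical comments are questioning.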
The problem with the UHI fixation and adjustments isn’t whether it is real or not (it certainly is). The question is whether the rate of temperature change in a UHI area is the same as the rate of temperature change outside of the UHI, and whether the rate of change in a UHI stays the same. Also, UHIs have a nasty habit of cropping up occasionally where they shouldn’t be.
Here is another item: anything that lowers the max and raises the min will result in a higher average temp, which is what the UHI and the ocean do.
We live in the coldest period of the last 10,000 years…
So if the Earth had warmed up, we would still be in the Little Ice Age!
The comments about the self-correcting nature of cloud cover don’t mention what I think is the more remarkable aspect of this, which is that although Callendar recognized that water vapor had a greenhouse effect (or words to that effect), he discounted any increased warming of the planet from water vapor because of its tendency to condense and precipitate. As you know, modern climate models depend a great deal on a strongly positive feedback from water vapor to justify their predictions of climate doom. It seems that Callendar, using calculations that one can perform on the back of a postcard, outdid models run on computers with teraflops of power. His instincts about the water cycle are proving to be correct.
for an update, consider this paper: http://www.sciencemag.org/content/346/6211/851 I excerpted some of it and the supporting online material at the blog Climate Etc.
Engelbeen and all,
“Human emissions today are about 3% of total emissions (~9 GtC/year), natural releases are 97%. But natural sinks are 98.5% of total emissions, 1.5% remaining in the atmosphere, (near) all human caused.”
Please propose a mechanism for this effect. I submit that there is none; it is completely impossible for this to happen more than one year in a row. Seriously, Mother Nature does not do arithmetic…
Michael: Oh, let’s see, off the top of my head:
Added emissions increase the partial pressure of CO2 in the atmosphere. This alters the balance at the ocean surface, causing more CO2 to dissolve into the water, reducing the levels in the atmosphere somewhat.
Added emissions increase the partial pressure of CO2 in the atmosphere. This permits most plants, which evolved their photosynthetic mechanisms under far higher concentrations and so are now CO2 “starved”, to absorb more CO2 from the atmosphere by photosynthesis reducing the levels in the atmosphere somewhat.
You say, “Mother Nature does not do arithmetic…” You’ve obviously never used an analog computer, which exploits Mother Nature’s physical laws to do arithmetic (and more, including calculus and differential equations).
Michael, there was a dynamic, temperature controlled equilibrium between natural emissions and natural sinks before humans emitted huge quantities of CO2 in the atmosphere. Based on O2 and δ13C measurements, for vegetation the seasonal in/out is ~60 GtC. The ocean surface gives ~50 GtC in/out over the seasons and the more permanent exchange with the deep oceans is ~40 GtC out of the tropical upwelling zones and back down into the deep in polar waters.
At first, very little happened with the extra CO2 humans emitted, as there is only more uptake of CO2 by the oceans if the CO2 partial pressure in the atmosphere increases (Henry’s law). Thus the first emissions increased the pCO2 level in the atmosphere (ppmv CO2 is about the same as pCO2 pressure in μatm, minus the % water vapor), which pushed a little more CO2 into the oceans (and reduced the outgassing in the tropics). The speed at which the extra CO2 above the old (temperature-controlled) equilibrium is removed depends on the extra air-water pressure difference and the effect that pressure difference has on the CO2 fluxes.
In short, humans emit ~9 GtC/year nowadays. Some 1 GtC/year (in mass, not the original molecules) is absorbed by vegetation due to the increased CO2 level in the atmosphere, ~0.5 GtC/year by the ocean surface (which is then saturated), and ~3 GtC/year goes into the deep oceans. Here is an overview:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/dco2_em2.jpg
where 1 ppmv = 2.12 GtC
Thus as you can see, nature is a net sink of about half the human emissions, just by coincidence: the pCO2 pressure increased by 110 μatm, which causes an extra uptake of ~2.15 ppmv/year. That gives an e-fold removal time of 110 / 2.15 = 51 years, or a half-life of ~35 years, if humans should stop all emissions today.
The important point is that the increase in natural sinks is smaller than the current (increasing) human emissions per year, that is why humans are fully responsible for the increase…
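The arithmetic in the comments above is easy to check directly. A quick sketch using the figures as quoted (110 μatm excess pressure, ~2.15 ppmv/year extra uptake, and the stated conversion of 2.12 GtC per ppmv):

```python
GTC_PER_PPMV = 2.12      # conversion quoted above: 1 ppmv CO2 ~ 2.12 GtC

# Human emissions of ~9 GtC/year, expressed in ppmv of atmosphere:
emissions_ppmv = 9.0 / GTC_PER_PPMV

# e-fold removal time from the quoted excess pressure and extra uptake:
pco2_excess = 110.0      # uatm above the old equilibrium
extra_uptake = 2.15      # ppmv/year net extra natural sink
e_fold = pco2_excess / extra_uptake

print(f"emissions: {emissions_ppmv:.2f} ppmv/year")
print(f"e-fold removal time: {e_fold:.0f} years")
```

This reproduces the ~51-year e-fold figure, and puts current emissions at roughly 4.25 ppmv/year under the stated conversion.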
Carbon or CO2? They are not interchangeable terms. 1 GtC of carbon yields 3.66 Gt of CO2, but not all carbon ends up as CO2.
Nick,
Carbon is used in the scientific world because carbon takes different forms in different reservoirs. While CO2 is nearly the only carbon component in the atmosphere (besides tiny amounts of CH4 and even smaller amounts of CFCs), in the oceans it is 1% CO2, 90% bicarbonates and 9% carbonates, and in plants it is a host of carbohydrates and other compounds…
In all cases the total amount of carbon in different forms must remain the same: the mass balance of carbon must fit, in whatever reservoir and in whatever form it is captured or released… Using all carbon components as carbon equivalents makes the calculations a lot easier…
In the current atmosphere 1 ppmv CO2 equals 2.12 GtC.
“Michael, there was a dynamic, temperature controlled equilibrium between natural emissions and natural sinks before humans emitted huge quantities of CO2 in the atmosphere.”
Horse manure. Mother Nature does not do the same thing every spring nor fall. Trees, grass, vegetation in general, plankton, foraminifera, all grow and die and rot and sink with tremendous variation every year. You guys have not thought this through…
Michael,
Human emissions are ~4.25 ppmv/year (~9 GtC/year). Natural variability is +/- 1 ppmv/year (+/- 2 GtC/year). That is all.
See the above graph in my previous reaction of the changes in the natural sink capacity (nature is a net sink over the past 55 years).
Thus the natural variability caused by differences in net uptake by plants and oceans (mainly temperature driven) is less than half the current human emissions…
Arrhenius and Callendar both thought the increase would be beneficial.
For folks who are interested in how you can calculate sensitivity without climate models,
here is a nice approach that grew out of discussions on Lucia’s blackboard.
http://www.stat.physik.uni-potsdam.de/~pikovsky/teaching/stud_seminar/Model_CO2.pdf
This paper pulls a surface temperature of 288 K (15 C) for the earth out of thin air. When thinking about earth’s surface temperature, why is it that no one considers the temperature of the ocean abyss to be a part of earth’s surface temperature? The general understanding seems to be that deep ocean temperatures result from downwelling ocean currents at the North Pole. To me, that suggests the temperatures of the deep ocean ARE a part of the earth’s surface temperature. Since the weight of earth’s atmosphere only amounts to the weight of 33 feet of water, I would suggest that ignoring the temperature of the oceans is a huge oversight if one is trying to calculate climate sensitivity by comparing theoretical surface temperatures to measured ones.
>his results correlate very well (0.84) with the modern estimate.
Don’t items with rising trends always correlate well?
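The commenter's suspicion is easy to test with synthetic data (invented series, purely illustrative): two completely unrelated series that both happen to trend upward correlate strongly, and differencing away the trend removes most of the apparent relationship.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100

# Two unrelated series that both happen to trend upward over time:
a = 0.05 * np.arange(n) + rng.normal(size=n)
b = 0.08 * np.arange(n) + rng.normal(size=n)

# The shared trend alone produces a high correlation coefficient:
r = np.corrcoef(a, b)[0, 1]
print(f"correlation of two unrelated rising series: {r:.2f}")

# Correlating the year-to-year changes removes the shared trend, and the
# apparent relationship largely vanishes:
r_diff = np.corrcoef(np.diff(a), np.diff(b))[0, 1]
print(f"correlation of their first differences: {r_diff:.2f}")
```

So a raw correlation of 0.84 between two upward-trending temperature series is, by itself, weak evidence of agreement in anything beyond the trend; comparing detrended or differenced series is the more demanding test.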
Good post.
About this: I was most impressed by a practice which I don’t see in the modern scientific journals.
that practice (inviting comments) is common in statistics journals. An excellent example in climate science was the set of comments on the article by McShane and Wyner, published in Annals of Applied Statistics, available here. Every issue of the Journal of the American Statistical Association has invited comments on at least 1 article. Perhaps readers in other fields can chime in with examples.
Back fifty years or more ago, they actually did real science, not the “my model says it must be true” kind of thing that we are treated to today.
there were modelers 50+ years ago whose models predicted the existence of things not yet observed. Rosalind Franklin, you’ll recall, helped to elucidate the structure of DNA molecules through her model based calculations. And there are today good modelers who are as responsible as those modelers in checking their models against all relevant data: Eugene Izhikevich, for example, and his many explorations with his “quadratic integrate and fire” model of the Hodgkin-Huxley model of the squid giant axon (generalized to many other kinds of neurons over the years) (reference in his book Dynamical Systems in Neuroscience [sorry no page number, really, as I have misplaced my copy, but the book has an index.]) There is no need for your gratuitous insult to all of the scientists and modelers since the glory days of your youth. A small number of people have overweening confidence in the predictions of some models; even that is not new.
You have it backwards wrt Rosalind: she made the measurements; Crick and Watson were the modelers. That’s why Watson disliked her — she told him his model was rubbish because the chemistry was wrong, and she was right, so they had to stop working on the project for a while.
Franklin didn’t model. She was an X-ray crystallographer. She was an experimentalist who took pictures which elucidated the DNA puzzle.
milodonharlani and phil, consider this: She was an X-ray crystallographer.
X-ray crystallography depends on the model of x-ray scattering. Franklin did more than just take pictures. Of course it is also true that Crick and Watson were modelers.
Nowadays of course we have “pictures” of proteins from crystallography, and pictures of body parts from magnetic resonance imaging, all of which depend on the solutions of simultaneous non-linear equations. There are models nested within models.
Being an X-ray crystallographer doesn’t make one a modeler; by your weird thought process, any application of physics or physical chemistry is a model! Rosalind was a superb experimentalist who produced by far the best images of the scattering pattern at that time. The Braggs at Leeds had determined how X-rays are scattered a century ago. By the time Rosalind did her experiments it was known that helices generated ‘X’ shaped patterns, therefore the observed ‘X’ shaped pattern of ‘photo 51’ indicated that DNA was helical. I demonstrate that in my undergraduate class every fall! What Rosalind didn’t know (and Dot Hodgkin inadvertently misled her about) was that the missing layer line indicated a double helix. Francis Crick however knew that, and as soon as he saw Rosalind’s photo he knew that it was a double helix. From the measured parameters of the photo he was able to calculate the exact spacing of the double helix. If using the known physics of the scattering of electromagnetic radiation is modeling, then any application of known physics is modeling.
Franklin provided observations which had nothing whatsoever to do with models.
Watson & Crick didn’t build a model. They reconstructed objective reality based upon observations thereof, testing their hypotheses as to structure against further evidence.
Phil: Being an X-ray crystallographer doesn’t make one a modeler, according to your weird thought process any application of physics or physical chemistry is a model!
Only when the calculations follow from a model, such as calculating the flight path of a satellite or interplanetary probe, or Lise Meitner’s calculations that explained the results of Hahn and Strassmann’s experiment (the calculations that Niels Bohr accidentally released to the world through conversation while the paper was in review).
milodonharlani: Watson & Crick didn’t build a model. They reconstructed objective reality based upon observations thereof, testing their hypotheses as to structure against further evidence.
Crick and Watson certainly did build a model; it is accepted as an accurate model of reality, but they certainly built a model. What they did not do was reconstruct actual DNA molecules, something that has been done by others since then, based on the accuracy/reality of their model.
Here’s their model!
http://www.thehistoryblog.com/wp-content/uploads/2013/05/Watson-Crick-DNA-model.jpg
As well as explaining the X-ray diffraction photos their structure for DNA was able to explain Chargaff’s rules (an experimental result, he was very ticked off at not getting a share of the Nobel), and gave a mechanism for replication. “It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.”