Dispelling myths about global warming

CO2 did not drive the rapid warming of the 20th century.

Story submitted by Stan Robertson

The difference between a good idea and a bad idea is often a quantitative matter. For example, many people would think it a good idea to replace internal combustion engines with electric motors. But if the intent is to reduce the burning of fossil fuels, switching to electric motors would not help unless the electricity were generated without burning fossil fuels. Some people think that it has been a good idea to use corn to produce ethanol for fuel; I am not one of them, because the energy return on investment is either negative or minuscule at best. From the standpoint of greenhouse gas emissions, it is a horrendous loser. It may be a biofuel, it may burn cleaner, and it might help ameliorate ozone problems, but considering that nearly a gallon of oil is consumed in addition to the gallon of ethanol produced and burned, it is a quantitative loser. (Not that I care at all about the CO2.)

One of the ideas that seems to be widely believed is that human-produced greenhouse gases, chiefly CO2, have dominated the warming of the earth in the last century. It is a simple quantitative matter to show that this is completely false.

According to the calculations of the UN IPCC, a doubling of the atmospheric concentration of CO2 (with an accompanying rise of other greenhouse gases) would reduce the outgoing infrared radiation from the earth by a net 2.7 watt/m^2 at the top of the atmosphere. This is known as the “climate forcing” that will occur along with a doubling of the CO2. It is a relatively straightforward, but messy, calculation. I have repeated the IPCC calculation for CO2 and obtained a larger number, but after including the IPCC adjustments for other greenhouse gases and the effects of sulfate aerosols accompanying coal burning, we agree. It is important to note that the surface temperature increase that will accompany the CO2 is proportional to the logarithm of the CO2 concentration. Thus while CO2 concentration is increasing exponentially with time, the temperature increases only linearly.

In order to maintain equilibrium with the incoming UV/VIS radiation received by the earth, the surface temperature would need to increase enough to allow it to radiate an additional 2.7 watt/m^2 at the top of the atmosphere after any CO2 doubling. At a nominal surface temperature of 15 C (288 K), the earth's surface radiates about 390 watt/m^2 on average, but the radiation that exits the top of the atmosphere is only 240 watt/m^2. Thus the earth would need to produce an additional (390/240) x 2.7 watt/m^2 = 4.4 watt/m^2 at the surface in order to offset the direct effect of doubling the atmospheric CO2. At 288 K, the earth radiates an additional 5.4 watt/m^2 per 1 C of temperature rise. Thus the direct-effect temperature increase of a CO2 doubling would be 4.4/5.4 = 0.8 C.
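This arithmetic is easy to check. Here is a minimal Python sketch using the round numbers quoted above (these are the article's figures, not independently measured values):

```python
# Direct (no-feedback) surface warming from a CO2-doubling forcing,
# using the round numbers quoted in the text.
SURFACE_FLUX = 390.0   # W/m^2 radiated by the surface at 288 K
TOA_FLUX = 240.0       # W/m^2 exiting the top of the atmosphere
PLANCK_RESPONSE = 5.4  # extra W/m^2 from the surface per 1 C of warming at 288 K

forcing_toa = 2.7                                          # W/m^2 per doubling
forcing_surface = (SURFACE_FLUX / TOA_FLUX) * forcing_toa  # scale TOA to surface
delta_t = forcing_surface / PLANCK_RESPONSE                # direct-effect warming

print(round(forcing_surface, 1), round(delta_t, 1))  # 4.4 0.8
```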

At the present 0.5% per year rate of increase of CO2, it will take about 140 years to double its concentration. But as we all know, a 0.8 C temperature increase in 140 years is not the result that the UN IPCC is alarmed about. The IPCC climate models include large positive feedback effects that raise their expected temperature increase into the range 2 – 4.5 C, with their most probable value at about 3 C.
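The 140-year figure follows from compound growth; a quick check, assuming the stated 0.5%/yr rate:

```python
import math

growth_rate = 0.005  # 0.5% per year increase in CO2 concentration
doubling_time = math.log(2) / math.log(1 + growth_rate)
print(round(doubling_time))  # 139 -- i.e. about 140 years
```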

There are four main arguments against this: (1) We have already had half of a 2.7 watt/m^2 climate forcing since pre-industrial times, yet that has been accompanied by only a 0.8 C temperature increase. As shown below, there are reasons for believing this to be due primarily to natural causes. (2) There is no evidence that confirms the existence of any large feedback effects since the end of the last deglaciation. (3) The rate of temperature increase within the past century has been within the bounds of normal climate variability. And (4), as shown below, the heating effect of CO2 has been quantitatively inadequate to explain the actual warming that has occurred in the last century.

There have been two periods of rapid warming that account for most of the warming that occurred in the last century, as shown below.

Let’s examine the first of these rapid warming periods. By 1944, the atmospheric CO2 concentration had increased from the pre-industrial level of about 280 ppm up to 310 ppm. At that time the concentration was increasing at a rate that would require about 600 years to double. The fraction of a doubling climate forcing that would have occurred by 1944 would have been log(310/280)/log(2) = 0.15, and this would have contributed at a rate of 0.15 x 2.7 watt/m^2 per 60 decades, or 0.0068 watt/m^2 per decade. Its direct warming effect at the surface would thus be only (390/240) x (0.0068 watt/m^2 per decade) = 0.01 watt/m^2 per decade. This would have raised the temperature by (0.01 watt/m^2 per decade)/(5.4 watt/m^2/C) = 0.002 C per decade. This is such a pitifully small fraction of the 0.174 C per decade rate of heating that occurred 1917-1944 that it is pretty clear that CO2 had nothing to do with the warming of the first half of the last century. Even the IPCC climate modelers concede this point.
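The chain of numbers in this paragraph can be reproduced step by step (a sketch with the article's inputs; the text rounds the doubling fraction to 0.15 before multiplying):

```python
import math

# Fraction of a CO2 doubling realized by 1944 (280 ppm pre-industrial -> 310 ppm)
fraction = math.log(310 / 280) / math.log(2)      # ~0.15

# Forcing accrued over the ~600-year doubling time, expressed per decade,
# then scaled from top-of-atmosphere to the surface by 390/240
surface_rate = (390 / 240) * fraction * 2.7 / 60  # W/m^2 per decade

# Convert to a warming rate via the 5.4 W/m^2 per C surface response
temp_rate = surface_rate / 5.4                    # C per decade

print(round(fraction, 2), round(surface_rate, 2), round(temp_rate, 3))
# 0.15 0.01 0.002
```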

But there is still more to be learned from that period. Apparently some natural phenomenon allowed the earth to absorb energy at a significant rate and produce the temperature increase of the first half of the century. Let’s see how much that might have been. To begin, the earth would have had to take in enough heat to at least produce the additional surface radiation that would accompany a temperature rise of 0.174 C per decade for 1917-1944. This would be (5.4 watt/m^2/C) x (0.174 C/decade) = 0.94 watt/m^2 per decade. This is already 94 times the CO2 heating rate.

But, in addition, as shown by both the ARGO buoy system and heat transfer calculations, at least 700 meters of upper ocean can respond to heating on a time scale of a decade. The additional amount of heat required to raise its temperature by 0.174 C per decade would be c*d*0.174 C, where c = 4.3×10^6 joule/m^3/C is the volumetric heat capacity of sea water and d = 700 m, giving 5.2×10^8 joule/m^2. Dividing by the number of seconds in 10 years, this would be an average of 1.7 watt/m^2 per decade. But since the heating would start at zero, it would have to end at 3.4 watt/m^2 per decade in order to attain this average. This should be added to the 0.94 watt/m^2 per decade of surface radiation losses by the end of the warming period. So the total heating rate would have to ramp up by 4.3 watt/m^2 per decade to provide the warming that actually occurred in either of the rapid warming periods. This is 430 times the direct CO2 surface heating for 1917-1944.
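These ocean-heat numbers can be verified the same way (a sketch; the 700 m depth and 0.174 C/decade are the figures assumed in the text):

```python
c = 4.3e6   # J/m^3/C, volumetric heat capacity of sea water
d = 700.0   # m of upper ocean assumed to respond on a decadal time scale
dT = 0.174  # C of warming per decade
seconds_per_decade = 10 * 365.25 * 24 * 3600  # ~3.16e8 s

heat = c * d * dT                     # ~5.2e8 J/m^2 added over the decade
avg_flux = heat / seconds_per_decade  # ~1.7 W/m^2 averaged over the decade
end_flux = 2 * avg_flux               # ~3.3 W/m^2 (the text rounds to 3.4)
radiative = 5.4 * dT                  # ~0.94 W/m^2 of extra surface radiation
total = end_flux + radiative          # ~4.3 W/m^2 per decade ramp required

print(round(avg_flux, 1), round(total, 1))  # 1.7 4.3
```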

Since essentially the same rate of temperature increase occurred 1976-2000, we can compare 4.3 watt/m^2 with the heating that might have been caused by CO2 in the last part of the last century. From 1944 to 2000, the CO2 concentration increased from 310 ppm to 370 ppm, with a doubling time of about 140 years. The corresponding climate forcing rate that this would have caused at the surface would be (390/240) x (log(370/310)/log(2)) x (2.7 watt/m^2)/14 decades = 0.08 watt/m^2 per decade.
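And the 1944-2000 rate, in one line (same assumptions as above):

```python
import math

# Surface heating rate from CO2 growth 1944-2000: 310 -> 370 ppm,
# ~140-year doubling time, 2.7 W/m^2 per doubling, scaled by 390/240
rate = (390 / 240) * (math.log(370 / 310) / math.log(2)) * 2.7 / 14
print(round(rate, 2))  # 0.08 W/m^2 per decade
```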

Due to the higher rate of growth of CO2 concentration in the second half of the 20th century, this is eight times as large as the direct surface heating effect caused by CO2 in the first half. Nevertheless, it is still some 54 times smaller than the rate of heating that actually occurred.
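Both ratios follow from the rounded per-decade figures quoted in the text:

```python
co2_rate_1917_1944 = 0.01  # W/m^2 per decade, direct surface heating from CO2
co2_rate_1944_2000 = 0.08  # W/m^2 per decade, direct surface heating from CO2
required_ramp = 4.3        # W/m^2 per decade needed for the observed warming

print(round(co2_rate_1944_2000 / co2_rate_1917_1944))  # 8
print(round(required_ramp / co2_rate_1944_2000))       # 54
```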

These straightforward calculations make it painfully obvious that CO2 forcing is not what drove the two periods of rapid heating during the last century. Until the natural causes of these rapid warming periods are understood and included in the climate models, there is no reason to believe the models. This is simple first-year physics.

===========================================================

Stan Robertson, Ph.D., P.E., retired in 2004 after teaching physics at Southwestern Oklahoma State University for 14 years. In addition to teaching at three other universities over the years, he has maintained a consulting engineering practice for 30 years.

March 27, 2013 1:26 am

Gail Combs says:
March 26, 2013 at 4:42 pm
Gail,
We have been there several times too. In summary:
CO2 is well mixed in over 95% of the atmosphere. It is only highly variable in the first few hundred meters over land. CO2 levels measured in the bulk of the atmosphere don’t change by more than +/- 8 ppmv over a year (that is about 2% of the scale), the huge seasonal exchanges (+/- 20% of all CO2 in the atmosphere) included. Any scientist worth his or her salt calls that well mixed.
Well mixed doesn’t mean that any release or capturing of CO2 at any point on earth is mixed in instantly all over the earth. It only says that changes at any point are mixed in over a reasonable period of time. Which is the case for CO2.
Lucy Skywalker (whom I met a few years ago) still believes the late Jaworowski, whose knowledge ended in 1992 and was firmly refuted in 1996 by the work of Etheridge et al. on three Law Dome ice cores. Anybody who denies that the gas enclosed in bubbles is on average younger than the surrounding ice layer in ice cores has, in my opinion, stopped learning about ice cores. And declaring that CO2 migrates from lower to higher concentrations is as good as closing the door on any credibility forever. Others like Glassmann constantly misinterpret what is said by others, so that any real discussion with him is impossible.
Then the Japanese satellite data: what is published unfortunately are flux data, not the absolute CO2 levels (although they must have them too). Of course there are huge fluxes within a year over the seasons, but that says next to nothing about the change in total CO2 levels.
Any global temperature change leads to changes in total CO2 of the atmosphere. Over the seasons it is some 5 ppmv for a change of 1°C (mainly caused by vegetation in the NH). Over (very) long time spans it is 8 ppmv/°C. That is all. We are now at a 100 ppmv increase for some 1°C increase since the LIA, at a remarkably fixed ratio with human emissions over the past 110 years. If you know of any natural process that can deliver 92 ppmv CO2 into the atmosphere in lockstep with human emissions, I am very interested…

March 27, 2013 1:40 am

Francis X. Farley says:
March 26, 2013 at 8:18 pm
While I agree with the fate of human CO2 (as cause of the increase in total mass, not as the fate of individual molecules), the amount of energy released by burning fuels is really trivial compared to what the sun sends us. The amount of water and energy released is less than 0.01% of insolation and the natural water cycle, if I remember my past calculations correctly. Hardly of influence on the global balance of either energy or water.
Even so, local releases in towns create local heat islands and cause upwind conditions which may affect local (extreme) rainfall over towns. But there is no global trend at all in extreme weather, be it rain, drought, tornadoes, hurricanes,…

Gail Combs
March 27, 2013 5:40 am

Ferdinand Engelbeen, we will have to wait and see what happens as the earth descends into a cooling cycle. However, after seeing what Hansen did to the temperature data and what Mauna Loa does with their CO2 data: with outliers and without outliers, I think we will have a mile of ice sitting on Chicago before these so-called scientists actually give us any real data.
The lying and fabrication by ‘scientists’ is reaching the point where even the general public is noticing, thank goodness. I cannot believe, based on all the other evidence available, that you still defend the CO2 part of the scam. The fact that a prominent scientist like Dr. Jaworowski was FIRED rather than allowed to further investigate the ice core measurement methodology used for CO2 tells me the ‘Consensus’ had something to hide and was not interested in actual science.

Gail Combs
March 27, 2013 6:07 am

Ferdinand Engelbeen says:
March 27, 2013 at 1:26 am
….. In summary:
CO2 is well mixed in over 95% of the atmosphere. It is only highly variable in the first few hundred meters over land….
>>>>>>>>>>>>>>>>>>>>>>>>
And there you are WRONG. You forgot about the volcanoes: Map “…Kilauea volcano in Hawaii has been erupting nearly continuously since 1983….” and Weekly Volcanic Activity Report

IG reported that during 13-17 March seismicity at Tungurahua was high. On 13 March ash plumes rose 1-3 km above the crater, and generated ashfall in Choglontus and Puela. The next day nearly continuous emissions of gas and ash rose 500 m. Explosions produced ash plumes that rose 3 km and blocks rolled 500 m down the flanks. On 15 March ash plumes drifted SE and W. An explosion generated an ash plume that rose 4 km and drifted E. A pyroclastic flow occurred near the crater.
On 16 March the eruptive activity at Etna changed from Strombolian explosions to lava fountaining, with the highest jets rising 600-800 m above the crater rim. Several lightning flashes within the eruptive cloud were observed….. http://www.volcano.si.edu/index.cfm

The Role of Explosive Volcanism During the Cool Maunder Minimum
Abstract
Understanding of the natural climate variability is crucial for evaluating the anthropogenic contribution to global warming. In particular, external forcing factors such as solar irradiation changes and aerosol forcing from explosive volcanism need to be captured accurately in order to detect and quantify the emerging signal. The short instrumental period limits our options to estimate the magnitude of external forcing through absence of the full range in magnitudes of the forcing factors as well as by lack of their low frequency representation. Thus, we are forced to use proxies to expand our record. Reconstructions of solar irradiance have often employed sunspot observations as a measure of solar activity. A striking feature has always been the Maunder Minimum, a multi-decadal period where the sunspots almost entirely disappeared. It is generally associated with reduced solar irradiance. Unusually cold conditions in Western Europe, especially during the late Maunder Minimum from 1675-1705, have often been used synonymous for the Little Ice Age. This link between the solar irradiance and temperatures during the Maunder Minimum has been applied for estimating either the magnitude of the low frequency solar irradiance changes while assuming a particular climate sensitivity, or conversely, to estimate the climate sensitivity assuming a magnitude of solar irradiance change. In doing so, other potential causes of the cool conditions were ignored. Interestingly, the climate conditions during the Maunder Minimum don’t remain cold over the entire period but exhibit a number of very cold, pulse-like episodes of a few years length. Here, the role of explosive volcanism superposed on solar irradiance changes during the late Maunder Minimum is evaluated. Using the fully coupled NCAR Climate System Model different ice core based volcanic forcing series are applied and combined with solar irradiance reconstructions. 
Not only the temporal radiative balance impacts of the forcings are analyzed, but also the spatially characteristic evolution of the signals. These fingerprints are then verified against a series of high resolution proxy reconstructions of European and Northern Hemisphere climate. Through this comparison of model with proxy data we quantify the volcanic cooling during this period and highlight the danger of estimating the climate sensitivity when omitting other factors.

Lester Via
March 27, 2013 7:52 am

A. Scott says:
March 26, 2013 at 6:27 pm
“lets take the 1971-1991 average of $2.28 a bushel and compare to the 2011 price of $5.18. That would be a 127% increase from 1971 to 2011, a 40 year period. That would be an average annual increase of just over 3%.”
————————————————————————————————————————–
When expressing an increase as an average annual percentage over a long time period, it generally means relative to the prior year rather than the beginning number. This allows the increase to be compared to the average rate of inflation over the same time period. When done in this manner the annual increase in the price of corn is 2.07% rather than “just over 3%”.
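The quantity Lester describes is the compound annual growth rate; a small Python helper makes the distinction concrete (the prices are the ones quoted in this thread):

```python
def cagr(start, end, years):
    """Compound annual growth rate: the constant year-over-year percentage
    change that carries `start` to `end` over `years` years."""
    return (end / start) ** (1.0 / years) - 1.0

# $2.28/bushel (1971-1991 average) to $5.18 in 2011, treated as 40 years
print(round(100 * cagr(2.28, 5.18, 40), 2))  # 2.07 (percent per year)

# Same prices treated as a 30-year span instead
print(round(100 * cagr(2.28, 5.18, 30), 2))  # 2.77
```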

Lester Via
March 27, 2013 8:21 am

But then, using the average price of corn for the period of 1971-1991 and comparing it to the 2011 price and calling it a 40 year period doesn’t seem kosher either. A 30 year period (1981 to 2011) would be closer, in which case the average annual increase in price would be 2.77%, still relatively low.

Lester Via
March 27, 2013 9:00 am

But then again – a little research at http://futures.tradingcharts.com/hist_CN.html shows corn prices at the end of 1971 to be $1.18 and $5.56 at the end of 2011 – an average annual increase of 3.33%. But the price at the end of 2012 was $7.10, a price jump of 27.7% in one year.
It seems commodity prices can be used to make whatever point one wants to make simply by selecting dates.

March 27, 2013 11:34 am

It is amazing how people cling to ideas that are clearly wrong. In my lab, I use a 250W Weller heat gun that emits hot air at about 400C. Once stable, the metal nozzle will get up around that temperature. Can you visualize me holding this heat gun in one hand and a thermometer in the other? I want to use the heat gun to emulate “back radiation” and heat up the thermometer by 33C. I can’t use air flow convection to assist my heating, because that is an atmospheric cooling mechanism, not an atmospheric heating mechanism, so I will hold the thermometer level or closer to the floor than the heat gun. The metal nozzle radiates IR, of course it does, so that’s my source. How close does the thermometer have to be to the IR radiating metal nozzle to increase its temperature by 33C? You tell me: is it even physically possible? Can I do it without the nozzle and thermometer touching and enabling heating via conduction? If you think it’s physically possible, imagine “back radiating” CO2 all around you having the same effect. It would be readily apparent. In some cases, it would be lethal. Think, people, think.
Imagine introducing a CO2 molecule between the metal nozzle and the thermometer. Set it free. What can it do to increase the thermal coupling between the nozzle and thermometer? As soon as you set it loose, it will race toward the ceiling and carry thermal energy with it, so, clearly and inarguably, it’s an agent of cooling. Want to convince me otherwise? Easy, just show me your lab test results.

March 27, 2013 1:08 pm

Gail Combs says:
March 27, 2013 at 5:40 am
Dear Gail, the CO2 graphs you sent were not from Mauna Loa. The noisy one is from Neuglobsow ( http://www.igb-berlin.de/locations.html ), north of Berlin (Germany) in the midst of a nature park, where CO2 levels go sky-high at night if there is an inversion, and a lot lower during the growing season on sunny days… The second is from Mace Head, on the coast (Ireland, http://macehead.org/index.php?option=com_content&view=article&id=46&Itemid=27 ). The difference between the first station and the second is exactly the difference between measuring in the 5% of the atmosphere where the sources and sinks are huge and the air masses are not mixed fast enough to level the differences, and measuring in the 95% of the atmosphere where the mixing is most of the time adequate enough to level off the differences within a reasonable time frame.
The real, unaltered hour by hour data from Mauna Loa (MLO) and the South Pole (SPO) can be compared to the “cleaned” data, where the outliers are removed to make daily and monthly averages:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/co2_mlo_spo_raw_select_2008.jpg
Note the difference in noise between MLO and SPO, but also compare the scale of both to the scales used in Mace Head and the Neuglobsow CO2 levels.
The variability at MLO is mostly between +4 ppmv (when downwind of the volcanic vents) and -4 ppmv (when upwind of vegetation in the valleys, mostly in the afternoon) around the seasonal and long term trend. That is all. These outliers of known local origin are not used for daily, monthly and yearly averages. But it doesn’t make any difference in the average or trend over a year. SPO has no such problems, because it is far away from volcanoes and there is no vegetation for thousands of km. But the harsh conditions mean that more mechanical problems arise. The net result is that SPO shows exactly the same trend, but with a lag of about 18 months behind the CO2 levels of MLO. So where is the manipulation?
About volcanoes: CO2 emissions from land volcanoes are not of interest here, but the injection of enormous amounts of SO2 into the stratosphere is of interest. Pinatubo, a one-in-over-100-years event, caused an extra cooling of at most 0.6°C over some three years, due to sulphate and other aerosols. The extra CO2 released had less effect than the cooling: the CO2 increase shows a dip, not an extra increase of CO2 in the atmosphere. That dip is within the natural variability of +/- 1 ppmv around the trend, which is about 2 ppmv/yr nowadays.

richardscourtney
March 27, 2013 2:05 pm

Ferdinand:
I am writing this post as a courtesy to show that I am not ignoring your post addressed to me at March 26, 2013 at 3:48 pm.
I stated my view as clearly as I could in my post at March 26, 2013 at 4:35 am which your post answers.
As you say in introduction to your view in your post

We have been there for several years now, but for new readers not familiar with our discussion, I will give my opinion again in short:

Indeed, we have been debating the matter for over a decade in several places and interested people can search the WUWT files to obtain complete knowledge of our different views.
The basis of our difference is your assertion of the ‘mass balance argument’ and my rejection of that as being a circular argument. I see no purpose in repeating that again here when interested people can see our previous debates on WUWT.
As you know, Arthur Rorsch, Dick Thoenes and I published a paper which says the same as Salby later also said. You dispute our findings and Salby’s later but very similar findings. I dispute your finding. And I am willing to allow others to assess your, our and Salby’s analyses for themselves.
Richard

March 28, 2013 12:06 am

Lester Via – I used the USDA FG Yearbook, which contains data including the price received at the farm per bushel. I used the 1971 to 1991 average to try to somewhat filter out volatility in the older number. As you note, an approach such as yours generally shows a smaller average annual increase than I showed.
In any event we both reach similar conclusions. Those that attack ethanol, claiming it takes food from poor people by reducing export supply and driving up prices, are simply not accurate. Whether the average annual increase is 2% or 3% is immaterial to those claims. The fact is that corn prices on average have barely increased in 40 years. If we were to factor in inflation, corn’s average prices, if I recall, have actually fallen.
By averaging the start but not the end – using the speculation-inflated price of 2011 – we again overstate the true increase. If we used, say, the 1971-1991 average vs the 2005-2011 average, which is inflated by a couple of sets of spike years, we would be comparing $2.27 to $4.04 – a 77% total, or approximately a 1.9% average annual increase.
The current corn price you note is completely irrelevant to the discussion – it is based on the poor harvests last year due to the drought and the market’s concerns over more of the same for this year’s crop. With investor speculation of another year of lower production and little change in demand, the price is being bid up as a result.
It has little to do with corn used for ethanol, as I’ve shown above. In the case of a true shortage, ethanol producers simply cut back production and use less corn – exactly as they did in 2012’s compromised crop year.

March 28, 2013 9:50 am

Actually “intensity” or “radiant intensity” as it applies to electromagnetic radiation has SI units of watts per steradian.
Never is it watts per metre squared.

You mean never as in “all the time”, as in nearly every textbook on electromagnetic radiation ever written? Including the ones I’ve written? You mean never as in not here:
http://webpages.ursinus.edu/lriley/courses/p212/lectures/node26.html
http://en.wikipedia.org/wiki/Poynting_vector
(units of watts/m^2, intensity commonly defined as the magnitude of the Poynting vector, SI units in common with sound intensity: http://en.wikipedia.org/wiki/Sound_intensity)? Never, as in one of the most common definitions of the received power through any given surface is the flux of the Poynting vector through it?
You are quite correct that it is also often given as watts per steradian, basically the Poynting vector over r^2. You are completely and categorically incorrect when you state that average power or average intensity have no meaning. They are all that most devices for measuring radiation intensity — which do not have a clue about steradians or solid angles, and depend entirely on integrated flux of the Poynting vector through a detector surface — ever measure, as outside of the comparatively low frequency RF part of the spectrum we cannot measure real-time variations in \vec{S}.
Again, physics textbooks galore, all the way up to graduate level textbooks, contradict you. And what possible virtue is there in asserting that average power is meaningless quite aside from the fact that it is a false statement? To enable you to make an argument that every camera, every optical transducer in the universe contradicts? Even your eye responds to average power, not instantaneous power, because it takes a certain average number of photons to trigger rhodopsins in retinal cells. Every digital camera. Photographic film. Photocells. You tell me what optical frequency radiation devices can respond to electromagnetic fields varying on a timescale of 10^{-15} seconds.
rgb

April 1, 2013 9:18 am

Not sure of the importance or relevance to the debate in the comments. I have checked the math and the first section checks out. However, in doing the research to validate the post (I plan to replicate it on my blog), I found the IPCC’s listed value for the forcing to be 3.7, not 2.7. The originally calculated value is 4; however, they adjusted it as a fudge factor to 3.7. This would alter the original posted value of 0.8 C per 140 years to roughly 1.1 C per 140 years, an almost negligible difference. It may also explain the discrepancy Stan mentions between his original calculation and the IPCC’s value.
As for the second part, the math looks fine past the logarithmic calculation. I’m a bit iffy on how the log was used to get the fraction of the difference. This doesn’t invalidate the approach, but if someone could post a quick explanation of why it was used in this way (log(310/280)/log(2)), it would be much appreciated.
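One way to see why log(310/280)/log(2) gives the fraction of a doubling: if forcing is logarithmic in concentration, every doubling contributes the same forcing, so dividing the log of the concentration ratio by log(2) counts how many (possibly fractional) doublings have occurred. A sketch using the article's 2.7 W/m^2 value:

```python
import math

F_DOUBLING = 2.7  # W/m^2 per CO2 doubling (the article's value; the IPCC lists 3.7)

def forcing(c, c0=280.0):
    """Forcing that is logarithmic in concentration: each doubling of c
    adds the same F_DOUBLING, so log(c/c0)/log(2) counts the doublings."""
    return F_DOUBLING * math.log(c / c0) / math.log(2)

print(round(forcing(310) / F_DOUBLING, 2))  # 0.15 -- fraction of a doubling by 1944
print(round(forcing(560) / F_DOUBLING, 2))  # 1.0  -- a full doubling of 280 ppm
```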
