Delta T and Delta F

Guest Post by Willis Eschenbach

The fundamental, and to me incorrect, assumption at the core of the modern view of climate is that changes in temperature are a linear function of changes in forcing. Forcing is defined as the net downwelling radiation at the top of the atmosphere (TOA). According to this theory, in order to figure out what the change in global temperature will be between now and the year 2050, you just estimate the change in net forcing between now and then, multiply it by the magic number, et voilà—the change in temperature pops out!
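In symbols, with λ standing for the “magic number” (the climate sensitivity parameter, in °C per W/m²), the assumed relation is simply:

    ∆T = λ × ∆F

So if, for instance, the projected change in net forcing were 2 W/m² and λ were 0.5°C per W/m² (an illustrative value, not a claim), the predicted warming would be 1°C.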

I find this theory very doubtful for a number of reasons. I went over the problems with the mathematics underlying the claim in a post called “The Cold Equations”, for those interested. However, I’m not talking theory today; what I want to look at is some empirical data.

The CERES dataset contains measurements of upwelling radiation at the top of the atmosphere. It also has various subsidiary datasets which are calculated from the CERES data along with other satellite data and ground measurements. These include the upwelling thermal (IR) radiation from the surface. I apply the Stefan-Boltzmann equation to that upwelling IR data in order to calculate the surface temperature. I’ve checked this data against the HadCRUT surface temperature data, and they agree very closely, with the exception of certain areas around the poles. I ascribe this to the very poor coverage of ground weather stations around the poles, which has forced the ground datasets to infill these areas based on the nearest stations. Even with that polar difference, however, the standard deviation of the difference between the CERES and HadCRUT monthly data is only 0.08°C, extremely small. The CERES data is more complete than the HadCRUT data, so I use it for the surface temperature.
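For anyone who wants to replicate the conversion, here is a minimal sketch of that Stefan-Boltzmann inversion (Python; the function name and the sample flux are mine for illustration, and the surface emissivity of 1.0 is my simplifying assumption, not necessarily what CERES uses):

    import numpy as np

    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

    def temp_from_upwelling_ir(flux_wm2, emissivity=1.0):
        """Invert F = emissivity * SIGMA * T**4 to get T in kelvin."""
        return (np.asarray(flux_wm2) / (emissivity * SIGMA)) ** 0.25

    # Illustrative value: ~390 W/m^2 of upwelling surface IR gives ~288 K (~15°C)
    print(temp_from_upwelling_ir(390.0))  # ~288.0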

Now, this lets us compare changes in the net TOA forcing imbalance with changes in the surface temperature. For this kind of study we need to remove the effects of the seasons. We do this by subtracting the full-dataset average for each month from the data for that month. This leaves the “anomaly”—how much warmer or colder each month is compared to the average.
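In code, that deseasonalizing step looks something like the following sketch (Python; it assumes a simple one-dimensional monthly series starting in January, and the names are mine):

    import numpy as np

    def monthly_anomaly(series):
        """Subtract the full-dataset average for each calendar month.

        series: 1-D array of monthly values, assumed to start in January.
        """
        series = np.asarray(series, dtype=float)
        anomaly = series.copy()
        for month in range(12):
            anomaly[month::12] -= series[month::12].mean()
        return anomaly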

For example, here’s the temperature data, with the top panel showing the raw data, the middle panel showing the annually repeated seasonal variations, and the bottom panel showing the “anomaly”, how much warmer or cooler the globe is compared to average.

[Figure 1 image: ceres decomp allt2.jpg]

Figure 1. Raw data, seasonal changes, and anomaly of the CERES surface temperature dataset. Note the upswing at the end from the latest El Niño. The temperature has dropped since, but the CERES data has not been updated past February 2016.

According to the incorrect paradigm that says that changes in surface temperatures follow the changes in forcing, we should be able to see the relationship between the two in the CERES data—when the TOA forcing takes a big jump, the temperatures should take a big jump as well, and vice-versa. However, it turns out that that is not the case:

[Figure 2 image: ceres delta forcing vs delta temperature anomalies.png]

Figure 2. Changes in TOA radiation (forcing) ∆F versus changes in surface temperature ∆T. Delta (∆) is the standard abbreviation meaning “change in”. In this case they are the month-to-month changes. The background is a hurricane from space. I added it because I got tired of plain old white.

As you can see, in the CERES dataset there is no statistically significant relationship between the changes in TOA forcing ∆F and the changes in surface temperature ∆T. Go figure.
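For anyone wanting to reproduce the test, the check is just an ordinary least-squares regression of the month-to-month differences (a sketch under my assumptions about the data layout; note the caveat in the comments, a point some commenters below also raise):

    import numpy as np
    from scipy.stats import linregress

    def delta_regression(forcing_anomaly, temp_anomaly):
        """Regress month-to-month change in T on month-to-month change in F.

        Returns the slope (degrees C per W/m^2), R^2, and the p-value.
        NB: plain OLS assumes the x variable is essentially error-free;
        with a noisy abscissa the slope is biased low (regression dilution).
        """
        dF = np.diff(np.asarray(forcing_anomaly, dtype=float))
        dT = np.diff(np.asarray(temp_anomaly, dtype=float))
        fit = linregress(dF, dT)
        return fit.slope, fit.rvalue ** 2, fit.pvalue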

Now, I can already hear some folks thinking something like “But, but, that’s far too short a time period for that small a change to have an effect … I mean, one watt per square metre over a month? The Earth has thermal inertia, it wouldn’t respond to that …”

So let’s take a look at a different scatterplot. This time we’ll look at the change in total surface energy absorption (shortwave plus longwave) versus the change in temperature.

[Figure 3 image: ceres delta surfabs vs delta temperature anomalies.png.jpg]

Figure 3. Changes in surface energy absorption versus changes in surface temperature ∆T.

So the objection that the time span is too short is nullified. A change of one watt per square metre over a month is indeed able to change the surface temperature, by about a tenth of a degree.

Finally, is this just an artifact because we’re using CERES data for both surface temperature and total surface energy absorption? We can check that by repeating the analysis, but this time we’ll use the HadCRUT surface temperature data instead of the CERES data …

[Figure 4 image: ceres delta surfabs vs delta HadCRUT anomalies.png.jpg]

Figure 4. As in Figure 3, but this time using HadCRUT surface temperature data.

While, as we’d expect, there are differences between the two surface temperature datasets, in both of them a difference of one watt per square metre over a month is clearly able to change the surface temperature.

So at the end of the day we are left with Figure 2, showing that there is no significant relationship between changes in TOA forcing and changes in surface temperature.

Note that I am NOT claiming that this method can determine the so-called “climate sensitivity”. I am merely pointing out that the CERES data does not show the expected relationship between changes in net TOA radiation imbalance and changes in surface temperature.

Best to all,

w.

As Usual: When you comment, please QUOTE THE EXACT WORDS YOU ARE DISCUSSING so we can all be clear just what you are referring to.


225 Comments
Bruce of Newcastle
December 18, 2017 3:15 pm

What happens in graphs #2, #3 and #4 when you include the time-variant lines between the points?

That is what Spencer and Braswell do (e.g. see Fig. 3a in the linked PDF of the paper). The regression trend is one thing but the feedback response is quite different, as they demonstrated.

Greg
Reply to  Bruce of Newcastle
December 18, 2017 4:31 pm

The Spencer and Braswell paper points out the problem, but they say they can’t see a way to untangle the two. I can, at least to a first approximation.

“On Determination of Tropical Feedbacks”

https://climategrog.wordpress.com/2015/01/17/on-determination-of-tropical-feedbacks/

Greg
Reply to  Willis Eschenbach
December 20, 2017 5:44 am

Sorry Willis, probably too late in coming back to this comment. The “robust” method does not help, because it is not the outliers which are the cause of the problem; it is the fact that you have an error-laden abscissa, not a controlled variable. READ the article I linked: it explains this in detail and shows the huge errors caused by ignoring that OLS is only valid with minimal x errors, NOT for scatter plots.

If you want ‘robust’, invert the axes and then try to work out which value you want to believe 😉

afonzarelli
December 18, 2017 5:55 pm

Remember that TSI only varies by 0.25 watts per square meter during the solar cycle when averaged over the surface of the earth (the earth being a sphere). We should actually expect 1 watt/m2 to produce 0.3C without feedbacks. A warming of only 0.1C per 1 watt/m2 would equal an ECS of just 0.6C…

afonzarelli
Reply to  afonzarelli
December 18, 2017 6:12 pm

(Ha! “jinx”, Willis! I’m going off the stated 1.1C per doubling of CO2; that is, 1.1C divided by 3.7 watts per square meter)…
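(Spelled out, that back-of-the-envelope calculation is 1.1°C ÷ 3.7 W/m² ≈ 0.3°C per W/m².)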

Reply to  afonzarelli
December 19, 2017 9:16 pm

afonzarelli,

The Stefan-Boltzmann sensitivity is given by 1/(4εσT³), where ε is the emissivity (between 0 and 1), σ is the SB constant, and T is the temperature in kelvin.

The sensitivity of an ideal BB at 288K is about 0.18 K per W/m^2. The sensitivity of an ideal BB emitting 240 W/m^2 (255K) is about 0.27 K per W/m^2. The sensitivity of a non-ideal BB (gray body) with an emissivity of about (255/288)^4 = 0.61 and whose temperature is 288K is 0.30 K per W/m^2.

The gray body sensitivity of 0.3 represents the path from the surface to space and sets the maximum possible sensitivity as it represents the minimum rate of cooling. The black body sensitivity of the surface at 0.18 represents the minimum as it represents the maximum rate of heating. The actual sensitivity is somewhere in between and most likely closer to the lower limit.
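Those three numbers are easy to verify; here is a quick sketch (Python; the formula and the example values are the commenter's, the code is mine):

    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

    def sb_sensitivity(T, emissivity=1.0):
        """dT/dF = 1 / (4 * emissivity * SIGMA * T**3), in K per W/m^2."""
        return 1.0 / (4.0 * emissivity * SIGMA * T ** 3)

    print(sb_sensitivity(288.0))                # ideal BB at 288 K:  ~0.18
    print(sb_sensitivity(255.0))                # ideal BB at 255 K:  ~0.27
    print(sb_sensitivity(288.0, (255/288)**4))  # gray body at 288 K: ~0.30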

The SB sensitivity is equivalent to the ECS. The TCS will be lower since it takes applying the W/m^2 of forcing for 5 time constants to achieve about 99% of the SB effect. Being applied for 1 time constant would result in about 63% of the final value. This is why the value measured by shorter term changes is less than the ECS value.
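(For reference, those percentages are just the standard first-order exponential response 1 − e^(−t/τ): at t = 5τ it gives 1 − e⁻⁵ ≈ 0.993, or about 99%, and at t = τ it gives 1 − e⁻¹ ≈ 0.632, or about 63%.)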

The reason I use 2.5-degree slices for the scatter plots is that the difference in insolation between slices has been the same for many, many time constants; thus the relative behavior of adjacent slice averages is representative of the ECS, while absolute differences between months are more indicative of the TCS.

zemlik
December 18, 2017 6:55 pm

clueless alert.
Is the idea that radiation arrives from the Sun, hits the atmosphere, some bounces off and some gets through; and of that which gets through, some bounces off the surface, hits the atmosphere, and bounces back to the surface?
More CO2 means more bouncing back down to the surface, causing greenhouse warming. But if more CO2 does this, does it not mean that more of the radiation that arrives from the Sun bounces off into space?

zemlik
Reply to  Willis Eschenbach
December 19, 2017 2:17 pm

thanks guys

Reply to  zemlik
December 18, 2017 8:44 pm

“Does this not mean that of the radiation that arrives from the sun more bounces off into space?”

No. Absorption of radiation is based on the wavelength. In clear sky (no clouds), the atmosphere is nearly transparent to visible light, which is mostly what we receive from the sun. There is a bit of UV that is absorbed by ozone, but in simple terms it almost all gets to the surface.

The wavelengths of radiation that a surface emits are based on its temperature. These wavelengths can be found using a Planck curve calculator such as this:
http://www.spectralcalc.com/blackbody_calculator/blackbody.php

So bodies around the temperatures we see on earth will emit in the infrared range (once a body gets up to about 780K it will begin to glow, because the wavelengths are beginning to get into the visible range). The atmosphere isn’t transparent to infrared the same way it is to visible light. Some wavelengths, like the ones absorbed by CO2, don’t make it more than a few meters from the ground before being absorbed, while others have a clear shot to space from the ground.
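A quick way to see the temperature-to-wavelength link is Wien's displacement law (a sketch; the 780 K glow figure above is the commenter's, and Wien's law just shows the general trend):

    WIEN_B = 2.8977719e-3  # Wien's displacement constant, m*K

    def peak_wavelength_microns(T_kelvin):
        """Wavelength of peak blackbody emission, in micrometres."""
        return WIEN_B / T_kelvin * 1e6

    print(peak_wavelength_microns(288))   # Earth's surface: ~10 um (thermal IR)
    print(peak_wavelength_microns(780))   # peak ~3.7 um; the short-wavelength
                                          # tail starts to reach visible red
    print(peak_wavelength_microns(5772))  # the Sun: ~0.5 um (visible light)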

Radiation that is absorbed by a gas will be emitted by the gas at nearly the identical wavelength at which it was absorbed. Emission can happen in any direction. So if we take a large sample, say a thin layer 10 meters off the ground, and could see just its emitted radiation, we would see that approximately half is emitted up and the other half down toward the surface.

More CO2 means more opportunities for radiation to be absorbed and stay in the system rather than leaving. And since the rate of incoming radiation more or less remains constant, but the outgoing has been reduced, the surface will warm. A warmer surface emits more radiation and will thus restore balance, but now at an increased temperature.
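That last step (a warmer surface emitting more until balance is restored) can be illustrated with a toy zero-dimensional balance (my sketch with round illustrative numbers, not a climate model; it ignores feedbacks and atmospheric structure):

    SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

    def equilibrium_temp(absorbed_wm2, emissivity=1.0):
        """Temperature at which emitted radiation balances absorption."""
        return (absorbed_wm2 / (emissivity * SIGMA)) ** 0.25

    T0 = equilibrium_temp(390.0)        # balances at ~288 K
    T1 = equilibrium_temp(390.0 + 3.7)  # retain 3.7 W/m^2 more energy
    print(T1 - T0)                      # ~0.7 K warmer at the new balance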

This is a very simplistic explanation of the greenhouse effect.

M.W. Plia.
Reply to  Brad Schrag
December 19, 2017 10:41 am

Ok Brad, at this site we are all familiar with the greenhouse effect. IMHO too many (on the warm side) infer large climate change from an effect estimated in tenths of a degree. They are most irritating when they insist that extrapolation of short-term trends, along with comparison of proxy to instrumental data, is valid as evidence. It isn’t; they are not being scientific.

The issue is the phenomenon’s magnitude. There is as yet no answer. What say ye?

Reply to  M.W. Plia.
December 19, 2017 10:53 am

A reader was asking about the greenhouse effect, so not everyone here is familiar.

What I know from my reading is that a doubling of CO2 amounts to 3.7 Wm-2 of radiation being received at the surface. The implications of that are very complicated, and I’m not going to even attempt to get into them because I don’t know them well enough.

I would agree the forcing and changes seem minute, and in light of where the planet has been before, they can often seem irrelevant. We’ve been here before, we’ll be fine.

What I see as concerning is the rate at which we are approaching these changes. In the past CO2 levels may have been higher, but it was a gradual process that would have allowed for adaptive changes within the ecosystems. Changes are now happening much more quickly, and as such these systems won’t adapt as quickly.

I agree there is as yet no answer. How long do you want to wait to find out, though?

M.W. Plia.
Reply to  Brad Schrag
December 19, 2017 1:20 pm

Thanks for the reply Brad.

You say: “Changes are happening much quicker and as such these systems won’t adapt as quickly.” So I have to ask: since when?

To compare current rates of change with previous rates of change means you are comparing proxy with instrumental data. And, as we all know, the proxy data lack the temporal resolution required for a valid comparison with the current instrumental record.

When it comes to “man-made global warming” and “alternative energy” my interest is centered on what is and isn’t known, specifically where the science ends and the supposition begins. It is my observation that, when it comes to the media, the uncertainties surrounding this issue are not properly addressed.

Regards, M.W.

M.W. Plia.
Reply to  Brad Schrag
December 19, 2017 2:10 pm

And Brad, what about natural variability? In millennial (proxy reconstruction) terms we are warming; the recent neoglacial of the current interglacial appears to have ended 150 years ago. This is revealed in the glacial retreats of both hemispheres.

The “Little Ice Age” (on average 2° colder and wetter than today) began over 700 years ago with the end of the multi-centennial “Medieval Warm Period” (on average 2° warmer and drier). Duration and temperature estimates of these two periods vary. As the LIA tightened its grip, the Viking settlements of the North Atlantic became inhospitable. Sea levels lowered, ice floes hindered navigation, crops withered, farm animals died and the Norse went home. The favourable climate from several decadal warming trends during the LIA, and superior sailing skills, may have facilitated the European discovery of and settlement in the Americas. A 70-year period of low sunspot activity, named the “Maunder Minimum”, beginning in 1645 coincides with the LIA’s coldest decades.

Estimates of the lowest LIA sea levels are as much as 2 ft. below today’s. Thermal expansion of the ocean’s water at the MWP’s peak places estimates of sea levels as much as 1 ft. higher than today’s. A portion of the 0.8°C mean temperature increase of the past 150 years is attributed to the recovery (which continues to this day) from the LIA. Sublimation from nightly drier air is reducing the planet’s mountain glaciers to their previous “normal” positions. Glacial retreat moves quickly in comparison to glacial advance. Retreating ice is now revealing remnants of previous climate periods.

End moraines of glacier advances prior to the MWP indicate the glaciers of the LIA are the longest of the Holocene. The MWP/LIA is the last of four cycles of minor glacial advance and retreat in the past 6,000 years, perhaps linked to solar activity (sunspot) cycles. The colder “Dark Ages” and the “Roman Warm Period” were the previous cycle. We may now be at or past the start of the warm half of the next millennial cycle, the “Current Warm Period”, which, if like the last, may have legs for another century or two.

I realize our opinions on this topic may differ, I hope I’ve been helpful.

Regards, M.W. Plia.

Reply to  M.W. Plia.
December 19, 2017 4:10 pm

Yes, so what were the forcings that drove the warming and cooling of the MWP/LIA? Oftentimes people will attribute them to changes in solar activity. Generally speaking, solar activity fluctuated by approx 2 Wm-2 from the MWP to the LIA.

A doubling of CO2 leads to 3.7 Wm-2. This makes the 2 Wm-2 that took us from the MWP to the LIA seem trivial. Of course this is all proxy and conjecture. What do you think to be the forcing for the MWP and LIA?

Reply to  zemlik
December 18, 2017 8:46 pm

No. Different wavelengths.

December 18, 2017 7:14 pm

You of course did not invent it but you go along with it. My remarks are really meant for whoever invented this stupidity. If it is the IPCC, as you suspect, it says a whole lot about them. For one thing, it is obvious that they have no working scientist who knows what to do with data. Not all data have meaning, and if you know your field you can instantly spot the difference. Before there were computers I had to plot spectrochemical data on a variety of pre-printed graph papers. But graphs like yours were never made, because data of that type was obviously useless trash. The whole lot of graphs you show, pretending to show some aspect of science, is nothing but trash and should never have been shown in a scientific article anywhere. I knew how to dispose of it long ago, but they are now elevating that trash pile, trying to resuscitate its denizens, and pretending they are doing science. Arno Arrak

December 19, 2017 9:14 am

Willis
Great post – great comments
Have a great Christmas

Editor
December 19, 2017 9:25 am

Willis ==> It is no surprise that ∆T and ∆F are not linearly related. T (temperature) is a property of matter (air in this case) relating to its energy level that can be measured as relative sensible heat (obviously — just to set the stage). Temperature itself is not a “thing” and is not an inherent property like “mass”. Of the mathematical formulas for heat transfer, we can say this:

“The equation [Boltzmann Transport Equation] is a nonlinear integro-differential equation, and the unknown function in the equation is a probability density function in six-dimensional space of a particle velocity and position. The problem of existence and uniqueness of solutions is still not fully resolved, but some recent results are quite promising.”

The key point is that heat transfer in a fluid (atmosphere) is itself a non-linear process and thus it is highly unlikely that global average 2-meter-above-surface air temperature would be linearly related to TOA radiation. Way too many other energy interactions, themselves mostly non-linear, in between.
Good to have the point nailed down, though. Thank you.

December 19, 2017 12:33 pm

The fundamental and to me incorrect assumption at the core of the modern view of climate is that changes in temperature are a linear function of changes in forcing.

I have not seen that in my readings. A citation and exact quote would be helpful.

Nick Stokes
Reply to  Willis Eschenbach
December 19, 2017 4:17 pm

Willis,
IPCC:
“The assumed relation between a sustained RF and the equilibrium global mean surface temperature response (∆T) is ∆T = λRF where λ is the climate sensitivity parameter.”

Yes. But you say something quite different

“According to this theory, in order to figure out what the change in global temperature will be between now and the year 2050, you just estimate the change in net forcing between now and then, multiply it by the magic number, et voilà—the change in temperature pops out!”

Not a sustained RF, and not an equilibrium response. And then further:
“According to the incorrect paradigm that says that changes in surface temperatures follow the changes in forcing, we should be able to see the relationship between the two in the CERES data—when the TOA forcing takes a big jump, the temperatures should take a big jump as well, and vice-versa.”
This is nothing like the IPCC statement. The point is, as I said above, temperature at any time depends on the forcing history. The definition of ECS is deliberately framed to ensure that there is only one effective item in that history – a step change in the distant past. It is acknowledged that this is very hard to achieve in observation, and even in modelling. But that is what the IPCC definition refers to. It does not mean that you can get the temperature in 2050 by a rough characterisation of the history of 33 years.

All your other examples are referring to this concept of equilibrium climate sensitivity.

Reply to  Willis Eschenbach
December 20, 2017 12:32 am

Willis, thank you.

I don’t give much weight to the comments of Steve Mosher, but the others are clearly substantial.

I regret that in my readings I have not made a searchable data base or bibliography. It’s one of my perennial new year’s resolutions.

Greg
Reply to  Willis Eschenbach
December 20, 2017 5:51 am

Willis, IIRC it is the Gregory & Foster 2015 paper which is the only mainstream acknowledgement of the OLS bias problem. They do look at other methods, but hide it in an appendix and don’t refer to it in the abstract.

BTW, Nick is correct: the dT vs dRad relationship is the long-term response and will not be seen in short-term data, where it is dT/dt which is driven by radiation. This is basic physics.

You need to be looking at timescales greater than several “time constants” of the global climate to see the dT vs dRad relationship.
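In symbols (my paraphrase of the standard zero-dimensional energy balance, not a formula from the post): with effective heat capacity C and sensitivity λ,

    C · d(∆T)/dt = ∆F − ∆T/λ

On timescales short compared with the time constant τ = C·λ, the forcing mainly sets the rate of change d(∆T)/dt; only after several τ does ∆T itself settle toward λ·∆F.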

Mat
Reply to  Willis Eschenbach
December 20, 2017 2:14 am

Willis, this is an odd time to spit the dummy. He has quoted you, been polite, and made a very clear point which shows your argument and logic to be lacking. And your response is to never speak to him again? That says more about you than him.

Greg
Reply to  Mat
December 20, 2017 6:00 am

It is also an “odd” time to give up because Nick Stokes actually knows his stuff and Willis should be listening and learning and checking who is correct instead of “giving up”.

Seems like Willis is having trouble finding a logical reply to Nick’s point. Saying “I can’t find a reply so I will ignore you” is not impressive. I won’t criticise Nick for not admitting he is wrong until I can show he is wrong. The other reason for not admitting you are wrong is not being wrong.

A C Osborn
Reply to  Willis Eschenbach
December 20, 2017 7:45 am

Mr Eschenbach describes himself to a “T” in that comment.
He uses it regularly when he does not have the answers and repetition of his position has failed.
Next will come the put-downs, the invective and finally downright insults.
Just read through any of his posts to see the pattern.

December 19, 2017 5:34 pm

In order to test whether this difference mattered, the Oxford team re-analyzed Chandra data from close to the black hole at the center of the Perseus cluster taken in 2009. They found something surprising: evidence for a deficit rather than a surplus of X-rays at 3.5 keV. This suggests that something in Perseus is absorbing X-rays at this exact energy. When the researchers simulated the Hitomi spectrum by adding this absorption line to the hot gas’ emission line seen with Chandra and XMM-Newton, they found no evidence in the summed spectrum for either absorption or emission of X-rays at 3.5 keV, consistent with the Hitomi observations.

The challenge is to explain this behavior: detecting absorption of X-ray light when observing the black hole and emission of X-ray light at the same energy when looking at the hot gas at larger angles away from the black hole.

In fact, such behavior is well known to astronomers who study stars and clouds of gas with optical telescopes. Light from a star surrounded by a cloud of gas often shows absorption lines produced when starlight of a specific energy is absorbed by atoms in the gas cloud. The absorption kicks the atoms from a low to a high energy state. The atom quickly drops back to the low energy state with the emission of light of a specific energy, but the light is re-emitted in all directions, producing a net loss of light at the specific energy – an absorption line – in the observed spectrum of the star. In contrast, an observation of a cloud in a direction away from the star would detect only the re-emitted, or fluorescent light at a specific energy, which would show up as an emission line.

I just read this, and thought it might be worth adding to the conversation.
This explains why they see a CO2 spectrum in outgoing IR, even if it isn’t doing anything besides lighting up because it is exposed to 15µ radiation from condensing water vapor.

December 19, 2017 11:26 pm

Because I did not notice any real figures for the emitted radiation fluxes, I attach here my figure, which shows that the emitted radiation flux of the Earth’s surface is essentially linear over a very broad temperature range:

Below is a figure showing a very linear relationship between the radiative forcing at the TOA and the surface temperature change, showing that the IPCC simple climate model dT = CSP*RF is relevant:

Greg
Reply to  aveollila
December 20, 2017 8:38 am

How did you measure the outgoing IR and the mean global temp the last time the CO2 was 1370? Total BS. You don’t even say what this is. Model? Proxy? WTF

Reply to  Greg
December 20, 2017 9:16 am

That too shows surface cooling is well regulated. I thought it was saying something else, which started this reply.
There is a continuous flux out the optical window to space. This morning, T zenith was -65F, air temps were 38F, dropped almost 21F overnight as the clouds went away, but the sky wasn’t very clear. I looked to see if I was missing a good night for astrophotography, and I wasn’t.
On clear nights that cooling slows or stops; while ~30% of the SB flux is going straight to space, those losses are being supplied by sensible heat from condensing water in the atm.

Reply to  Greg
December 20, 2017 9:18 am

Oh, but for the same reason the first chart is right, the second is wrong.

Greg
December 20, 2017 6:20 am

Willis, thanks for the R^2 figures, something I requested as soon as I read this.

Could you provide a table of the data used in the scatter plots. I have a couple of things I’d like to look at but don’t have the free time to do all that from the raw data.

regards.

Nick Stokes
Reply to  Willis Eschenbach
December 20, 2017 9:12 pm

Willis, it’s not all about you. My post was about two recent WUWT articles. I said

“The threads in question are at WUWT. One is actually a series by a Dr Antero Ollila, latest here, “On the ‘IPCC climate model’”. The other is by Willis Eschenbach Delta T and Delta F.”

It’s right there in the heading.

Yes, you didn’t refer to it explicitly as a model. You said:
“According to this theory, in order to figure out what the change in global temperature will be between now and the year 2050, you just estimate the change in net forcing between now and then, multiply it by the magic number, et voilà—the change in temperature pops out!”

That sounds functionally the same as Dr Ollila’s “IPCC climate model”. And just as unsourced.

Reply to  Willis Eschenbach
December 21, 2017 2:52 am

Willis, be nice. Poor Nick can’t help himself, his viewpoint is “mangled” by his own bias. And as we know, it’s not possible for Nick to ever be wrong.

Pity him.

Nick Stokes
Reply to  Willis Eschenbach
December 21, 2017 3:10 am

Willis,
I wrote the post and title first. The tweet came later. If I had known that “model” was a sensitive term to you, I would have associated it more specifically with Dr Ollila in the tweet. But WUWT has been running now a series by Dr O about an IPCC model, proclaimed in the title of his latest post, “On the ‘IPCC climate model’”. You are using the same equation, derived from the same IPCC definition. You assign to it the same functionality. I was writing about both. I’m sorry if it seems important to you that I should not also refer to your use as a model. I will try not to do it again.

catweazle666
Reply to  Willis Eschenbach
December 21, 2017 1:48 pm

Willis, you know the old saying about the relationship between the amount of flak you’re taking and your proximity to the target?

Plus, here’s an instructive piece from the much-missed MemoryVault that might be relevant, as true today as when it was first posted.

https://libertygibbert.com/2010/08/09/dobson-dykes-and-diverse-disputes/

davidbennettlaing
December 25, 2017 4:40 pm

Delta T and delta F are all very well, but arguing about them is rather like arguing about how many angels can dance on the head of a pin if the assumptions underlying the calculations are bogus. Consider the following:

Is climate science “settled,” or perhaps “unsettling?” Since 1998, the elevated but essentially flat temperatures of the so-called “global warming hiatus” (and one El Niño event) have shown no correlation whatever with steadily rising atmospheric CO2. This is extremely damaging to the credibility of the once almost universally trusted mechanism of CO2/warming. Despite this inconvenient reality, most of us still cling to this theory, failing to realize that it actually has no hard-data support in the peer-reviewed literature. Feldman, et al., 2015, is often cited as “proof” of the link, but even this “landmark” paper uses correlations and theoretical arguments rather than hard data, which, of course, is scientifically indefensible. A realistic search for an alternative to this long-trusted link that better reflects what is actually happening to global temperature clearly seems warranted. The question is, what mechanism might better account for these actual, real-world observations? First, we might consider when warming has actually occurred.

The only episode of global warming during the past 50 years that can be clearly identified occurred from 1975 to 1998, when global temperatures shot up dramatically by almost a centigrade degree, making this an obvious first place to look for an alternative mechanism. This also happens to be the same period during which anthropogenic CFCs were freely introduced into the atmosphere. This was stopped in the 1990s by the Montreal Protocol, which banned further CFC production because it was found that the chlorine in CFCs was released by photodissociation on polar stratospheric clouds, whereupon it would destroy stratospheric ozone, thus depleting Earth’s protective ozone layer. This depletion, in turn, would permit greater irradiation of Earth’s surface by ionizing solar ultraviolet-B radiation, whose normal ozone-destroying function was taken over by anthropogenic chlorine. Concern at the time was limited to severe sunburn and genetic defects from UV-B, but if this powerful radiation could cause severe sunburn and genetic defects, it could certainly also cause global warming. It’s hardly unreasonable, moreover, to expect that significant climatic effects should have resulted from so large a disruption of such a major part of Earth’s atmospheric system.

Why, then, have we had two decades of elevated temperatures? Simply because most of the chlorine that we introduced to the atmosphere is still up there, and still destroying ozone catalytically, that is, the chlorine is not itself destroyed in the process, but a single chlorine atom can destroy over a hundred thousand ozone molecules in a cyclical process. Hence, the ozone shield is still depleted and likely to remain so for several more decades. Therefore, assuming the foregoing warming mechanism is valid, the so-called “hiatus” should be in effect at least through mid-century.

Why is CO2 not a likely warming agent? Because despite its well-documented absorption of a portion of Earth’s heat radiation, absorption and emission actually happen within a waveband (13 to 17 microns wavelength) that corresponds to temperatures well below those of Earth’s surface, an important fact unfortunately ignored by climate scientists, and as is well-known, cooler objects (here, CO2) can’t transfer heat to warmer ones (here, Earth’s surface). The fact is that back-radiation from CO2 is simply too weak (it emits only as a line spectrum) and too “cold” to have a significant greenhouse effect in the Earth environment.