# Delta T and Delta F

Guest Post by Willis Eschenbach

The fundamental and to me incorrect assumption at the core of the modern view of climate is that changes in temperature are a linear function of changes in forcing. Forcing is defined as the net downwelling radiation at the top of the atmosphere (TOA). According to this theory, in order to figure out what the change in global temperature will be between now and the year 2050, you just estimate the change in net forcing between now and then, multiply it by the magic number, et voilà—the change in temperature pops out!

I find this theory very doubtful for a number of reasons. I went over the problems with the mathematics underlying the claim in a post called “The Cold Equations”, for those interested. However, I’m not talking theory today; what I want to look at is some empirical data.

The CERES dataset contains measurements of upwelling radiation at the top of the atmosphere. It also has various subsidiary datasets calculated from the CERES data, other satellite data, and ground measurements. These include the upwelling thermal (IR) radiation from the surface. I apply the Stefan-Boltzmann equation to that upwelling IR data in order to calculate the surface temperature. I’ve checked this data against the HadCRUT surface temperature data, and they agree very closely, with the exception of certain areas around the poles. I ascribe this to the very poor coverage of ground weather stations around the poles, which has forced the ground datasets to infill these areas based on the nearest stations. Even with that polar difference, however, the standard deviation of the difference between the CERES and the HadCRUT monthly data is only 0.08°C, extremely small. The CERES data is more complete than the HadCRUT data, so I use it for the surface temperature.
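The post doesn’t include any code, but the Stefan-Boltzmann inversion it describes is a one-liner. A minimal sketch, assuming unit emissivity (the function name and the emissivity assumption are mine, not from the post):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def temp_from_upwelling_ir(flux_wm2, emissivity=1.0):
    """Invert F = eps * sigma * T^4 to get a surface temperature (K)
    from upwelling longwave flux (W/m^2)."""
    return (flux_wm2 / (emissivity * SIGMA)) ** 0.25

# ~390 W/m^2 of surface upwelling IR corresponds to roughly 288 K (~15 degC)
t_kelvin = temp_from_upwelling_ir(390.0)
```

Applied gridcell by gridcell to the CERES surface longwave product, this yields a temperature field that can be compared against HadCRUT.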

Now, this lets us compare changes in the net TOA forcing imbalance with the changes in the surface temperature. For this kind of study we need to remove the effects of the seasons. We do this by subtracting the full-dataset average for each month from the data for that month. For each month, this leaves the “anomaly”—how much warmer or colder it is that month compared to the average.
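The deseasonalization described above can be sketched in a few lines (an illustration of the method, not the post’s actual code):

```python
def monthly_anomalies(values):
    """Remove the seasonal cycle from a monthly series: subtract the
    full-record average for each calendar month from every value in
    that month. Assumes `values` starts in January."""
    # climatology: mean of all Januaries, all Februaries, etc.
    clim = [sum(values[m::12]) / len(values[m::12]) for m in range(12)]
    return [v - clim[i % 12] for i, v in enumerate(values)]
```

A series that is purely seasonal (the same twelve values repeated every year) comes out as all zeros; whatever remains is the anomaly.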

For example, here’s the temperature data, with the top panel showing the raw data, the middle panel showing the annually repeated seasonal variations, and the bottom panel showing the “anomaly”, how much warmer or cooler the globe is compared to average.

Figure 1. Raw data, seasonal changes, and anomaly of the CERES surface temperature dataset. Note the upswing at the end from the latest El Nino. The temperature has dropped since, but the CERES data has not been updated past February 2016.

According to the incorrect paradigm that says that changes in surface temperatures follow the changes in forcing, we should be able to see the relationship between the two in the CERES data—when the TOA forcing takes a big jump, the temperatures should take a big jump as well, and vice-versa. However, it turns out that that is not the case:

Figure 2. Changes in TOA radiation (forcing) ∆F versus changes in surface temperature ∆T. Delta (∆) is the standard abbreviation meaning “change in”. In this case they are the month-to-month changes. The background is a hurricane from space. I added it because I got tired of plain old white.

As you can see, in the CERES dataset there is no statistically significant relationship between the changes in TOA forcing ∆F and the changes in surface temperature ∆T. Go figure.
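The regression behind Figure 2 isn’t shown; a sketch of how one could reproduce the comparison from two aligned monthly anomaly series (all names here are illustrative):

```python
def month_to_month_changes(series):
    """Delta: first differences of a monthly series."""
    return [b - a for a, b in zip(series, series[1:])]

def pearson_r(x, y):
    """Pearson correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# r = pearson_r(month_to_month_changes(toa_forcing),
#               month_to_month_changes(surface_temp))
```

A correlation near zero between the two difference series is what “no statistically significant relationship” looks like in this framing.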

Now, I can already hear some folks thinking something like “But, but, that’s far too short a time period for that small a change to have an effect … I mean, one watt per square metre over a month? The Earth has thermal inertia, it wouldn’t respond to that …”

So let’s take a look at a different scatterplot. This time we’ll look at change in total surface energy absorption (shortwave plus longwave) versus change in temperature.

Figure 3. Changes in surface energy absorption versus changes in surface temperature ∆T.

So the objection that the time span is too short is nullified. A change of one watt per square metre over a month is indeed able to change the surface temperature, by about a tenth of a degree.
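As a rough consistency check (my arithmetic, not the post’s): a response of ~0.1 °C per W/m² sustained for a month implies an effective heat capacity equivalent to a few metres of water, which is plausible for a surface layer:

```python
SECONDS_PER_MONTH = 30 * 24 * 3600   # ~2.6e6 s
CP_WATER = 4186.0                    # specific heat of water, J/(kg K)
RHO_WATER = 1000.0                   # density of water, kg/m^3

def water_depth_warmed(flux_wm2, delta_t_k, seconds):
    """Depth of water (m) that `flux_wm2` applied for `seconds`
    would warm by `delta_t_k` kelvin."""
    energy = flux_wm2 * seconds      # J per m^2
    return energy / (CP_WATER * RHO_WATER * delta_t_k)

# 1 W/m^2 for a month, 0.1 K of warming -> about 6 m of water equivalent
depth_m = water_depth_warmed(1.0, 0.1, SECONDS_PER_MONTH)
```

In other words, the observed sensitivity is consistent with the month-scale forcing being absorbed in a fairly shallow effective layer.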

Finally, is this just an artifact because we’re using CERES data for both surface temperature and total surface energy absorption? We can check that by repeating the analysis, but this time we’ll use the HadCRUT surface temperature data instead of the CERES data …

Figure 4. As in Figure 3, but this time using HadCRUT surface temperature data.

While, as we’d expect, there are differences between the two surface temperature datasets, in both of them a change of one watt per square metre over a month is clearly able to change the surface temperature.

So we are left at the end of the day with Figure 2, showing that there is no significant relationship between changes in TOA forcing and surface temperatures.

Note that I am NOT claiming that this method can determine the so-called “climate sensitivity”. I am merely pointing out that the CERES data does not show the expected relationship between changes in net TOA radiation imbalance and changes in surface temperature.

Best to all,

w.

As Usual: When you comment, please QUOTE THE EXACT WORDS YOU ARE DISCUSSING so we can all be clear just what you are referring to.

## 225 thoughts on “Delta T and Delta F”

1. Gerry Parker says:

Willis, what about the heat generated by the earth? You don’t have to go very deep for the temperature to become unbearable; this has to represent a significant source (albeit small compared to the sun).

• paqyfelyc says:

Just check Wikipedia; it is crap on political issues such as GW, but still reliable on hard facts, such as the heat generated by the Earth’s core. Not just small: insignificant, and buffered so that it is steady.
Photosynthesis is ~2 orders of magnitude more significant, human-generated heat ~1 order of magnitude, and both are still insignificant.

• RWturner says:

Geothermal heat flux estimates are poorly constrained, with large errors, but they have consistently increased over time as new data is gathered, and in my opinion they are still grossly underestimated due to ongoing inherent sampling bias.

Still lacking is a good comprehension of hydrothermal fluid circulation volumes through vent fields on the abyssal plain, which were dismissed as merely inconsequential in the most-cited geothermal heat flux estimates. The estimated number of these vent fields continues to increase significantly as the sea floor is explored, and as many as 400% more than are currently considered could be active.

The nature of fluid flow in the oceanic crust biases the estimate downward. Fluid recharge takes place over a very large area of the sea floor and slightly depresses heat flux over large areas, whereas heat is released through very small hot spots. Finding and estimating the number of these point sources of heat is difficult with random sampling of such a bipolar system, and this biases the estimates downward. It would be like studying total pollution in a system without accurately accounting for point sources.

Maps of heat flux from the Earth show areas of up to 450 mW/m^2 near volcanoes and mid-ocean ridges and clearly display this bias. Yellowstone is a good example: the park as a whole is estimated to contribute 150 mW/m^2 at the resolution of the models, but in reality local features contribute orders of magnitude more heat. Yellowstone Lake heat flow alone is up to 30 W/m^2 in spots, the geothermal basins emit in the hundreds of W/m^2, and the steam from geysers is thousands of W/m^2. They may be relatively small in area, but when the point sources are up to thousands of times higher than the background (millions of times for lava), and in the case of the ocean floor actually suppress the background flux, they become significant and complex to model.

Groundwater discharge, especially on continental margins, is also a significant source of heat flow that is currently poorly constrained and will almost certainly increase geothermal heat flux estimates.

I’m not saying that, even as these estimates are made more accurate, they will come close to the energy contribution from Sol, I’m saying it is significant enough to be considered and does significantly contribute to the Earth climate system through ocean circulation.

• A small change in the ocean vertical circulation is all that is required to massively change the climate on a global scale.

Given the 800-year deep-ocean conveyor turnover, the tidal pumping action due to orbital mechanics, and the wind-induced upwelling from El Niño, there is no reason to believe the ocean turnover rate is constant.

• GregK says:

It should be almost constant except in the immediate vicinity of volcanic activity… and even that should average out to almost constant as well.
Geographically variable, but constant over periods of a million years or so.

2. rbabcock says:

Just curious as to why the CERES data hasn’t been updated since Feb 2016?

• It has been updated since then, up to February 2017 for the surface data, July 2017 for the latest version of the TOA data. Maybe Willis hasn’t checked for a while. It’s never been a monthly update thing.

• rbabcock December 18, 2017 at 5:52 am

Just curious as to why the CERES data hasn’t been updated since Feb 2016?

paulski0 December 18, 2017 at 7:34 am

It has been updated since then, up to February 2017 for the surface data, July 2017 for the latest version of the TOA data. Maybe Willis hasn’t checked for a while. It’s never been a monthly update thing.

Good question, good answer. It was updated a month or so ago, but I’ve been adventuring in the Solos on a slow connection. I’ll update when I return.

Thanks,

w.

• tony mcleod says:

Willis, your ‘adventuring’ link didn’t work. Curious as to whether you’ve been through Marovo lagoon. I stayed at Matikuri Lodge a few years ago; beautiful spot.

• Thanks, rbabcock, I fixed the link. It’s an oddity of WordPress; it’s hard to link to another WordPress blog. You have to specifically include the https:// …

w.

3. paqyfelyc says:

The weakness of these graphs and this demonstration is that temperature lags forcing, from ~2 hours on a daily scale to ~1-2 months on a yearly scale.
Disregarding this will result in a scattered plot even where there is a perfectly aligned, but lagged, causation.
For a practical example, search “Lissajous curve”.
Bottom line: this proves that some lag exists, which we already know.

• Paul Penrose says:

Even if there is a significant lag, shouldn’t it be the same for TOA and at the surface? These plots would seem to indicate otherwise. Is there a plausible physical explanation for there to be a big difference in lags?

• paqyfelyc says:

“Even if there is a significant lag, shouldn’t it be the same for TOA and at the surface? ”
For instance, Figure 2 is ∆T vs ∆TOA on a monthly basis. In the Northern Hemisphere the highest ∆TOA is in ~June (high solar input, not-so-high surface output because temperature still hasn’t peaked), while the highest ∆T is in ~August, when ∆TOA has already dropped to its ~April value.
So, basically, for each monthly ∆TOA you have two different ∆T values (one with higher insolation = higher input plus higher temperature = higher output than the other), and for each monthly ∆T you have two different ∆TOA values (one with higher insolation and the ground reservoir acting as a heat sink, and one with lower insolation but the ground reservoir acting as a heat source).
The physical explanation should be obvious.

• pochas94 says:

It has to do with heat capacities and volumetric mass. The stratosphere has a very low volumetric heat capacity, so responds almost immediately to forcing. The ocean has a comparatively high volumetric heat capacity so takes a long time to respond to a radiative forcing. The 800 year lag between a surface temperature change and CO2 released from the ocean comes to mind. The response time to a change in forcing is a composite of time constants for many different objects and will depend on exactly what thermometer you are reading.

• The response time to a change in forcing is a composite of time constants for many different objects and will depend on exactly what thermometer you are reading.

Except cooling at night responds instantly. The water vapor/air pressure/temperature relationship is a hard boundary, and it is active, every night adjusting the output of radiation from condensing water as it liberates the heat of evaporation to local conditions. Much of that radiation goes back into evaporating water, but this “lights up” the lower atmosphere in 15µ IR, plus whatever other water lines are radiating. It stops radiating when air temps are higher than dew points, or when it runs out of water vapor to condense. Places with very low dew points are either very cold all the time, or have great ranges in daily temperature because they get very cold at night. That is what the non-condensing GHGs are supposed to keep warm, and they don’t. Temps fall like a rock at night in the desert.

• paqyfelyc December 18, 2017 at 5:54 am

The weakness of these graphs and this demonstration is that temperature lags forcing, from ~2 hours on a daily scale to ~1-2 months on a yearly scale.
Disregarding this will result in a scattered plot even where there is a perfectly aligned, but lagged, causation.
For a practical example, search “Lissajous curve”.
Bottom line: this proves that some lag exists, which we already know.

Actually, no. A cross-correlation graph (not shown) indicates that the greatest correlation is with no lag.

w.
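The cross-correlation graph Willis mentions isn’t reproduced here, but the calculation is straightforward to sketch generically (illustrative code, not his; positive lag means the second series trails the first):

```python
def cross_correlation(x, y, max_lag):
    """Pearson correlation of x[t] against y[t+lag] for each lag in
    -max_lag..+max_lag; the peak shows by how many steps y trails x."""
    def pearson(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        num = sum((p - ma) * (q - mb) for p, q in zip(a, b))
        den = (sum((p - ma) ** 2 for p in a) *
               sum((q - mb) ** 2 for q in b)) ** 0.5
        return num / den
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[:len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[:lag]
        out[lag] = pearson(a, b)
    return out
```

If the peak correlation sits at lag zero, as Willis reports for ∆TOA vs ∆T, then shifting the series by a month or two can only weaken the (already weak) relationship.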

• paqyfelyc says:

OK, then, but then you SHOULD have shown this; it is important in the demonstration that “in the CERES dataset there is no statistically significant relationship between the changes in TOA forcing ∆F and the changes in surface temperature ∆T”

• paqyfelyc December 19, 2017 at 1:57 am

OK, then, but then you SHOULD have shown this; it is important in the demonstration that “in the CERES dataset there is no statistically significant relationship between the changes in TOA forcing ∆F and the changes in surface temperature ∆T”

paqyfelyc, first, no matter how many graphs I put into a post, I can guarantee that someone like you will start whining about how I didn’t do their favorite analysis. If I do CEEMD, they want Fourier. If I do CCF, they want ANOVA. If I do ANOVA they want multiple linear regression. And if I do multiple linear regression, they want CEEMD. I don’t write to please the howling mob, there’s no way to do that. I write to explain what I’ve found in the best way I know how … and for that I take endless abuse from charming folks like you.

Second, writing a post like this is a balancing act. If it’s too long or contains too many graphs, some people go “TL;DR” … and if it is short and to the point, other people go “Not enough substance”. There are dozens of other graphs I could have put in here, but those are the ones I chose. Don’t like it? Sue me.

One thing I can guarantee. 99.3% of the whiners are people too lazy to go get the data as I did and do the analysis themselves. If you’re so damned interested in cross-correlation in the CERES dataset, you might consider putting your money where your mouth is. Of course, it will take you a year or so to write all the computer programs to handle the CERES data, but hey, you can do that, right?

Finally, when someone asks politely instead of telling me what I SHOULD do, I’ll often oblige them. For example, A C asked me politely about 24 hours ago if I’d do a cross-correlation, and I did just that and posted it up here.

Unfortunately, you were too busy being aggrieved to pay attention to the thread, so you didn’t see the graph, and now you look like a jerkwagon for not seeing what was in front of your nose, and then trying to bust me for something I already did.

Next time try asking me politely rather than telling me what I SHOULD do. It goes a lot farther.

Tanktainer ship docking by the boat, gotta run …

Regards,

w.

• paqyfelyc says:

“no matter how many graphs I put into a post, I can guarantee that someone like you will start whining about how I didn’t do their favorite analysis”
You have a point.
Well, maybe not the graph, but at least a sentence stating that you did the analysis to cope with the lag, since you did it.
I confess I didn’t imagine you had answered, somewhere else in the thread, a question I asked in the second comment of the thread, with a graph you had mentioned as “not shown” (I guess that was when you wrote it, and you showed it later).
Anyway.
Don’t misunderstand me. I am picky when it comes to demonstrations.

4. rd50 says:

These graphs omitted the most important number.
What was omitted is the R squared value for the linear regression analysis.
This is such a basic requirement. First year students statistical analysis.
Don’t present the results if you don’t have this value.

• R squared is nothing more than the percentage of the variance that can be accounted for by the linear correlation. Typically the rho value is what is reported. I don’t see any particular reason for reporting rho. If you want, examine the scatterplots, calculate rho, and report it yourself.
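For readers who want to take up that suggestion and compute it themselves: R² for a simple linear fit is just the square of the Pearson correlation coefficient (rho). A minimal sketch:

```python
def r_squared(x, y):
    """Coefficient of determination for a simple linear fit:
    the square of the Pearson correlation coefficient (rho)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return (num / den) ** 2
```

A perfectly linear relationship gives 1.0; scatter with no linear relationship gives a value near 0.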

• rd50 says:

You just flunked the exam for a first-year student in statistical analysis. Sorry, don’t plot a linear fit if you don’t have R squared.

• paqyfelyc says:

R is required when you claim a fit to a linear regression, not so much when no fit is obvious, as here.

• Ben of Houston says:

I would agree that an R2 value would be more useful than a “p” value, but that’s a matter of preference. Willis has always been idiosyncratic.

You could ask for the source of the graph and do it yourself if it’s bothering you. From eyeballing it, it looks somewhere in the neighborhood of an R2 of 0.4.

• rd50 says:

To Ben of Houston:
Your eyeballing is very good and this value of 0.4 indicates that there is NO relationship.

• Ben of Houston says:

No, 0.4 is actually a pretty good relationship for something in a complicated environment. At my job, some process variables have a correlation of 0.3 or even less with their control due to the sheer number of different confounding variables in the mix. This doesn’t mean there’s no relationship; it means that there is a lot of noise.

In fact, graphs 2 and 3 have a fairly clear relationship that can be seen easily. Do you mean graph 1? That one has something in the neighborhood of 0.

• rd50 December 18, 2017 at 6:08 am

These graphs omitted the most important number.
What was omitted is the R squared value for the linear regression analysis.
This is such a basic requirement. First year students statistical analysis.
Don’t present the results if you don’t have this value.

rd50, there is no need to be nasty; it just makes you look petulant. I wrote this at night in a thunderstorm after climbing both masts on the boat. I posted it despite poor internet here in the Solomon Islands; it took about an hour. So sue me. I’m sorry it doesn’t meet your high standards, but next time dispense with the insults and just ask. I’m happy to answer questions if you’re not all pissy about it.

I sometimes forget about R^2 because it’s generally obvious to me from the scatter. In any case:

R^2, Figure 3, 0.35

R^2, Figure 4, 0.25

Happy now?

w.

PS—Internet has not improved, so if I’m slow to answer …

• rd50 says:

Happy now? The answer is yes and there is NO correlation.

• Paul Penrose says:

RD50,
Judging correlation based solely on a fixed R^2 value is not a good idea. So much depends on the process(es) you are measuring and the quality of the data. In this case, perhaps it is enough to show that the correlation in figures 2 and 3 is much better than in figure 1 to make the case that something is amiss with the forcing theory.

• rd50 says:

To Paul Penrose.
So now you want to fudge basic statistical analysis.
Look at the numbers: 0.35 and 0.25.
These numbers were omitted.

• Paul Penrose says:

RD50,
I don’t want to “fudge” anything, but the fact is, there is no “standard” R^2 value that one can use in every case to judge if a trend is meaningful or not, which is what you are implying. Just applying simple rules without understanding the underlying data and purpose of the analysis is voodoo, not statistics.

• rd50 says:

If it was so obvious to you that there was NO correlation, why did you add the straight line to what was, and still is, nothing else but a scatterplot?

• rd50 December 19, 2017 at 4:56 pm

If it was so obvious to you that there was NO correlation, why did you add the straight line to what was, and still is, nothing else but a scatterplot?

Quote what I said or go away. I have no idea where you think I said something, nor am I going to guess.

w.

5. Tom Johnson says:

In mathematical simulations, the common approach is to use a linear approximation. The linear equation is the simplest equation to fit to a situation. It is highly unlikely to be a perfect fit; almost all situations are non-linear. Many years ago I was quite surprised, in learning about photographic emulsions, to learn that the response was generally non-linear. In fact, great effort was invested in isolating the portion of the response which WAS linear. This was of particular interest in photographing images of the sky, which is populated by very bright objects on a dark background. And that led me into the non-linearity of our senses, as in the example of our hearing (which is logarithmic).
Some of the most interesting mathematical simulations are the logistic approximations, the “S-curve”, which models the saturation of a market with a new product.
Also of interest is feedback, which is used to explain how an electric motor stalls under an excess load.
I strongly suspect that there is a lot of feedback in our atmosphere. The existence of feedback is strongly suggested by the persistence of life through some remarkable changes in climate, from ice ages to swamp forests. Feedback can most easily be compared to the constant engine speed on a garden tractor. Increased load opens the carburetor, decreased load closes the carburetor, the engine maintains a constant RPM. No throttle is used. Such feedback loops must govern the water vapor (and other greenhouse gas) content of the atmosphere. I don’t know the details; I suspect that no one knows all the parts of the feedback mechanism.
There is, of course, some lag in the response. You can feel this in a tractor; you can see it when using an electric drill. It is for reasons of feedback that I am suspicious of forcing. I believe the forcing is in the opposite direction; the Greenland ice records show a lag of up to 1000 years between temperature change and carbon dioxide concentration. And as for cloud cover: where is the model for that?

• Such feedback loops must govern the water vapor (and other greenhouse gas) content of the atmosphere. I don’t know the details; I suspect that no one knows all the parts of the feedback mechanism.

I do. And there is a negative feedback from water vapor to increasing non-condensing GHG’s.
https://micro6500blog.wordpress.com/2016/12/01/observational-evidence-for-a-nonlinear-night-time-cooling-mechanism/
WV controls the cooling rate, which is why early in the evening it cools multiple degrees per hour, while by dawn cooling has slowed significantly under clear skies.

• Intelligent Dasein says:

Obviously, radiative nighttime cooling will never be able to drive the temperature below the dew point, so as the temperature approaches the dew point for a given humidity (which tends to happen around dawn), cooling slows dramatically as the latent heat of vaporization is released. This is all quite apparent if you look at diurnal graphs plotting both temperature and dew point together. The dew point forms an absolute floor for air temperature; and while it’s easy for daytime temperatures to rise above the floor, it’s much harder for nighttime temperatures to push the floor lower on their way down.
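The dew point the comment treats as a floor can itself be estimated from air temperature and relative humidity. A sketch using the Magnus approximation (the constants 17.625 and 243.04 are one commonly used fitted set, an assumption on my part, not from the comment):

```python
import math

def dew_point_c(temp_c, rel_humidity):
    """Dew point (degC) from air temperature (degC) and relative
    humidity (0-1), via the Magnus approximation."""
    a, b = 17.625, 243.04  # Magnus fit constants (assumed values)
    gamma = math.log(rel_humidity) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)
```

At 100% relative humidity the dew point equals the air temperature, which is exactly the "floor" behavior described: as the night air cools toward that value, condensation begins and further cooling slows.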

• “Obviously, radiative nighttime cooling will never be able to drive the temperature below the dew point, so as the temperature approaches the dew point for a given humidity (which tends to happen around dawn), cooling slows dramatically as the latent heat of vaporization is released. This is all quite apparent if you look at diurnal graphs plotting both temperature and dew point together. The dew point forms an absolute floor for air temperature; and while it’s easy for daytime temperatures to rise above the floor, it’s much harder for nighttime temperatures to push the floor lower on their way down.”

And because of this, it is a regulator, one that has no dependence on CO2.

• Trick says:

”Obviously, radiative nighttime cooling will never be able to drive the temperature below the dew point..”

Never? No. Dew condenses on a surface at that surface’s temperature (not the air temperature), so there is no fundamental law supporting this point. Here is a counterexample in which air temperature can drop below the dew point: air rising so rapidly that condensational warming doesn’t keep up with adiabatic cooling. Everything takes time, including condensation.

I’m going to guess that would form fog. I see a little depression of the dew point, and of pressure as well. The dew point depression is it being burned off. But remember there’s quite a column of wet air above the surface; it’s a lot of energy.
This also shows up in the dew point gradient with latitude: the further from the sun, in general, the lower it is, when discussing land.

• Trick says:

”I’m going to guess that would form fog.”

Good guess, if it happens near the surface; if way above, then it forms cloud bottoms.

Lapse rate of the dew point of a parcel is roughly around 1.8C/km.

Combine that with the dry temperature lapse rate of 9.8C/km and you get a rule of thumb: clouds form by lifting air a vertical distance of 1/8 of the dew point depression at that height. This was figured out long ago, in an 1841 paper by the meteorologist J. Espy: the bases of all clouds formed by the cold of diminished pressure will be as many hundreds of yards high as the dew point in degrees is below the temperature of the air at that time. Today, replace Espy’s 100 yards with 75 yards.

Clouds form on ascent when the temperature of a parcel happens to decrease more rapidly than its dew point.
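Trick’s 1/8 rule is easy to put in code form (a sketch of the arithmetic in the comment above, not an exact meteorological formula):

```python
DRY_LAPSE = 9.8   # degC per km, dry adiabatic lapse rate
DEW_LAPSE = 1.8   # degC per km, dew-point lapse rate of a lifted parcel

def cloud_base_m(temp_c, dew_point_c):
    """Rule-of-thumb lifting condensation level: a rising parcel's
    temperature falls at ~9.8 C/km and its dew point at ~1.8 C/km,
    so they meet after (T - Td) / 8 km of ascent, i.e. roughly
    125 m per degree of dew-point depression."""
    depression = temp_c - dew_point_c
    return depression / (DRY_LAPSE - DEW_LAPSE) * 1000.0
```

So an 8 °C dew-point depression at the surface puts the expected cloud base around 1 km up.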

• And I figured out that stratocumulus clouds form where warm, humid tropospheric air pushes into a cold (dry?) layer; that would be an illustration of this effect.

• Clyde Spencer says:

My experience with modeling dynamic systems is that even with all the parameters being linear, the interlinked feedbacks result in a highly non-linear output. Earth is even more complicated because of the abrupt phase changes in water, at discrete temperatures, absorbing and releasing large amounts of energy.

6. HankHenry says:

“the CERES data is more complete than the HADcrut data, so I use it for the surface temperature.”

I always wonder what people think when they say “surface temperature.” Think about what things look like from 20,000 miles and in cross section. I’ve started to think of the earth’s surface temperature as the temperature of the ocean abyss, and the surface air temperature as something evanescent. The ocean compared to the atmosphere is something like 350 to 1, and the latest refrain of the Jeremiahs is “the missing heat is in the ocean.” I wonder how much energy it takes to refrigerate a whole ocean to something like 4 Celsius.

7. As you can see, in the CERES dataset there is no statistically significant relationship between the changes in TOA forcing ∆F and the changes in surface temperature ∆T. Go figure.

That’s because water vapor is regulating surface temps, which is why they don’t correlate with forcing.
And 1 W/m^2 is of course significant, even daily. You can even see the effect as the length of day changes by seconds and minutes per day.
Here’s about 50 years of N America’s daily average change in temp.
https://i1.wp.com/micro6500blog.files.wordpress.com/2015/06/1950-2010-d100_0.jpg

• As you can see, there is a very strong signal in the daily delta T.

• A C Osborn says:

Many years ago, on another forum, there was a discussion on whether the temperature series is a “random walk”, and one poster in the USA had a lot of local weather station data, including pressure and moisture.
His correlations between those and temperature were far better than CO2’s, and he insisted that what you say is correct.
Added to that, of course, is cloud cover, which has already been shown to have a major effect.
Along with the temperature build-up lag specified by paqyfelyc.
Can the data be offset by 2 months to see what happens?

• Yep, looks like Tim figured it out as well.

• A C Osborn December 18, 2017 at 8:31 am

Can the data be offset by 2 months to see what happens?

Sure. Here’s the cross-correlation graph of ∆ TOA vs ∆ surface temp:

w.

• Bill Illis says:

I think the lags are anywhere between 0 months and 4 months.

Why 0 months? Well, we are talking about energy moving either through molecules or along a free path back to space. Molecules only hold onto energy on a time-frame of hours (the max I have ever calculated is 44 hours), but 2 or 3 hours to 12 hours seems like the real max. If the energy is going straight out to space, it takes less than 1 second at the speed of light in the atmosphere.

Why 4 months? Well, we see this type of lag in two different scenarios.

First, we have a lag with respect to the ENSO which varies between 2 months to 4 months with 3 months being the most common. There is more than forcing involved here because we need water to move 1000s of kms, then clouds to form systematically over a month or so, and then we need atmospheric circulation to move the extra/less-than-normal energy to the rest of the planet. Not the same as forcing but similar in some manner. Sometimes the ENSO lag is just 2 months and sometimes 4 months but most often 3 months.

Secondly, we have the seasonal lags. The surface temperatures lag behind the solar forcing by about 30 days. Some places on the planet are a little less than this and some a little more but a good round number is 30 days (1 month). And then we have oceans and water bodies. They have a longer lag which is most often about 30 days to 80 days. 1 month to about 2.5 months.

So Willis just needs to use these different lags, 0 lag, 12 hours, 1 month, 2 months, 3 months and 4 months. That covers off the experience of RealEarth(tm).

800 years for the deep ocean is also worth thinking about but if global warming takes 800 years, well something else will probably happen in the interim and one could ignore it.

• There are NO labels on the graph, and so I do NOT know, for sure, what I am looking at, micro.

• Basically, I get it — water regulates temp — who knew? (^_^) [sarc]

… AND water regulates temp in such a way that CO2-heating is a non-issue, if an issue at all.

• Y axis is delta Tmin/day × 100, X axis is month of the year. Signals are listed across the bottom by year of collected surface station data.
BTW, I have made all of the data I produce available on SourceForge: by area per day, area by year, plus insolation and enthalpy for each, along with this type of seasonal slope analysis, including comparing a known change in temp with a known insolation. All based on the Air Force’s dataset.

8. Greg says:

Willis, gender-ambiguous Nuttercelli is attempting to “debunk” cloud feedback research over at the Guardian today. This may interest you; though he does not name you, it seems you are part of the target group.

https://www.theguardian.com/environment/climate-consensus-97-per-cent/2017/dec/18/scientists-have-beaten-down-the-best-climate-denial-argument

Clouds are one possible exception, because they both act to amplify global warming (being made of water vapor) and dampen it (being white and reflective).

Great start for a “debunking” effort.

• Greg says:

He is also full of unsubstantiated waffle about positive feedbacks in the Arctic, despite sea ice extent for the last two years being indistinguishable from what it was a decade ago in 2007.

He thinks that studies that rely on observational estimations of ECS are “cherry-picking” the use of that method. He seems to think “other methods” ( other than observations ) are being “ignored”.

• He is also full of unsubstantiated waffle about positive feedbacks in the Arctic, despite sea ice extent for the last two years being indistinguishable from what it was a decade ago in 2007.

He’s an idiot. Open Arctic water is a net cooling system. When they talk about the effects of albedo, they are always putting the Sun directly overhead. It is never overhead in the Arctic (or Antarctic), and it is only near its highest for a month or two, for about 6 hours a day around solar noon. The rest of the day, that open water is radiating to a −60 or −80°F sky. As long as it isn’t cloudy, the surface always radiates to space!

• Greg says:

He’s not an idiot, he is a liar. He knows it’s BS but is happy to publish this kind of crap in the complicit lefty Guardian because it’s “for the cause”. He is an activist zealot masquerading as a scientist.

Shame he does not know what clouds are made of before he starts trying to lecture everyone about their effects.

• JonA says:

Have to ask Greg, have you ever read one of Dana’s ‘articles’ before?

• Greg says:

JonA
December 18, 2017 at 9:35 am

“Have to ask Greg, have you ever read one of Dana’s ‘articles’ before?”

Yes, and it almost always results in a letter of complaint to the readers’ editor about non-factual BS being presented as science, and how that undermines the credibility of their title for all other reporting. You either have integrity or you don’t. Credibility lost in minutes takes years to regain.

The Guardian used to be a top-quality UK paper; now it is nothing but an online campaign platform.

9. Murphy Slaw says:

That post was good! Thanks.

10. Why assume it is linear…other than general human laziness and the simplicity of linear “curve” fitting and comparisons?

Why assume only 1 factor instead of 3 or 10?
Why assume zero elasticity?
How would we determine lags?

• Greg says:

It’s not just the line fitting which motivates linearisation of everything. If you can’t make linear assumptions you cannot make climate models, because the maths is intractable.

Thus even if it is well known that a system is non-linear, it is possible to do at least something with it if you can treat the behaviour as approximately linear over a small range of study.

Then there is the whole other problem of the fashion for fitting ‘trends’ to everything, even when there is no reason to think the data are linear, approximately linear, or anything else, and without even thinking about why you want to fit a straight line, except that OLS is the only hammer you know how to use.

• Greg says:

.. well, you don’t actually know how to use it correctly, but you are ignorant enough not to realise that you don’t know how to use it. Heck, there’s a button in Excel, and you know how to press that!
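For what it’s worth, the linearisation point above is easy to check numerically against the one nonlinearity everyone agrees on, the Stefan-Boltzmann law. This is a sketch with assumed values (288 K as a round global-mean surface temperature), not anyone’s model:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m2/K4

def emission(T):
    """Blackbody emission in W/m2 at temperature T (kelvin)."""
    return SIGMA * T ** 4

T0 = 288.0                     # assumed global-mean surface temperature, K
F0 = emission(T0)              # about 390 W/m2
slope = 4 * SIGMA * T0 ** 3    # dF/dT at T0, about 5.4 W/m2 per K

# The tangent-line (linear) approximation is good over a small range ...
err_small = abs(emission(T0 + 2) - (F0 + slope * 2))    # ~0.1 W/m2
# ... but degrades as the range widens (error grows roughly as dT^2).
err_large = abs(emission(T0 + 20) - (F0 + slope * 20))  # ~12 W/m2
```

So over the degree-or-two range of recent anomalies the linear treatment of this particular relation is defensible; whether the whole climate system is linear is the separate question being argued here.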

• CAGW has to assume a linear projection of every trend (Arctic sea ice mass or volume, forest fires, polar bear numbers, penguin wings, whale poops, tree-ring temperatures between 1100 and 1970, the “recorded” daily temperatures since 1910, since 1915, since 1945, since 1970, since 1998 …) because their only authorized driver for climate is CO2 levels, and over the short term, CO2 levels are actually increasing near-linearly. Therefore, they MUST relate EVERYTHING mentally to that near-linear increase in CO2.

• mib8 December 18, 2017 at 7:08 am

Why assume it is linear…other than general human laziness and the simplicity of linear “curve” fitting and comparisons?

Hey, don’t look at me, I’m not the one who assumed it was linear.

I say that the basic assumption, that temperature follows TOA forcing, is wrong whether you claim it’s linear or parabolic …

w.

• Jim Masterson says:

>>
. . . wrong whether you claim it’s linear or parabolic …
<<

How about chaotic with an unknown strange attractor or attractors?

Jim

11. Emissivity is the rock on which the ship of CAGW founders.

What is sweetest is that it is IR emission by CO2 itself that annihilates CO2 warming.
Increasing the atmospheric concentration of CO2 might raise the ERL (effective radiating level) to a higher altitude, where it is colder and thus the IR emission less, everything else being equal.

But it is not equal. The air’s emissivity of IR is increased by the same CO2. This cancels the effect of the higher altitude of the ERL and lower temperature. So the net result is no change.

Ilya Prigogine’s nonlinear thermodynamics dictates that in a complex, open, dissipative heat engine like the atmosphere, small perturbations such as the increase of a trace gas like CO2 will result simply in a rearrangement of emergent dissipative structures, negating any change to global parameters such as “temperature”.

• mkelly says:

In 1954, Hoyt C. Hottel conducted an experiment to determine the total emissivity/absorptivity of carbon dioxide and water vapor. From his experiments, he found that carbon dioxide has a total emissivity of almost zero below a temperature of 33 °C (306 K) in combination with a partial pressure of carbon dioxide of 0.6096 atm·cm. Seventeen years later, B. Leckner repeated Hottel’s experiment and corrected the graphs plotted by Hottel. However, Hottel’s results were verified: Leckner found the same extremely insignificant emissivity of carbon dioxide below 33 °C (306 K) of temperature and 0.6096 atm·cm of partial pressure. Hottel’s and Leckner’s graphs show a total emissivity of carbon dioxide of zero under those conditions.
http://www.biocab.org/Overlapping_Absorption_Bands.pdf

Hottel charts are used in engineering, especially for combustion chambers, when you have gases mixed with WV and CO2. CO2 always reduces the emissivity of straight WV.

• ” However, the results of Hottel were verified and Leckner found the same extremely insignificant emissivity of the carbon dioxide below 33 °C (306 K) of temperature and 0.6096 atm cm of partial pressure.”
Again, just an elementary misreading of basic engineering charts.

• ptolemy2 says:

If CO2 emissivity is zero below 33C then why are we talking about CO2 at all in regard to climate and temperature?

• One must be careful to distinguish the emissivity of the entire atmosphere from the emissivity of CO2 alone. Modtran is very helpful. The atmosphere radiates at the Planck curve to 300 meters. You begin to see a dent in the curve in the CO2 bands at 400 meters. Accordingly, at 300m, when you bump CO2 from 400 to 800ppm, there is NO change in the upward flux.

Despite the small dent, there is NO change in the upward flux between 300 and 400 meters altitude at 400ppm, but at 800ppm, you lose 1.88 W/m2 and the dent increases.

So, at 300 meters, the emissivity of the entire atmosphere is unchanged in the CO2 400-800ppm range, but at 400 meters it is. Unless water, which overlaps, is causing this dent in the CO2 bands without showing up in any of the other water bands, we must say that the emissivity of CO2 has decreased at 400 meters when CO2 is increased to 800ppm. CO2 is keeping that 1.88 W/m2 and not radiating it up.

The effective radiating level is a can of worms. It is essentially the optical depth of the atmosphere, or the mean free path of escape for each wavelength. It varies by latitude and with everything that affects optical depth. The ERL for the CO2 bands is 12 km (~220 K) in the tropics and 9 km (~230 K) in subarctic summer, with CO2 concentration essentially the same.

Does increasing CO2 raise or lower the ERL? Got me.

• WB Wilson says:

Exactly, ptolemy2. Complex self-organizing dissipative systems, like Willis’s thunderstorms, hurricanes, tornadoes, et al. Spontaneous and scale-invariant formation whenever potential gradients arise, be they gravity, temperature or any form of energy. The End of Certainty.

12. Greg says:

Willis, I don’t see a great difference in the degree of correlation in any of these graphs. Marginally better in Figure 3 but nothing to write home about. It would be helpful if you gave stats for Figure 2 to allow an objective comparison.

What are the correlation coefficients for the three graphs?

I would also invite the reader to estimate the slope of Figure 2 while trying to ignore the fitted line. It is clearly much steeper than the fitted result. This is a classic example of regression dilution, where least-squares fitting gives a spuriously low estimate due to the fact that there is significant non-linear variability in BOTH datasets.

Plot the data with the axes inverted and you will get a very different answer for the slope. Here is an example of the effect using synthetic data:

https://climategrog.files.wordpress.com/2013/11/ols_scatterplot_regression2.png
https://climategrog.wordpress.com/2014/03/08/on-inappropriate-use-of-ols/

In view of your previous article, it may be very interesting to do the scatter plots for tropics and extra-tropical regions separately. Not only will the answers vary considerably from what you get with the full dataset, I expect that you would get a much better correlation for each subset.

Best regards.
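Greg’s regression-dilution point can be demonstrated in a few lines with synthetic data, echoing his linked example. All numbers here are made up for illustration (a “true” slope of 0.2 and unit noise on the abscissa are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
true_slope = 0.2                                   # assumed K per W/m2

x_true = rng.normal(0, 1, n)                       # noise-free "forcing"
y = true_slope * x_true + rng.normal(0, 0.1, n)    # "temperature" with noise
x = x_true + rng.normal(0, 1, n)                   # observed forcing is ALSO noisy

# OLS of y on x: attenuated toward zero because the abscissa carries noise.
slope_yx = np.polyfit(x, y, 1)[0]                  # roughly half the true slope here

# OLS with the axes swapped, then inverted back: biased the other way.
slope_xy = 1.0 / np.polyfit(y, x, 1)[0]            # overestimates the true slope

# The true slope lies between the two fits; neither is right on its own.
```

This is exactly the situation with two noisy anomaly series: the OLS slope depends on which variable you call x, and with poor correlation neither direction is trustworthy.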

13. Willis,
“The fundamental and to me incorrect assumption at the core of the modern view of climate is that changes in temperature are a linear function of changes in forcing.”
Well, to quote your exact words,
“please QUOTE THE EXACT WORDS YOU ARE DISCUSSING”

I think you are creating a straw man. Who is assuming that?

• I guess it depends on how linear you want it, but I think it’s fair to say that the mainstream view is that climate (when looking at global average scale) should respond approximately linearly to forcing.

• Greg says:

The very concept of ECS or TCS assumes linearity, since it is defined as the change for a doubling of CO2: any doubling, not just twice what it is today or twice what it is assumed to have been at some poorly defined “pre-industrial” era.

• Greg,

The very concept of ECS or TCS assumes linearity

Basically, yes. Though that is generally understood as a first-order approximation to addressing the issue. It has long been recognised in climate science that GCMs do not actually produce the same amount of warming for CO2 doublings from different levels, and paleo work indicates that CO2 sensitivity is likely to be “state dependent”.

• Greg,
“The very concept of ECS or TCS assumes linearity”
No it doesn’t. It’s a derivative estimate. The fact that TCS is not the same as ECS implies lack of linearity. As does the fact that TCS has to be defined under particular circumstances; one common one is that of change after compounding 1% over 70 years. If it was linear, there would be just one definition.

Then there is the question – who assumes even that there is a derivative. People try to find one, but acknowledge that they aren’t brilliantly successful.

• paqyfelyc says:

” who assumes even that there is a derivative”. Any people talking about TCS and ECS as relevant and valid concepts (as opposed to: people talking about them as irrelevant and wrong concept).

TCS has as much relevance as the temperature response of a bowl of water outdoors at the second/minute time scale when a cloud masks the sun.
ECS has as much relevance as the same bowl’s temperature response at the century time scale when it is put in a cave (and, actually, it dried up soooooooo long ago).

• Nick,

“The fact that TCS is not the same as ECS …”

The reason for your confusion is a consequence of the metric. Defining the sensitivity as an incremental metric expressed in degrees per W/m^2 is intrinsically non-linear because of the T^4 relationship between emissions and temperature, where in the steady state emissions and total forcing are the same. The sensitivity then must have a 1/T^3 dependence on the surface temperature.

If instead the sensitivity is expressed as the equivalent metric of W/m^2 of surface emissions per W/m^2 of forcing, the relationship is nearly exactly linear, as shown by this scatter plot of the monthly averages of surface emissions vs. post-albedo input power (total forcing). The larger dots represent the average relationship over three decades of data. Note that in this case the incremental and absolute sensitivity are exactly the same.

The TCS will be smaller than the ECS only because of a finite time constant different from the integration period. However, the system responds far faster than the IPCC requires, and the distinction between these two is just more noise to add confusion. If not for the excessive obfuscation and misrepresentation found throughout ‘consensus’ climate science, the controversy would have self-corrected decades ago, the IPCC would never have been formed, and Hansen would have been the last of the alarmists, instead of the first.

BTW, I would like to see you try to make a case for why it makes sense to use a non-linear metric of sensitivity (degrees per W/m^2) when an equivalent linear metric (W/m^2 of emissions per W/m^2 of forcing) is far more representative of how the system actually responds to forcing.
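The claimed 1/T^3 dependence of the degrees-per-W/m^2 metric can be checked directly from the Stefan-Boltzmann relation. This is a feedback-free, ideal-radiator sketch (the 255 K and 288 K reference temperatures are the usual round numbers), not a climate model:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m2/K4

def k_per_wm2(T):
    """Incremental sensitivity in K per W/m2 for an ideal radiator:
    from F = sigma*T^4, dT/dF = 1/(4*sigma*T^3) = T/(4*F)."""
    return 1.0 / (4 * SIGMA * T ** 3)

lam_cold = k_per_wm2(255.0)   # at the effective emission temperature, ~0.27
lam_warm = k_per_wm2(288.0)   # at the mean surface temperature, ~0.18

# The degrees-per-W/m2 metric is state dependent, falling off as 1/T^3:
ratio = lam_cold / lam_warm   # equals (288/255)^3 exactly
```

By contrast, expressing the response as W/m^2 of emission per W/m^2 of forcing gives exactly 1 in steady state for this feedback-free radiator, which is the linearity being claimed above.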

• Greg says:

If it was linear, there would be just one definition.

The supposed linearity is in relation to the forcing not the CO2 in ppmv. ECS 1% per year for 70 years is an exponential increase resulting in a doubling. TCS is based on an instant doubling.

Since the relation of ppm to forcing is logarithmic, these are not the same. That is the non-linear bit. The assumption that dT to dRad is linear is not contrary to that.

You are straw-manning the alleged straw man.

• Greg
“ECS 1% per year for 70 years is an exponential increase resulting in a doubling. TCS is based on an instant doubling.”
It’s the other way around. But the log of CO2 is irrelevant. Willis is talking about the relation between ∆T and ∆F. Actually, as I’ve noted in the next sub-thread, the issue with ECS and TCS is even more radical. A statement
∆T = λ ∆F
could only make sense with equilibrium values. Try to put it into words for any other case. The transient response you might expect to an increase ∆F is a rate of temperature increase d∆T/dt. That is reflected in the TCR definition. Increase ∆F by regular steps for 70 years, and the ∆T cut off at 70 years is the TCS. If you didn’t change ∆F further but followed ∆T, your TCS number would keep on going up.

As far as straw-manning goes, I have just asked for an actual quote of someone assuming ∆T = λ ∆F. I don’t believe it is done. But a quote of whatever makes people think they assume that might be quite informative, if carefully read.

• paqyfelyc says:

“I have just asked for an actual quote of someone assuming ∆T = λ ∆F. I don’t believe it is done.
You don’t “believe” it? What has belief got to do with it? This is math, for God’s sake, simple math. You just cannot end up with a ∆T = λ ∆F formula without assuming linearity in the first place. Period.
But I guess this blows your mind, so you just return to the same tune, like a scratched record.

• paqyfelyc says:

YOU are creating a straw man.

• I’m asking the question any skeptic should ask. Who assumes that, and what do they say?

• paqyfelyc says:

You are asking a straw-man question, so as to play word games about vocabulary.
The fact is, the IPCC writes ∆T = CST x ∆F, and your whole miserable argument will be that this is not an explicit assumption but a result, a diagnostic; as if that result weren’t the necessary result of all the linearization (and elephant-trunk wiggling, done by fitting hundreds of parameters) in the models, that is, an implicit assumption.

• The problem here is that no scientists have any expectation that such a simple proposition as
ΔT = λ*ΔF
is true, yet that is the proposition tested here. What they do entertain is the proposition that ΔT depends on the history of F, possibly linearly by convolution with a response function over time. That is why statements like
ΔT = λ*ΔF
are always associated with a scenario, like
1. ECS, F has a once only increment ΔF, then ΔT is the change when you finally reach equilibrium
or
2. TCS (one variant) F increases linearly over a century by CO2 doubling via 1%/year compounding increases
The TCS definition is still loose, because there will be a dependence on the history before the ramp of F begins.

The reason is that when you define a scenario (history) you can put it into the convolution over time to get an unambiguous output. None of these propositions can be tested just by matching instantaneous values of T and F.

• ” when you define a scenario (history) you can put it into the convolution over time to get an unambiguous output.”

This is called tuning, and this is precisely the problem with the models. A scenario was defined [the post 1976 warming]. This scenario was “put into the convolution over time”. The output was unambiguous, but wrong.

• “This is called tuning, and this is precisely the problem with the models”
I’m not talking about GCMs. I’m talking about the definitions of ECS and TCS. No tuning is involved.

• paqyfelyc says:

1. ECS, F has a once only increment ΔF, then ΔT is the change when you finally reach equilibrium
or
This is called a derivative, and you need differentiability to begin with. So, you need some clue that differentiability is not an unreasonable assumption, so you make a graph like Figure 2, and you observe …
that it IS unreasonable.

2. TCS (one variant) F increases linearly over a century by CO2 doubling via 1%/year compounding increases
“The TCS definition is still loose, because there will be a dependence on the history before the ramp of F begins.”
Which is a complicated way to acknowledge that the definition simply has no meaning, without explicitly acknowledging it, in order to use it nonetheless.

“The reason is that when you define a scenario (history) you can put it into the convolution over time to get an unambiguous output. None of these propositions can be tested just by matching instantaneous values of T and F.”
Let’s translate that into simple words, shall we?
“The reason is that when you define a scenario (storytelling) you are in a fantasy land were everything becomes possible and nothing can be proven wrong. None of these propositions can be tested in the real world.”
Hell, YES. You just got it.

• Nick Stokes December 18, 2017 at 7:24 am Edit

Willis,

“The fundamental and to me incorrect assumption at the core of the modern view of climate is that changes in temperature are a linear function of changes in forcing.”

Well, to quote your exact words,
“please QUOTE THE EXACT WORDS YOU ARE DISCUSSING”

I think you are creating a straw man. Who is assuming that?

Nick, I not only quoted the exact words underlying that assumption, I gave a mathematical critique of them in the post I linked to above called “The Cold Equations”.

It’s also obvious from the definition of the climate sensitivity, which is:

∆T = λ ∆F

But you knew that …

w.

• Willis,
“Nick, I not only quoted the exact words underlying that assumption”
What you quoted was Stephen Schwartz saying:

“The Ansatz of the energy balance model is that dH/dt may be related to the change in GMST [global mean surface temperature] as
dH/dt = C dTs/dt (3)
where C is the pertinent heat capacity.”

Then you looked up what Ansatz meant, and found that it wasn’t a whole lot. It isn’t an assumption; it’s basically a trial guess: we’ll see how it works out. Then he goes on to discuss the time scales associated with the Ansatz, and the value of C. H is actually heat content, so there is some fuss aligning that with forcing. In fact, that is where the heat capacity C comes in. It’s a function of time scale. So (3) isn’t a linear equation. It may work out to be approximately linear in certain circumstances. That’s why they specify it subject to the ECS and TCS scenarios (with different numbers in each case).

An analogy is heating a swimming pool. If you turn up the heat, for a while something like (3) will apply (TCS). The temperature rises according to the extra heat and the heat capacity. But eventually it approaches a new stable temperature. That is determined not by capacity but by loss rates (ECS). Eq (3) doesn’t help any more, because the derivatives go to zero. It’s a different regime. And climate scientists are trying to capture both regimes, and the transition.

• Willis,
To expand on that, the Schwartz equation that you cited in the earlier link isn’t at all the same. You are now quoting
∆T = λ ∆F
which is the definition of equilibrium sensitivity. But the Schwartz Ansatz is the equivalent of
d(∆T)/dT = λ ∆F

The swimming pool example shows what is happening. Suppose you have a pool in a uniform environment, heated and at steady temperature. Then you increase the burn rate (∆F), and keep it there. The response ∆T will look like this:

https://s3-us-west-1.amazonaws.com/www.moyhu.org/2017/12/pool.png

It rises with a rate determined by pool heat capacity (mass) but settles at a temperature determined by losses. You can get a parameter from both of those (red line: dT/dt at 0; blue: T(5)−T(0)), and they are different. The whole curve basically scales with ∆F, and so do the parameters. The first is like TCR, except they would usually define it as the average ∆T up to t=1, say. The second is ECS. Neither of these is enough on its own, and neither is a model, but if you can pin down both, you have a reasonable approximation of the whole picture.
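The pool analogy can be written down as a one-box energy balance and integrated numerically. This is a sketch with entirely made-up numbers (the capacity C, loss rate lam and forcing step dF are assumptions for illustration), not a calibrated model:

```python
import numpy as np

def pool_response(dF=1.0, C=5.0, lam=0.5, dt=0.01, t_end=100.0):
    """Forward-Euler integration of the one-box balance
    C * d(dT)/dt = dF - lam*dT.  The initial slope is dF/C
    (heat-capacity limited, TCR-like); the plateau is dF/lam
    (loss-rate limited, ECS-like)."""
    steps = int(t_end / dt)
    dT = np.zeros(steps + 1)
    for i in range(steps):
        dT[i + 1] = dT[i] + dt * (dF - lam * dT[i]) / C
    return dT

dT = pool_response()
initial_slope = (dT[1] - dT[0]) / 0.01   # = dF/C = 0.2
plateau = dT[-1]                          # approaches dF/lam = 2.0
```

The two parameters describe the same curve but answer different questions, which is why a single fixed ratio between ∆T and ∆F cannot capture both regimes.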

• Sorry, second equation should be d(∆T)/dt = 1/C ∆F

• “eventually it approaches a new stable temperature. That is determined not by capacity but by loss rates (ECS).”

No, ECS is determined by the ratio of capacitance (absorptivity) to loss (radiation to space).

• “No, ECS is determined by the ratio of capacitance (absorptivity), to loss (radiation to space).”
Absorptivity isn’t a capacitance; it’s more like a resistance. And steady radiation to space has to be unchanged, matching solar (unless albedo changes). ECS is the rise in temperature needed to overcome the greater resistance to radiating the absorbed solar back to space. Meanwhile, a big component of TCS is due to flux into the ocean. This allows TOA flux to drop, so the surface temperature needed to get it emitted is less, until flux into the ocean tapers off.

• LdB says:

Nick/Willis, if that is the logic there is a massive flaw. Again, deal with the physics: you aren’t dealing with anything remotely like heating a pool at some constant rate, or even some strangely shaped graph. The first basic should be obvious to everyone: there is a 365-day cycle imposed on the heating/cooling as the Earth-Sun distance changes.

Any equilibrium will always have precession; it has to, by the very nature of the setup. Larmor worked that out for the atom in 1897 under classical physics, and even when we fully understood quantum spins it remained subject to that behaviour. There are countless precessions already known for Earth, from tidal force, obliquity and beyond.

The question open to both groups is: why aren’t you working through the thermodynamic behavior via precession mathematics? That is how the equilibrium will work; it won’t be anything like linear, or even “tuneable”, without the proper form of the mathematics.

I am sort of perplexed by all the discussion; surely everyone realizes what the format of the solution will look like?

If you want to work with your swimming pool example, try putting a 365-day cycle on it, from some minimum value to some maximum value with some wave shape. From that, calculate your d(∆T)/dt, which will now have a Larmor-like behaviour embedded in it, and which is what you need if you want to approximate the behaviour somewhat correctly.

• Mat says:

Nick Stokes, I really have to commend you. Your contributions are clear, to the point, polite, and usually devastating to the OP. I don’t know how you have the energy.

• A C Osborn says:

1. Turn the heating on and off in 12-hour (on average) cycles, with an incremental increase in energy at the start of each heating cycle.
2. Add in some cloud cover.
3. Add in varying winds from varying directions, of varying hotness/coldness.
4. Add varying atmospheric pressure.

14. Tom Halla says:

Figures 2, 3, and 4 all look like statistics textbook illustrations for no or low correlation situations. Trying to come up with a model to explain a relationship that isn’t there would be an exercise in postmodern science.

• Greg says:

Agreed. Pretty much my point above. Also, the “slope” is a spurious value.

• Greg says:

Willis’ previous post showed negative correlation in the tropics and positive elsewhere. Mixing all the colours in your paint pot usually ends up producing a liquid-diarrhoea colour.

Land and sea have sensitivities which vary by a factor of two and temperatures are not additive physical quantities.

If you can’t add, you can’t do averages and you can’t do a linear regression or any of the other stuff. It’s all invalid.

• If you can’t add, you can’t do averages and you can’t do a linear regression or any of the other stuff. It’s all invalid.

True, but you can convert it into a flux, then do all the math you want, then convert that back to temp. It actually increases average temps by ~1.2°F, iirc.

15. I am merely pointing out that the CERES data does not show the expected relationship between changes in net TOA radiation imbalance and changes in surface temperature.

There’s a difference between the expected relationship and your expected relationship. You haven’t shown what the equivalent relationship is in climate models.

• paqyfelyc says:

Well, ∆T = CST x ∆F is in the IPCC report. Meaning the IPCC says it is [the equivalent relationship in climate models], whether or not it really is in the models of the ensemble (some, or all of them).
You may not trust the IPCC, however.

The relationship in that equation isn’t what’s being investigated by the above scatter plots. If anyone believes it is, they are very much mistaken. TOA flux observations include forcings, but most of the variability is due to feedbacks/atmospheric dynamics. What do you think you would get if you did the same test as above using the corresponding climate model data?

• paqyfelyc says:

“What do you think you get if you do the same test as above using respective climate model data?”
Don’t know, and don’t care. I have a poor opinion (to say the least) of climate models that are not even able to explain massive climate changes like glacial/interglacial cycles, or smaller ones like the MWP or LIA.
The expression “(climate) model data” is unclear. Data is data; it is real (even though uncertain). A model is a model: virtual. But model data?

16. David Wells says:

Einstein once observed: “There are two ways a scientist can get things wrong. First, the Devil leads him by the nose with a false hypothesis. Or second, his thinking is erroneous and sloppy.” CO2 will make the earth boil and fry, you have my word, scientists say.

17. A change of one watt per square metre over a month is indeed able to change the surface temperature, by about a tenth of a degree
Which is the same result as we get for the 1 W/m2 variation over a typical solar cycle: one-tenth of a degree.

• “Lends credence to your estimate.”
Sounds like you are both trying to quantify the ratio between a forcing and a change in temperature. As scientists do. But they don’t assume that a ratio in one set of circumstances can be applied in another. That is something that has to be established.

• paqyfelyc says:

“That is something that has to be established”, indeed. And where exactly do the IPCC modelers do that? What makes them so sure that some ∆T = CST x ∆F relationship applies no matter what? And, even, that some ∆T is bound to happen if some ∆F occurs?

• That is something that has to be established
And Willis’ graph does just that.

• The situations are very similar: in both cases an extra Watt/m2 is applied, so a similar rise in T is expected [and established].

• “What make them so sure that some ∆T= CST x ∆F relationship applies no matter what ?”
Nothing. There is no such relationship assumed in GCMs.

• paqyfelyc says:

“There is no such relationship” EXPLICITLY “assumed in GCMs.”
Math is cruel: you cannot end up with such a relationship if it is not implicitly assumed in the first place.
So, what say you, Nick?
Are GCMs assuming the relationship, despite you denying it?
Or are they not, meaning they don’t end up with the relationship, despite you saying they do?
Where are you wrong?
Your choice is not whether you are wrong or not. You are, one way or the other.

• paqyfelyc says:

@ lsvalgaard December 18, 2017 at 8:13 am
“And Willis’ graph does just that.”
No, Willis’ graph does the very opposite: it shows NO correlation “between a forcing and a change in temperature” (well, as I observed above, it isn’t enough to show the absence of correlation, but still).

• “you cannot end up with such relationship if it is not implicitly assumed in the first place.”
So did we end up with such a relationship? That seems to be the illogic of these criticisms: look, they get linearity, so they must have assumed linearity; and besides, they didn’t get linearity, so they are wrong.

That is my point here: Leif and Willis get similar ratios in slightly different circumstances, which Leif says confirms something. That’s fine, but it goes against Willis’ contention that someone is wrongly assuming linearity.

• paqyfelyc says:

Blah blah blah, Nick. More word games; a mote in others’ eyes when there is a beam in yours.
The fact is, the IPCC writes ∆T = CST x ∆F, and whether it is explicitly stated or not, this assumes linearity; this DOES “assume that a ratio in one set of circumstances can be applied in another”. Math does not allow a linear result without a linearity assumption.

• John Edmondson says:

How about the 9% change in solar radiation between perihelion in January and aphelion in July? That’s 123 W/m2. That would be 12 degrees? What am I missing?

John Edmondson

How about the 9% change in solar radiation between perihelion in January and aphelion in July? That’s 123w/m2.

1408 W/m^2 on Jan 3-5 at the closest point, 1316 W/m^2 in early July at the furthest distance, so the 123 W/m^2 is a high assumption.
Those who argue in support of Trenberth’s flat-earth approximation also use the greater speed of the earth when closer to the sun, and the slower speed when further away, to claim that the yearly totals of both arcs are identical.
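The 1408/1316 figures above check out against a one-line inverse-square calculation (the 1361 W/m2 solar constant and the perihelion/aphelion distances in AU are standard approximate values):

```python
TSI_1AU = 1361.0          # solar irradiance at 1 AU, W/m2 (approximate)
R_PERIHELION = 0.9833     # Earth-Sun distance in AU, early January
R_APHELION = 1.0167       # early July

def tsi(r_au):
    """Irradiance falls off as the inverse square of distance."""
    return TSI_1AU / r_au ** 2

difference = tsi(R_PERIHELION) - tsi(R_APHELION)   # ~91 W/m2, not 123
```

So the perihelion-aphelion swing is roughly 91 W/m2 at the top of the atmosphere, consistent with the 1408 and 1316 figures quoted.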

• John,

The average difference at the same scale as the 240 W/m^2 of post albedo forcing and including reflection by albedo is about 15 W/m^2. The effect of this becomes obscured because perihelion is closely aligned with the winter solstice and the solar variability aligns with seasonal change.

In 11K years, when perihelion aligns with the summer solstice, the N-hemisphere difference between summer and winter will become larger as the difference in the S becomes smaller. The asymmetries between hemispheres are so large that even with the current alignment of perihelion with the seasons, the N hemisphere already has about twice as big a temperature difference between seasons!

However, the difference will be nowhere near what the IPCC sensitivity would predict. Unfortunately, we can’t wait 11K years to get this right …

• LT says:

“A change of one watt per square metre over a month is indeed able to change the surface temperature, by about a tenth of a degree”

The only problem with that statement is that it cannot be true: the Earth receives a change of 90+ W/m2 over a six-month period every year, which means that along the Equator you would see semi-annual differences approaching 9 degrees C. But there are no such differences to be found.

https://data.giss.nasa.gov/cgi-bin/gistemp/stdata_show.cgi?id=112644560000&dt=1&ds=7

• Crispin in Waterloo but really in Beijing says:

Another way of looking at the 90 W change with the seasons is the hemispherical difference. If the Northern Hemisphere gets 90 watts less in summer than the Southern Hemisphere, we can expect a summertime high-temperature difference of 9 degrees, is that correct?

Latitude for latitude over the oceans or land, winters should be 9 degrees colder in the South because, hey, insolation. If a massive change like 90 W can't pitch up a signal, how on earth, literally, can 1 watt?

Willis, I think you might have better luck finding a signal in 91 watts (the sum) than in an average change of 1.

• It is less than the range water vapor can regulate. It's as simple as that.
The evidence that it is regulated is that something must explain all of these behaviors, given that there is a continuous loss to space under clear skies even when temps are not falling.

Conservation of energy requires a source, and the only thing that fits that evidence is water vapor condensing, which we already know it is doing, and which ramps up as air temp nears dew point: again, the proper feedback to cancel cooling.

And because that is tied to air temp, it's a darn good temperature regulator when you compare temps to the thermal ground state of space, 3 K.

• Greg says:

Which is the same result as we get for the 1 W/m2 variation over a typical solar cycle: one-tenth of a degree.

With the abysmal correlation of these noisy data, the OLS fitted slope cannot be given any credibility. Mk I eyeballing of the data suggests the slope is about twice that. Typical regression dilution error, caused by ignoring that you do not have an error-free abscissa.

When you have correlation that bad, don't even bother using OLS to estimate the slope. It won't work.

The correlation is abysmal because land and sea, tropics and extra-tropics, do not respond in the same way to radiative forcing. So dumping it all together just makes a muddy brown mess, not a valid analysis.

Maybe that was the point that Willis was trying to make.
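Greg's regression-dilution point can be demonstrated with synthetic data (a toy sketch with assumed noise levels, not the CERES analysis): when the abscissa carries noise comparable to its signal, OLS attenuates the fitted slope well below the true one.

```python
# Toy demonstration of regression dilution (errors-in-variables attenuation).
import numpy as np

rng = np.random.default_rng(42)
n = 10000
true_slope = 0.2                     # assumed "true" deg C per W/m^2
x_true = rng.normal(0.0, 1.0, n)     # true forcing anomaly, W/m^2
y = true_slope * x_true + rng.normal(0.0, 0.1, n)  # temperature response + noise
x_obs = x_true + rng.normal(0.0, 1.0, n)           # abscissa noise ~ signal variance

fit = np.polyfit(x_obs, y, 1)[0]
# With x-noise variance equal to the x variance, OLS attenuates the slope
# by the factor var(x) / (var(x) + var(noise)) = 1/2:
print(round(true_slope, 2), round(fit, 2))
```

Regressing the axes the other way around gives the opposite extreme, which is why Greg suggests inverting the axes to bracket the range of believable slopes.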

• paqyfelyc says:

+1

18. 1. The delay in CERES and other data updates. Do you know why, with the computing power of Google at their fingertips, updates take so long?

2. Why do both warmists and skeptics use linear trends when semi-sinusoidal curves are obvious? The cycles are clear indications of natural processes running that must be considered as also, perhaps, having a longer-term natural up/down component.

• T^4 curves are the most important when it comes to understanding the sensitivity. While approximately linear over a small range of T, the relationship is not even close to linear over the range of T found on the planet, and it is most definitely not linear through the origin.
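The degree of nonlinearity over the planetary range is easy to quantify; a minimal sketch, assuming an ideal black body (240 and 390 W/m^2 are the planet-average and surface-average emission figures used elsewhere in this thread):

```python
# Local Stefan-Boltzmann sensitivity at two emission levels, ideal black body.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def sb_temperature(flux):
    """Black-body temperature (K) emitting the given flux (W/m^2)."""
    return (flux / SIGMA) ** 0.25

def sb_slope(flux):
    """Local sensitivity dT/dF = T / (4F), in K per W/m^2."""
    return sb_temperature(flux) / (4.0 * flux)

# Slope at 240 W/m^2 (planetary average emission) vs 390 W/m^2 (surface average):
print(round(sb_slope(240.0), 3), round(sb_slope(390.0), 3))  # → 0.266 0.185
```

The local slope changes by over 40% between those two levels, so a single constant of proportionality through the origin cannot describe the whole range.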

19. Curious George says:

How do you handle clouds? Could the CERES “surface” temperature be sometimes the temperature of the top of clods?

• Curious George says:

Sorry, the top of clouds. The top of clods is what we hope to measure.

20. Steve Zell says:

So, if there is no correlation between temperature and infrared radiation at the top of the atmosphere (first graph), but there is a correlation between 0.07 and 0.09 C / (W/m2) for total “shortwave + longwave” (UV + visible + IR) radiation (second and third graphs), that would exonerate CO2 as a culprit in “global warming”, since CO2 absorbs well in two narrow bands of the IR spectrum, but is nearly transparent in the UV and visible range.

There are many other factors that could alter the total radiation balance of the earth in the “shortwave” (UV+visible) spectrum, such as fluctuations in solar radiation and cosmic rays, or possible changes in the ozone layer (which partially absorbs UV), but CO2 concentrations would have very little effect, since it only absorbs in the longwave, low-energy part of the spectrum.

I recommend that Willis Eschenbach remove the hurricane photo from the background of his graphs. We wouldn’t want any alarmists to make some kind of misleading connection between the information in the graphs and the frequency of hurricanes…

21. LT says:

In the middle of fall and spring, Earth receives as much as a 0.5 W/m^2 difference in TSI every day for a couple of months as Earth approaches or departs from perihelion, and there is no perceptible daily change in temperature that corresponds with that magnitude of change in TSI. But let a volcano such as Pinatubo blast 20+ million metric tons of SO2 into the stratosphere and change the forcing long enough, and it has an effect. There is ample proof that the Earth's climate is highly buffered by convection, conduction and various thermal sinks, and does not immediately respond to daily changes of even 0.5 W/m^2.

• LT, look for my post upthread, where I show exactly this data for N America.

• Jim Masterson says:

The semiannual difference in TSI due to orbital distance is quite impressive. (See http://lasp.colorado.edu/home/sorce/data/tsi-data/) The six hour measurements are 1316.0477 W/m^2 on 07/10/2017 0.125 and 1407.4700 W/m^2 on 01/03/2017 0.625. That comes to a total change of TSI during the last year of 91.4223 W/m^2 (if I did my subtraction correctly).

Jim

• LT says:

Hey Jim,

Yes, it is absolutely astounding, the forcing difference that occurs daily due to the eccentricity of Earth's orbit. Any climatic changes related to the variability of the solar cycle cannot be attributed to changes in TSI; it would have to be something like Svensmark's GCR theory, or UV irradiance differences changing the transparency of the stratosphere.

• LT December 18, 2017 at 9:26 am

In the middle of fall and spring, Earth receives as much as a 0.5 W/m^2 difference in TSI every day for a couple of months as Earth approaches or departs from perihelion, and there is no perceptible daily change in temperature that corresponds with that magnitude of change in TSI. But let a volcano such as Pinatubo blast 20+ million metric tons of SO2 into the stratosphere and change the forcing long enough, and it has an effect. There is ample proof that the Earth's climate is highly buffered by convection, conduction and various thermal sinks, and does not immediately respond to daily changes of even 0.5 W/m^2.

Not sure where you are getting your figures. The oddity of the eccentricity of the earth’s orbit is that despite the large difference in TSI between perihelion and aphelion, the northern and southern hemispheres receive exactly the same amount of solar energy over the year. This is because the earth spends less time in close and more time farther out, which exactly counterbalances the changes in TSI … ain’t nature wonderful?

w.
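Willis's counterbalancing claim follows from Kepler's second law: insolation scales as 1/r² while the time spent per degree of true anomaly scales as r², so the energy received over any arc of the orbit depends only on the arc's angular width, wherever perihelion falls. A numerical sketch (unit constants, and a nominal 13° perihelion-to-solstice offset, assumed):

```python
import numpy as np

def arc_energy(e, nu0, nu1, n=200001):
    """Solar energy received over the true-anomaly arc [nu0, nu1] of an
    ellipse with eccentricity e (semi-major axis, angular momentum and
    solar output all set to 1).  Flux ~ 1/r^2 and Kepler's second law
    gives dt = r^2 dnu, so the integrand is constant per radian."""
    nu = np.linspace(nu0, nu1, n)
    r = (1 - e**2) / (1 + e * np.cos(nu))
    integrand = (1.0 / r**2) * r**2  # flux * (dt/dnu): the r^2 factors cancel
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(nu))

# Split Earth's orbit at the solstice axis (~13 deg of true anomaly from
# perihelion): each half-year arc receives identical total energy.
e = 0.0167
axis = np.radians(13.0)
h1 = arc_energy(e, axis, axis + np.pi)
h2 = arc_energy(e, axis + np.pi, axis + 2 * np.pi)
print(round(h1 / h2, 6))  # → 1.0
```

The cancellation is exact for any split axis, which is why the hemispheres come out even despite the ~91 W/m^2 swing in instantaneous TSI.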

• LT says:

“This is because the earth spends less time in close and more time farther out, which exactly counterbalances the changes in TSI … ain’t nature wonderful?”

It is wonderful; however, the facts are that the Earth receives 15+ W/m^2 more energy at the top of the atmosphere at the end of October than it does at the beginning of October, pretty much every year. Where is the large temperature gradient that should occur every October at the equator, according to the simplified equations presented?

Don't you agree that is a significant change in forcing for one month?

• LT says:

“The change in TSI from solar max to solar min is about 0.2 W/m2 / 340 W/m2 = 0.6% … barely enough to measure, lost in the flood.”

I may not be the smartest guy in the world, but explain to me why I see over 1 W/m2 between max and min on the graph below.

And the number you are presenting is not correct. You are quoting a 1 AU-adjusted average number, which is not what Earth receives; it changes every day. Taking the 1 AU-adjusted TSI number is good for measuring the Sun's output, but it serves absolutely no purpose whatsoever for measuring what Earth actually receives. The solar cycle clearly has some effect on the Earth's atmosphere, for it clearly shows up in correlations.

https://wattsupwiththat.files.wordpress.com/2017/12/2017_solar_tsi.png

• afonzarelli says:

LT, the earth is a sphere (divide by 4)…

• Jim Masterson says:

>>
Willis Eschenbach
December 18, 2017 at 3:47 pm

This is because the earth spends less time in close and more time farther out, which exactly counterbalances the changes in TSI … ain’t nature wonderful?
<<

So let’s check this with actual measurements using that same TSI site (http://lasp.colorado.edu/home/sorce/data/tsi-data/). Adding up the daily values from equinox to equinox (excluding the day of the equinox since the Sun is neither in the Northern Hemisphere nor the Southern Hemisphere on that day) and dividing by the number of days, we get the following:

21Sep2017-21Mar2017…1333.108 W/m^2…185days
19Mar2017-23Sep2016…1389.951 W/m^2…178 days

21Sep2016-21Mar2016…1333.404 W/m^2…185 days
19Mar2016-24Sep2015…1390.304 W/m^2…178 days

22Sep2015-21Mar2015…1334.06 W/m^2…186 days
19Mar2015-24Sep2014…1390.605 W/m^2…177 days

The numbers aren’t exactly counterbalanced–are they?

Jim
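Multiplying Jim's half-year averages by their day counts shows the totals nearly balancing; the residuals of around 1% or less plausibly reflect the excluded equinox days, the coarse daily averaging, and solar-cycle drift. A quick check using the figures from the table above:

```python
# (perihelion-half avg W/m^2, days, aphelion-half avg W/m^2, days),
# taken from Jim Masterson's equinox-to-equinox table above.
half_years = [
    (1389.951, 178, 1333.108, 185),
    (1390.304, 178, 1333.404, 185),
    (1390.605, 177, 1334.06, 186),
]
ratios = []
for p_avg, p_days, a_avg, a_days in half_years:
    # total energy received in each half-year, as (average flux) x (days)
    ratio = (p_avg * p_days) / (a_avg * a_days)
    ratios.append(ratio)
    print(round(ratio, 4))  # each close to 1
```

So the shorter, brighter perihelion half and the longer, dimmer aphelion half do come out nearly even once duration is accounted for, consistent with Willis's point.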

• LdB says:

You can easily settle the argument … hint: the ISS sees the TSI without an atmosphere.

• LdB says:

Try comparing it to the published TSI from NASA 🙂

• Jim Masterson says:

>>
. . . hint the ISS see the TSI without an atmosphere.
<<

So the ISS with an orbital altitude of between 330 km and 435 km has no atmosphere, but a satellite with an orbital altitude of about 645 km does. That’s good to know LdB.

Jim

• LdB says:

You missed the point 🙂

• “there is no perceptible daily change in temperature that corresponds with that magnitude of change in TSI”
The reason it is imperceptible is that you have nothing to measure it against. It happens every year in exactly the same way. Since perihelion is early Jan, it would make the SH summer hotter than otherwise. But what is otherwise? SH has much more ocean. Summers are different to NH anyway, and the perihelion just adds to the effect, but not in a way we can easily unravel.

And in spring/autumn, the season is changing rapidly because of solar inclination. Orbital distance is a small added effect, but again, there is no easy way to unravel it. Both are part of the seasonal change we observe.

• LT says:

Yes Nick, and that is exactly my point: back-of-the-napkin equations about TSI and its effect on Earth's temperature are a pointless endeavor. TSI at the top of Earth's atmosphere is never a constant. And this simplistic idea that it averages out each year is just unsubstantiated guessing.

• LT says:

And it does not happen every year in exactly the same way: if perihelion occurs during solar max versus solar min, there are differences in TSI and stratospheric chemistry, as well as planetary field strength; there are no constants.

• LT December 18, 2017 at 5:11 pm

TSI at the top of Earth's atmosphere is never a constant. And this simplistic idea that it averages out each year is just unsubstantiated guessing.

No, it’s actually mathematically derivable due to the fact that both insolation and gravity fall off as the square of the distance.

w.

• LT December 18, 2017 at 5:18 pm

And it does not happen every year in exactly the same way: if perihelion occurs during solar max versus solar min, there are differences in TSI and stratospheric chemistry, as well as planetary field strength; there are no constants.

The change in TSI from solar max to solar min is about 0.2 W/m2 / 340 W/m2 = 0.6% … barely enough to measure, lost in the flood.

w.

• The change in TSI from solar max to solar min is about 0.2 W/m2 / 340 W/m2 = 0.6% …
0.06%

• “both insolation and gravity fall off as the square of the distance.”

Precisely. The macro-scale forces of nature seem to do this. The nuclear-scale forces (strong and weak) seem to fall off as Euler's number to the power of the negative distance.

• But in the context of TSI, distance is the distance from the sun. In the context of earth spectrum downwelling IR, distance is altitude.

• LT says:

“LT, the earth is a sphere (divide by 4)…”

Afonzarelli,

Total solar irradiance is measured in watts per square meter. At the peak of this solar cycle there was a maximum of 1361.8 W/m^2 reaching the top of the atmosphere, and currently Earth is receiving 1360.4 at the top of Earth's atmosphere; those numbers are adjusted to 1 AU, and there is no division by 4. On a clear day and at a small zenith angle, about 1000 W/m^2 reach the surface. I have no idea what Willis and Lsvalgaard are talking about. I guess when you cross-plot enough things you start losing touch with reality. The eccentricity of Earth's orbit causes Earth to receive around 3.3% more energy during December and 3.3% less during the month of June.

Hope that helps…
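The ±3.3% figure follows from the inverse-square law and the orbit's eccentricity; a quick check (the 1-AU TSI and eccentricity values are nominal) that also reproduces Jim Masterson's measured extremes upthread:

```python
S0 = 1361.0  # TSI at 1 AU, W/m^2 (approximate solar-cycle value)
e = 0.0167   # eccentricity of Earth's orbit

tsi_perihelion = S0 / (1 - e) ** 2  # early January
tsi_aphelion = S0 / (1 + e) ** 2    # early July

print(round(tsi_perihelion, 1), round(tsi_aphelion, 1))  # ~1407 and ~1317
print(round(100 * (tsi_perihelion / S0 - 1), 1))         # ~+3.4% at perihelion
```

These match the SORCE extremes quoted upthread (1407.47 and 1316.05 W/m^2) to within a watt or two.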

22. I’ve been saying for decades that the IPCC’s assumption of approximate linearity is wrong, and that the supporting theory is the SB Law, which requires emissions to go as T^4 and which must also be true for the steady-state relationship between solar forcing and the surface temperature. The data is absolutely clear, as this scatter plot shows.

The Y axis is temperature in degrees K and the X axis is emissions in W/m^2. Each little dot is the 1-month MEASURED average surface temperature vs. the emissions at TOA for each 2.5-degree slice of the planet. The larger dots are the 3-decade averages (from satellite data) for each slice, and a curve passing through them represents the steady-state absolute and incremental relationship between the surface temperature and the planet’s emissions. Since in the steady state emissions are equal to incident energy, this is also a proxy for the relationship between the surface temperature and variable solar input (forcing). As scatter plots go, this relationship has the tightest distribution around the mean of any pair of climate variables I’ve examined. The next tightest relationship is between the average surface temperature and the average water column. Many other scatter plots of satellite data are here:

The most significant difference between adjacent slices is solar forcing, and since in the steady state solar forcing is equal to planet emissions, delta X is exactly equal to forcing, per the IPCC definition. The deltaT corresponding to this deltaF is given by the slope of the averages; thus the line passing through the averages in the scatter plot is a proxy for the sensitivity as a function of temperature.

Also shown are plots of the SB Law. The black line is the SB Law for an ideal BB while the green line is the SB Law for a non ideal BB with an emissivity of 0.62. The slope of this line at the average temperature of the planet is about 0.3C per W/m^2 and less than the 0.4C per W/m^2 at the low end of the IPCC’s estimate.

The emissivity of 0.62 is not an arbitrarily fitted constant, but is the measured ratio between average planet emissions (240 W/m^2) and average surface emissions (390 W/m^2 @ 288K). It should be undeniable to all that the atmosphere takes a nearly ideal BB surface and makes it look like a non-ideal BB from space (also called a gray body); moreover, there’s nothing else it could look like unless the planet ignores first-principles physics! It’s compelling when the laws of physics align with the data; unfortunately, this alignment is not aligned with the IPCC’s narrative.

While the slope of this relationship is approximately linear over a small range of T, it’s not linear through the origin as the IPCC assumes. This is how they get their sensitivity, which is represented as the blue line, plotted to the same scale as the measured data: a line drawn from the intersection of average surface emissions and average surface temperature through the origin.

The magenta line represents the slope of the measured relationship between power albedo solar input and the surface temperature which approaches the sensitivity of an ideal BB at the surface temperature. Note that the T^4 relationship is unconditionally independent of the equivalent emissivity.

• Not ‘power albedo’, but ‘post albedo’.

• duwayne says:

co2isnotevil, I’m not an expert, but your analysis makes sense to me. Does the lack of response from other posters indicate concurrence?

• ” … indicate concurrence?”

Either that or a lack of understanding of what this means. The way that climate science has been framed by the IPCC and its self-serving consensus is flawed at the core, and the junk science arising from it has contaminated the thinking of many, including many skeptics, misleading them away from the obvious truth. I can’t think of anything more intuitively, theoretically and practically obvious than the macroscopic average behavior of the planet obeying the macroscopic laws of physics.

My analysis measures the average relationships between variables extracted from satellite data and then offers an explanation for how the measured relationships arise by conforming to the known laws of physics. The key result of this work is the scatter plot I showed earlier which is repeatable evidence for the smoking gun that falsifies the entire range of climate sensitivity presumed by the IPCC.

23. Hokey Schtick says:

Voilà, not voilá.

24. Thanks, fixed. Haven’t spoken French at work in thirty years, it fades …

w.

25. These random or semi-random dot diagrams are worthless. No way does a straight line represent them. Do not try to pass off such random data as science. Arno Arrak

• Arno Arrak December 18, 2017 at 2:47 pm

These random or semi-random dot diagrams are worthless.

They are called “scatterplots”, and far from being “worthless” they are routinely used to explore correlations.

No way does a straight line represent them. Do not try to pass off such random data as science. Arno Arrak

Hey, don’t bust me, I’m NOT the one that claims that a straight line represents them. That would be the IPCC and the current climate paradigm which insists that temperature is a linear function of forcing … I’m just exploring their claim.

I implore you, try to follow the story a bit better next time before accusing me of trying to “pass off random data as science”, it just makes you look like a noob, not doing your reputation any good …

w.

• Greg says:

Willis, if you want to demonstrate the lack of correlation, a correlation coeff would be a good statistic to provide. Could you provide that for the three graphs?

• Hey, Greg, we’re a full service website. The table is here … It’s a CSV file called “Willis’s Data For Greg.csv”.

Doing this reminds me that although it’s not practical to post my full data and code, because the data is 13GB and the code is thousands of lines and not just user-unfriendly, it’s user-aggressive … I can and will start posting the resulting datasets that are used for the graphs with each post.

Best to you,

w.

26. Bruce of Newcastle says:

What happens in graphs #2, #3 and #4 when you include the time-variant lines between the points?

That is what Spencer and Braswell do (eg see Fig. 3a in the linked PDF of the paper). The regression trend is one thing but the feedback response is quite different, as they demonstrated.

27. Bruce, just tried that. There’s no pattern at all, it just looks like an exploding star …

w.

28. Greg December 18, 2017 at 4:20 pm

Which is the same result as we get for the 1 W/m2 variation over a typical solar cycle: one-tenth of a degree.

With the abysmal correlation of these noisy data, the OLS fitted slope cannot be given any credibility. Mk I eyeballing of the data suggests the slope is about twice that. Typical regression dilution error, caused by ignoring that you do not have an error-free abscissa.

Mmm … I thought the same so I used a “robust” linear analysis which basically ignores outliers … same outcome.

The correlation is abysmal because land and sea, tropics and extra-tropics, do not respond in the same way to radiative forcing. So dumping it all together just makes a muddy brown mess, not a valid analysis.

Maybe that was the point that Willis was trying make.

Not only that, but there is an active response to temperature that varies both place to place and time to time …

w.

• Greg says:

Sorry Willis, probably too late in coming back to this comment. The “robust” method does not help, because it is not the outliers which are the cause of the problem; it is the fact that you have an error-laden abscissa, not a controlled variable. READ the article I linked: it explains the issue in detail and shows the huge errors caused by ignoring that OLS is only valid with minimal x errors, NOT for scatter plots.

If you want ‘robust’ invert axes and then try to work out which value you want to believe 😉

29. Let me note in passing that we’d expect a change of 0.18°C from a 1 W/m2 change (or vice versa) from Stefan-Boltzmann at earth surface temperature … the results above show about half of that. That plus the scatter in all three charts shows that there are other important variables.

w.

30. afonzarelli says:

Remember that TSI only varies by 0.25 W/m^2 during the solar cycle over the surface of the earth (the earth being a sphere). We should actually expect 1 W/m^2 to produce 0.3 °C without feedbacks. A warming of only 0.1 °C per 1 W/m^2 would equal an ECS of just 0.6 °C…

• afonzarelli says:

(Ha! “jinx”, willis! i’m going off the stated 1.1C per doubling of co2; that is, 1.1C divided by 3.7 watts/meter squared)…

• alfonzarelli,

The Stefan-Boltzmann sensitivity is given by 1/(4εσT^3), where ε is the emissivity (between 0 and 1), σ is the SB constant, and T is the temperature in kelvin.

The sensitivity of an ideal BB at 288K is about 0.18 K per W/m^2. The sensitivity of an ideal BB emitting 240 W/m^2 (255K) is about 0.27 K per W/m^2. The sensitivity of a non-ideal BB (gray body) with an emissivity of about (255/288)^4 = 0.61 and whose temperature is 288K is 0.30 K per W/m^2.

The gray body sensitivity of 0.3 represents the path from the surface to space and sets the maximum possible sensitivity as it represents the minimum rate of cooling. The black body sensitivity of the surface at 0.18 represents the minimum as it represents the maximum rate of heating. The actual sensitivity is somewhere in between and most likely closer to the lower limit.

The SB sensitivity is equivalent to the ECS. The TCS will be lower since it takes applying the W/m^2 of forcing for 5 time constants to achieve about 99% of the SB effect. Being applied for 1 time constant would result in about 63% of the final value. This is why the value measured by shorter term changes is less than the ECS value.

The reason I use 2.5 degree slices to slice up data for scatter plots is that the difference in insolation between slices has been the same for many, many time constants, thus the relative behavior of adjacent slice averages is representative of the ECS, while absolute differences between months is more indicative of the TCS.
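The three sensitivities quoted above, and the 63%/99% time-constant fractions, can be checked directly (a sketch using the formula and emissivity values given in the comment):

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def sb_sensitivity(T, emissivity=1.0):
    """dT/dF = 1 / (4 * emissivity * sigma * T^3), in K per W/m^2."""
    return 1.0 / (4.0 * emissivity * SIGMA * T ** 3)

print(round(sb_sensitivity(288.0), 2))        # ideal BB at 288 K  → 0.18
print(round(sb_sensitivity(255.0), 2))        # ideal BB at 255 K  → 0.27
print(round(sb_sensitivity(288.0, 0.61), 2))  # gray body, e=0.61  → 0.3

# Fraction of the equilibrium response reached after n time constants
# of a first-order exponential lag:
print(round(1 - math.exp(-1), 2))  # 1 tau → 0.63
print(round(1 - math.exp(-5), 2))  # 5 tau → 0.99
```

This reproduces the 0.18/0.27/0.30 K per W/m^2 figures and the 63%/99% relaxation fractions stated in the comment.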

31. zemlik says:

Is the idea that radiation arrives from the Sun, hits the atmosphere, some bounces off and some gets through, and of that which gets through, some bounces off the surface, hits the atmosphere and bounces back to the surface?
More CO2 means more bouncing back down to the surface, causing greenhouse warming. But if more CO2 does this, does that not mean that of the radiation that arrives from the Sun, more bounces off into space?

• zemlik, CO2 doesn’t absorb radiation at the frequencies that come from the sun. It absorbs the “thermal infrared” that comes from the earth.

And on my planet, the only stupid questions are the ones you don’t ask.

w.

• zemlik says:

thanks guys

• “Does this not mean that of the radiation that arrives from the sun more bounces off into space?”

No. Absorption of radiation is based on the wavelength. In clear sky (no clouds), the atmosphere is nearly transparent to visible light, which is mostly what we receive from the sun. There is a bit of UV that is absorbed by ozone, but in simple terms almost all of it gets to the surface.

The wavelengths of radiation that a surface emits are based on its temperature. These wavelengths can be found using a Planck curve calculator such as this:
http://www.spectralcalc.com/blackbody_calculator/blackbody.php

So bodies at around the temperatures we see on earth will emit in the infrared range (once a body gets up to about 780 K it will begin to glow, because the wavelengths are beginning to reach the visible range). The atmosphere isn’t transparent to infrared the same way it is to visible light. Some wavelengths, like the ones absorbed by CO2, don’t make it more than a few meters from the ground before being absorbed, while others have a clear shot to space from the ground.

Radiation that is absorbed by a gas will be emitted by the gas at nearly the identical wavelength at which it was absorbed. Emission can happen in any direction. So if we take a large sample (a thin layer 10 meters off the ground, for example) and could see just its emitted radiation, we would see that approximately half is emitted up and the other half down toward the surface.

More CO2 means more opportunities for radiation to be absorbed and stay in the system rather than leaving. And since the rate of incoming energy remains more or less constant while the outgoing has been reduced, the surface will warm. A warmer surface emits more radiation and will thus restore balance, but now at an increased temperature.

This is a very simplistic explanation of the greenhouse effect.
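The wavelength claims above can be illustrated with Wien’s displacement law, a simpler stand-in for the full Planck-curve calculator linked:

```python
WIEN_B = 2897.8  # Wien's displacement constant, um*K

def peak_wavelength_um(T):
    """Wavelength of peak black-body emission, in micrometers."""
    return WIEN_B / T

print(round(peak_wavelength_um(5778.0), 2))  # Sun's surface: ~0.50 um (visible)
print(round(peak_wavelength_um(288.0), 1))   # Earth's surface: ~10.1 um (thermal IR)
print(round(peak_wavelength_um(780.0), 1))   # ~780 K: peak ~3.7 um, with a
                                             # short-wavelength tail reaching visible red
```

Solar radiation peaks in the visible, surface emission peaks deep in the thermal infrared where CO2’s absorption bands lie, which is why CO2 intercepts the outgoing but not the incoming stream.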

• M.W. Plia. says:

Ok Brad, at this site we are all familiar with the greenhouse effect. IMHO too many (on the warm side) infer large climate change from an effect estimated in tenths of a degree. They are most irritating when they insist that extrapolation of short-term trends, along with comparison of proxy to instrumental data, is valid as evidence; it isn’t … they are not being scientific.

The issue is the phenomenon’s magnitude. There is as yet no answer; what say ye?

What I know from my reading is that a doubling of CO2 amounts to 3.7 W/m^2 of radiation being received at the surface. The implications of that are very complicated, and I’m not going to even attempt to get into them because I don’t know the subject well enough.

I would agree the forcing and changes seem minute, and in hindsight of where the planet has been before, they can often seem irrelevant. We’ve been here before; we’ll be fine.

What I see as concerning is the rate at which we are approaching these changes. In the past, CO2 levels may have been higher, but the change was a gradual process that would have allowed for adaptive changes within the ecosystems. Changes are now happening much more quickly, and as such these systems won’t adapt as quickly.

I agree there is not yet an answer. How long do you want to wait to find out, though?

• M.W. Plia. says:

You say: “Changes are happening much quicker and as such these systems won’t adapt as quickly.” So I have to ask: since when?

To compare current rates of change with previous rates of change means you are comparing proxy with instrumental data. And, as we all know, the proxy data lack the temporal resolution required for a valid comparison with the current instrumental record.

When it comes to “man-made global warming” and “alternative energy” my interest is centered on what is and isn’t known. Specifically where the science ends and the supposition begins. It is my observation, when it comes to media, the uncertainties surrounding this issue are not properly addressed.

Regards, M.W.

• M.W. Plia. says:

And Brad, what about natural variability? In millennial (proxy reconstruction) terms we are warming; the recent neoglacial of the current interglacial appears to have ended 150 years ago. This is revealed in the glacial retreats of both hemispheres.

The “Little Ice Age” (on average 2° colder and wetter than today) began over 700 years ago with the end of the multi-centennial “Medieval Warm Period” (on average 2° warmer and drier). Duration and temperature estimates of these two periods vary. As the LIA tightened its grip, the Viking settlements of the North Atlantic became inhospitable. Sea levels lowered, ice floes hindered navigation, crops withered, farm animals died and the Norse went home. The favourable climate from several decadal warming trends during the LIA, and superior sailing skills, may have facilitated the European discovery of and settlement in the Americas. A 70-year period of low sunspot activity, named the “Maunder Minimum”, beginning in 1645 coincides with the LIA’s coldest decades.

Estimates of the lowest LIA sea levels are as much as 2 ft below today’s. Thermal expansion of the ocean’s water at the MWP’s peak places estimates of sea levels as much as 1 ft higher than today’s. A portion of the 0.8 °C mean temperature increase of the past 150 years is attributed to the recovery (which continues to this day) from the LIA. Sublimation from nightly drier air is reducing the planet’s mountain glaciers to their previous “normal” positions. Glacial retreat moves quickly in comparison to glacial advance. Retreating ice is now revealing remnants of previous climate periods.

End moraines of glacier advances prior to the MWP indicate the glaciers of the LIA are the longest of the Holocene. The MWP/LIA is the last of four cycles of minor glacial advance and retreat in the past 6,000 years, perhaps linked to solar activity (sunspot) cycles. The colder “Dark Ages” and the “Roman Warm Period” were the previous cycle. We may now be at or past the start of the warm half of the next millennial cycle, the “Current Warm Period”, which, if like the last, may have legs for another century or two.

I realize our opinions on this topic may differ, I hope I’ve been helpful.

Regards, M.W. Plia.

Yes, so what were the forcings that drove the warming and cooling of the MWP/LIA? Oftentimes people will attribute them to changes in solar activity. Generally speaking, the solar forcing for these periods fluctuated by approx 2 W/m^2 from MWP to LIA.

A doubling of CO2 leads to 3.7 W/m^2. This makes the 2 W that took us from MWP to LIA seem trivial. Of course this is all proxy and conjecture. What do you think to be the forcing for the MWP and LIA?

32. You of course did not invent it but you go along with it. My remarks are really meant for whoever invented this stupidity. If it is the IPCC, as you suspect, it says a whole lot about them. For one thing, it is obvious that they have no working scientist who knows what to do with data. Not all data have meaning, and if you know your field you can instantly spot the difference. Before there were computers, I had to plot spectrochemical data on a variety of pre-printed graph papers. But graphs like yours were never made, because data of that type was obviously useless trash. The whole lot of graphs you show, pretending to show some aspect of science, is nothing but trash and should never have been shown in a scientific article anywhere. I knew how to dispose of it long ago, but they are now elevating that trash pile, trying to resuscitate its denizens, and pretending they are doing science. Arno Arrak

• Arno Arrak December 18, 2017 at 7:14 pm starts out abysmally …

You of course did not invent it but you go along with it.

Arno, this is why you really should follow my request and QUOTE THE WORDS YOU ARE DISCUSSING. I have no idea who “you” is. I have no idea what “it” is. And I’m not going to either research or guess to find out. I stopped after your first sentence.

w.

33. Nope. Didn’t get it, not going to look for it. If you want an answer QUOTE WHAT THE HECK YOU’RE REFERRING TO. You sure you wrote it on this post?

Also, could you dial back on the ad hominem attacks? Where I got my knowledge is immaterial. I’m self-taught in science … so sue me.

w.

(Nathaniel is banned) MOD

34. Nathaniel December 19, 2017 at 12:05 am
(He is banned) MOD

Despite being a dab hand at MIG, TIG, stick, gas, brazing, and underwater welding, I’ve never taken a basic welders course … so what? How much underwater welding have you done?

w.

35. Ozonebust says:

Willis
Have a great Christmas

36. Kip Hansen says:

Willis ==> It is no surprise that ∆T and ∆F are not linearly related. T (temperature) in a property of matter (air in this case) relating to its energy level that can be measured as relative sensible heat (obviously — just to set the stage). Temperature itself is not a “thing” and is not an inherent property like “mass”. Of the mathematical formulas for heat transfer, we can say this:

“The equation [Boltzmann Transport Equation] is a nonlinear integro-differential equation, and the unknown function in the equation is a probability density function in six-dimensional space of a particle velocity and position. The problem of existence and uniqueness of solutions is still not fully resolved, but some recent results are quite promising.”

The key point is that heat transfer in a fluid (atmosphere) is itself a non-linear process and thus it is highly unlikely that global average 2-meter-above-surface air temperature would be linearly related to TOA radiation. Way too many other energy interactions, themselves mostly non-linear, in between.
Good to have the point nailed down, though. Thank you.

37. The fundamental and to me incorrect assumption at the core of the modern view of climate is that changes in temperature are a linear function of changes in forcing.

I have not seen that in my readings. A citation and exact quote would be helpful.

• Thanks, Matthew. Here are a few of an uncountable number. Let’s start with the IPCC, WGI, Chapter 8, p 664

The assumed relation between a sustained RF and the equilibrium global mean surface temperature response (∆T) is ∆T = λRF where λ is the climate sensitivity parameter.

As you might expect, this definition is echoed all over the web.

Climate sensitivity: λ = ΔT_eq / ΔF_atmos
University of Washington Atmospheric Sciences

Let’s estimate ECS for our zero-dimensional model. We know that the warming for any given radiative forcing R is

Temperature change = climate sensitivity x forcing

ΔT = λ ∆Q
Jagiellonian University Poland

λ = ΔT / ΔF

it captures the temperature response to a change in forcing.
Steven Mosher

The predicted change in the average planetary surface temperature is

ΔT ≈ [0.3 K·(W·m⁻²)⁻¹] × (2.2 W·m⁻²) ≈ 0.7 K
American Chemical Society

When radiative forcing F is applied to the TOA, the energy budget equation with the net TOA radiation, called N, may be written in the simplest form as
N=F−λΔT
where λ is termed the climate feedback parameter and represents how much energy is lost to space in accordance with the unit increase of the global mean surface temperature T (e.g., Gregory et al. 2015;
Progress in Earth and Planetary Science

Note that the units of λ are K·(W·m⁻²)⁻¹. If the sensitivity parameter λ_R is known, we can estimate the global mean surface temperature change expected from a given forcing:
∆T_s = λ_R ∆Q

Cal Tech

The climate sensitivity parameter, λ (pronounce “lambda”), is more general and isn’t necessarily connected with CO2. It is the change of the near-surface air temperature, ΔT, that you obtain from a unit change of the radiative forcing, RF – which is the net irradiance measured at the tropopause (the boundary between troposphere and stratosphere, i.e. the upper boundary of the atmosphere where “weather occurs”):
ΔT = λ · RF
Lubos Motl

Climate sensitivity is inversely related to the feedback factor λ:
ΔT_2x = ΔQ_2x / λ
Description, MAGICC program

Gotta say, I’ve been surprised at people asking me for examples of that, they’re everywhere …

Best regards,

w.
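For the record, the arithmetic these quoted definitions describe is a single multiplication. A minimal sketch in Python, using the λ and ΔF values from the American Chemical Society example quoted above:

```python
# The linear relation quoted throughout the thread: delta-T = lambda * delta-F
def delta_t(lam, delta_f):
    """Temperature change (K) for climate sensitivity parameter lam
    in K/(W/m^2) and forcing change delta_f in W/m^2."""
    return lam * delta_f

# Values from the ACS example quoted above
print(round(delta_t(0.3, 2.2), 2))  # 0.66 K, which the quote rounds to ~0.7 K
```

Nothing deeper than that is involved; the dispute in this thread is over whether, and on what time scales, such a single multiplier is meaningful.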

• Willis,
IPCC:
“The assumed relation between a sustained RF and the equilibrium global mean surface temperature response (∆T) is ∆T = λRF where λ is the climate sensitivity parameter.”

Yes. But you say something quite different

“According to this theory, in order to figure out what the change in global temperature will be between now and the year 2050, you just estimate the change in net forcing between now and then, multiply it by the magic number, et voilà—the change in temperature pops out!”

Not a sustained RF, and not an equilibrium response. And then further:
“According to the incorrect paradigm that says that changes in surface temperatures follow the changes in forcing, we should be able to see the relationship between the two in the CERES data—when the TOA forcing takes a big jump, the temperatures should take a big jump as well, and vice-versa.”
This is nothing like the IPCC statement. The point is, as I said above, temperature at any time depends on the forcing history. The definition of ECS is deliberately framed to ensure that there is only one effective item in that history – a step change in the distant past. It is acknowledged that this is very hard to achieve in observation, and even in modelling. But that is what the IPCC definition refers to. It does not mean that you can get the temperature in 2050 by a rough characterisation of the history of 33 years.

All your other examples are referring to this concept of equilibrium climate sensitivity.

• Willis, thank you.

I don’t give much weight to the comments of Steve Mosher, but the others are clearly substantial.

I regret that in my readings I have not made a searchable data base or bibliography. It’s one of my perennial new year’s resolutions.

• matthewrmarler December 20, 2017 at 12:32 am Edit

Willis, thank you.

I don’t give much weight to the comments of Steve Mosher, but the others are clearly substantial.

You’re welcome, Matt. As to Mosh, I have a different view of him than most folks, perhaps because I’ve met him. He’s a very smart guy who has the unfortunate habit of posting cryptic comments which might be true but it’s hard to impossible to tell … as a result, I do pay attention to what he says.

Best regards,

w.

• Greg says:

Willis, IIRC it is the Gregory & Foster 2015 paper which is the only mainstream acknowledgement of the OLS bias problem. They do look at other methods, but hide them in an appendix and don’t refer to them in the abstract.

BTW Nick is correct the dT vs dRad is the long term response and will not be seen in short term data where it is dT/dt which is driven by rad. This is basic physics.

You need to be looking at timescales greater than several “time constants” for the global climate to get the dT dRad relationship.

38. Nick Stokes December 19, 2017 at 4:17 pm

Willis,
IPCC:

“The assumed relation between a sustained RF and the equilibrium global mean surface temperature response (∆T) is ∆T = λRF where λ is the climate sensitivity parameter.”

Yes. But …

Nick, you might have noticed that I’ve pretty much given up discussing things with you. Why? Because in all of the time we’ve interacted you’ve never admitted that you were wrong and someone else is right. You always start out with “Yes, but …” and go from there.

It’s why people gave you your nickname some years ago, “Racehorse” Nick Stokes, after the famous Texas lawyer “Racehorse” Haynes. He famously said:

“Say you sue me because you claim my dog bit you,” he said. “Well now, this is my defense: My dog doesn’t bite. And second, in the alternative, my dog was tied up that night. And third, I don’t believe you really got bit. And fourth, I don’t have a dog.”

That’s the level of your discussion. No matter what anyone says, Racehorse Nick is never, ever wrong, and if he is wrong, he doesn’t have a dog …

Sorry, but your schtick has ground me down. I’ve given up. You’ll have to address your statements about the dog to someone else; I’m tired of them. Don’t worry, though … there are still plenty of people here who I’m sure you can fool.

Just don’t count me among them. Address any future comments to them. You’ve pulled your dog act one too many times, I’m over it. Talk to someone else, there’s a good fellow.

w.

• Mat says:

Willis, this is an odd time to spit the dummy. He has quoted you, been polite, and made a very clear point which shows your argument and logic to be lacking. And your response is to never speak to him again? That says more about you than him.

• Greg says:

It is also an “odd” time to give up because Nick Stokes actually knows his stuff and Willis should be listening and learning and checking who is correct instead of “giving up”.

Seems like Willis is having trouble finding a logical reply to Nick’s point. Saying “I can’t find a reply so I will ignore you” is not impressive. I won’t criticise Nick for not admitting he is wrong until I can show he is wrong. The other reason for not admitting you are wrong is not being wrong.

• Mat, if you think arguing with Nick about what “sustained” means will get you anywhere, I encourage you to go for it. My point is simple. If ∆T = λ∆F holds in some kind of “long run”, then it has to show up, at least partially, when averaged over shorter time scales as well. Yes, you won’t get the same result, but you will get A result.

In this case, I’m looking at 16 YEARS of data. Are you (and Nick) seriously claiming that there will be a response at say 100 years, but NO response at 16 years? Really? I assuredly can’t prove that there would be, but after sixteen years we should see something …

But if you think I’ll discuss that with Nick? Fugeddaboudit. He’ll just tell me “I don’t have a dog”. I’m sorry you don’t like it but the man didn’t get his nickname by accident, and I’m not the man who gave him the nickname. You’re free to discuss things with him. I’ve given it up.

Nor is this particular issue the reason. As I said, and as you guys didn’t seem to notice …

Nick, you might have noticed that I’ve pretty much given up discussing things with you.

Not that “I’m giving up”, that I “pretty much have given up”. I’ve been carefully limiting the topics I engaged with him on to ones where there is a clear answer one way or another … and that’s not true about the question “If we should see an effect in 100 years, will there be an effect in 16 years”? No way to prove that, although I clearly think the answer is “yes” …

Finally, what people seem to ignore in discussing these things is that generally, we’re talking about exponential curves. The exponential response is why it’s supposed to take a hundred years to equilibrate. It’s characteristic of heat transfer.

As an example, suppose you shine a heat lamp on a block of iron. At first, it heats quickly, then over time the heating slows down.

My point is that even with things that don’t reach equilibrium for, say, a century, most of the change happens in the first decades … something which Nick seems to think doesn’t occur, and something I’m unwilling to debate with him—he’ll just say his dog doesn’t bite.
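That exponential approach to equilibrium can be sketched with a one-box energy-balance model, C·dT/dt = ΔF − λT, whose solution after a step change in forcing is T(t) = (ΔF/λ)(1 − e^(−t/τ)) with time constant τ = C/λ. The time constants below are illustrative round numbers, not sourced values:

```python
import math

# One-box energy balance: fraction of the equilibrium warming
# realized t_years after a step change in forcing.
def response_fraction(t_years, tau_years):
    return 1.0 - math.exp(-t_years / tau_years)

# Illustrative (assumed) time constants, in years
for tau in (5, 15, 50):
    print(tau, round(response_fraction(16, tau), 2))
```

Even with an assumed 50-year time constant, roughly a quarter of the eventual response shows up within 16 years of data, which is the point at issue.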

Seriously. I’ve seen Nick clearly shown to be completely wrong by guys smarter than me, with all of them agreeing, and at the end Nick just smiled and said “I don’t have a dog”. People don’t get nicknames by accident … engage with him at your own risk.

I trust this clarifies my position.

w.

• A C Osborn says:

Mr Eschenbach describes himself to a “T” in that comment.
He uses it regularly when he does not have the answers and repetition of his position has failed.
Next will come the put downs, the invective and finally downright insults.
Just read through any of his posts to see the pattern.

39. In order to test whether this difference mattered, the Oxford team re-analyzed Chandra data from close to the black hole at the center of the Perseus cluster taken in 2009. They found something surprising: evidence for a deficit rather than a surplus of X-rays at 3.5 keV. This suggests that something in Perseus is absorbing X-rays at this exact energy. When the researchers simulated the Hitomi spectrum by adding this absorption line to the hot gas’ emission line seen with Chandra and XMM-Newton, they found no evidence in the summed spectrum for either absorption or emission of X-rays at 3.5 keV, consistent with the Hitomi observations.

The challenge is to explain this behavior: detecting absorption of X-ray light when observing the black hole and emission of X-ray light at the same energy when looking at the hot gas at larger angles away from the black hole.

In fact, such behavior is well known to astronomers who study stars and clouds of gas with optical telescopes. Light from a star surrounded by a cloud of gas often shows absorption lines produced when starlight of a specific energy is absorbed by atoms in the gas cloud. The absorption kicks the atoms from a low to a high energy state. The atom quickly drops back to the low energy state with the emission of light of a specific energy, but the light is re-emitted in all directions, producing a net loss of light at the specific energy – an absorption line – in the observed spectrum of the star. In contrast, an observation of a cloud in a direction away from the star would detect only the re-emitted, or fluorescent light at a specific energy, which would show up as an emission line.

I just read this, and thought it might be worth adding to the conversation.
This explains why they see a CO2 spectrum in outgoing IR, even if it isn’t doing anything besides lighting up because it is exposed to 15 µm radiation from condensing water vapor.

40. ferdberple December 18, 2017 at 11:10 pm

a small change in the ocean vertical circulation is all that is required to massively change the climate on a global scale.

given the 800-year deep ocean conveyor turnover, the tidal pumping action due to orbital mechanics, and the wind-induced upwelling from El Niño, there is no reason to believe the ocean turnover rate is constant.

Thanks, ferd. Let’s put some numbers on that. The deep ocean conveyor goes from about 75°N or so to the equator. That’s about … hang on … about 8000 km in 800 years. That means that the currents are moving on the order of 10 km/year, which is about a metre per hour.

By comparison, the Gulf Stream in parts moves about a metre per second.

At that rate, even if the overturning is not constant, it is far too slow to make much of a difference on human timescales …
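The back-of-envelope numbers above are easy to check; a quick sketch (8000 km and 800 years are the round figures from the comment, and ~1 m/s is the Gulf Stream comparison):

```python
# Deep-ocean conveyor: ~8000 km traversed in ~800 years (round figures)
distance_m = 8000e3
hours = 800 * 365.25 * 24

conveyor_m_per_hr = distance_m / hours
print(round(conveyor_m_per_hr, 2))  # ~1.14 m/h -- "about a metre per hour"

# Gulf Stream comparison: ~1 m/s = 3600 m/h
print(round(3600 / conveyor_m_per_hr))  # roughly 3000x faster
```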

My best to you,

w.

41. Because I did not notice any real figures for the emitted radiation fluxes, I attach here my figure, which shows that the emitted radiation flux of the Earth’s surface is essentially linear over a very broad temperature range:

Below is a figure showing a very linear relationship between the radiation forcing at the TOA and the surface temperature change showing that the IPCC simple climate model dT = CSP*RF is relevant:

https://wattsupwiththat.files.wordpress.com/2017/12/temperature-change-versus-olr-change.jpg
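The near-linearity claimed here can be checked directly against the Stefan–Boltzmann law. A quick sketch (unit emissivity assumed) comparing σT⁴ with its tangent-line approximation around 288 K:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def sb_flux(t_kelvin):
    """Blackbody surface emission in W/m^2 (emissivity of 1 assumed)."""
    return SIGMA * t_kelvin**4

# Linearize about T0 = 288 K: F(T) ~ F(T0) + 4*sigma*T0^3 * (T - T0)
T0 = 288.0
slope = 4 * SIGMA * T0**3  # ~5.42 W/m^2 per kelvin

for t in (270, 288, 310):
    exact = sb_flux(t)
    linear = sb_flux(T0) + slope * (t - T0)
    print(t, round(exact, 1), round(linear, 1))
```

Over the 270–310 K range the tangent line stays within about 3% of the T⁴ curve, which is why surface emission looks “essentially linear” over Earth-like temperatures.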

• Greg says:

How did you measure the outgoing IR and the mean global temp last time the COw was 1370? Total BS. You don’t even say what this is. Model? Proxy? WTF

• That too shows surface cooling is well regulated. I thought it was saying something else, which started this reply.
There is a continuous flux out the optical window to space. This morning T zenith was -65°F, air temps were 38°F, having dropped almost 21°F overnight as the clouds went away, though the sky wasn’t very clear. I looked to see if I was missing a good night for astrophotography, and I wasn’t.
On clear nights that cooling slows or stops; while ~30% of the SB flux is going straight to space, those losses are being supplied by sensible heat from condensing water in the atmosphere.
https://i0.wp.com/micro6500blog.files.wordpress.com/2017/09/irandair-sept-7-2017.png

• Oh, but for the same reason the first chart is right, the second is wrong.

42. Greg says:

Willis ,thanks for R^2 figures ,something I requested as soon as I read this.

Could you provide a table of the data used in the scatter plots. I have a couple of things I’d like to look at but don’t have the free time to do all that from the raw data.

regards.

43. I see Nick Stokes is frustrated … so he’s Tweeted that he’s retreating to his blog to explain to the faithful just how right he is.

Nick Stokes‏

moyhu: On Climate sensitivity and nonsense claims about an “IPCC model”. Willis Eschenbach (and others) at WUWT misusing definitions of CS to make a straw man “model”. Impulse response functions and what ECS and TCS really mean.

Now if we could just get some others to follow him over there …

Plus, what’s that about an “IPCC model” and a straw man “model”? I certainly said nothing about models … but Nick’s never been a man to let the facts get in his way.

w.

• Just checked. Not one person here has said one word about an “IPCC model”; Nick made that up out of whole cloth … remember what I said about him and his dog? He steps out the door and he’s already lying …

w.

• Willis, it’s not all about you. My post was about two recent WUWT articles. I said

“The threads in question are at WUWT. One is actually a series by a Dr Antero Ollila, latest here, “On the ‘IPCC climate model’”. The other is by Willis Eschenbach Delta T and Delta F.”

It’s right there in the heading.

Yes, you didn’t refer to it explicitly as a model. You said:
“According to this theory, in order to figure out what the change in global temperature will be between now and the year 2050, you just estimate the change in net forcing between now and then, multiply it by the magic number, et voilà—the change in temperature pops out!”

That sounds functionally the same as Dr Ollila’s “IPCC climate model”. And just as unsourced.

• Nick Stokes December 20, 2017 at 9:12 pm Edit

Willis, it’s not all about you. …

Nick, your tweet opens like this:

moyhu: On Climate sensitivity and nonsense claims about an “IPCC model”. Willis Eschenbach (and others) at WUWT misusing definitions of CS to make a straw man “model”.

Dude, when the first and *only* name mentioned in your tweet is mine, I fear that in the eyes of your readers it IS all about me. Mine is the only name they have to hang your accusations on. You specifically said that I was misusing definitions to make a straw man “model”, when I didn’t say one word about a model.

But yes, I get it, Racehorse. Your dog was tied up all night, and besides, you don’t have a dog. So take your dog back to your blog, where your sycophants can tell you how wonderful you are, and nobody will call you Racehorse.

w.

• Willis, be nice. Poor Nick can’t help himself, his viewpoint is “mangled” by his own bias. And as we know, it’s not possible for Nick to ever be wrong.

Pity him.

• Willis,
I wrote the post and title first. The tweet came later. If I had known that “model” was a sensitive term to you, I would have associated it more specifically with Dr Ollila in the tweet. But WUWT has been running now a series by Dr O about an IPCC model, proclaimed in the title of his latest post, “On the ‘IPCC climate model’”. You are using the same equation, derived from the same IPCC definition. You assign to it the same functionality. I was writing about both. I’m sorry if it seems important to you that I should not also refer to your use as a model. I will try not to do it again.

• Nick Stokes December 21, 2017 at 3:10 am

Willis,
I wrote the post and title first. The tweet came later. If I had known that “model” was a sensitive term to you, I would have associated it more specifically with Dr Ollila in the tweet. But WUWT has been running now a series by Dr O about an IPCC model, proclaimed in the title of his latest post, “On the ‘IPCC climate model’”. You are using the same equation, derived from the same IPCC definition. You assign to it the same functionality. I was writing about both. I’m sorry if it seems important to you that I should not also refer to your use as a model. I will try not to do it again.

Nick, you did your best to rubbish my name on Twitter. Now here you are telling us you did no such thing, your dog doesn’t bite, it’s all about the order in which you wrote the post and the tweet, and besides your dog was tied up the whole time, and it has to do with some series by someone named “Dr. O.” that as far as I know I’ve never read a word of, and in any case, you don’t have a dog …

You might get away with that underhanded attempt at guilt by association on your blog. Here, I’ll call you on it every time.

w.

PS—”Model” is not a “sensitive term” for me, that’s just another of your pathetic attempts at deflection. What I’m sensitive about is you lying by putting words about “IPCC models” in my mouth that I never said.

• catweazle666 says:

Willis, you know the old saying about the relationship between the amount of flak you’re taking and your proximity to the target?

Plus, here’s an instructive piece from the much-missed MemoryVault that might be relevant, as true today as when it was first posted.

https://libertygibbert.com/2010/08/09/dobson-dykes-and-diverse-disputes/

44. Delta T and delta F are all very well, but arguing about them is rather like arguing about how many angels can dance on the head of a pin if the assumptions underlying the calculations are bogus. Consider the following:

Is climate science “settled,” or perhaps “unsettling”? Since 1998, the elevated but essentially flat temperatures of the so-called “global warming hiatus” (and one El Niño event) have shown no correlation whatever with steadily rising atmospheric CO2. This is extremely damaging to the credibility of the once almost universally trusted mechanism of CO2/warming. Despite this inconvenient reality, most of us still cling to this theory, failing to realize that it actually has no hard-data support in the peer-reviewed literature. Feldman, et al., 2015, is often cited as “proof” of the link, but even this “landmark” paper uses correlations and theoretical arguments rather than hard data, which, of course, is scientifically indefensible. A realistic search for an alternative to this long-trusted link that better reflects what is actually happening to global temperature clearly seems warranted. The question is, what mechanism might better account for these actual, real-world observations? First, we might consider when warming has actually occurred.

The only episode of global warming during the past 50 years that can be clearly identified occurred from 1975 to 1998, when global temperatures shot up dramatically by almost a centigrade degree, making this an obvious first place to look for an alternative mechanism. This also happens to be the same period during which anthropogenic CFCs were freely introduced into the atmosphere. This was stopped in the 1990s by the Montreal Protocol, which banned further CFC production because it was found that the chlorine in CFCs was released by photodissociation on polar stratospheric clouds, whereupon it would destroy stratospheric ozone, thus depleting Earth’s protective ozone layer. This depletion, in turn, would permit greater irradiation of Earth’s surface by ionizing solar ultraviolet-B radiation, whose normal ozone-destroying function was taken over by anthropogenic chlorine. Concern at the time was limited to severe sunburn and genetic defects from UV-B, but if this powerful radiation could cause severe sunburn and genetic defects, it could certainly also cause global warming. It’s hardly unreasonable, moreover, to expect that significant climatic effects should have resulted from so large a disruption of such a major part of Earth’s atmospheric system.

Why, then, have we had two decades of elevated temperatures? Simply because most of the chlorine that we introduced to the atmosphere is still up there, and still destroying ozone catalytically, that is, the chlorine is not itself destroyed in the process, but a single chlorine atom can destroy over a hundred thousand ozone molecules in a cyclical process. Hence, the ozone shield is still depleted and likely to remain so for several more decades. Therefore, assuming the foregoing warming mechanism is valid, the so-called “hiatus” should be in effect at least through mid-century.

Why is CO2 not a likely warming agent? Because despite its well-documented absorption of a portion of Earth’s heat radiation, that absorption and emission actually happen within a waveband (13 to 17 microns wavelength) that corresponds to temperatures well below those of Earth’s surface, an important fact unfortunately ignored by climate scientists, and, as is well known, cooler objects (here, CO2) can’t transfer heat to warmer ones (here, Earth’s surface). The fact is that back-radiation from CO2 is simply too weak (it emits only as a line spectrum) and too “cold” to have a significant greenhouse effect in the Earth environment.
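For reference, the temperature figures implicit in that waveband claim follow from Wien’s displacement law (λ_max·T ≈ 2898 µm·K). A quick check of the blackbody peak temperatures for the band edges cited:

```python
# Wien's displacement law: lambda_max * T = b
WIEN_B = 2898.0  # displacement constant, micrometre-kelvins

def peak_temperature(lambda_um):
    """Blackbody temperature (K) whose emission peaks at lambda_um microns."""
    return WIEN_B / lambda_um

for lam in (13, 15, 17):
    print(lam, round(peak_temperature(lam), 1))  # ~223 K, ~193 K, ~170 K
```

Whether those peak temperatures settle the heat-transfer argument is, of course, exactly what this thread is disputing.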