Guest Post by Willis Eschenbach
The fundamental and to me incorrect assumption at the core of the modern view of climate is that changes in temperature are a linear function of changes in forcing. Forcing is defined as the net downwelling radiation at the top of the atmosphere (TOA). According to this theory, in order to figure out what the change in global temperature will be between now and the year 2050, you just estimate the change in net forcing between now and then, multiply it by the magic number, et voilà—the change in temperature pops out!
I find this theory very doubtful for a number of reasons. I went over the problems with the mathematics underlying the claim in a post called "The Cold Equations", for those interested. However, I'm not talking theory today; what I want to look at is some empirical data.
The CERES dataset contains measurements of upwelling radiation at the top of the atmosphere. It also has various subsidiary datasets which are calculated from a combination of the CERES data, other satellite data, and ground measurements. These include the upwelling thermal (IR) radiation from the surface. I apply the Stefan-Boltzmann equation to that upwelling IR data in order to calculate the surface temperature. I've checked this data against the HadCRUT surface temperature data, and they agree very closely, with the exception of certain areas around the poles. I ascribe this to the very poor coverage of ground weather stations around the poles, which has forced the ground datasets to infill these areas based on the nearest stations. Even with that polar difference, however, the standard deviation of the difference between the CERES and the HadCRUT monthly data is only 0.08°C, extremely small. The CERES data is more complete than the HadCRUT data, so I use it for the surface temperature.
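For readers who want to try this at home, here is a minimal sketch of that Stefan-Boltzmann inversion. This is not Willis's actual code; the function name, the blackbody default (emissivity = 1), and the example flux are my own illustrative assumptions:

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def lw_up_to_temperature(lw_up_wm2, emissivity=1.0):
    """Invert the Stefan-Boltzmann law, F = eps * sigma * T^4,
    to recover a surface temperature (K) from upwelling LW flux."""
    return (np.asarray(lw_up_wm2) / (emissivity * SIGMA)) ** 0.25

# Example: 390 W/m^2 of upwelling IR corresponds to ~288 K (~15 C)
print(lw_up_to_temperature(390.0))  # -> ~288 K
```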
Now, this lets us compare changes in the net TOA forcing imbalance with the changes in the surface temperature. For this kind of study we need to remove the effects of the seasons. We do this by subtracting the full-dataset average for each month from the data for that month. For each month, this leaves the “anomaly”—how much warmer or colder it is that month compared to the average.
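Here is a minimal sketch of that deseasonalizing step, assuming a monthly series that starts in January; the function name and array layout are illustrative, not the author's actual code:

```python
import numpy as np

def monthly_anomaly(series):
    """Remove the seasonal cycle from a monthly series by subtracting
    the all-years mean of each calendar month (the 'climatology')."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    anomaly = np.empty(n)
    for month in range(12):
        idx = np.arange(month, n, 12)       # every Jan, every Feb, ...
        anomaly[idx] = series[idx] - series[idx].mean()
    return anomaly
```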
For example, here’s the temperature data, with the top panel showing the raw data, the middle panel showing the annually repeated seasonal variations, and the bottom panel showing the “anomaly”, how much warmer or cooler the globe is compared to average.

Figure 1. Raw data, seasonal changes, and anomaly of the CERES surface temperature dataset. Note the upswing at the end from the latest El Niño. The temperature has dropped since, but the CERES data has not been updated past February 2016.
According to the incorrect paradigm that says that changes in surface temperatures follow the changes in forcing, we should be able to see the relationship between the two in the CERES data—when the TOA forcing takes a big jump, the temperatures should take a big jump as well, and vice-versa. However, it turns out that that is not the case:

Figure 2. Changes in TOA radiation (forcing) ∆F versus changes in surface temperature ∆T. Delta (∆) is the standard abbreviation meaning “change in”. In this case they are the month-to-month changes. The background is a hurricane from space. I added it because I got tired of plain old white.
As you can see, in the CERES dataset there is no statistically significant relationship between the changes in TOA forcing ∆F and the changes in surface temperature ∆T. Go figure.
Now, I can already hear some folks thinking something like “But, but, that’s far too short a time period for that small a change to have an effect … I mean, one watt per square metre over a month? The Earth has thermal inertia, it wouldn’t respond to that …”
So let's take a look at a different scatterplot. This time we'll look at the change in total surface energy absorption (shortwave plus longwave) versus the change in temperature.

Figure 3. Changes in surface energy absorption versus changes in surface temperature ∆T.
So the objection that the time span is too short is nullified. A change of one watt per square metre over a month is indeed able to change the surface temperature, by about a tenth of a degree.
Finally, is this just an artifact because we’re using CERES data for both surface temperature and total surface energy absorption? We can check that by repeating the analysis, but this time we’ll use the HadCRUT surface temperature data instead of the CERES data …

Figure 4. As in Figure 3, but this time using HadCRUT surface temperature data.
While, as we'd expect, there are differences between the two surface temperature datasets, in both of them the surface temperature clearly responds to a difference of one watt per square metre over a month.
So we are left at the end of the day with Figure 2, showing that there is no significant relationship between changes in TOA forcing and surface temperatures.
Note that I am NOT claiming that this method can determine the so-called “climate sensitivity”. I am merely pointing out that the CERES data does not show the expected relationship between changes in net TOA radiation imbalance and changes in surface temperature.
Best to all,
w.
As Usual: When you comment, please QUOTE THE EXACT WORDS YOU ARE DISCUSSING so we can all be clear just what you are referring to.
Willis, what about the heat generated by the earth? You don’t have to go very deep for the temperature to become unbearable- this has to represent a significant source (albeit small compared to the sun).
Just check Wikipedia; it is crap on political issues such as GW, but still reliable on hard facts, such as the heat generated by the Earth's core. Not just small: insignificant, and buffered so that it is steady.
Photosynthesis is ~2 orders of magnitude more significant, human-generated heat ~1 order of magnitude, and both are still insignificant.
Geothermal heat flux estimates are poorly constrained, with large errors, but they have consistently increased over time as new data is gathered, and in my opinion they are still grossly underestimated due to ongoing inherent sampling bias.
Still lacking is a good comprehension of hydrothermal fluid circulation volumes through vent fields on the abyssal plain, which were dismissed as merely inconsequential in the most-cited geothermal heat flux estimates. The estimated number of these vent fields continues to increase significantly as the sea floor is explored, and as many as 400% more than are currently considered could be active.
The nature of fluid flow in the oceanic crust biases the estimate downward. Fluid recharge takes place over a very large area of the sea floor and slightly depresses heat flux over large areas, whereas heat is released through very small hot spots. Finding and estimating the number of these point sources of heat is difficult with random sampling of such a bipolar system, and this biases the estimates downward. It would be like studying total pollution in a system without accurately accounting for point sources.
Maps of heat flux from the Earth show areas of up to 450 mW/m^2 near volcanoes and mid-ocean ridges, and clearly display this bias. Yellowstone is a good example: the park as a whole is estimated to contribute 150 mW/m^2 at the resolution of the models, but in reality local features contribute many orders of magnitude more heat. Yellowstone Lake heat flow alone is up to 30 W/m^2 in spots, the geothermal basins emit in the hundreds of W/m^2, and the steam from geysers is thousands of W/m^2. They may be relatively small in area, but when the point sources are up to thousands of times higher than the background (millions of times for lava), and in the case of the ocean floor actually suppress the background flux, they become significant and complex to model.
Groundwater discharge, especially on continental margins, is also a significant source of heat flow that is currently poorly constrained and will almost certainly increase geothermal heat flux estimates.
I'm not saying that, even as these estimates are made more accurate, they will come close to the energy contribution from Sol; I'm saying the flux is significant enough to be considered, and that it does significantly contribute to the Earth's climate system through ocean circulation.
https://websites.pmc.ucsc.edu/~afisher/CVpubs/pubs/Harris2004_GeolSeamnt.pdf
http://onlinelibrary.wiley.com/doi/10.1029/2008GL036078/full
http://onlinelibrary.wiley.com/doi/10.1029/2004WR003399/full
A small change in the ocean's vertical circulation is all that is required to massively change the climate on a global scale.
Given the 800-year deep-ocean conveyor turnover, the tidal pumping action due to orbital mechanics, and the wind-induced upwelling from El Niño, there is no reason to believe the ocean turnover rate is constant.
It should be almost constant except in the immediate vicinity of volcanic activity … and that should average out to almost constant as well.
Geographically variable but constant over periods of a million years or so
Just curious as to why the CERES data hasn’t been updated since Feb 2016?
It has been updated since then, up to February 2017 for the surface data, July 2017 for the latest version of the TOA data. Maybe Willis hasn’t checked for a while. It’s never been a monthly update thing.
rbabcock December 18, 2017 at 5:52 am
paulski0 December 18, 2017 at 7:34 am
Good question, good answer. It was updated a month or so ago, but I’ve been adventuring in the Solos on a slow connection. I’ll update when I return.
Thanks,
w.
Willis your ‘adventuring’ link didn’t work. Curious as to whether you’ve been through Marovo lagoon. I stayed at Matikuri Lodge a few years ago, beautiful spot.
I presume this is it: https://wattsupwiththat.com/2017/11/29/wanderlust/
Thanks, rbabcock, I fixed the link. It's an oddity of WordPress that it's hard to link to another WordPress blog. You have to specifically include the https:// …
w.
The weakness of these graphs and this demonstration is that temperature lags forcing, by ~2 hours on a daily scale up to ~1-2 months on a yearly scale.
Disregarding this will produce a scattered plot even where there is a perfectly aligned, but lagged, causation.
For practical example: search “Lissajous curve”
Bottom line: this proves that some lag exists, which we already know.
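For the curious, a tiny synthetic illustration of the Lissajous point: two identical 12-month sinusoids, one lagged by two months, have their same-month correlation knocked down, and realigning them recovers it. All numbers here are invented for illustration:

```python
import numpy as np

t = np.arange(0, 120)                        # 120 "months"
forcing = np.sin(2 * np.pi * t / 12)         # a perfectly periodic driver
response = np.sin(2 * np.pi * (t - 2) / 12)  # the same signal, lagged 2 months

# Same-month correlation is weakened by the lag ...
print(np.corrcoef(forcing, response)[0, 1])           # ~0.5
# ... but re-aligning the series recovers it exactly.
print(np.corrcoef(forcing[:-2], response[2:])[0, 1])  # ~1.0
```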
Even if there is a significant lag, shouldn't it be the same for TOA and at the surface? These plots would seem to indicate otherwise. Is there a plausible physical explanation for a big difference in lags?
“Even if there is a significant lag, shouldn’t it be the same for TOA and at the surface? ”
For instance, Figure 2 is ∆T vs ∆TOA on a monthly basis. In the Northern Hemisphere the highest ∆TOA is in ~June (high solar input, not-so-high surface output because temperature hasn't peaked yet), while the highest ∆T is in ~August, when ∆TOA has already dropped back to its ~April value.
So, basically, for each monthly ∆TOA you have two different ∆T values (one with higher insolation = higher input and higher temperature = higher output than the other), and for each monthly ∆T you have two different ∆TOA values (one with higher insolation and the ground reservoir acting as a heat sink, and one with lower insolation but the ground reservoir acting as a heat source).
The physical explanation should be obvious.
It has to do with heat capacities and volumetric mass. The stratosphere has a very low volumetric heat capacity, so responds almost immediately to forcing. The ocean has a comparatively high volumetric heat capacity so takes a long time to respond to a radiative forcing. The 800 year lag between a surface temperature change and CO2 released from the ocean comes to mind. The response time to a change in forcing is a composite of time constants for many different objects and will depend on exactly what thermometer you are reading.
Except that cooling at night responds instantly. The water vapor/air pressure/temperature relationship is a hard boundary, and it is active, every night adjusting the output of radiation from condensing water as it liberates the heat of evaporation to local conditions. Much of that radiation goes back into evaporating water, but this "lights up" the lower atmosphere in 15µ IR, plus whatever other water lines are radiating. It stops radiating when air temperatures are higher than dew points, or when it runs out of water vapor to condense. Places with very low dew points are either very cold all the time, or have great ranges in daily temperature because they get very cold at night. That is what the non-condensing GHGs are supposed to keep warm, and they don't: temps fall like a rock at night in the desert.
paqyfelyc December 18, 2017 at 5:54 am
Actually, no. A cross-correlation graph (not shown) indicates that the greatest correlation is with no lag.
w.
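For anyone who wants to reproduce such a check, here is a minimal sketch of a lagged cross-correlation, where dF and dT stand in for the monthly ∆TOA and ∆T anomaly series (hypothetical arrays, not the actual CERES data, and not Willis's code):

```python
import numpy as np

def cross_correlation(dF, dT, max_lag=12):
    """Pearson correlation of dF against dT shifted by 0..max_lag months.
    A peak at lag 0 would mean temperature does not trail the forcing."""
    out = {}
    for lag in range(max_lag + 1):
        if lag == 0:
            out[lag] = np.corrcoef(dF, dT)[0, 1]
        else:
            out[lag] = np.corrcoef(dF[:-lag], dT[lag:])[0, 1]
    return out
```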
OK then, but you SHOULD have shown this; it is important to the demonstration that "in the CERES dataset there is no statistically significant relationship between the changes in TOA forcing ∆F and the changes in surface temperature ∆T".
paqyfelyc December 19, 2017 at 1:57 am
paqyfelyc, first, no matter how many graphs I put into a post, I can guarantee that someone like you will start whining about how I didn’t do their favorite analysis. If I do CEEMD, they want Fourier. If I do CCF, they want ANOVA. If I do ANOVA they want multiple linear regression. And if I do multiple linear regression, they want CEEMD. I don’t write to please the howling mob, there’s no way to do that. I write to explain what I’ve found in the best way I know how … and for that I take endless abuse from charming folks like you.
Second, writing a post like this is a balancing act. If it’s too long or contains too many graphs, some people go “TL;DR” … and if it is short and to the point, other people go “Not enough substance”. There are dozens of other graphs I could have put in here, but those are the ones I chose. Don’t like it? Sue me.
One thing I can guarantee. 99.3% of the whiners are people too lazy to go get the data as I did and do the analysis themselves. If you’re so damned interested in cross-correlation in the CERES dataset, you might consider putting your money where your mouth is. Of course, it will take you a year or so to write all the computer programs to handle the CERES data, but hey, you can do that, right?
Finally, when someone asks politely instead of telling me what I SHOULD do, I’ll often oblige them. For example, A C asked me politely about 24 hours ago if I’d do a cross-correlation, and I did just that and posted it up here.
Unfortunately, you were too busy being aggrieved to pay attention to the thread, so you didn’t see the graph, and now you look like a jerkwagon for not seeing what was in front of your nose, and then trying to bust me for something I already did.
Next time try asking me politely rather than telling me what I SHOULD do. It goes a lot farther.
Tanktainer ship docking by the boat, gotta run …
Regards,
w.
“no matter how many graphs I put into a post, I can guarantee that someone like you will start whining about how I didn’t do their favorite analysis”
You have a point.
Well, maybe not the graph, but at least a sentence stating that you did the analysis to cope with lag, since you did it.
I confess I didn't imagine you had answered, somewhere else in the thread, a question I asked in the second comment, with a graph you described as "not shown" (I guess that was true when you wrote it, and you showed it later).
Anyway.
Don't misunderstand me. I am picky when it comes to demonstrations.
These graphs omitted the most important number.
What was omitted is the R squared value for the linear regression analysis.
This is such a basic requirement; first-year statistical analysis.
Don’t present the results if you don’t have this value.
R squared is nothing more than the percentage of the variance that can be accounted for by the linear correlation. Typically the rho value is what is reported. I don't see any particular reason for reporting rho. If you want, examine the scatterplots, calculate rho, and report it yourself.
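For reference, rho and R² are a one-liner in any stats package; a minimal sketch, with x and y standing in for any two of the plotted series:

```python
import numpy as np

def r_and_r2(x, y):
    """Pearson's rho and R^2 (fraction of variance explained
    by the linear fit) for two equal-length series."""
    rho = np.corrcoef(x, y)[0, 1]
    return rho, rho ** 2
```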
You would flunk a first-year exam in statistical analysis. Sorry, don't plot a linear fit if you don't have R squared.
R² is required when you claim a fit to a linear regression; not so much when, as here, it is obvious there is no fit.
I would agree that an R2 value would be more useful than a “p” value, but that’s a matter of preference. Willis has always been idiosyncratic.
You could ask for the source of the graph and do it yourself if it’s bothering you. From eyeballing it, it looks somewhere in the neighborhood of an R2 of 0.4.
To Ben of Houston:
Your eyeballing is very good and this value of 0.4 indicates that there is NO relationship.
No, 0.4 is actually a pretty good relationship for something in a complicated environment. At my job, some process variables have a correlation of 0.3 or even less with their control due to the sheer number of different confounding variables in the mix. This doesn’t mean there’s no relationship; it means that there is a lot of noise.
In fact graphs 2 and 3 have a fairly clear relationship that can be seen easily. Are you meaning graph 1? That one has something in the neighborhood of “0”.
rd50 December 18, 2017 at 6:08 am
rd50, there is no need to be nasty; it just makes you look petulant. I wrote this at night in a thunderstorm after climbing both masts on the boat. I posted it despite poor internet here in the Solomon Islands; it took about an hour. So sue me. I'm sorry it doesn't meet your high standards, but next time dispense with the insults and just ask. I'm happy to answer questions if you're not all pissy about it.
I sometimes forget about R^2 because it’s generally obvious to me from the scatter. In any case:
R^2, Figure 3, 0.35
R^2, Figure 4, 0.25
Happy now?
w.
PS—Internet has not improved, so if I’m slow to answer …
Happy now? The answer is yes and there is NO correlation.
RD50,
Judging correlation based solely on a fixed R^2 threshold is not a good idea. So much depends on the process(es) you are measuring and the quality of the data. In this case, perhaps it is enough to show that the correlations in figures 2 and 3 are much better than in figure 1 to make the case that something is amiss with the forcing theory.
To Paul Penrose.
So now you want to fudge basic statistical analysis.
Go ahead. Pure nonsense.
Look at the numbers: 0.35 and 0.25.
These numbers were omitted.
RD50,
I don’t want to “fudge” anything, but the fact is, there is no “standard” R^2 value that one can use in every case to judge if a trend is meaningful or not, which is what you are implying. Just applying simple rules without understanding the underlying data and purpose of the analysis is voodoo, not statistics.
If it was so obvious to you that there was NO correlation, why did you add the straight line to what was, and still is, nothing else but a scatterplot?
rd50 December 19, 2017 at 4:56 pm
Quote what I said or go away. I have no idea where you think I said something, nor am I going to guess.
w.
In mathematical simulations, the common approach is to use a linear approximation. The linear equation is the simplest equation to fit to a situation, but it is highly unlikely to be a perfect fit; almost all situations are non-linear. Many years ago I was quite surprised, in learning about photographic emulsions, to learn that the response was generally non-linear. In fact, great effort was invested in isolating the portion of the response which WAS linear. This was of particular interest in photographing images of the sky, which is populated by very bright objects on a dark background. And that led me into the non-linearity of our senses, as in the example of our hearing (which is logarithmic).
Some of the most interesting mathematical simulations are the logistic approximations, the “S-curve”, which models the saturation of a market with a new product.
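For concreteness, a minimal sketch of that S-curve; the parameter names are generic, not tied to any particular market model:

```python
import numpy as np

def logistic(t, carrying_capacity=1.0, growth_rate=1.0, midpoint=0.0):
    """The S-curve: slow start, rapid middle, saturation at the top."""
    return carrying_capacity / (1.0 + np.exp(-growth_rate * (t - midpoint)))

t = np.linspace(-6, 6, 13)
print(np.round(logistic(t), 3))  # rises from ~0 toward 1, steepest at t=0
```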
Also of interest is feedback, which is used to explain how an electric motor stalls under an excess load.
I strongly suspect that there is a lot of feedback in our atmosphere. The existence of feedback is strongly suggested by the persistence of life through some remarkable changes in climate, from ice ages to swamp forests. Feedback can most easily be compared to the constant engine speed on a garden tractor. Increased load opens the carburetor, decreased load closes the carburetor, the engine maintains a constant RPM. No throttle is used. Such feedback loops must govern the water vapor (and other greenhouse gas) content of the atmosphere. I don’t know the details; I suspect that no one knows all the parts of the feedback mechanism.
There is, of course, some lag in the response. You can feel this in a tractor. You can see it when using an electric drill. It is for reasons of feedback that I am suspicious of forcing. I believe the forcing is in the opposite direction; the Greenland ice records show a lag of up to 1000 years between temperature change and carbon dioxide concentration. And as for cloud cover: where is the model for that?
I do. And there is a negative feedback from water vapor to increasing non-condensing GHG’s.
https://micro6500blog.wordpress.com/2016/12/01/observational-evidence-for-a-nonlinear-night-time-cooling-mechanism/
WV controls the cooling rate, which is why early in the evening it cools at multiple degrees per hour, while by dawn cooling has slowed significantly under clear skies.
Obviously, radiative nighttime cooling will never be able to drive the temperature below the dew point, so as the temperature approaches the dew point for a given humidity (which tends to happen around dawn), cooling slows dramatically as the latent heat of vaporization is released. This is all quite apparent if you look at diurnal graphs plotting both temperature and dew point together. The dew point forms an absolute floor for air temperature; and while it’s easy for daytime temperatures to rise above the floor, it’s much harder for nighttime temperatures to push the floor lower on their way down.
“Obviously, radiative nighttime cooling will never be able to drive the temperature below the dew point, so as the temperature approaches the dew point for a given humidity (which tends to happen around dawn), cooling slows dramatically as the latent heat of vaporization is released. This is all quite apparent if you look at diurnal graphs plotting both temperature and dew point together. The dew point forms an absolute floor for air temperature; and while it’s easy for daytime temperatures to rise above the floor, it’s much harder for nighttime temperatures to push the floor lower on their way down.”
And because of this, it is a regulator, one that has no dependence on CO2.
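As a toy illustration of the claimed floor effect (this is only the shape of the argument, not real radiative physics; all numbers are invented):

```python
import numpy as np

def night_cooling(T0=25.0, dew_point=10.0, hours=12, k=0.25):
    """Toy model only: cooling whose rate collapses as the air
    temperature approaches the dew point, where condensation
    releases latent heat. Shows the claimed 'floor', nothing more."""
    T = T0
    track = [T]
    for _ in range(hours):
        T -= k * (T - dew_point)   # cooling slows as T -> dew point
        track.append(T)
    return np.round(track, 1)

print(night_cooling())  # fast early cooling, then a floor near 10 C
```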
”Obviously, radiative nighttime cooling will never be able to drive the temperature below the dew point..”
Never? No. Dew condenses on a surface at that surface's temperature (not the air temperature), so there is no fundamental law supporting this point. Here is a counterexample in which air temperatures can drop below the dew point: air that is rising so rapidly that condensational warming doesn't keep up with adiabatic cooling. Everything takes time, including condensation.
I'm going to guess that would form fog. I see a little depression of the dew point, and of pressure as well; the dew point depression is the fog being burned off. But remember there's quite a column of wet air above the surface; it's a lot of energy.
This also shows up in the dew point gradient with latitude: in general, the further from the sun, the lower it is over land.
”I’m going to guess that would form fog.”
Good guess: if it happens near the surface it forms fog; if well above, it forms cloud bottoms.
The lapse rate of the dew point of a parcel is roughly 1.8°C/km.
Combine that with the dry temperature lapse rate of 9.8°C/km and you get a rule of thumb: clouds form after lifting air a vertical distance of 1/8 km per degree of dew point depression at that height. This was figured out long ago, in an 1841 paper by the meteorologist J. Espy, who wrote that the bases of all clouds formed by the cold of diminished pressure will be as many hundreds of yards high as the dew point in degrees is below the temperature of the air at that time. Today, replace Espy's 100 yards with 75 yards.
Clouds form on ascent when the temperature of a parcel happens to decrease more rapidly than its dew point.
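That rule of thumb reduces to a one-line estimate of the cloud base; a minimal sketch using the standard lapse-rate values quoted above (the example numbers are mine):

```python
def cloud_base_km(temp_c, dew_point_c):
    """Rule-of-thumb lifting condensation level: a rising parcel cools
    at ~9.8 C/km while its dew point falls at ~1.8 C/km, so the two
    meet after (T - Td) / (9.8 - 1.8) km of ascent."""
    return (temp_c - dew_point_c) / 8.0

# 8 C of dew point depression -> cloud bases near 1 km
print(cloud_base_km(20.0, 12.0))  # -> 1.0
```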
And I figured out that stratocumulus clouds form where warm, humid tropospheric air pushes into a cold (dry?) layer; that would be an illustration of this effect.
My experience with modeling dynamic systems is that even with all the parameters being linear, the interlinked feedbacks result in a highly non-linear output. Earth is even more complicated because of the abrupt phase changes of water, at discrete temperatures, absorbing and releasing large amounts of energy.
"The CERES data is more complete than the HadCRUT data, so I use it for the surface temperature."
I always wonder what people mean when they say "surface temperature." Think about what things look like from 20,000 miles up and in cross-section. I've started to think of the earth's surface temperature as the temperature of the ocean abyss, and the surface air temperature as something evanescent. The ocean compared to the atmosphere is something like 350 to 1, and the latest refrain of the Jeremiahs is "the missing heat is in the ocean". I wonder how much energy it takes to refrigerate a whole ocean down to something like 4 Celsius.
That's because water vapor is regulating surface temps, which is what makes them not correlate with forcing.
And 1 W/m^2 is of course significant, even daily. You can even see the effect as the length of day changes by seconds and minutes per day.
Here's about 50 years of North America's daily average change in temp.
As you can see, there is a very strong signal relating the daily delta T to the changing length of day.
Many years ago on another forum there was a discussion on whether the temperature series was a "random walk", and one poster in the USA had a lot of local weather station data, including pressure and moisture.
His correlations between those and temperature were far better than CO2's, and he insisted that what you say is correct.
Added to that of course is cloud cover, which has already been shown to have a major effect.
Along with the temperature build-up lag as specified by paqyfelyc.
Can the data be offset by 2 months to see what happens?
“Can the data be offset by 2 months to see what happens?”
Be more specific?
I've stopped working on my surface data code. I just haven't had the time, and with the discovery that it's regulated, it's really irrelevant.
Except this.
It shows min T follows dew point, which means it can't follow CO2.
Micro, sorry for not being more specific; my question was regarding Mr Eschenbach's data, as he is not compensating for lag at all.
Your quote about dew point and temp is basically what that guy said 10 years ago in this thread:
https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/#comment-1216
It was Tim Curtin who said it and presented data as well.
Yep, looks like Tim figured it out as well.
A C Osborn December 18, 2017 at 8:31 am
Sure. Here’s the cross-correlation graph of ∆ TOA vs ∆ surface temp:
w.
I think the lags are anywhere between 0 months and 4 months.
Why 0 months? Well, we are talking about energy moving through either molecules or a free path back to space. Molecules only hold onto energy on a time-frame of hours (the max I have ever calculated is 44 hours), but 2 or 3 hours to 12 hours seems like the real max. If the energy is going straight out to space, it takes less than 1 second at the speed of light in the atmosphere.
Why 4 months? Well, we see this type of lag in two different scenarios.
First, we have a lag with respect to the ENSO which varies between 2 and 4 months, with 3 months being the most common. There is more than forcing involved here, because we need water to move 1000s of kms, then clouds to form systematically over a month or so, and then we need atmospheric circulation to move the extra/less-than-normal energy to the rest of the planet. It is not the same as forcing, but similar in some manner. Sometimes the ENSO lag is just 2 months and sometimes 4 months, but most often 3 months.
Secondly, we have the seasonal lags. The surface temperatures lag behind the solar forcing by about 30 days. Some places on the planet are a little less than this and some a little more but a good round number is 30 days (1 month). And then we have oceans and water bodies. They have a longer lag which is most often about 30 days to 80 days. 1 month to about 2.5 months.
So Willis just needs to use these different lags, 0 lag, 12 hours, 1 month, 2 months, 3 months and 4 months. That covers off the experience of RealEarth(tm).
800 years for the deep ocean is also worth thinking about but if global warming takes 800 years, well something else will probably happen in the interim and one could ignore it.
There are NO labels on the graph, and so I do NOT know, for sure, what I am looking at, micro.
Labels, please. Thanks.
Basically, I get it — water regulates temp — who knew? (^_^) [sarc]
… AND water regulates temp in such a way that CO2-heating is a non-issue, if an issue at all.
“AND water regulates temp in such a way that CO2-heating is a non-issue, if an issue at all.”
Yep, I figured it out a year ago on my birthday.
The key was realizing the switch in cooling rate was temp dependent because it’s tied to air temps and dew point temps.
I just needed to find the data from Australia where they recorded net radiation. They missed the correlation because they looked at absolute humidity and relative humidity independently, and the relationship is nonlinear; correlations are for linear functions, so of course they missed it. I remembered from the same type of work with semiconductors that you can't do that, and from my observations I knew cooling rates changed with RH, and that the sky was always 80°F to over 100°F colder than the air temperature. But I didn't have net radiation. I laughed when I put it all together.
https://micro6500blog.wordpress.com/2016/12/01/observational-evidence-for-a-nonlinear-night-time-cooling-mechanism/
Y axis is delta Tmin/day × 100, X axis is month of the year. Signals are listed across the bottom by year of collected surface station data.
BTW, I have made all of the data I produce available on SourceForge: by area per day, area by year, plus insolation and enthalpy for each, along with this type of seasonal slope analysis, including comparing a known change in temp with a known insolation. All based on the Air Force's dataset.
Willis, gender-ambiguous Nuttercelli is attempting to "debunk" cloud feedback research over at the Guardian today. This may interest you; though he does not name you, it seems you are part of the target group.
https://www.theguardian.com/environment/climate-consensus-97-per-cent/2017/dec/18/scientists-have-beaten-down-the-best-climate-denial-argument
Sadly he does not know squat about what clouds are:
Great start for a “debunking” effort.
He is also full of unsubstantiated waffle about positive feedbacks in the Arctic, despite sea ice extent for the last two years being indistinguishable from what it was a decade ago in 2007.
He thinks that studies that rely on observational estimations of ECS are “cherry-picking” the use of that method. He seems to think “other methods” ( other than observations ) are being “ignored”.
He's an idiot. Open arctic water is a net cooling system. When they talk about the effects of albedo, they always put the Sun directly overhead. It is never overhead in the arctic (or antarctic); and then only for a month or two, for about 6 hours a day around solar noon. The rest of the day, that open water is radiating to a -60 or -80°F sky. As long as it isn't cloudy, the surface always radiates to space!
He's not an idiot, he is a liar. He knows it's BS but is happy to publish this kind of crap in the complicit lefty Guardian because it's "for the cause". He is an activist zealot masquerading as a scientist.
Shame he does not know what clouds are made of before he starts trying to lecture everyone about their effects.
Have to ask Greg, have you ever read one of Dana’s ‘articles’ before?
micro6500,
And as I have pointed out previously in another post ( https://wattsupwiththat.com/2016/09/12/why-albedo-is-the-wrong-measure-of-reflectivity-for-modeling-climate/ ), just because the water looks dark from most viewing positions doesn’t mean it is absorbing all the incident light. It is just that the water reflects specularly according to Fresnel’s equation, instead of diffusely like snow.
JonA December 18, 2017 at 9:35 am
"Have to ask Greg, have you ever read one of Dana's 'articles' before?"
Yes, and it almost always results in a letter of complaint to the readers' editor about non-factual BS being presented as science, and how that undermines the credibility of their title for all other reporting. You either have integrity or you don't. Once credibility is lost, it takes years to regain what you can lose in minutes.
The Guardian used to be a top-quality UK paper; now it is nothing but an online campaign platform.
That post was good! Thanks.
Why assume it is linear…other than general human laziness and the simplicity of linear “curve” fitting and comparisons?
Why assume only 1 factor instead of 3 or 10?
Why assume zero elasticity?
How would we determine lags?
It's not just the line fitting which motivates linearisation of everything. If you can't make linear assumptions, you cannot make climate models, because the maths is intractable.
Thus even if it is well known that a system is non-linear, it is possible to do at least something with it if you can approximate the non-linear behaviour as linear over a small range of study.
Then there is a whole other problem: the fashion for fitting 'trends' to everything, even when there is no reason to think the data may be linear, approximately linear, or anything else, because you have not even thought about why you want to fit a straight line, except that OLS is the only hammer you know how to use.
… well, you don't actually know how to use it correctly, but you are ignorant enough not to realise that you don't know how to use it. Heck, there's a button in Excel; you know how to do that!
The CAGW crowd has to assume a linear projection of every trend (Arctic sea ice mass or volume, forest fires, polar bear numbers, penguin wings, whale poops, tree ring temperatures between 1100 and 1970, the "recorded" daily temperatures since 1910, since 1915, since 1945, since 1970, since 1998 …) because their only authorized driver for climate is CO2 levels, and over the short term, CO2 levels are increasing near-constantly. Therefore, they MUST relate EVERYTHING mentally to that near-linear increase in CO2.
mib8 December 18, 2017 at 7:08 am
Hey, don't look at me, I'm not the one who assumed it was linear.
I say that the basic assumption, that temperature follows TOA forcing, is wrong whether you claim it’s linear or parabolic …
w.
>>
. . . wrong whether you claim it’s linear or parabolic …
<<
How about chaotic with an unknown strange attractor or attractors?
Jim
Emissivity is the rock on which the ship of CAGW founders.
What is sweetest is that it is IR emission by CO2 itself that annihilates CO2 warming.
Increasing the atmospheric concentration of CO2 might raise the ERL (equilibrium radiative level) to a higher altitude, where it is colder and thus the IR emission is less, everything else being equal.
But it is not equal. The air's emissivity of IR is increased by the same CO2. This cancels the effect of the higher altitude of the ERL and lower temperature, so the net result is no change.
Ilya Prigogine's nonlinear thermodynamics dictates that in a complex, open, dissipative heat engine like the atmosphere, small perturbations, such as the increase of a trace gas like CO2, will simply result in a rearrangement of emergent dissipative structures, negating any change to global parameters such as "temperature".
In 1954, Hoyt C. Hottel conducted an experiment to determine the total emissivity/absorptivity of carbon dioxide and water vapor [11]. From his experiments, he found that carbon dioxide has a total emissivity of almost zero below a temperature of 33°C (306 K) in combination with a partial pressure of the carbon dioxide of 0.6096 atm cm. 17 years later, B. Leckner repeated Hottel's experiment and corrected the graphs plotted by Hottel [12]. However, the results of Hottel were verified and Leckner found the same extremely insignificant emissivity of the carbon dioxide below 33 °C (306 K) of temperature and 0.6096 atm cm of partial pressure. Hottel's and Leckner's graphs show a total emissivity of carbon dioxide of zero under those conditions.
http://www.biocab.org/Overlapping_Absorption_Bands.pdf
Hottel charts are used in engineering especially for combustion chambers if you have mixed gases with WV and CO2. CO2 always reduces the emissivity of straight WV.
” However, the results of Hottel were verified and Leckner found the same extremely insignificant emissivity of the carbon dioxide below 33 °C (306 K) of temperature and 0.6096 atm cm of partial pressure.”
Again, just an elementary misreading of basic engineering charts.
If CO2 emissivity is zero below 33C then why are we talking about CO2 at all in regard to climate and temperature?
One must be careful to distinguish the emissivity of the entire atmosphere from the emissivity of CO2 alone. Modtran is very helpful here. The atmosphere radiates at the Planck curve up to 300 meters; you begin to see a dent in the curve in the CO2 bands at 400 meters. Accordingly, at 300 m, when you bump CO2 from 400 to 800 ppm, there is NO change in the upward flux.
Despite the small dent, there is NO change in the upward flux between 300 and 400 meters altitude at 400 ppm, but at 800 ppm you lose 1.88 W/m2 and the dent increases.
So, at 300 meters, the emissivity of the entire atmosphere is unchanged over the CO2 400-800 ppm range, but at 400 meters it is changed. Unless water, which overlaps, is causing this dent in the CO2 bands without showing up in any of the other water bands, we must say that the emissivity of CO2 has decreased at 400 meters when CO2 is increased to 800 ppm. CO2 is keeping that 1.88 W/m2 and not radiating it up.
The effective radiative level is a can of worms. It is essentially the optical depth of the atmosphere, or the mean free path of escape for each wavelength. It varies by latitude and with everything that affects optical depth. The ERL for the CO2 bands is 12 km (~220 K) in the tropics; 9 km (~230 K) in the subarctic summer. The CO2 concentration is essentially the same in both.
Does increasing CO2 raise or lower the ERL? Got me.
Exactly, ptolemy2. Complex self-organizing dissipative systems. Like Willis's thunderstorms, hurricanes, tornados, et al. Spontaneous and scale-invariant formation whenever potential gradients arise, be they gravity, temperature or any form of energy. The End of Certainty.
Willis, I don't see a great difference in the degree of correlation in any of these graphs. Marginally better in Figure 3, but nothing to write home about. It would be helpful if you gave stats for Figure 2 to allow an objective comparison.
What are the correlation coefficients for the three graphs?
I would also invite the reader to estimate the slope of Figure 2 while trying to ignore the fitted line. It is clearly much steeper than the fitted result. This is a classic example of regression dilution, where least squares fitting gives a spuriously low estimate because there is significant non-linear variability in BOTH datasets.
Plot the data with the axes inverted and you will get a very different answer for the slope. Here is an example of the effect using synthetic data:
https://climategrog.wordpress.com/2014/03/08/on-inappropriate-use-of-ols/
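For readers without access to that link, here is a minimal synthetic demonstration of the regression dilution Greg describes; the noise levels and true slope are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x_true = rng.normal(size=n)
y = 0.5 * x_true + rng.normal(scale=1.0, size=n)   # true slope 0.5
x = x_true + rng.normal(scale=1.0, size=n)         # noise in x too

# OLS of y on x is attenuated by the noise in x ...
slope_yx = np.polyfit(x, y, 1)[0]                  # ~0.25
# ... and OLS of x on y, inverted, overshoots instead.
slope_xy_inv = 1.0 / np.polyfit(y, x, 1)[0]        # ~2.5
print(slope_yx, slope_xy_inv)  # the two fits bracket the true slope of 0.5
```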
In view of your previous article, it may be very interesting to do the scatter plots for the tropics and extra-tropical regions separately. Not only will the answers vary considerably from what you get with the full dataset, I expect that you would get a much better correlation for each subset.
Best regards.
Willis,
“The fundamental and to me incorrect assumption at the core of the modern view of climate is that changes in temperature are a linear function of changes in forcing.”
Well, to quote your exact words,
“please QUOTE THE EXACT WORDS YOU ARE DISCUSSING”
I think you are creating a straw man. Who is assuming that?
I guess it depends on how linear you want it, but I think it’s fair to say that the mainstream view is that climate (when looking at global average scale) should respond approximately linearly to forcing.
The very concept of ECS or TCS assumes linearity, since it is defined as the change for a doubling of CO2: any doubling, not just twice what it is today or twice what it is assumed to have been at some poorly defined "pre-industrial" era.
Greg,
The very concept of ECS or TCS assumes linearity
Basically, yes. Though that is generally understood as a first-order approximation to addressing the issue. It has long been recognised in climate science that GCMs do not actually produce the same amount of warming for CO2 doublings from different levels, and paleo work indicates that CO2 sensitivity is likely to be "state dependent".
Greg,
“The very concept of ECS or TCS assumes linearity”
No it doesn't. It's a derivative estimate. The fact that TCS is not the same as ECS implies a lack of linearity, as does the fact that TCS has to be defined under particular circumstances; one common one is the change after compounding 1% per year over 70 years. If it were linear, there would be just one definition.
Then there is the question – who assumes even that there is a derivative. People try to find one, but acknowledge that they aren’t brilliantly successful.
"Who assumes even that there is a derivative?" Anyone talking about TCS and ECS as relevant and valid concepts (as opposed to people talking about them as irrelevant and wrong concepts).
TCS has as much relevance as the temperature response of an outdoor bowl of water, at the second/minute time scale, when a cloud masks the sun.
ECS has as much relevance as the same bowl's temperature response at the century time scale, when it has been put in a cave (and, actually, dried up soooooooo… long ago).
Nick,
“The fact that TCS is not the same as ECS …”
The reason for your confusion is a consequence of the metric. Defining the sensitivity as an incremental metric expressed as degrees per W/m^2 is intrinsically non-linear because of the T^4 relationship between emissions and temperature, where in the steady state emissions and total forcing are the same. The sensitivity then must have a 1/T^3 dependence on the surface temperature.
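For reference, the 1/T^3 dependence follows directly from Stefan-Boltzmann; treating the surface as a blackbody near 288 K:

$$F = \sigma T^4 \quad\Rightarrow\quad \frac{dT}{dF} = \frac{1}{4\sigma T^3} \approx \frac{1}{4 \times (5.67\times 10^{-8}) \times 288^3} \approx 0.18\ \text{K per W/m}^2$$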
If instead the sensitivity is expressed as the equivalent metric of W/m^2 of surface emissions per W/m^2 of forcing, the relationship is nearly exactly linear, as shown by this scatter plot of the monthly averages of surface emissions vs. post-albedo input power (total forcing). The larger dots represent the average relationship over 3 decades of data. Note that in this case, the incremental and absolute sensitivity are exactly the same.
http://www.palisad.com/co2/sens/pi/se.png
The TCS will be smaller than the ECS only because of a finite time constant different from the integration period. However, the system responds far faster than the IPCC requires, and the distinction between the two is just more noise to add confusion. If not for the excessive obfuscation and misrepresentation found throughout 'consensus' climate science, the controversy would have self-corrected decades ago, the IPCC would never have been formed, and Hansen would have been the last of the alarmists instead of the first.
BTW, I would like to see you try and make a case for why it makes sense to use a non linear metric of sensitivity (degrees per W/m^2) when an equivalent linear metric (W/m^2 of emissions per W/m^2) is far more representative of how the system actually responds to forcing.
The supposed linearity is in relation to the forcing, not the CO2 in ppmv. ECS: 1% per year for 70 years is an exponential increase resulting in a doubling. TCS is based on an instant doubling.
Since the relation of ppm to forcing is logarithmic, these are not the same. That is the non-linear bit. The assumption that dT to dRad is linear is not contrary to that.
You are straw-manning the alleged straw man.
Greg
“ECS 1% per year for 70 years is an exponential increase resulting in a doubling. TCS is based on an instant doubling.”
It's the other way around. But the log of CO2 is irrelevant; Willis is talking about the relation between ∆T and ∆F. Actually, as I've noted in the next sub-thread, the issue with ECS and TCS is even more radical. A statement
∆T = λ ∆F
could only make sense with equilibrium values. Try to put it into words for any other case. The transient response you might expect to an increase ∆F is a rate of temperature increase, d∆T/dt. That is reflected in the TCR definition. Increase ∆F by regular steps for 70 years, and the ∆T cut off at 70 years is the TCS. If you didn't change ∆F further but followed ∆T, your TCS number would keep going up.
As far as straw-manning goes, I have just asked for an actual quote of someone assuming ∆T = λ ∆F. I don’t believe it is done. But a quote of whatever makes people think they assume that might be quite informative, if carefully read.
“I have just asked for an actual quote of someone assuming ∆T = λ ∆F. I don’t believe it is done.
You don't "believe" it???? What has belief to do with it? This is math, for god's sake, simple math. You just cannot end up with a ∆T = λ ∆F formula without assuming linearity in the first place. Period.
But I guess this blows your mind, so you just return to the same music, like a scratched disc.
YOU are creating a straw man.
I’m asking the question any skeptic should ask. Who assumes that, and what do they say?
You are asking a straw-man question, so as to play word games about vocabulary.
The fact is, the IPCC writes ∆T = CST x ∆F, and your whole miserable argument will be that this is not an explicit assumption but a result, a diagnostic; as if that result weren't the necessary result of all the linearization (and elephant-trunk wiggling, done by fitting hundreds of parameters) in the models, that is, an implicit assumption.
The problem here is that no scientists have any expectation that such a simple proposition as
ΔT = λ*ΔF
is true, yet that is the proposition tested here. What they do entertain is the proposition that ΔT depends on the history of F, possibly linearly by convolution with a response function over time. That is why statements like
ΔT = λ*ΔF
are always associated with a scenario, like
1. ECS, F has a once only increment ΔF, then ΔT is the change when you finally reach equilibrium
or
2. TCS (one variant) F increases linearly over a century by CO2 doubling via 1%/year compounding increases
The TCS definition is still loose, because there will be a dependence on the history before the ramp of F begins.
The reason is that when you define a scenario (history) you can put it into the convolution over time to get an unambiguous output. None of these propositions can be tested just by matching instantaneous values of T and F.
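A minimal sketch of the "linear by convolution with a response function" idea Nick describes; the exponential kernel shape and the tau and lam values are illustrative assumptions of mine, not anything taken from a GCM:

```python
import numpy as np

def temperature_response(F, tau=5.0, lam=0.8):
    """Linear-in-history model: temperature is a convolution of the
    forcing history with a response function, rather than a function
    of the instantaneous forcing. tau in time steps, lam in K per W/m^2."""
    t = np.arange(len(F), dtype=float)
    g = np.exp(-t / tau)
    g = lam * g / g.sum()   # normalized so a sustained step settles at lam
    return np.convolve(F, g)[: len(F)]

F = np.ones(100)            # a sustained 1 W/m^2 forcing step
dT = temperature_response(F)
print(round(dT[0], 3), round(dT[-1], 3))  # ~0.145 at first, ~0.8 at the end
```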
” when you define a scenario (history) you can put it into the convolution over time to get an unambiguous output.”
This is called tuning, and this is precisely the problem with the models. A scenario was defined [the post 1976 warming]. This scenario was “put into the convolution over time”. The output was unambiguous, but wrong.
“This is called tuning, and this is precisely the problem with the models”
I’m not talking about GCMs. I’m talking about the definitions of ECS and TCS. No tuning is involved.
1. ECS, F has a once only increment ΔF, then ΔT is the change when you finally reach equilibrium
or
This is called a derivative, and you need differentiability to begin with. So you need some clue that differentiability is not an unreasonable assumption; so you make a graph like Figure 2, and you observe …
… that it IS unreasonable.
2. TCS (one variant) F increases linearly over a century by CO2 doubling via 1%/year compounding increases
“The TCS definition is still loose, because there will be a dependence on the history before the ramp of F begins.”
Which is a complicated way of acknowledging that the definition simply has no meaning, without explicitly acknowledging it, in order to keep using it nonetheless.
“The reason is that when you define a scenario (history) you can put it into the convolution over time to get an unambiguous output. None of these propositions can be tested just by matching instantaneous values of T and F.”
Let's translate that into simple words, shall we?
"The reason is that when you define a scenario (storytelling) you are in a fantasy land where everything becomes possible and nothing can be proven wrong. None of these propositions can be tested in the real world."
Hell, YES. You just got it.
Nick Stokes December 18, 2017 at 7:24 am
Nick, I not only quoted the exact words underlying that assumption, I gave a mathematical critique of them in the post I linked to above called “The Cold Equations”.
It’s also obvious from the definition of the climate sensitivity, which is:
∆T = λ ∆F
But you knew that …
w.
Willis,
“Nick, I not only quoted the exact words underlying that assumption”
What you quoted was Stephen Schwartz saying:
“The Ansatz of the energy balance model is that dH/dt may be related to the change in GMST [global mean surface temperature] as
dH/dt = C dTs/dt (3)
where C is the pertinent heat capacity.”
Then you looked up what Ansatz meant, and found that it wasn't a whole lot. It isn't an assumption; it's basically a trial guess: we'll see how it works out. Then he goes on to discuss the time scales associated with the Ansatz, and the value of C. H is actually heat content, so there is some fuss aligning that with forcing. In fact, that is where the heat capacity C comes in. It's a function of time scale. So (3) isn't a linear equation. It may work out to be approximately linear in certain circumstances. That's why they specify it subject to the ECS and TCS scenarios (with different numbers in each case).
An analogy is heating a swimming pool. If you turn up the heat, for a while something like (3) will apply (TCS). The temperature rises according to the extra heat and the heat capacity. But eventually it approaches a new stable temperature. That is determined not by capacity but by loss rates (ECS). Eq (3) doesn’t help any more, because the derivatives go to zero. It’s a different regime. And climate scientists are trying to capture both regimes, and the transition.
Willis,
To expand on that, the Schwartz equation that you cited in the earlier link isn’t at all the same. You are now quoting
∆T = λ ∆F
which is the definition of equilibrium sensitivity. But the Schwartz Ansatz is the equivalent of
d(∆T)/dT = λ ∆F
The swimming pool example shows what is happening. Suppose you have a pool in a uniform environment, heated and at steady temperature. Then you increase the burn rate (∆F), and keep it there. The response ∆T will look like this:

It rises at a rate determined by the pool's heat capacity (mass) but settles at a temperature determined by losses. You can get a parameter from each of those (the red line, dT/dt at 0, and the blue, T(5)-T(0)), and they are different. The whole curve basically scales with ∆F, and so do the parameters. The first is like TCR, except they would usually define it as the average ∆T up to t=1, say. The second is ECS. Neither of these is enough on its own, and neither is a model, but if you can pin down both, you have a reasonable approximation of the whole picture.
Sorry, second equation should be d(∆T)/dt = 1/C ∆F
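A minimal numerical sketch of that pool as a first-order response (the parameter values are arbitrary): the initial warming rate is set by the heat capacity, the final level by the losses, matching the TCR-like and ECS-like readings Nick describes:

```python
# First-order response: C dT/dt = dF - T / lam.
dF, lam, C = 1.0, 0.8, 4.0        # forcing step, K/(W/m^2), (W yr/m^2)/K
dt, years = 0.01, 20.0
T, t, track = 0.0, 0.0, []
while t < years:
    T += dt * (dF - T / lam) / C  # Euler step
    t += dt
    track.append(T)
print(round(track[0] / dt, 2))    # initial rate = dF/C = 0.25 K/yr (TCR-like)
print(round(track[-1], 2))        # settles near lam * dF = 0.8 K (ECS-like)
```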
“eventually it approaches a new stable temperature. That is determined not by capacity but by loss rates (ECS).”
No, ECS is determined by the ratio of capacitance (absorptivity), to loss (radiation to space).
“No, ECS is determined by the ratio of capacitance (absorptivity), to loss (radiation to space).”
Absorptivity isn't a capacitance; it's more like a resistance. And steady radiation to space has to be unchanged, matching solar (unless albedo changes). ECS is the rise in temperature needed to overcome the greater resistance to radiating the absorbed solar back to space. A big component of TCS, meanwhile, is due to flux into the ocean: this allows TOA flux to drop, so the surface temperature needed to get the energy emitted is less, until the flux into the ocean tapers off.
Nick/Willis, if that is the logic, there is a massive flaw. Again, deal with the physics: you aren't dealing with anything remotely like heating a pool at some constant rate, or even some strange-shaped curve. The first basic point should be obvious to everyone: there is a 365-day cycle imposed on the heating/cooling as the Earth/Sun distance changes.
Any equilibrium will always have precession; it has to, by the very nature of the setup. Larmor worked that out for the atom in 1897 under classical physics, and even when we fully understood quantum spins, the behaviour held and enhanced the picture. There are countless precessions already known for Earth, from tidal forces, obliquity and beyond.
The question, open to both groups, is why you aren't working through the thermodynamic behavior using precession mathematics, since that is how the equilibrium will work; it won't be anything like linear, or even "tuneable", without the proper form of the mathematics.
I am sort of perplexed by all the discussion; surely everyone realizes what the form of the solution will look like?
If you want to work with your swimming pool example, try putting a 365-day cycle on it, from some min value to some max value with some wave shape. From that, calculate your d(∆T)/dt, which will now have a Larmor-like behaviour embedded in it, and which is what you need if you want to approximate the behaviour somewhat correctly.
Nick Stokes, I really have to commend you. Your contributions are clear, to the point, polite, and usually devastating to the OP. I don’t know how you have the energy.
LDB, you forgot to add to your 365 days:
1. Turn the heating on & off in 12-hour (on average) cycles, with an incremental increase in energy at the start of the heating cycle.
2. add in some cloud cover.
3. add in varying winds from varying direction of varying hot/coldness.
4. Varying atmospheric pressure.
Figures 2, 3, and 4 all look like statistics textbook illustrations for no or low correlation situations. Trying to come up with a model to explain a relationship that isn’t there would be an exercise in postmodern science.
Agreed. Pretty much my point above. Also, the "slope" is a spurious value.
Willis' previous post showed negative correlation in the tropics and positive elsewhere. Mixing all the colours in your paint pot usually ends up producing a liquid-diarrhoea colour.
Land and sea have sensitivities which vary by a factor of two, and temperatures are not additive physical quantities.
If you can't add, you can't take averages, and you can't do a linear regression or any of the other stuff. It's all invalid.
True, but you can convert temperature into a flux, do all the math you want, then convert back to temperature. IIRC it actually increases average temps by ~1.2°F.
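A minimal sketch of that temperature-to-flux-and-back averaging, using the blackbody relation; the example temperatures are invented, so the size of the gap here is illustrative rather than micro6500's ~1.2°F figure:

```python
import numpy as np

SIGMA = 5.670374419e-8  # W/m^2/K^4

def average_via_flux(temps_k):
    """Convert temperatures to blackbody fluxes, average the fluxes,
    and convert back. Because flux goes as T^4, this 'radiative average'
    exceeds the plain mean whenever temperatures vary (Jensen's inequality)."""
    mean_flux = np.mean(SIGMA * np.asarray(temps_k) ** 4)
    return (mean_flux / SIGMA) ** 0.25

temps = np.array([250.0, 270.0, 290.0, 310.0])  # a spread of surface temps
print(np.mean(temps))           # 280.0 K, plain average
print(average_via_flux(temps))  # ~282.6 K, warmer than the plain mean
```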
I am merely pointing out that the CERES data does not show the expected relationship between changes in net TOA radiation imbalance and changes in surface temperature.
There’s a difference between the expected relationship and your expected relationship. You haven’t shown what the equivalent relationship is in climate models.
Well, ∆T = CST x ∆F is in the IPCC report, meaning the IPCC says it is [the equivalent relationship, in climate models], whether it really is or not in the models of the ensemble (some, or all of them).
You may not trust the IPCC, however.
Your choice.
The relationship in that equation isn't what's being investigated by the above scatter plots; if anyone believes it is, they are very much mistaken. TOA flux observations include forcings, but most of the variability is due to feedbacks/atmospheric dynamics. What do you think you get if you do the same test as above using respective climate model data?
“What do you think you get if you do the same test as above using respective climate model data?”
Don't know, and don't care. I have a poor opinion (to say the least) of climate models that are not even able to explain massive climate changes like glacial/interglacial cycles, or smaller ones like the MWP or LIA.
The expression "(climate) model data" is unclear. Data is data; it is real (even though uncertain). A model is a model: virtual. But "model data"?
Einstein once observed: “There are two ways a scientist can get things wrong. First, the Devil leads him by the nose with a false hypothesis. Or second, his thinking is erroneous and sloppy.” CO2 will make the Earth boil and fry, you have my word, scientists say.
A change of one watt per square metre over a month is indeed able to change the surface temperature, by about a tenth of a degree.
Which is the same result as we get for the 1 W/m2 variation over a typical solar cycle: one-tenth of a degree.
Lends credence to your estimate.
“Lends credence to your estimate.”
Sounds like you are both trying to quantify the ratio between a forcing and a change in temperature. As scientists do. But they don’t assume that a ratio in one set of circumstances can be applied in another. That is something that has to be established.
“That is something that has to be established”, indeed. And where exactly do the IPCC modelers do that? What makes them so sure that some ∆T = CST × ∆F relationship applies no matter what? And, even, that some ∆T is bound to happen if some ∆F occurs?
“That is something that has to be established”
And Willis’ graph does just that.
The situations are very similar: in both cases an extra Watt/m2 is applied, so a similar rise in T is expected [and established].
“What makes them so sure that some ∆T = CST × ∆F relationship applies no matter what?”
Nothing. There is no such relationship assumed in GCMs.
“There is no such relationship” EXPLICITLY “assumed in GCMs.”
Math is cruel: you cannot end up with such a relationship if it is not at least implicitly assumed in the first place.
So, what do you say, Nick?
Are GCMs assuming the relationship, despite your denying it?
Or are they not, meaning they don’t end up with the relationship, despite your saying they do?
Where are you wrong?
Your choice is not whether you are wrong or not. You are, one way or the other.
lsvalgaard December 18, 2017 at 8:13 am
“And Willis’ graph does just that.”
No, Willis’ graph does the very opposite: it shows NO correlation “between a forcing and a change in temperature” (well, as I observed above, it isn’t enough to show the absence of correlation, but still).
“you cannot end up with such relationship if it is not implicitly assumed in the first place.”
So did we end up with such a relationship? That seems to be the illogic of these criticisms: look, they get linearity, so they must have assumed linearity; and besides, they didn’t get linearity, so they are wrong.
That is my point here; Leif and Willis get similar ratios in slightly different circumstances, which Leif says confirms something. That’s fine, but it goes against Willis’ contention that someone is wrongly assuming linearity.
Blah blah blah, Nick. More word play; a mote in others’ eyes when there is a beam in yours.
The fact is, the IPCC writes ∆T = CST × ∆F, and whether it is explicitly stated or not, this assumes linearity; this DOES “assume that a ratio in one set of circumstances can be applied in another”. Math does not allow a linear result without a linearity assumption.
How about the 9% change in solar radiation between perihelion in January and aphelion in July? That’s 123 W/m2. That would be 12 degrees? What am I missing?
John Edmondson
1408 W/m^2 on Jan 3–5 at the closest point, 1316 W/m^2 in early July at the furthest distance, so the 123 W/m^2 figure is a high assumption; the actual difference is about 92 W/m^2.
Those who argue in support of Trenberth’s flat-earth approximation also use the greater speed of the earth when closer to the sun, and the slower speed when further away, to claim that the yearly totals over the two arcs come out identical.
John,
The average difference, expressed at the same scale as the 240 W/m^2 of post-albedo forcing (i.e., divided by 4 and including reflection by the albedo), is about 15 W/m^2. The effect becomes obscured because perihelion is closely aligned with the northern winter solstice, so the orbital variability aligns with the seasonal change.
In about 11,000 years, when perihelion aligns with the northern summer solstice, the N-hemisphere difference between summer and winter will become larger as the difference in the S becomes smaller. The asymmetries between the hemispheres are so large that even with the current alignment of perihelion with the seasons, the N hemisphere already has about twice as large a temperature difference between seasons!
However, the difference will be nowhere near what the IPCC sensitivity would predict. Unfortunately, we can’t wait 11,000 years to settle this …
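A back-of-the-envelope check of that ~15 W/m^2 figure, using the perihelion/aphelion TSI numbers quoted upthread and an assumed albedo of 0.3:

```python
tsi_perihelion = 1408.0   # W/m^2 at the TOA, early January (quoted above)
tsi_aphelion = 1316.0     # W/m^2 at the TOA, early July (quoted above)
albedo = 0.3              # assumed planetary albedo

swing_toa = tsi_perihelion - tsi_aphelion           # ~92 W/m^2 at the TOA
swing_forcing = swing_toa / 4.0 * (1.0 - albedo)    # on the 240 W/m^2 scale
print(swing_toa, swing_forcing)                     # ~92 and ~16 W/m^2
```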
“A change of one watt per square metre over a month is indeed able to change the surface temperature, by about a tenth of a degree”
The only problem with that statement is that it cannot be true: the Earth receives a change of 90+ W/m2 over a six-month period every year, which, at a tenth of a degree per W/m2, would mean semi-annual differences approaching 9°C along the Equator. But there are no such differences to be found.
https://data.giss.nasa.gov/cgi-bin/gistemp/stdata_show.cgi?id=112644560000&dt=1&ds=7
Thanks, LT. See my comment above.
w.
Another way of looking at the 90 W change with the seasons is the hemispheric difference. If the Northern Hemisphere gets 90 watts less in its summer than the Southern Hemisphere gets in its summer, we should expect a summertime high-temperature difference of 9 degrees, is that correct?
Latitude for latitude, over the oceans or land, winters should be 9 degrees colder in the South because, hey, insolation. If a massive change like 90 W can’t turn up a signal, how on earth, literally, can 1 watt?
Willis, I think you might have better luck finding a signal in 91 watts (the sum) than in an average change of 1.
It is less than the range water vapor can regulate. It’s as simple as that.
The evidence that it has to be regulated is that something must explain all of these behaviors while there is a continuous loss to space under clear skies, even when temperatures are not falling.
Conservation of energy requires a source, and the only thing that fits that evidence is water vapor condensing, which we already know it is doing; and the condensation ramps up as the air temperature nears the dew point, which is again the proper feedback to cancel the cooling.
And because that is tied to air temperature, it’s a darn good temperature regulator when you compare temperatures to the thermal ground state, 3 K.
With the abysmal correlation of these noisy data, the OLS-fitted slope cannot be given any credibility. Eyeballing the data (Mk I eyeball), I would estimate the slope to be about twice that. This is the typical regression-dilution error caused by ignoring the fact that you do not have an error-free abscissa.
When the correlation is that bad, don’t even bother using OLS to estimate the slope. It won’t work.
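A quick synthetic demonstration of that dilution effect; all the numbers here are invented. Give the regression a noisy abscissa and watch OLS cut the slope roughly in half:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
true_slope = 0.2                            # invented "true" deg C per (W/m^2)

x_true = rng.normal(0.0, 1.0, n)            # true forcing anomalies
x_obs = x_true + rng.normal(0.0, 1.0, n)    # observed forcing, with equal-sized noise
y = true_slope * x_true + rng.normal(0.0, 0.05, n)

ols_slope = np.polyfit(x_obs, y, 1)[0]
# Expected attenuation: var(x_true) / (var(x_true) + var(noise)) = 0.5 here,
# so OLS reports roughly half the true slope, exactly the commenter's point.
print(true_slope, ols_slope)
```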
The correlation is abysmal because land and sea, tropics and extratropics, do not respond in the same way to radiative forcing. So dumping it all together just makes a muddy brown mess, not a valid analysis.
Maybe that was the point that Willis was trying to make.
+1
1. The delay in CERES and other data updates: do you know why, with the computing power of Google at their fingertips, updates take so long?
2. Why do both warmists and skeptics use linear trends when semi-sinusoidal curves are obvious? The cycles are clear indications of natural processes running, which must be considered as also, perhaps, having a longer-term natural up/down component.
The T^4 curve is the most important thing when it comes to understanding the sensitivity. While it is approximately linear over a small range of T, it’s not even close to linear over the range of T found on the planet, and it most definitely is not linear through the origin.
How do you handle clouds? Could the CERES “surface” temperature be sometimes the temperature of the top of clods?
Sorry, the top of clouds. The top of clods is what we hope to measure.
So, if there is no correlation between temperature and infrared radiation at the top of the atmosphere (first graph), but there is a relationship with a slope of between 0.07 and 0.09 °C per W/m2 for the total “shortwave + longwave” (UV + visible + IR) radiation (second and third graphs), that would exonerate CO2 as a culprit in “global warming”, since CO2 absorbs well in two narrow bands of the IR spectrum but is nearly transparent in the UV and visible range.
There are many other factors that could alter the total radiation balance of the earth in the “shortwave” (UV+visible) spectrum, such as fluctuations in solar radiation and cosmic rays, or possible changes in the ozone layer (which partially absorbs UV), but CO2 concentrations would have very little effect, since it only absorbs in the longwave, low-energy part of the spectrum.
I recommend that Willis Eschenbach remove the hurricane photo from the background of his graphs. We wouldn’t want any alarmists to make some kind of misleading connection between the information in the graphs and the frequency of hurricanes…
In the middle of fall and spring, Earth receives as much as a 0.5 W/m2 difference in TSI every day for a couple of months as it approaches or departs from perihelion, and there is no perceptible daily change in temperature corresponding to that magnitude of change in TSI. But let a volcano such as Pinatubo blast 20+ million metric tons of SO2 into the stratosphere and change the forcing for long enough, and it has an effect. There is ample proof that the Earth’s climate is highly buffered by convection, conduction, and various thermal sinks, and does not immediately respond to daily changes of even 0.5 W/m2.
LT, look for my post upthread, where I show exactly this data for N America.
The semiannual difference in TSI due to orbital distance is quite impressive. (See http://lasp.colorado.edu/home/sorce/data/tsi-data/) The six-hour measurements are 1316.0477 W/m^2 on 07/10/2017 0.125 and 1407.4700 W/m^2 on 01/03/2017 0.625. That comes to a total change in TSI during the last year of 91.4223 W/m^2 (if I did my subtraction correctly).
Jim
Hey Jim,
Yes, the forcing difference that occurs daily due to the eccentricity of Earth’s orbit is absolutely astounding. Any climatic changes related to the variability of the solar cycle cannot be attributed to changes in TSI; it would have to be something like Svensmark’s GCR theory, or UV irradiance differences changing the transparency of the stratosphere.
LT December 18, 2017 at 9:26 am
Not sure where you are getting your figures. The oddity of the eccentricity of the earth’s orbit is that despite the large difference in TSI between perihelion and aphelion, the northern and southern hemispheres receive exactly the same amount of solar energy over the year. This is because the earth spends less time in close and more time farther out, which exactly counterbalances the changes in TSI … ain’t nature wonderful?
w.
“This is because the earth spends less time in close and more time farther out, which exactly counterbalances the changes in TSI … ain’t nature wonderful?”
It is wonderful; however, the FACT is that the Earth receives 15+ W/m2 more energy at the top of the atmosphere at the end of October than it does at the beginning of October, pretty much every year. Where is the large temperature change that should occur every October at the equator, according to the simplified equations presented?
Don’t you agree that is a significant change in forcing for one month?
“The change in TSI from solar max to solar min is about 0.2 W/m2 / 340 W/m2 = 0.06% … barely enough to measure, lost in the flood.”
I may not be the smartest guy in the world, but explain to me why I see over 1 W/m2 between max and min on the graph below.
And the number you are presenting is not correct. You are quoting a 1-AU-adjusted average number, which is not what Earth receives; that changes every day. The 1-AU-adjusted TSI number is good for measuring the Sun’s output, but it serves no purpose whatsoever for measuring what Earth actually receives. The solar cycle clearly has some effect on the Earth’s atmosphere, for it clearly shows up in correlations.
LT, the earth is a sphere (divide by 4)…
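The geometry behind that remark, spelled out; the 1361 W/m2 is a rough solar-cycle-mean TSI and the 0.3 albedo is an assumed round value:

```python
# A sphere intercepts sunlight over its cross-section (pi * R^2) but has a
# surface area of 4 * pi * R^2, hence the division by 4.

tsi = 1361.0                              # W/m^2 at 1 AU, rough solar-cycle mean
avg_insolation = tsi / 4.0                # ~340 W/m^2 averaged over the sphere
absorbed = avg_insolation * (1.0 - 0.3)   # ~238 W/m^2 after an assumed 0.3 albedo
print(avg_insolation, absorbed)
```

That is where the 340 W/m2 in this thread comes from: it is the same sunlight as the 1361 W/m2, just spread over the whole rotating sphere.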
>>
Willis Eschenbach
December 18, 2017 at 3:47 pm
This is because the earth spends less time in close and more time farther out, which exactly counterbalances the changes in TSI … ain’t nature wonderful?
<<
So let’s check this with actual measurements using that same TSI site (http://lasp.colorado.edu/home/sorce/data/tsi-data/). Adding up the daily values from equinox to equinox (excluding the day of the equinox since the Sun is neither in the Northern Hemisphere nor the Southern Hemisphere on that day) and dividing by the number of days, we get the following:
21Sep2017–21Mar2017 … 1333.108 W/m^2 … 185 days
19Mar2017–23Sep2016 … 1389.951 W/m^2 … 178 days
21Sep2016–21Mar2016 … 1333.404 W/m^2 … 185 days
19Mar2016–24Sep2015 … 1390.304 W/m^2 … 178 days
22Sep2015–21Mar2015 … 1334.06 W/m^2 … 186 days
19Mar2015–24Sep2014 … 1390.605 W/m^2 … 177 days
The numbers aren’t exactly counterbalanced, are they?
Jim
You can easily settle the argument … hint: the ISS sees the TSI without an atmosphere.
Try comparing it to the published TSI from NASA 🙂
>>
. . . hint: the ISS sees the TSI without an atmosphere.
<<
So the ISS, with an orbital altitude of between 330 km and 435 km, has no atmosphere, but a satellite with an orbital altitude of about 645 km does. That’s good to know, LdB.
Jim
You missed the point 🙂
If you want to understand what is different:
https://science.nasa.gov/science-news/science-at-nasa/2001/ast21mar_1
Compare to your satellite and an earth based observatory.
You have data from 3 very different stations 🙂
It took me some ferreting around to find the report not behind a portal:
http://lasp.colorado.edu/media/projects/SORCE/documents/SSI_Workshop_2012/1f_Thuillier_ATLAS_ISSI_SOLSPEC.pdf
“there is no perceptible daily change in temperature that corresponds with that magnitude of change in TSI”
The reason it is imperceptible is that you have nothing to measure it against. It happens every year in exactly the same way. Since perihelion is in early January, it would make the SH summer hotter than otherwise. But what is “otherwise”? The SH has much more ocean, so its summers are different from NH summers anyway, and the perihelion just adds to the effect, but not in a way we can easily unravel.
And in spring/autumn, the season is changing rapidly because of solar inclination. Orbital distance is a small added effect, but again, there is no easy way to unravel it. Both are part of the seasonal change we observe.
Yes, Nick, and that is exactly my point: back-of-the-napkin equations about TSI and its effect on Earth’s temperature are a pointless endeavor, because TSI at the top of Earth’s atmosphere is never constant. And this simplistic idea that it averages out each year is just unsubstantiated guessing.
And it does not happen every year in exactly the same way: if perihelion occurs during solar max versus solar min, there are differences in TSI, in stratospheric chemistry, and in planetary field strength; there are no constants.
LT December 18, 2017 at 5:11 pm
No, it’s actually mathematically derivable due to the fact that both insolation and gravity fall off as the square of the distance.
w.
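For those who want to check this, here is a minimal two-body sketch. Under the assumptions that intercepted power falls off as 1/r^2 and that Kepler’s second law governs the timing (dt proportional to r^2 dθ), the energy collected per unit of orbital angle is constant, so any 180-degree arc of the orbit collects the same total:

```python
import numpy as np

e = 0.0167                       # Earth's orbital eccentricity
theta = np.linspace(0.0, 2.0 * np.pi, 1_000_000, endpoint=False)
dtheta = theta[1] - theta[0]

r = (1.0 - e**2) / (1.0 + e * np.cos(theta))   # orbit in AU, perihelion at theta = 0

power = 1.0 / r**2               # relative intercepted power, falls off as 1/r^2
dt = r**2                        # relative time per angular step (Kepler's 2nd law)
energy = power * dt              # the r^2 factors cancel: constant per unit angle

half = len(theta) // 2
print(energy[:half].sum() * dtheta,    # perihelion half of the orbit
      energy[half:].sum() * dtheta)    # aphelion half: the same total
```

The cancellation is exact in this idealized model; in the real record the two halves last different numbers of whole days and the sun itself varies slightly, which is plausibly why the per-day averages in the table upthread don’t match to the last digit.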
LT December 18, 2017 at 5:18 pm
The change in TSI from solar max to solar min is about 0.2 W/m2 / 340 W/m2 = 0.06% … barely enough to measure, lost in the flood.
w.
“both insolation and gravity fall off as the square of the distance.”
Precisely. The macro-scale forces of nature seem to do this. The nuclear-scale forces (strong and weak) seem to fall off exponentially, as e (Euler’s number) raised to the negative of the distance.
But in the context of TSI, distance is the distance from the sun. In the context of earth spectrum downwelling IR, distance is altitude.
“LT, the earth is a sphere (divide by 4)…”
Afonzarelli,
Total solar irradiance is measured in watts per square meter. At the peak of this solar cycle there was a maximum of 1361.8 W/m2 reaching the top of the atmosphere, and currently Earth is receiving 1360.4 W/m2 at the top of the atmosphere; those numbers are adjusted to 1 AU, and there is no division by 4. On a clear day, at a small zenith angle, about 1000 W/m2 reach the surface. I have no idea what Willis and lsvalgaard are talking about. I guess when you cross-plot enough things you start losing touch with reality. The eccentricity of Earth’s orbit causes Earth to receive around 3.3% more energy during December and 3.3% less during June.
Hope that helps…
I’ve been saying for decades that the IPCC’s assumption of approximate linearity is wrong. The governing theory is the SB law, which requires emissions to go as T^4, and this must also hold for the steady-state relationship between solar forcing and the surface temperature. The data is absolutely clear, as this scatter plot shows:
http://www.palisad.com/co2/tp/fig1.png
The Y axis is temperature in kelvin and the X axis is emissions in W/m^2. Each little dot is the one-month MEASURED average surface temperature vs. the emissions at TOA for one 2.5-degree slice of the planet. The larger dots are the three-decade averages (from satellite data) for each slice, and a curve passing through them represents the steady-state absolute and incremental relationship between the surface temperature and the planet’s emissions. Since in the steady state emissions are equal to incident energy, this is also a proxy for the relationship between the surface temperature and variable solar input (forcing). As scatter plots go, this relationship has the tightest distribution around the mean of any pair of climate variables I’ve examined. The next tightest relationship is between the average surface temperature and the average water column. Many other scatter plots of satellite data are here:
http://www.palisad.com/sens
The most significant difference between adjacent slices is solar forcing, and since in the steady state solar forcing is equal to planet emissions, the delta in X is exactly equal to forcing, per the IPCC definition. The ∆T corresponding to this ∆F is given by the slope of the averages; thus the line passing through the averages in the scatter plot is a proxy for the sensitivity as a function of temperature.
Also shown are plots of the SB law. The black line is the SB law for an ideal black body, while the green line is the SB law for a non-ideal black body with an emissivity of 0.62. The slope of the green line at the average temperature of the planet is about 0.3°C per W/m^2, which is less than the 0.4°C per W/m^2 at the low end of the IPCC’s estimate.
The emissivity of 0.62 is not an arbitrarily fit constant, but the measured ratio between average planet emissions (240 W/m^2) and average surface emissions (390 W/m^2 at 288 K). It should be undeniable to all that the atmosphere takes a nearly ideal black-body surface and makes it look like a non-ideal black body (also called a gray body) from space; moreover, there’s nothing else it could look like unless the planet ignores first-principles physics! It’s compelling when the laws of physics align with the data; unfortunately, that alignment does not align with the IPCC’s narrative.
While this relationship is approximately linear over a small range of T, it’s not linear through the origin as the IPCC assumes. That assumption is how they get their sensitivity, which is represented as the blue line, plotted to the same scale as the measured data: a line drawn from the point of average surface emissions and average surface temperature through the origin.
The magenta line represents the slope of the measured relationship between post-albedo solar input and the surface temperature, which approaches the sensitivity of an ideal black body at the surface temperature. Note that the T^4 form of the relationship holds regardless of the equivalent emissivity.
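A short check of the slopes described above, under the stated assumptions (emissivity 0.62, average surface temperature 288 K); the line-through-the-origin slope is my reading of the blue-line construction:

```python
SIGMA = 5.670374419e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
T = 288.0                 # K, average surface temperature (stated above)
eps = 0.62                # measured ratio 240/390 (stated above)

# Gray-body curve F = eps * sigma * T^4, so dT/dF = 1 / (4 * eps * sigma * T^3).
slope_gray = 1.0 / (4.0 * eps * SIGMA * T**3)
print(slope_gray)         # ~0.30 C per W/m^2, matching the figure quoted above

# A line through the origin and the point (390 W/m^2, 288 K) instead has
# slope T / (sigma * T^4), i.e. 288 / 390.
slope_origin = T / (SIGMA * T**4)
print(slope_origin)       # ~0.74 C per W/m^2, far steeper than the SB slope
```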
co2isnotevil, I’m not an expert, but your analysis makes sense to me. Does the lack of response from other posters indicate concurrence?
” … indicate concurrence?”
Either that, or a lack of understanding of what it means. The way climate science has been framed by the IPCC and its self-serving consensus is flawed at the core, and the junk science arising from it has contaminated the thinking of many, including many skeptics, misleading them away from the obvious truth. I can’t think of anything more intuitively, theoretically, and practically obvious than the macroscopic average behavior of the planet obeying the macroscopic laws of physics.
My analysis measures the average relationships between variables extracted from satellite data and then offers an explanation for how the measured relationships arise by conforming to the known laws of physics. The key result of this work is the scatter plot I showed earlier which is repeatable evidence for the smoking gun that falsifies the entire range of climate sensitivity presumed by the IPCC.
Voilà, not voilá.
Thanks, fixed. Haven’t spoken French at work in thirty years, it fades …
w.
These random or semi-random dot diagrams are worthless. No way does a straight line represent them. Do not try to pass such random data off as science. Arno Arrak
Arno Arrak December 18, 2017 at 2:47 pm
They are called “scatterplots”, and far from being “worthless” they are routinely used to explore correlations.
Hey, don’t bust me, I’m NOT the one who claims that a straight line represents them. That would be the IPCC and the current climate paradigm, which insist that temperature is a linear function of forcing … I’m just exploring their claim.
I implore you, try to follow the story a bit better next time before accusing me of trying to “pass off random data as science”; it just makes you look like a noob, and it does your reputation no good …
w.
Willis, if you want to demonstrate the lack of correlation, a correlation coefficient would be a good statistic to provide. Could you provide that for the three graphs?
Hey, Greg, we’re a full service website. The table is here … It’s a CSV file called “Willis’s Data For Greg.csv”.
Doing this reminds me that although it’s not practical to post my full data and code (the data is 13 GB, and the code is thousands of lines; it’s not just user-unfriendly, it’s user-aggressive), I can and will start posting the resulting datasets used for the graphs with each post.
Best to you,
w.