Temperature and Forcing

Guest Post by Willis Eschenbach

Over at Dr. Curry’s excellent website, she’s discussing the Red and Blue Team approach. If I ran the zoo and could re-examine the climate question, I’d want to look at what I see as the central misunderstanding in the current theory of climate.

This is the mistaken idea that changes in global temperature are a linear function of changes in the top-of-atmosphere (TOA) radiation balance (usually called “forcing”).

As evidence of the centrality of this misunderstanding, I offer the fact that the climate model output global surface temperature can be emulated to great accuracy as a lagged linear transformation of the forcings. This means that in the models, everything but the forcing cancels out and the temperature is a function of the forcings and very little else. In addition, the paper laying out those claimed mathematical underpinnings is one of the more highly-cited papers in the field.
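
To make concrete what "lagged linear transformation of the forcings" means, here is a minimal sketch in Python of the kind of one-box emulator involved. The sensitivity lam and time constant tau are illustrative placeholders of my choosing, not values fitted to any model:

    import numpy as np

    def emulate_temperature(forcing, lam=0.5, tau=4.0):
        # One-box emulator: temperature anomaly as a lagged linear
        # transformation of forcing. lam is sensitivity in K per W/m^2,
        # tau is the e-folding lag in years (both illustrative only).
        t = np.arange(len(forcing))
        kernel = np.exp(-t / tau)
        kernel /= kernel.sum()  # normalize the lag kernel to unit area
        return lam * np.convolve(forcing, kernel)[: len(forcing)]

    # Example: a steady forcing ramp of 0.04 W/m^2 per year
    temps = emulate_temperature(0.04 * np.arange(100))

The point being made is that something of this simple form, fed nothing but the forcings, tracks the models' global surface temperature output closely.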

To me, this idea that the hugely complex climate system has a secret control knob with a linear and predictable response is hugely improbable on the face of it. Complex natural systems have a whole host of internal feedbacks and mechanisms that make them act in unpredictable ways. I know of no complex natural system which has anything equivalent to that.

But that’s just one of the objections to the idea that temperature slavishly follows forcing. In my post called “The Cold Equations” I discussed the rickety mathematical underpinnings of this idea. And in “The TAO That Can Be Spoken” I showed that there are times when TOA forcing increases, but the temperature decreases.

Recently I’ve been looking at what the CERES data can tell us about the question of forcing and temperature. We can look at the relationship in a couple of ways, as a time series or a long-term average. I’ll look at both. Let me start by showing how the top-of-atmosphere (TOA) radiation imbalance varies over time. Figure 1 shows three things—the raw TOA forcing data, the seasonal component of the data, and the “residual”, what remains once we remove the seasonal component.

[Image: CERES plotdecomp net TOA forcing]

Figure 1. Time series, TOA radiative forcing. The top panel shows the CERES data. The middle panel shows the seasonal component, which is caused by the earth being different distances from the sun at different times of the year. The bottom panel shows the residual, what is left over after the seasonal component is subtracted from the data.
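
For readers who want to reproduce the decomposition, the essential step is simple; the post doesn't spell out its exact method, so here is the most basic version in Python, which takes the seasonal component to be the long-term mean annual cycle:

    import numpy as np

    def remove_seasonal(values, months):
        # Split a monthly series into a seasonal component (the mean
        # annual cycle) and a residual, as in the panels above.
        # `months` holds each sample's calendar month, 1 through 12.
        values = np.asarray(values, dtype=float)
        months = np.asarray(months)
        clim = {m: values[months == m].mean() for m in range(1, 13)}
        seasonal = np.array([clim[m] for m in months])
        return seasonal, values - seasonal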

And here is the corresponding view of the surface temperature.

[Image: CERES plotdecomp temperature]

Figure 2. Time series, global average surface temperature. The top panel shows the data. The middle panel shows the seasonal component. The bottom panel shows the residual, what is left over after the seasonal component is subtracted from the data. Note the El Nino-related warming at the end of 2015.

Now, the question of interest involves the residuals. If there is a month with unusually high TOA radiation, does it correspond with a surface warming that month? For that, we can use a scatterplot of the residuals.

[Image: CERES scatterplot toa and temp]

Figure 3. Scatterplot of TOA radiation anomaly (data minus seasonal) versus temperature anomaly (data minus seasonal). Monthly data, N = 192. P-value adjusted for autocorrelation.

From that scatterplot, we’d have to conclude that there’s little short-term correlation between months with excess forcing and months with high temperature.
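
A note on the "P-value adjusted for autocorrelation" in the Figure 3 caption: one standard way to do this is to shrink the sample size using the lag-1 autocorrelations of both series (the Quenouille adjustment). The post doesn't say which method it used, so treat this Python sketch as an illustration of the idea, not the actual calculation:

    import numpy as np
    from scipy import stats

    def lag1_autocorr(x):
        # Lag-1 autocorrelation of a 1-D series
        x = np.asarray(x, dtype=float) - np.mean(x)
        return np.dot(x[:-1], x[1:]) / np.dot(x, x)

    def corr_adjusted_p(x, y):
        # Pearson correlation with its p-value recomputed from an
        # effective sample size reduced for lag-1 autocorrelation
        r = np.corrcoef(x, y)[0, 1]
        a = lag1_autocorr(x) * lag1_autocorr(y)
        n_eff = len(x) * (1 - a) / (1 + a)
        t = r * np.sqrt((n_eff - 2) / (1 - r * r))
        return r, 2 * stats.t.sf(abs(t), df=n_eff - 2)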

Now, this doesn’t exhaust the possibilities. There could be a correlation with a time lag between cause and effect. For this, we need to look at the “cross-correlation”. This measures the correlation at a variety of lags. Since we are investigating the question of whether TOA forcing roolz or not, we need to look at the conditions where the temperature lags the TOA forcing (positive lags). Figure 4 shows the cross-correlation.

[Image: CERES ccf toa and temp]

Figure 4. Cross-correlation, TOA forcing and temperature. Temperature lagging TOA is shown as positive. In no case are the correlations even approaching significance.
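
The cross-correlation itself is straightforward to compute. A sketch, using Figure 4's convention that a positive lag means temperature lags the TOA forcing:

    import numpy as np

    def cross_correlation(toa, temp, max_lag=24):
        # Correlation between the two residual series at each lag,
        # pairing toa[t] with temp[t + lag]; positive lag means
        # temperature lags TOA forcing.
        toa = np.asarray(toa, dtype=float)
        temp = np.asarray(temp, dtype=float)
        ccf = {}
        for lag in range(-max_lag, max_lag + 1):
            if lag >= 0:
                a, b = toa[: len(toa) - lag], temp[lag:]
            else:
                a, b = toa[-lag:], temp[: len(temp) + lag]
            ccf[lag] = np.corrcoef(a, b)[0, 1]
        return ccf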

OK, so on average there’s very little correlation between TOA forcing and temperature. There’s another way we can look at the question. This is the correlation of TOA forcing and temperature over time, on a 1° latitude by 1° longitude gridcell basis. Figure 5 shows that result:

[Image: correlation toa radiation vs temperature]

Figure 5. Correlation of TOA forcing and temperature anomalies, 1° latitude by 1° longitude gridcells. Seasonal components removed in all cases.

There are some interesting results there. First, correlation over the land is slightly positive, and over the ocean, it is slightly negative. Half the gridcells are in the range ±0.15, very poorly correlated. Nowhere is there a strong positive correlation. On the other hand, Antarctica is strongly negatively correlated. I have no idea why.
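
Computing a map like Figure 5 amounts to running the same correlation separately in every gridcell. A vectorized Python sketch, assuming the deseasonalized anomalies arrive as (months, lat, lon) arrays:

    import numpy as np

    def gridcell_correlation_map(toa, temp):
        # toa, temp: anomaly arrays of shape (n_months, n_lat, n_lon).
        # Returns an (n_lat, n_lon) array of per-gridcell correlations.
        ta = toa - toa.mean(axis=0)
        te = temp - temp.mean(axis=0)
        num = (ta * te).sum(axis=0)
        den = np.sqrt((ta ** 2).sum(axis=0) * (te ** 2).sum(axis=0))
        return num / den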

Now, I said at the outset that there were a couple of ways to look at this relationship between surface temperature and TOA radiative balance—how it evolves over time, and how it is reflected in long-term averages. Above we’ve looked at it over time, seeing in a variety of ways whether monthly or annual changes in one are reflected in the other. Now let’s look at the averages. First, here’s a map of the average TOA radiation imbalances.

[Image: CERES TOA Net Forcing]

Figure 6. Long-term average TOA net forcing. CERES data, Mar 2000 – Feb 2016

And here is the corresponding map for the temperature, from the same dataset.

[Image: CERES surface temperature]

Figure 7. Long-term average surface temperature. CERES data, Mar 2000 – Feb 2016

Clearly, in the long-term average we can see that there is a relationship between TOA imbalance and surface temperature. To investigate the relationship, Figure 8 shows a scatterplot of gridcell temperature versus gridcell TOA imbalance.

[Image: CERES scatterplot gridcell toa vs temp]

Figure 8. Scatterplot, temperature versus TOA radiation imbalance. Note that there are very few gridcells warmer than 30°C. N = 64,800 gridcells.

Whoa … can you say “non-linear”?

Obviously, the situation on the land is much more varied than over the ocean, due to differences in things like water availability and altitude. To view things more clearly, here’s a look at just the situation over the ocean.

[Image: CERES scatterplot gridcell toa vs temp ocean SB]

Figure 9. As in Figure 8, but showing just the ocean. Note that almost none of the ocean is over 30°C. N = 43,350 gridcells.

Now, the interesting thing about Figure 9 is the red line. This line shows the variation in radiation we’d expect if we calculate the radiation using the standard Stefan-Boltzmann equation that relates temperature and radiation. (See end notes for the math details.) And as you can see, the Stefan-Boltzmann equation explains most of the variation in the ocean data.

So where does this leave us? It seems that short-term variations in TOA radiation are very poorly correlated with temperature. On the other hand, there is a long-term correlation. This long-term correlation is well-described by the Stefan-Boltzmann relationship, with the exception of the hot end of the scale. At the hot end, other mechanisms obviously come into play which are limiting the maximum ocean and land temperatures.

Figure 9 also indicates that other than the Stefan-Boltzmann relationship, the net feedback is about zero. This is what we would expect in a governed, thermally regulated system. In such a system, sometimes the feedback acts to warm the surface, and other times the feedback acts to cool the surface. Overall, we’d expect them to cancel out.

Is this relationship how we can expect the globe to respond to long-term changes in forcing? Unknown. However, if it is the case, it indicates that other things being equal (which they never are), a doubling of CO2 to 800 ppmv would warm the earth by about two-thirds of a degree …

However, there’s another under-appreciated factor. This is that we’re extremely unlikely to ever double the atmospheric CO2 to eight hundred ppmv from the current value of about four hundred ppmv. In a post called “Apocalypse Cancelled, Sorry, No Ticket Refunds”, I discussed sixteen different supply-driven estimates of future CO2 levels over the 21st century. These peak value estimates ranged from 440 to 630 ppmv, with a median value of 530 ppmv … a long way from doubling.

So, IF in fact the net feedback is zero and the relationship between TOA forcing and surface temperature is thus governed by the Stefan-Boltzmann equation as Figure 9 indicates, the worst-case scenario of 630 ppmv would give us a temperature increase of a bit under half a degree …

And if I ran the Red Team, that’s what I’d be looking at.

Here, it’s after midnight and the fog has come in from the ocean. The redwood trees are half-visible in the bright moonglow. There’s no wind, and the fog is blanketing the sound. Normally there’s not much noise here in the forest, but tonight it’s sensory-deprivation quiet … what a world.

My best regards to everyone, there are always more questions than answers,

w.

PS—if you comment please QUOTE THE EXACT WORDS YOU ARE DISCUSSING, so we can all understand your subject.

THE MATH: The Stefan-Boltzmann equation is usually written as

W = sigma epsilon T^4

where W is the radiation in W/m2, sigma is the Stefan-Boltzmann constant (5.67e-8 W/m2/K^4), epsilon is the emissivity (usually taken as 1), and T is the temperature in kelvins.

Differentiating, we get dW/dT = 4 sigma epsilon T^3, and inverting,

dT/dW = 1 / (4 sigma epsilon T^3)

Substituting T = (W / (sigma epsilon))^(1/4), this can be written as

dT/dW = (W / (sigma epsilon))^(1/4) / (4 * W)

This is the equation used to calculate the area-weighted mean slope shown in Figure 9. The radiation imbalance was taken around the area-weighted mean oceanic thermal radiation of 405 W/m2.
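
As a check on the numbers quoted in the body of the post, here is the arithmetic in Python. The 5.35 * ln(C/C0) expression for CO2 forcing is the standard simplified formula; the post doesn't state which forcing formula it used, so that part is my assumption:

    import math

    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

    W = 405.0                # mean oceanic thermal radiation, W/m^2
    T = (W / SIGMA) ** 0.25  # ~290.7 K
    dT_dW = T / (4 * W)      # = 1/(4 sigma T^3), ~0.18 K per W/m^2

    # Standard simplified CO2 forcing: 5.35 * ln(C/C0) W/m^2 (assumed)
    f_double = 5.35 * math.log(800 / 400)  # ~3.71 W/m^2 for a doubling
    f_630 = 5.35 * math.log(630 / 400)     # ~2.43 W/m^2 worst-case peak

    print(dT_dW * f_double)  # ~0.67 K, "about two-thirds of a degree"
    print(dT_dW * f_630)     # ~0.44 K, "a bit under half a degree"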

Troed Sångberg
July 13, 2017 1:34 am

“First, correlation over the land is slightly positive, and over the ocean, it is slightly negative. Half the gridcells are in the range ±0.15, very poorly correlated. ”
While it’s slightly difficult to see from the graph legend what 0 would be, I get the impression that the sign should be the other way around in the text above?

Clyde Spencer
Reply to  Troed Sångberg
July 13, 2017 10:11 am

Troed,
That is my subjective impression also. Also, it looks like Greenland has a strong negative correlation, like Antarctica. What do they have in common? A large thermal ballast in the form of ice and 6 months without sunlight.

Reply to  Clyde Spencer
July 17, 2017 7:41 am

And both at high altitude.

chaamjamal
July 13, 2017 1:44 am

Really great analysis
Thank you
Hope your phone rings
And it’s Scott Pruitt
Bestest

Mark - Helsinki
July 13, 2017 2:21 am

Sensitivity is dictated by anomaly analysis.
As the temps did not materialise the sensitivity studies got less alarming.
A residue of changes (known and largely unknown) is being used to determine sensitivity to one tiny factor in the system.
Luke warmers are cargo cult scientists.
No one, NO ONE can show their model represents the actual real world mechanisms and until they can, it’s cargo cult science

Leo Smith
Reply to  Mark - Helsinki
July 13, 2017 3:33 am

No one, NO ONE can show their model represents the actual real world mechanisms and until they can, it’s cargo cult science

I don’t want to rain on your parade, but that is true of all science…
All science is, is a collection of models based on a more basic model (rational materialism) that seem to work.
So at a stroke you have reduced all science to ‘cargo cult status’.
But actually that may be where it belongs.
If you study what is happening at the unpleasant end of quantum physics, physicists don’t actually even want to talk about what it means, or what reality is, any more. They just want maths that successfully predicts, no matter how weird the implications are…
I have been pondering this a long time, and in the end the only justification we have for thinking that any of our models are close to ‘true’ is the preposterous and illogical claim that ‘because they work, it’s strong evidence that they are almost true’.
I.e. the fact that typing on this screen, and people replying to it, works is ‘strong evidence’ for my theory that the world consists of people trapped inside my computer screen. It fits the facts. It predicts what will happen.
In short we don’t actually have any understanding about how the world works at all. We have a collection of shorthand imperfect models that allow us to predict its behaviour a little bit – and that’s all.
It’s cargo cult. Except we stopped building mock airfields when it stopped raining cargo.
Well obviously climate scientists did not.

cephus20
Reply to  Leo Smith
July 13, 2017 4:27 am

Too much navel-gazing philosophy here. When a theory like quantum mechanics is capable of making repeatable predictions to twenty decimal place accuracy with no known exceptions observed over the best part of a century then we call that a good theory. The fact that it is not classically derivable and is potentially incomplete is irrelevant and clearly the formalism is either capturing some important aspect of reality or is the most monstrously improbable coincidence.
Cargo cult science is when a hypothesis – CAGW is still a long way from even making the grade of theory and ‘climate change’ could not even be graced with the title hypothesis since it is forever and totally unfalsifiable – makes no accurate predictions of any kind and yet is nevertheless hailed as being representative of nature.
There is a distinct difference between good models of reality and bad and in no way is Mark condemning good science as cargo cult when he puts bad models into that category.

commieBob
Reply to  Leo Smith
July 13, 2017 4:49 am

This is the mistaken idea that changes in global temperature are a linear function of changes in the top-of-atmosphere (TOA) radiation balance (usually called “forcing”).

Once upon a time, before everyone had a computer on their desks, we used to analyze circuits by hand. To do that at all, for any reasonably complex circuit, it was necessary to make linearizing assumptions. The analysis was easy to confirm by building the circuit. You got good at knowing what worked. In that case, linearizing was fully justified. In any event, you were clear about your assumptions.
Our usual tools are valid if things are, or can be made, linear time-invariant (LTI). It is trivially demonstrable that the climate does not meet that requirement. Any linear climate model is therefore mathematically invalid. Could the linearizing assumptions be somehow justified? No, the system is too complex and there are too many unknowns.

Reply to  Leo Smith
July 13, 2017 4:59 am

Wheeler’s delayed-choice experiment can be interpreted as showing reality is not real. Those are the kinds of results from QM that are held as speculative. Are there really multiple worlds? Does reality fade into a fog of possibilities a few seconds in the future?
QM has a stellar reputation, yet describes reality as more like a funhouse than the real world.

Robert of Texas
Reply to  Leo Smith
July 13, 2017 10:52 am

There is a difference between predicting and explaining. QM is highly successful at predicting, but a complete failure at explaining. I listen to physicists attempting to use math to explain what is going on and it’s laughable – just like the idea that no states exist if I don’t look at them – I know the moon is up there whether or not I am currently looking at it. Entanglement is a great example of knowing how to perform the math while entirely lacking a reason why it works. (Actually some physicists seem to have an understanding, it just isn’t the prevailing view as yet.)
All that said, QM has been the most successful “model” in all of humankind’s existence. It repeatedly works, and to a high degree of accuracy.
All climate “science” is nothing but a bunch of over-rated proxy data (with confounding variables and low accuracy) or (mostly tainted) measurements of daily observations thrown together into a bunch of meaningless computer models that predict NOTHING useful. It has been the single worst, most unsuccessful “model” in humankind’s modern existence.
The only successful “prediction” AGW makes is that it will get warmer – however this is the same prediction the null hypothesis makes, so again, the AGW hypothesis fails – utterly – entirely – period. AGW makes no measurable and useful quantifiable predictions, so it can never be disproved. I.e., it’s religion.

Stephen Richards
Reply to  Leo Smith
July 13, 2017 11:00 am

Physics models are not products that tell you HOW the world works. They are models that allow you to make and test assumptions against observations.

Reply to  Stephen Richards
July 13, 2017 11:35 am

They are models that allow you to make and test assumptions against observations.

Simulators allow you to ask questions about some model. Where people go wrong is assuming you ask the model for an answer the same way you search a pile of real world data for the corresponding answer.
You have to tell a simulator everything. I don’t know how many times I had to explain why the sim results were not wrong, but that the engineer asked the wrong question, and this is why the results are what they are. And when you asked the question correctly, you get what you’re supposed to get based on the model.
And that still doesn’t prove the model is right, just that you asked the right question, in a way the simulator knew how to answer. And you understood what it told you.

Dave Fair
Reply to  Leo Smith
July 13, 2017 11:20 am

Here is the real deal: Model(s) results were drifting out of realistic ranges, so the outputs were adjusted to more desirable results. [I don’t remember the exact circumstances. Anybody have the quote?]
I do know that AR5 had to adjust model “projections” down for the intermediate future because they were, in the main, running hot.
IPCC climate models are bunk.

Reply to  Leo Smith
July 13, 2017 1:23 pm

Leo Smith concluded:
In short we don’t actually have any understanding about how the world works at all. We have a collection of shorthand imperfect models that allow us to predict its behaviour a little bit – and that’s all.
… to which cephus20 responded to Leo’s entire post:
Too much navel-gazing philosophy here. When a theory like quantum mechanics is capable of making repeatable predictions to twenty decimal place accuracy with no known exceptions observed over the best part of a century then we call that a good theory. The fact that it is not classically derivable and is potentially incomplete is irrelevant and clearly the formalism is either capturing some important aspect of reality or is the most monstrously improbable coincidence.
… to which I now add MY two cents (sense?):
I agree with Leo, further pointing out that without “navel-gazing”, human life would become pretty dull and meaningless. We humans want to harmonize the rest of our primitive senses with our brain functions, and to divorce these senses from the brain functions of the scientific process seems to ignore these senses that are the very basis for finding meaning or purpose in everyday life.
The fact that quantum mechanics can predict to twenty-decimal-place accuracy says nothing of any reality outside of human consciousness. If anything, quantum mechanics is a precise mathematics of human consciousness. Quantum mechanics, thus, is a really good tool, … and that is all. The fact that it is potentially incomplete might be more relevant than we think, because in its current incompleteness, it denies any connection to how humans find meaning everywhere else in existence. “Existence” — QM does not even acknowledge such a thing. “Reality”? — no such thing; no, worse than that, we can neither confirm nor deny reality — it’s not our job to say.
A person could easily argue, from a QM perspective, that believing in “reality” is like believing in angels or Santa, as I understand it. It just seems shallow to me, in this respect. Why not give it some color and relationship with the human domain? But, I digress — this blog is about climate science. Oh wait, isn’t believing in human-caused-CO2-catastrophic-climate-change like believing in Santa? Maybe I haven’t digressed as much as I thought.
Yours truly,
Neville Gazer

commieBob
Reply to  Leo Smith
July 14, 2017 4:40 am

There are models and there are models.
In engineering, software design tools let us do things we could never do without them. The thing is that they are reliable because they can be validated and verified. On the other hand, …

… the validation of climate models and their evaluation against observation is inadequate (and much less extensive than it could/should be), particularly in the context of fitness for many of the purposes for which they are being used. link

It’s pretty simple.

July 13, 2017 2:25 am

Figure 8 title is confusing.

Keitho
Editor
July 13, 2017 2:27 am

Nice, tight reasoning. Another good job W.

Tom Halla
July 13, 2017 2:35 am

I wonder just how a formula that results in the plot in Figure 8 would work.

toorightmate
Reply to  Tom Halla
July 13, 2017 3:20 am

Just change the real data to make it fit better – just like the “climate scientists” do!!

Nick Stokes
July 13, 2017 2:39 am

Willis,
“This is the mistaken idea that changes in global temperature are a linear function of changes in the top-of-atmosphere (TOA) radiation balance (usually called “forcing”).”
As I said at Judith’s, the first objection here is that you don’t say who has this mistaken idea, and how exactly they expressed it. I think some of the mistaken ideas are yours. One, that I have railed against over the years, is
“As evidence..I offer the fact that the climate model output global surface temperature can be emulated to great accuracy as a lagged linear transformation of the forcings.”
You don’t seem to be able to shake the notion that the forcings are an input from which the outputs are calculated. As I have noted here and elsewhere, the forcings are not inputs but diagnostics, and are frequently calculated from the outputs by some linear formula. So it is evidence only of the correct application of that formula.
You quote Stephen Schwartz. But his “Ansatz” isn’t the simple formula that you quoted there:
∆T = λ ∆F
It is that
dH/dt = C dT/dt
where H is heat content. That pushes the question back to the relation between dH/dt and ∆F. And it isn’t simply linear – Schwartz gives the T~F relation as
∆T = 1/λ F (1-e^(-t/τ))
but with the further possibility of multiple time scales.
But even with that more complex relation, Schwartz has nothing that corresponds to
“But that’s just one of the objections to the idea that temperature slavishly follows forcing.”
He’s developing an energy balance model he’s using to try to tease out the time constants. That doesn’t imply anything slavish. It just means that you’ll find an element in the response that corresponds to that constant. A rough analogy is that an opera house may have a resonant frequency (it probably shouldn’t). That doesn’t mean that the lady singing the aria is slavishly following the resonance. It probably does color the way that you’ll hear her song.
If you want a red team to pursue the blues on this, you need to establish first what the blue team is actually saying.

Reply to  Nick Stokes
July 13, 2017 6:22 am

Nick, What the blue team keeps saying is Sensitivity is likely > 2.0C, where it’s likely well below 1.1C.
They do that by treating Co2 as an additive forcing, it is not.
It displaces nearly identical amounts of forcing from natural water vapor feedback that regulates Min T.

Reply to  micro6500
July 13, 2017 2:02 pm

Micro,
“They do that by treating CO2 as an additive forcing, which it is not.
It displaces nearly identical amounts of forcing from natural water vapor feedback that regulates Min T.”
That is not correct. Increased CO2 traps heat and the warmer air holds more moisture than before. CO2 forcing pulls water vapor up to join it, and the water vapor provides more heat trapping – it is additive.

Reply to  Jack Davis
July 13, 2017 2:14 pm

– it is additive

Measurements disagree. [image]

Reply to  micro6500
July 13, 2017 3:58 pm

Micro, your graph proves nothing. Logic should tell you that if the physical chemistry of both CO2 and H2O means they absorb and retain thermal radiation, and if you add more of one of them to the atmosphere without removing any of the other, then more heat absorption will occur.
They are additive.

Reply to  Jack Davis
July 13, 2017 4:02 pm

No, they are independent. And it’s a matter of physics, not chemistry

Reply to  micro6500
July 13, 2017 4:36 pm

They are independent in the individual ways their physical chemistries trap heat. They are not independent in their abundance in the air, as the more heat CO2 traps, the more water vapor can be supported by the air. The more there is of either one of them, the more heat can be trapped.

Reply to  Jack Davis
July 13, 2017 4:57 pm

Except that isn’t what the measurements show. The effect of water vapor and the water cycle has a nonlinear effect on the rate temps drop at night. There’s an almost 98% correlation between min temp and dew point. Neither is affected by CO2.

Reply to  micro6500
July 13, 2017 5:34 pm

Jack Davis – “Increased CO2 traps heat and the warmer air holds more moisture than before.” Excellent point. Just about all agree with that statement, and of course we skeptics maintain that nobody really knows to what degree extra heat retention will result from that. The physics and chemistry I understand indicate that more atmospheric CO2 will not create any problems.

Reply to  micro6500
July 13, 2017 6:01 pm

Micro,
“There’s an almost 98% correlation between min temp and dew point. Neither are affected by co2.”
Of course – that’s tautological.
Obviously the minimum overnight temperature will see dew drop out of the atmosphere if it’s that kind of night. At that stage CO2 is kicking back paring its nails – it has nothing to do with that process.
I’ll give it one more go as simply as I can, then I’m out of here:
CO2 absorbs heat radiated by Earth, causing the air to heat up (long and continuous process).
Warmer air holds more water vapor (humidity).
H2O vapor also absorbs heat radiated from Earth’s surface – independently!
When it gets cold at night, water will drop out of the air (dew) because the gas to liquid transition temperature is in the range of overnight temperatures.
CO2 will not drop out because its transition temperature is far lower – it has its nails to attend to!
I’m out!

Reply to  Jack Davis
July 13, 2017 6:11 pm

First, in the middle of the night the optical window, the main radiative hole, is just as cold as it was at dusk, yet cooling rates drop by half to 3/4. Did you read the linked paper?

jclarke341
Reply to  Nick Stokes
July 13, 2017 7:01 am

Nick Stokes…your comment doesn’t seem related to Willis’s statement. He said: “As evidence..I offer the fact that the climate model output global surface temperature can be emulated to great accuracy as a lagged linear transformation of the forcings.” It doesn’t matter if the forcings are an ‘input’ or a ‘diagnostic’. The relationship that Willis is referring to is still evident; if the forcing increases in the models, the temperature (output) will increase as well, and the relationship is very, very close to linear over periods of time greater than a few years.
As the models are set up, they will never project global cooling for many years while the concentration of CO2 in the atmosphere is increasing. And the only reason why they would project any cooling at all under increasing CO2 conditions is because of the ‘volcano’ variable popping up every now and then. Or perhaps an El Nino/La Nina variable would allow for a brief cooling now and then. Outside of the few ‘natural’ variables that are acknowledged by the IPCC, the increased forcing associated with increasing CO2 will always produce warming in the models. Always.
That is not true in nature and that is the whole point of this post. It seems to me that if you want to debate something with Willis, you need to realize first what he is actually saying.

Dave Fair
Reply to  jclarke341
July 13, 2017 11:38 am

Taken at face value, he seems to be saying that the pretty forcings graph of the IPCC is nothing but the fevered (opaque) imaginings of climate modelers.

Pete Sudbury
Reply to  jclarke341
July 14, 2017 1:52 am

So, increasing CO2 has no effect on the temperature of the earth? And I have fairies at the bottom of my garden. lalalalala.

Reply to  Pete Sudbury
July 14, 2017 5:13 am

increasing CO2 has no effect on the temperature of the earth?

Not much; water compensates by condensing out less water vapor, by about the same amount as the CO2 increase. Nighttime cooling rate is controlled by relative humidity, which slows cooling when humidity is high. If it’s warmer during the day, it just cools longer at the high, low-humidity cooling rate, and after the excess heat has been radiated, cooling slows.
It’s regulated out by water vapor.

michel
Reply to  Nick Stokes
July 13, 2017 8:14 am

You don’t seem to be able to shake the notion that the forcings are an input from which the outputs are calculated. As I have noted here and elsewhere, the forcings are not inputs but diagnostics, and are frequently calculated from the outputs by some linear formula. So it is evidence only of the correct application of that formula.
Nick, surely the point he is making is that you have models with lots of apparent inputs which yield a given output. He claims that you get essentially the same output with only one input, call it X.
To this you reply that he is mistaken about the nature of X: it is not, you say, an independently measured input; it is a calculated quantity. The way it is calculated, you say, is to assume a certain level of output and a certain relationship between output and X, and then to reason that, given this relationship, X MUST be at a certain level.
So, you argue, it is more or less true by definition that you can get all the outputs from X alone. X was made up to be a quantity for which exactly that would be possible.
I have no idea whether or not this is true. But if it’s true, you are actually agreeing with his point. His point is that the very elaborate model with lots of different variables actually works the same as one with only X as an input, and you seem to be agreeing with that. The only qualification you have is that you say this was arrived at by assumption and not by experimentally finding values for X and then seeing how they relate to the outputs.
Well, maybe, but it changes nothing in his argument. His argument is still that the models are absurdly overcomplicated and have lots of unnecessary variables, when all they require is X. He is not saying anything about whether the values they assume for X are valid, nor is he saying how they arrived at them. He is just saying that a great mass of complications come down to something very simple in the end.
It is a bit like someone plotting cholera incidence. He includes a whole bunch of variables in a model, like ethnic groups, age of infection, season of the year, country… and also water contamination. How he arrives at his calculation of how contaminated the water is, makes no difference. Someone points out that you do not need any of the other variables.
To which his reply is, ah, water contamination is not an input, it’s a calculated factor.
To which the reply is, fine, but that is in fact the only thing that’s driving your models. And by the way, have you checked to see if the contamination you are calculating is found in the real world?

Nick Stokes
Reply to  michel
July 13, 2017 10:15 am

“His point is that the very elaborate model with lots of different variables actually works the same as one with only X as an input, and you seem to be agreeing with that.”
My point is that X wasn’t an input, but was calculated from the output. You can check the algebra here, where Forster et al made the process quite explicit. So saying that X, deduced from the GCM output, enables you to deduce that output, isn’t telling you anything. The “model” with X as only input, in fact can’t work without the GCM that provided X.

Reply to  michel
July 15, 2017 3:33 am

Nick writes

My point is that X wasn’t an input, but was calculated from the output.

Nick, this is fundamentally incorrect. You’re letting the complexity of the model’s calculation cloud your understanding. The forcing is attributable to the TOA imbalance. And the TOA imbalance is set to an “appropriate” value by tuning model parameters (primarily cloud related, for most models).
You appear to be arguing that there is no function in the GCM that takes X as a parameter to deduce T, and you’re right about that, but it totally misses the point.

Reply to  Nick Stokes
July 13, 2017 8:27 am

Nick “If you want a red team to pursue the blues on this, you need to establish first what the blue team is actually saying.”
Exactly, I couldn’t say it better myself. What exactly is the blue team saying? I have tried for years to get a definitive answer to that question.

Nick Stokes
Reply to  jinghis
July 13, 2017 10:19 am

“What exactly is the blue team saying?”
You could read what they say to find out. I have long commended Willis’ advice, given here in caps: “QUOTE THE EXACT WORDS YOU ARE DISCUSSING”. We need to hear what the blue team actually said that is characterised as the “central misunderstanding”.

Reply to  Nick Stokes
July 13, 2017 8:55 am

Nick Stokes: You don’t seem to be able to shake the notion that the forcings are an input from which the outputs are calculated. As I have noted here and elsewhere, the forcings are not inputs but diagnostics, and are frequently calculated from the outputs by some linear formula.
What does that mean? Everywhere we are warned that increasing the CO2 concentration by continuing to burn fossil fuel will cause an increase in global mean surface temperature; i.e. that CO2 is for sure a forcing.

Nick Stokes
Reply to  matthewrmarler
July 13, 2017 10:05 am

“What does that mean?”
Willis’ contention is that you can take some published forcing numbers and derive GCM output surface temperatures by simple formulae. My objection is that those forcing numbers were not the input from which the GCM output was calculated, but were in fact deduced from the output (and in some cases other data). So all the correspondence tells you is about the deduction process.
GCMs do take in CO2 concentrations and much other atmosphere information, and do indicate that GHGs cause warming. That is usually shown by running them with and without CO2 increase. But the quantitative estimate of GHGs as forcing is derived from other information, including GCM output. So the argument that outputs are simply related to forcings is circular.

Reply to  matthewrmarler
July 13, 2017 12:58 pm

Nick Stokes: Willis’ contention is that you can take some published forcing numbers and derive GCM output surface temperatures by simple formulae. My objection is that those forcing numbers were not the input from which the GCM output was calculated, but were in fact deduced from the output (and in some cases other data). So all the correspondence tells you is about the deduction process.
That isn’t what you wrote and you mischaracterize what Willis did: he showed by statistical analysis that the GCM output changes are nearly linear effects of the CO2 forcing changes, despite the complexity of the models.

Reply to  matthewrmarler
July 13, 2017 1:43 pm

This comment by Nick jogged an insight, or maybe a confusion (you decide – Sorry, Nick, if I confuse your clarification):
So the argument that outputs are simply related to forcings is circular.
Isn’t this one of the accusations made by some skeptics? — that forcings are derived in such a way that they support certain outputs? — that certain forcings are “anticipated” by the inputs (hence, by the people inputting them) in order to arrive at those outputs?

Reply to  matthewrmarler
July 13, 2017 1:54 pm

Nick writes

GCMs do take in CO2 concentrations and much other atmosphere information, and do indicate that GHGs cause warming.

And they do it using an imperfectly modelled atmosphere. They can’t model lapse rate accurately so they can’t hope to model changes to the lapse rate as a result of the GHG concentration changes. Consequently the effective forcing is more “set” than you think it is.

Nick Stokes
Reply to  matthewrmarler
July 13, 2017 5:06 pm

matthewrmarler,
“you mischaracterize what Willis did”
I’m not trying to characterise the work he did with CERES data. I’m talking about what he says earlier, which is the blue/red moral that he wants to draw from it. On its own, the CERES analysis does not appear to conflict greatly with any “blue” theory. My issue is the set of statements about someone believing that “temperature slavishly follows forcing”. What he has said in support of that in earlier posts is based, not on someone saying it, but on the correspondence between TOA forcing related to GHG and temperature in GCM output. And that is what he refers to here. My point is that that link is weak. What is needed to establish that the blue team has that belief is a quote of someone actually saying it.

RWturner
Reply to  Nick Stokes
July 13, 2017 11:13 am

Please, once and for all, inform us all: WHAT IS THE BLUE TEAM SAYING? You have all the time in the world to write lengthy diatribes but don’t want to answer this very basic question.

High_Octane_Paine
Reply to  Nick Stokes
July 13, 2017 10:50 pm

Nick Stokes, Willis Eschenbach etc are the reason people hear the term climatology and become disgusted at what fakes can do to a branch of science.

July 13, 2017 2:56 am

I have wondered what happened at TOA when the surface temperature dropped so drastically in 2008 – I can’t see anything very obvious here. The temperature dropped by about 0.7 C over the year. Any ideas why?

Butch2
July 13, 2017 3:04 am

Willis, as always, a complicated and well thought out post….BUT, the simple question is…Why the %$^# does anyone want to save the ice in the North ?? If I want ice, I’ll open my freezer door !
..P.S. Nothing much that lives in the frozen North or in my freezer depends on ICE !!

tony mcleod
Reply to  Butch2
July 13, 2017 5:35 am

That’s far simpler, Butch2, but I suspect yours is a rhetorical question.

Reply to  tony mcleod
July 13, 2017 6:18 am

Can I have a go at answering his rhetorical then?
Earth’s troposphere as it is today is something like a giant Stirling engine – the cold ends are necessary to keep the engine pumping.
Get rid of the ice and not only do you halt the engine, but you have overheated it. A stagnant, hot, steamy world we wouldn’t want to live in will be the result.

Reply to  Jack Davis
July 13, 2017 6:34 am

Earth’s troposphere as it is today is something like a giant Stirling engine – the cold ends are necessary to keep the engine pumping.
Get rid of the ice and not only do you halt the engine, but you have overheated it. A stagnant, hot, steamy world we wouldn’t want to live in will be the result.

You are right, and completely wrong at the same time. A Stirling engine is a good analogy (though I’d have to ponder the operation to decide if it’s representative or not); it’s just that the warm end is the surface, and the cold end is space, and it’s always cold.

Reply to  tony mcleod
July 13, 2017 6:46 am

I used the term ‘something like’ advisedly. A Stirling engine works on heat rejection – space is where the heat is rejected to; the cold ends of the engine are the poles.
Small point – glad you liked the analogy.

RWturner
Reply to  tony mcleod
July 13, 2017 11:15 am

Jack, the poles are radiators due to geometry. As far as I know no one has demonstrated how CO2 changes geometry.

Reply to  tony mcleod
July 13, 2017 2:20 pm

RWTurner,
“Jack, the poles are radiators due to geometry.”
Exactly – we have the good fortune to live in a well organised engine. Geometry is not changing CO2 levels, we are. We are raising the octane rating of the fuel to a level the engine cannot handle – or rather, produces an output we don’t want.

bitchilly
Reply to  tony mcleod
July 13, 2017 3:14 pm

Jack, you are forgetting that without ice at the North Pole more heat is lost to space. The ice acts as insulation for the ocean beneath.

Reply to  tony mcleod
July 13, 2017 8:01 pm

Bitchilly, that’s not so either. Just as in your gin and tonic, more ice is better if you want it chilled, and the driver of Earth’s powerful circulation system is the heat difference between lower latitudes and higher latitudes. The alternating annual melt and freeze at the poles is also part of the delicate dance. By heating the poles, we’re stuffing that up.

Dixon
Reply to  tony mcleod
July 13, 2017 11:37 pm

Jack, bad (second) analogy. In my Gin and Tonics, there is nowhere in the glass the melted ice can get to where it will refreeze.

Reply to  tony mcleod
July 14, 2017 3:22 am

Dixon, yes you’re right – but I enjoyed the G&T. What I should have said is yes, Bitchilly, the sea will radiate more energy, but at the same time it is absorbing far more than would the ice, which is reflective. As too much of the radiated energy is trapped in the greenhouse, the overall effect of losing sea ice is an increase in the rate of heat gain – which we don’t want.

Reply to  Jack Davis
July 14, 2017 5:08 am

Bitchilly, the sea will radiate more energy, but at the same time it is absorbing far more than would the ice,

At low incident angles open water has an albedo in the same range as ice. In summer, 3/4 of Arctic open water radiates far more than it receives, except near solar noon, and that for only a couple of months.
Open arctic water cools the planet.

July 13, 2017 3:07 am

The observed saturation effect above 28 °C (this happens in the WPWP) is explainable by the “Iris”, which is real from observations; see http://onlinelibrary.wiley.com/doi/10.1002/2016JD025827/abstract

Bear
July 13, 2017 3:29 am

The “knee” in the land data is really interesting. Latitude related?
The other thing I noticed was that you pointed out the El Nino at the end of the data. IIRC there was an El Nino about 2010 but it was much smaller and it doesn’t seem to be reflected in the TOA. Seems to imply there might be a threshold for the TOA to be affected?

Bloke down the pub
July 13, 2017 4:00 am

The first thing is to ask the right question.

Paul Penrose
Reply to  Bloke down the pub
July 13, 2017 12:40 pm

While I’m not a Will Smith fan, and I would have preferred a faithful screen play of “Caves of Steel”, that was a pretty good movie all the same.

Editor
July 13, 2017 4:05 am

Awesome job Willis! Kind of reminiscent of Spencer & Braswell.

Herbert
July 13, 2017 4:25 am

Percy W.Bridgman the Harvard physicist and winner of the 1946 Nobel Prize in the field of high pressure physics reminds everyone of the importance of verification in science and of the danger of talking about the future.
He believed in the “inscrutable character of the future”. He thought that statements about the future belonged in the category of pseudo-statements.
“I personally do not think that one should speak of making statements about the future. For me, a statement implies the possibility of verifying its truth, and the truth of a statement about the future cannot be verified.”
Verification is important, as he says, because “Where there is no possibility of verification, there can be no error and ‘truth’ becomes meaningless”. (“The Way Things Are”, P.W. Bridgman, 1959.)
Global Warming issues involve the projection of increasing levels of CO2 into the inscrutable future.
It is not possible to determine global temperature in advance by reference solely to the laws of chemistry and physics.
(h/t “The Age of Global Warming: A History”, Rupert Darwall.)

Reply to  Herbert
July 13, 2017 11:34 am

Very perceptive point from Mr. Bridgman. There are parts of the universe that we simply cannot know.

July 13, 2017 4:30 am

To me the central mistake in current thinking about climate is the idea that the atmosphere can somehow INCREASE the temperature of the surface, or even worse the deep oceans.
The atmosphere merely reduces energy loss to space
A few meters under our feet the temperature is completely set by geothermal energy. Same for the deep oceans. The sun only warms a few (centi)meters of the soil, and the upper 200 meters or so of the oceans.
The temperature of deeper soil/water is completely caused by the enormous amount of heat inside planet Earth.
Think of solar Joules INCREASING the temperature of pre-heated soil/water, instead of solar W/m^2 being in radiative balance using SB, and the whole climate system makes perfect sense.

Reply to  Ben Wouters
July 13, 2017 2:33 pm

Ben Wouters,
Sorry, that’s all arse about face. Being more thermally dense than air, soil and sea actively suck the heat trapped by CO2 out of the air. Soil radiates the heat back at night, but conduction and convection spread the heat throughout the ocean, to surprising depth.
We don’t live several meters below our feet, we live in the region between our feet and 2 meters above them.

Reply to  Jack Davis
July 13, 2017 2:44 pm

Care to explain why the temperature increases ~25 K for every km you go down below the surface?
(geothermal gradient)
Miracle CO2 at work?

Reply to  Jack Davis
July 13, 2017 3:13 pm

Ben – it’s under immense pressure and it’s radioactive. There’s heat remaining from Earth’s original thermal collapse. Nobody’s disputing it’s hot.

Reply to  Jack Davis
July 14, 2017 3:22 am

Jack Davis July 13, 2017 at 3:13 pm

Nobody’s disputing it’s hot.

Great. Then why is the 255K radiative balance temperature used, which assumes a body that would be at 0K without incoming radiation?
More relevant is the average surface temperature of our moon: 197K.
Do you actually believe that backradiation of the atmosphere is the explanation for the over 90K higher average temperatures on Earth?
If so, where are the backradiation panels? Almost twice the average radiation the sun delivers, according to K&T, available 24/7. It would be a perfect energy source if it were physical reality.

Reply to  Ben Wouters
July 15, 2017 4:57 am

Ben Wouters,
You say: “To me the central mistake in current thinking about climate is the idea that the atmosphere can somehow INCREASE the temperature of the surface, or even worse the deep oceans.”
Then in the next sentence you contradict yourself when you say: “The atmosphere merely reduces energy loss to space”
Well, exactly right. The introduction of an atmosphere to an atmosphere-less rocky planet REDUCES the rate at which energy can flow to space, thus causing the surface temperature to INCREASE to a higher level in order to maintain radiative balance. So no “central mistake” there…
Then you go off into an irrelevance spiral of nonsense about geothermal heat. You seem to misunderstand the difference between the QUANTITY of heat held by a body (a function of its thermal capacity) and the RATE at which the heat can flow away from that body (a function of its conductivity). The centre of the earth is very hot indeed. However the mean rate at which heat flows up to the surface is estimated at 0.087 watt/square metre. That’s 0.03 percent of the solar power absorbed by the Earth. So forget it. Please…
Finally, you ask Jack Davis: “Do you actually believe that backradiation of the atmosphere is the explanation for the over 90K higher average temperatures on Earth? If so, where are the backradiation panels? Almost twice the average radiation the sun delivers, according to K&T, available 24/7. It would be a perfect energy source if it were physical reality.”
This is a howler of the utmost naivety. It has been corrected by me and others on WUWT and elsewhere countless times. The well-known K&T energy balance diagram does NOT, repeat NOT, imply a downward flow of “almost twice the average radiation as the sun delivers”. That would be a violation of the second law of thermodynamics!
Just google “K&T diagram” and take another much closer look. According to their figures, the Sun delivers 161W/m2 radiation downwards to the earth’s surface and the earth’s surface radiates upwards just 63W/m2 (the other balancing upward flows are due to convection and evaporation). Your confusion may have arisen from the fact that the diagram shows radiative potentials, not radiative energy flows. You need to subtract the K&T downward radiative potential of 333W/m2 from the upward radiative potential of 396W/m2 to get the true energy flow figure of 63W/m2. Please go study the physics of radiation…


Reply to  David Cosserat
July 15, 2017 10:19 am

David Cosserat July 15, 2017 at 4:57 am

Then in the next sentence you contradict yourself when you say: “The atmosphere merely reduces energy loss to space”
Well, exactly right. The introduction of an atmosphere to an atmosphere-less rocky planet REDUCES the rate at which energy can flow to space, thus causing the surface temperature to INCREASE to a higher level in order to maintain radiative balance. So no “central mistake” there…

What happens on some rocky planet is not really relevant here. With an average surface temperature of ~290K and assuming emissivity = 1.0, Earth would radiate ~400 W/m^2 directly to space. Due to the atmosphere, Earth only emits ~240 W/m^2. Where I live this is called “reducing energy loss”.
On Earth the surface temperatures are NOT in radiative balance with incoming solar. Daytime temperatures on the moon, however, come close; nighttime temps there are much too high.
On planet Earth we have an ENERGY balance between incoming solar radiation and outgoing longwave radiation.

However the mean rate at which heat flows up to the surface is estimated at 0.087 watt/square metre.

For continental crust the average is more like 65 mW/m^2, but you don’t seem to understand how conduction works. For a flux to exist we must have a temperature difference. The flux is from the hot mantle through the crust to the surface. So the entire crust is heated by the hot mantle.
Sun only warms the upper (centi)meters of the soil a few degrees. The base temperature of the soil just below our feet is roughly equal to the average surface temperature at that location, and COMPLETELY caused by geothermal energy.
Interested to hear your explanation for the over 90K higher average surface temperature on Earth compared to that of the moon. (albedo is also lower on the moon!)

Reply to  Ben Wouters
July 16, 2017 7:59 am

Radiative POTENTIAL versus Radiative ENERGY FLOW
Unfortunately Ben Wouters’ reply (July 15, 2017 at 10:19am) is as incoherent as his original comment (July 13, 2017 at 4:30am) to which I had responded (July 15, 2017 at 4:57am). He has not directly addressed the points I made, instead just generating additional confusion. So I fear that further communication with him is unlikely to be productive.
However in the interests of others here who may have been puzzled by his ramblings…
1. The mean surface temperature of the earth has been estimated at about 288K (~15degC). So, using the Stefan-Boltzmann formula with emissivity = 1, we can calculate that the surface will assert a mean radiative POTENTIAL of about 390W/m2. This calculation concurs very closely with the K-T energy balance diagram value of 396W/m2.
2. The earth’s air is also warm. It contains radiative gases that collectively assert a downward radiative POTENTIAL towards the earth’s surface. Most of these downward contributions come from the region in the lower atmosphere close to the surface where emissions can potentially get through to the surface without being absorbed by other molecules on the way. The K-T diagram’s estimate for this downward radiative POTENTIAL is 333W/m2. This figure is similar to but somewhat less than the upward radiative POTENTIAL of 390W/m2. This is because the contributing atmospheric molecules are not all at the surface temperature of 288K. They are ranged at various heights, and, in accordance with the atmospheric lapse rate, temperatures are lower with increasing distance above the ground. Using the Stefan-Boltzmann formula, we find that a 333W/m2 downward radiative POTENTIAL would be asserted by a solid body having a temperature of 277K. So this figure can be regarded as the atmosphere’s effective surface temperature.
3. Now for the key point. Standard textbook thermodynamics (as opposed to Wouters in Wonderland Physics) tells us that, when two bodies assert radiative POTENTIALS towards one another, the rate at which radiative ENERGY is transferred between them is simply equal to the difference between the POTENTIALS.
4. So in the case of the earth’s surface-atmosphere interface, the radiative ENERGY transferred is 396 – 333 = 63W/m2 (as opposed to the colossal 333W/m2 downward flow of radiative ENERGY that Wouters erroneously claims the K&T diagram implies). Also note that the direction of the 63W/m2 radiative ENERGY FLOW is upwards not downwards, from the warmer earth at 288K to the slightly cooler atmosphere at its effective surface temperature of 277K. This, of course, is in full conformance with the Second Law of Thermodynamics.
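
For anyone who wants to verify points 1 to 4, the arithmetic is only a few lines (a sketch of the calculation in Python, using the K&T figures quoted above):

    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

    up_potential = SIGMA * 288 ** 4  # ~390 W/m^2 for the 288K surface
    T_eff = (333.0 / SIGMA) ** 0.25  # ~277 K effective atmosphere temp
    net_flow = 396 - 333             # 63 W/m^2 net radiative flow, upward
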
Like most of us here, Ben Wouters is sceptical of the case for CAGW, and undoubtedly his heart is in the right place. But, sadly, his enthusiasm for the cause is completely undermined by getting his physics so horribly wrong. His muddled approach is in danger of damaging the sceptical cause. I think this is dangerous to the extent that it can so easily give succour to climate alarmists.
_____________
P.S. He is also entirely wrong about geothermal heat. It contributes around 0.087W/m2 to the incoming energy FLOW to the earth’s surface from below, compared with the Sun’s incoming energy FLOW of 161W/m2 from above. Go figure…

Reply to  David Cosserat
July 16, 2017 11:24 am

from warmer earth at 288K to the slightly cooler atmosphere at its effective surface temperature of 277K.

except that isn’t what it is radiating to. The optical window is 70F or over 100F colder than the ground, depending on absolute humidity, all day long, as long as there are no clouds, which can reduce the difference to as little as 10F.
The rest of the spectrum also changes during the day, mostly because water vapor is storing energy in the day and releasing it at night to limit the drop in surface temps. That is your GH effect, and because the release of the stored energy depends on air temps dropping near the dew point, and doesn’t release much when temps are not near dew point, this is a negative feedback on CO2. And this shows this in action. [image]
And when you look at the overall impact on surface stations you see it controls daily Min T, not CO2. [image]

Reply to  David Cosserat
July 17, 2017 1:19 am

P.S. He is also entirely wrong about geothermal heat. It contributes around 0.087W/m2 to the incoming energy FLOW to the earth’s surface from below, compared with the Sun’s incoming energy FLOW of 161W/m2 from above. Go figure…

Since you don’t seem to understand the difference between the geothermal FLUX through the crust and the geothermal TEMPERATURE of that crust let’s have a look at the deep ocean floor.
Geothermal flux through the oceanic crust is ~101 mW/m^2.
http://onlinelibrary.wiley.com/doi/10.1029/93RG01249/abstract
For energy to flow from the crust to the deep ocean bottom water the TEMPERATURE of the ocean floor has to be slightly higher than the temperature of that water. So the ENTIRE oceanic crust is warmer than the deep ocean water, otherwise the conductive flux would not exist.
Same for the continental crust. What is apparently confusing is that the sun warms the upper (centi)meters of that crust, but the crust just below our feet is ENTIRELY warmed from below. The sun only increases the temperature of the topsoil a bit above the geothermal temperature.
If we removed the atmosphere of planet Earth, the surface would radiate directly to space and lose ~400 W/m^2 (and obviously start to cool down rapidly, since the sun does not provide that kind of energy).
WITH its atmosphere, Earth emits only ~240 W/m^2 to space, which the sun can match, so we have a balanced ENERGY budget. Normally this is described as the atmosphere "reducing energy loss".
If you are unable to understand that solar radiation can increase the temperature of the geothermally pre-heated soil a few degrees to the observed surface temperatures, I’m afraid nothing will make you understand.

Reply to  Ben Wouters
July 16, 2017 9:05 am

Radiative POTENTIAL versus Radiative ENERGY FLOW
Unfortunately Ben Wouters’ reply (July 15, 2017 at 10:19am) is as incoherent as his original comment (July 13, 2017 at 4:30am) to which I had responded (July 15, 2017 at 4:57am). He has not directly addressed the points I made, instead just generating additional confusion. So I fear that further communication with him is unlikely to be productive.
However, in the interests of others here who may have been puzzled by his ramblings, I refer them to the four numbered points and the postscript I set out above.

Reply to  Ben Wouters
July 16, 2017 11:57 am

Willis,
Re. your query of July 16, 2017 at 8:58 am, I’m very glad to have this discussion with you and others. It is something I have been banging on about for a long time without much response. I think it is the definitional key to stopping some earnest well-meaning sceptics falling into the trap of looking ridiculous in the eyes of the CAGW crowd, thereby endangering all our sceptical contributions to the climate debate.
I do believe the term 'back radiation' is a useful way of characterising a phenomenon in the real physical world, but only if it means the POTENTIAL to radiate energy (in W/m2, as calculated by the S-B equation R = kT^4) from a cooler body in the direction of a warmer body, and not the ACTUAL transfer of radiative energy. Likewise, for consistency, one can define 'forward radiation' to mean the POTENTIAL to radiate from a warmer body towards a cooler body, but again, not the actual transfer of radiative energy.
You showed both these POTENTIALs in your famous 'steel greenhouse' articles so many years ago, perhaps without realising at the time, as I did not, that they could best be considered as potentials, not flows.
Given these definitions, the ACTUAL radiative ENERGY FLOW (also in W/m2) that takes place between two opposing bodies is then simply the difference between the two independently calculated radiative POTENTIALS:
I = k1·T1^4 – k2·T2^4
and the direction of the resultant energy flow is (by definition) always from the warmer to the cooler body, thus satisfying the 2LT.
There is nothing remotely revolutionary about this. The equations are standard and can be found in every thermodynamics textbook. The R = kT^4 equation is actually an explanatory abstraction representing a non-physical situation. It is akin, say, to the theoretical concept of a magnetic monopole. But in the real world, all bodies exert radiative POTENTIALs towards one (or more*) other bodies, which in turn exert radiative POTENTIAL(s) back. This is even true in space where a body might be exerting its radiative POTENTIAL only towards the cosmic microwave background – but the latter is equivalent to an extremely cold body exerting back a radiative POTENTIAL of around 0.000003W/m2 corresponding to a temperature of 2.7K.
So my concept of a radiative POTENTIAL is simply a definitional approach: a reminder for climate sceptics everywhere, to help prevent them making arses of themselves, as I am afraid they still do every now and then, by attempting to ridicule warmists such as Trenberth, or whoever, for saying (which they certainly do not) that the surface of our planet is being kept warm by energy flowing from a magical 'back radiation' source of 333W/m2; or that pyrgeometer measuring instruments are fakes. And so on, and on, in a vain effort to over-complicate what is actually an elegant and simple situation.
As we know, the surface is actually being kept warm by 161W/m2 of incoming solar radiation, balanced by outgoing energy flows of 63W/m2 radiation + 80W/m2 evapotranspiration + 17W/m2 thermals, or thereabouts. (K&T say that the 1W/m2 discrepancy is heating the planet, which sounds to me like fiddling with the homework.)
[*Note1: The mathematics for a body that asserts a radiative POTENTIAL towards more than one other body in fractional proportions such that, consequently, those other bodies assert radiative potentials towards the body in the same proportions, is dealt with using fractional multipliers called View Factors. It all fits together with what I have said here but it is not a relevant complication in the case of the K-T diagram issue where only 2 bodies – atmosphere and surface – are involved. I only mention it because somebody is bound to bring it up as a killer spoiler argument against what I am saying.]
[Note 2: Modern physicists don't need all this stuff because they deal in photon streams. So they are quite happy to think of back radiation and forward radiation as real physical flows of energy, and can't understand what all the fuss is about. But since one flow is always netted off against the other, they come to no different conclusions. However, I am not interested in that debate, because modern physicists don't (generally) make arses of themselves over this issue.]
All the best,
David

Dave Fair
Reply to  David Cosserat
July 16, 2017 5:54 pm

David, happily you have articulated a concept I had struggled with since viewing Trenberth’s diagram. There is no “flow” from the atmosphere to the surface. Your description of “potentials” neatly describes the physics of energy transfer from warmer to cooler.
Within the margins of error of total energy flow in the real atmosphere, CO2 plays an unmeasurable role. 0.6 +/- 17 W/m^2 is laughable "science." Please note the most recent diagrams don't have the +/- 17 listed anywhere.

Reply to  Willis Eschenbach
July 17, 2017 7:22 am

Give it another shot. Start by saying “Radiation potential is …” and go on from there.

I read it as the equivalent of a voltage potential, i.e. an S-B flux potential. Now make what you will of that, and/or wait for David.
This wouldn't be too unusual a usage in the antenna field, where you're looking at field strength and such.

Reply to  David Cosserat
July 17, 2017 5:00 am

Willis Eschenbach July 16, 2017 at 6:41 pm

The fact that it is real is obvious from the fact that downwelling radiation is MEASURED EVERY DAY ALL AROUND THE WORLD.

The pyrgeometers used are basically IR thermometers, with a filter on top that allows only certain IR bands to pass. From the measured temperature a flux is CALCULATED.
Ever been to a sauna? Air temperature 90 centigrade or so. Most people survive saunas very well.
Jump in a pool of water at 90 centigrade and your chances of survival are negligible.
Air (even with a lot of water vapor) has a much lower energy density than, e.g., water at the same temperature.
I don't have a pyrgeometer, but I'm pretty sure that if you point one at some air and then at some water at the same temperature, the readings would be the same as well.

Reply to  Ben Wouters
July 17, 2017 4:55 pm

Willis,
Thanks for responding.
The concept of a 'potential' in physics is well founded – e.g. the potential of a battery to pass energy to another system (unit: volt); the potential of a wound-up spring to do work (unit: joule); the potential for the water in a reservoir to flow down to a turbine (unit: metre of head). Engineers find it useful to discuss and measure all these potentials and many others.
In the textbooks, the standard formula Qdot[Watts] = kAT^4 is introduced to students to specify the maximum radiative energy flow rate from a body. This is for the theoretical case where there is no other body radiating back. In other words it is the potential to radiate into a hypothetical black universe at 0K, which does not exist. Beyond that, for practical applications students are taught that they must offset each body’s radiation against the other to obtain the net energy transfer.
I do share your frustration that some people will not buy the idea that, in a radiative interaction between two bodies, the 'photonic' paradigm is correct, namely that there is a two-way energy transfer (where the hotter body always wins, so no violation of the 2nd law occurs). But the problem remains that some people forget (or have never learned) that radiation is an interaction between two bodies, and that the cooler body is not just a passive recipient of whatever is thrown at it. On the contrary, the magnitude of its own radiative potential is absolutely pivotal in determining the RATE at which the transfer takes place. A failure to appreciate this leads to the claim that radiative gases in the atmosphere 'do not act like a blanket slowing down heat loss to space'. Whereas that is EXACTLY what they do – see the sketch below for the numbers.
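To put numbers on the blanket point, here is a minimal sketch (emissivity = 1 assumed, with the 277K effective atmospheric temperature discussed earlier): the same 288K surface loses heat roughly seven times faster facing cold space than facing the radiating atmosphere, because the cooler body's potential sets the rate.

# The cooler body's own potential sets the RATE of net heat loss (emissivity = 1).
SIGMA = 5.670e-8  # W/m^2/K^4

def net_loss(T_hot, T_cold):
    # Net energy flow (W/m^2) from the hotter body to the cooler one.
    return SIGMA * (T_hot ** 4 - T_cold ** 4)

print(net_loss(288.0, 277.0))  # ~56 W/m^2: surface facing the atmosphere
print(net_loss(288.0, 2.7))    # ~390 W/m^2: surface facing only deep space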
Consequently, people have put an enormous amount of effort into offering bizarre alternative theories, such as Ben Wouters' nonsense theory that geothermal energy (0.087W/m2) is the source of the Earth's surface warming (boosted, presumably, just a little by the Sun's 161W/m2). And his claim that the K-T diagram is crazily wrong because it depicts 333W/m2 of radiative energy coming from back-radiation panels in the sky.
He is not alone. Hence my effort to find a way of explaining to my fellow sceptics that the K-T diagram is not conceptually wrong and that they must stop knocking it with baseless objections that just display their ignorance. Doing so does harm to the credibility of the climate sceptic cause, which I believe, despite the Wouters of this world, is growing stronger every day.
Cheers
David

Reply to  David Cosserat
July 19, 2017 3:39 am

David Cosserat July 17, 2017 at 4:55 pm

Consequently, people have put enormous amount of effort into offering bizarre alternative theories, such as Ben Wouters’ nonsense theory that geothermal energy (0.087W/m2) is the source of the Earth’s surface warming (boosted presumably just a little by the Sun’s 161W/m2).

Apparently still clueless about the difference between TEMPERATURE and FLUX.
This is stuff we learn in high school over here.
quick google: http://www.ewp.rpi.edu/hartford/~ernesto/F2014/MMEES/Papers/ENERGY/7AlternativeEnergy/Ground/Florides-GroundTemperatureMeasurement.pdf
Care to explain why the temperatures in deep mines are so much higher than the surface?
https://en.wikipedia.org/wiki/TauTona_Mine
Sun is not warming a blackbody from 0K to 255K establishing radiative balance.
It just increases the temperature of the surface a bit above the GEOTHERMALLY caused base temperature.
Problem is the people who believe the thin, cold, low density, low energy content atmosphere can somehow INCREASE the surface temperature of soil and oceans.
The atmosphere just reduces the energy loss to space. Period.

Reply to  Ben Wouters
July 18, 2017 2:50 am

Hi micro6500,
At July 17, 2017 at 7.22am you said to Willis: I read [radiative potential] as the equivalent of a voltage potential. ie SB flux potential. Now make what you will of that, and or wait for David. This wouldn’t be too unusual a use in an antenna field where you’re looking at field strength, and such.
Right on the money! As an electrical engineer I applaud your example.
Another electrical example is the concept of ‘back emf’. The term ’emf’ (electromotive force) is a synonym for ‘voltage’, and so is also measured in volts. In practice, the term is typically used for the counter-voltage, called the ‘back emf’ that is asserted with opposite polarity by a recipient of electrical energy, such as an electric motor, towards its electrical energy source. This effective reduction in net voltage (not measurable on the wire) limits the rate of energy transfer from the source to the sink:
effective voltage = source emf – back emf
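As a toy illustration of the analogy (the motor constants here are made-up values, purely illustrative): the net potential difference sets the rate of energy transfer, just as the difference between two radiative potentials sets the net radiative flow.

# Toy DC-motor sketch: a rising back emf throttles the net 'flow' from the source.
V_SOURCE = 24.0   # source emf, volts (assumed)
R_WINDING = 2.0   # winding resistance, ohms (assumed)

for v_back in (0.0, 12.0, 20.0, 23.9):          # back emf rising...
    current = (V_SOURCE - v_back) / R_WINDING   # ...shrinks the net transfer
    print(f"back emf {v_back:4.1f} V -> current {current:5.2f} A")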
What a wonderful analogy to the ‘back radiation’ effect.
All the best,
David

Reply to  David Cosserat
July 18, 2017 6:12 am

Another electrical example is the concept of ‘back emf’.

In motors, it's the back voltage generated as one of the magnetic fields collapses. So while it repeats, it is short-lived. And this kind of fits what the atmosphere does: it self-regulates temperatures at the surface late at night by stealing energy from water vapor.
The surface is the regulated side of a heat engine using water as the working fluid that cycles once a day.
Think about that 🙂

Reply to  David Cosserat
July 18, 2017 6:26 am

Oh, and because it's temperature-regulated by dew point, changes to CO2 just change "when" water vapor warming turns on at night. Since it's nonlinear, as the days get a little longer CO2 no longer has an effect on min T.
I'm going to keep trying to get people to understand this. Water vapor regulates air temps; CO2, while a radiative gas, has little to no effect when changed, because water vapor just cancels it out.
I know some of you can get this if you just think about it.
There is no global warming from CO2, and there is no need to try and average 140 million temps to see what they are doing. It's a simple logic problem, where everyone already knows what the results are; they just don't realize it's actively regulated.
And David, it goes to your potentials: the optical window, when it's clear out, is open to space all day long, and it's cold! I've seen it over 100F colder than the ground on a sunny day.
There’s always a cold sink, so why does it nearly stop cooling in the middle of a clear night?

Reply to  Willis Eschenbach
July 19, 2017 6:46 am

Heck, you can look at the dictionary definition of “potential” to see that it doesn’t apply:
po·ten·tial (pəˈten(t)SHəl), adjective: 1. having or showing the capacity to become or develop into something in the future. "a two-pronged campaign to woo potential customers"
Just exactly what is it that you expect a radiating object to “become or develop into something in the future”?
So no, there’s no “potential” in thermal radiation. It just radiates according to the formula, period. Nothing in the slightest about it that says “potential”.

Of course there's potential, Willis. First, a better definition:
https://isaacphysics.org/s/Xlrabk

A potential is a scalar field that describes the potential energy per unit of some quantity due to a vector field. It is closely related to potential energy. Just like potential energy, the field potential at a point can only be defined with respect to a zero (reference) point, while differences in field potential are independent of the choice of zero point.

And my IR thermometer measures a temperature field potential. And when the temperature potential between two objects is large enough, you can turn that potential into work.


Reply to  David Cosserat
July 23, 2017 7:57 am

David Cosserat July 20, 2017 at 7:22 am

Having, as they think, demolished K-T, they then proceed to provide crazy alternative explanations for why the surface is warmer than it would be with no atmosphere, such as Wouters' idea that geothermal heat (at 0.087W/m2) is the true cause of the earth's elevated atmospheric surface temperature.

Suggest you go back to school and study conduction. The HOT mantle loses energy by conduction through the crust, and at the surface a flux of ~65 mW/m^2 remains (continental crust). But the ENTIRE crust is warmed from below. So the soil just below our feet is WARMER than the surface due to GEOTHERMAL ENERGY.
Consequently the whole idea that the sun is unable to warm the surface to our observed values is wrong.
The atmosphere does NOT need to warm the surface above what the sun has already done; it merely reduces energy loss to space.
Still awaiting your explanation for the ~25K/km temperature increase going down into the crust, or the 330K-plus temperatures of the rock walls in deep mines.
Since you dismiss geothermal, what is the real cause?
Backconductive potential perhaps, or deep-penetrating backradiation from the atmosphere?
The formulas used in the pyrgeometers that measure backradiation don't seem to have a factor for emissivity. Is the emissivity of the atmosphere 1.0? Higher than, e.g., ocean water?
PS posting the same nonsense twice as you do regularly doesn’t make it any more credible.

Reply to  Ben Wouters
July 20, 2017 7:22 am

Willis,
Terminology is only useful if it is not misleading. In electrical engineering it is common and perfectly sensible to talk about the ‘potential difference’ (measured in volts), between the plus and minus terminals of a source of electrical power such as a battery, irrespective of whether those terminals are (or are not) connected to a circuit. In the one case energy flows. In the other case it does not.
Despite the above, I do understand the point you are making in the particular case of electromagnetic radiation where the modern photonic theory of EMR assumes there are real flows of energy-carrying photons in both directions between two radiating bodies, with only the difference resulting in net energy transfer, always from hotter to cooler. I subscribe to that theory too, as do most professional engineers and physicists.
But I think we are both equally sick and tired of climate sceptics who bang on about back radiation (meaning energy flow from a cooler to a hotter body) being unreal ‘because it violates the 2LT’. For some reason they simply cannot grasp the concept that the back and forth radiation flows between two bodies are inextricably interlinked. Yet this is geometrically undeniable because they are, to use the jargon, ‘in the view’ of each other. Given this reality, the net flow of energy is inevitably from the hotter to the cooler surface, and there is no violation of the 2LT.
So they look at the K-T diagram, and see lots of energy flows with numbers on them. In particular, their eyes alight on the 'huge' 333W/m2 'back' radiation figure from atmosphere to surface and treat it as if it were a stand-alone independent flow that they can cast mighty scorn upon. In their fury they seem blinded to the greater (and inextricably interlinked) figure of 396W/m2 of forward radiation from the surface, which results in a modest net radiation flow upwards of only 63W/m2.
Having, as they think, demolished K-T, they then proceed to provide crazy alternative explanations for why the surface is warmer than it would be with no atmosphere, such as Wouters' idea that geothermal heat (at 0.087W/m2) is the true cause of the earth's elevated atmospheric surface temperature.
There’s probably no hope of changing such people’s minds, but, in a modest attempt to help others falling into the same intellectual trap, I was simply offering an alternative way of looking at the issue for people who are unconvinced by, or indeed unaware of, statistical thermodynamics.
All the best
David

steve
July 13, 2017 4:32 am

It is important to note that, because the system is inhomogeneous, the radiation depends on the average of the T^4 values while the temperature depends on the average of the T values. With area-weighted averages for a large inhomogeneous system, one can easily show that it is possible to change the average T in one direction and simultaneously have the average T^4 move in the other direction, within a few kelvin. This can be shown with a simple Excel spreadsheet (or the short script below). The large disparity in temperatures around the surface matters quite a bit when discussing total radiation from the surface.
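For readers without a spreadsheet to hand, here is a minimal sketch of steve's point using two equal-area regions with made-up temperatures: narrowing the spread lets the mean temperature rise while the mean of T^4 (and hence the emitted radiation) falls, because T^4 is convex.

# Mean T can rise while mean T^4 (emitted flux) falls, since T^4 is convex.
SIGMA = 5.670e-8  # W/m^2/K^4

def mean_T(temps):
    return sum(temps) / len(temps)

def mean_flux(temps):
    return sum(SIGMA * t ** 4 for t in temps) / len(temps)

before = [230.0, 300.0]  # wide spread of temperatures (K), made up
after = [260.0, 272.0]   # narrower spread, slightly warmer mean

print(mean_T(before), mean_T(after))        # 265.0 -> 266.0 K: mean T UP
print(mean_flux(before), mean_flux(after))  # ~309 -> ~285 W/m^2: radiation DOWN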

ferdberple
Reply to  steve
July 13, 2017 11:47 am

Exactly. Average temperature is a poor metric for climate change because it assumes the spatial distribution of temperature will remain unchanged as the climate changes, which is nonsense.

Reply to  ferdberple
July 16, 2017 3:55 pm

For comparisons with the means calculated from, e.g., TOA power measurements, temperatures should be converted to energies, averaged, then converted back.
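In code, that prescription might look like the following sketch (strictly, the temperatures are converted to S-B fluxes rather than energies, with emissivity = 1 assumed):

# Flux-weighted mean temperature: convert T -> sigma*T^4, average, invert.
SIGMA = 5.670e-8  # W/m^2/K^4

def effective_mean_temperature(temps_kelvin):
    mean_flux = sum(SIGMA * t ** 4 for t in temps_kelvin) / len(temps_kelvin)
    return (mean_flux / SIGMA) ** 0.25

print(effective_mean_temperature([230.0, 300.0]))  # ~271.7 K vs simple mean 265 K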

Reply to  Bob Armstrong
July 17, 2017 7:26 am

…temperatures should be converted to energies, averaged, then converted back.

This is what I started doing with my last run.
But after spending 10 years looking at surface data, it was sort of a waste of time. It clearly shows that the temperature record is not being forced by a slight, increasing forcing; it shows that when water vapor levels drop, air temps drop like a rock; and relative humidity is actually going down slightly, which alone is evidence that there is little warming from the increases in CO2.
You only need to look at the temperature drop and radiation at night under clear, calm skies to see that CO2 has little to no effect on min T (and surface data confirms it follows dew point).
At night, water vapor actively tries to reduce how cold it gets. Deserts and tropical jungles are extreme examples of this in operation. That warm muggy feel is water vapor condensing and liberating its stored heat of evaporation, which helps warm the air and changes the fundamental cooling rate; this is on top of the fourth-power reduction. Plus, most people actually use the wrong equilibrium temp: the optical window, half of the spectrum space, is clear to space for all bands except one water vapor line, and when it's clear with low humidity it's 100F or more colder than the surface; and it tracks air temps, so it's still 100F colder at 5 am, just as it was at 6 pm.
You can't see this process optically (and no, it is not fog!), but I captured a short video of the road in the afternoon after a short shower.
https://micro6500blog.files.wordpress.com/2017/06/20170626_185905.mp4
Well, at night as the atmospheric column cools, water vapor will start to sink, condense, and re-evaporate, and you can see it on this hot asphalt after rain. And that same asphalt is still a lot warmer than the grass at 5 am.

Luis Anastasia
Reply to  ferdberple
July 16, 2017 4:06 pm

I don't think you can convert temperatures to energy. Using S-B, you can convert them to power, but we all know power and energy are two different things.

Reply to  Luis Anastasia
July 17, 2017 6:40 am

I don't think you can convert temperatures to energy. Using S-B, you can convert them to power, but we all know power and energy are two different things.

No, it's not converted to power; it's converted to an instantaneous flux (power per unit area). To get energy, you multiply that flux by area and time.

Thin Air
Reply to  steve
July 13, 2017 12:23 pm

Surely the geniuses who built all the sophisticated global climate models (and spent many $100M doing so over the last 40 years) included that very basic algebraic knowledge. Surely!!!
But does anyone here among WUWT commenters know if that is the case?
And if they did, they surely have NOT done it at sufficient granularity (e.g., for the hundreds of individual cumulonimbus clouds from the tropics up to the mid-latitudes on any given afternoon). I am looking at about 10 of them from my window now.

Thin Air
Reply to  Thin Air
July 13, 2017 12:28 pm

I meant "hundreds of thousands" of cumulonimbus clouds, wherever a summer afternoon is happening on the planet (and that happens constantly, across some large regions, over land and parts of the ocean).

steve
Reply to  Thin Air
July 13, 2017 5:28 pm

Keep in mind that the delta T under consideration is roughly 1 K out of 288 K. Any calculation of the IR spectrum of CO2 in the atmosphere has to be good to three 9's, or it isn't applicable to the problem.
Add to this the spatial, time, and temperature inhomogeneities already mentioned. Now consider that T only applies to the kinetic component of the energy. Energy comes in as kinetic, but is partitioned into kinetic and potential once it gets to the planet. All by itself this partitioning will cause a drop in observed radiant T. We know it partitions, because coal is chemically stored solar energy, and we have quite a bit of it. To calculate temperature changes of the order that matters to the "global warming" problem, we need to understand the spatial distribution of T, the changes in that spatial distribution, and the partition of KE into KE and PE within the system, and how the partitioning changes with time; and we have to know all of this at least well enough to calculate a T change (KE only) to about 1 part per thousand. That's for a coarse guess. To really nail it down, it would be good to have another decimal place on that calculation. It looks like the entire effect is in the noise of the calculation. If so, that would explain the variation seen in the calculations. Within a few kelvin, you could end up anywhere.

Reply to  steve
July 18, 2017 6:28 am

It also means that if you do not measure the entire planet 24×7, you do not really know the outgoing radiation – certainly not to 0.1 W/m^2.

DHR
July 13, 2017 4:45 am

“…a doubling of CO2 to 800 ppmv would warm the earth by about two-thirds of a degree …”
Isn’t that what Lindzen has been saying for a long time? Is it for the same reason?

Reply to  DHR
July 13, 2017 5:11 am

Lindzen and Choi came at this from a slightly different perspective, if I remember correctly. I think this is more akin to Spencer and Braswell. Both came up with climate sensitivities less than 1°C. But it's been a while since I read either paper… And I'm not sure I understood Spencer's methodology very well.
Interestingly, Trenberth et al’s rebuttal to Lindzen and Choi only managed to push the climate sensitivity up to 2.3°C. Both Lindzen’s and Trenberth’s sensitivity estimates were sensitive to the time range of the analysis.

Reply to  David Middleton
July 13, 2017 10:28 am

I studied both papers, and think both are flawed. They rely on arbitrary lags. Both papers have also been pretty much refuted by later papers.

RWturner
Reply to  David Middleton
July 13, 2017 11:21 am

Yet observations seem to support the lower climate sensitivity estimates. I’ll go with observations over other papers.

Reply to  RWturner
July 13, 2017 2:50 pm

All day long. Observations deliver a climate sensitivity from 0.5 to 1.75 C (2.35 in the case of Trenberth’s cherry picking). Models deliver >3 C.

July 13, 2017 5:07 am

I'm not sure if it's the source of the hook in the land data, but land temps are actively regulated over the 24-hour solar cycle, and cooling is very nonlinear.
During the night, sensible heat from water vapor condensing in the collapsing atmospheric column at the surface slows cooling, regulating min T to the dew point.

sonofametman
Reply to  micro6500
July 13, 2017 5:59 am

Yes, I experienced this when I lived in Bahrain, which has an oppressive summer climate. High temps then average 99F, but the humidity is very high, and with a dew point of 78F the night-time low only gets down to 88F. Not very comfortable.

ralfellis
July 13, 2017 5:10 am

Interesting, as usual, Willis.
For fig 5 you said “First, correlation over the land is slightly positive, and over the ocean, it is slightly negative.”
Did you mean "First, correlation over the ocean is slightly positive, and over the land, it is slightly negative."?
R

Bad Andrew
July 13, 2017 5:19 am

“If you want a red team to pursue the blues on this, you need to establish first what the blue team is actually saying.”
Of course. If someone criticizes what you say, just say you didn’t say that. You’ll never be wrong, Racehorse.
Andrew

July 13, 2017 5:39 am

Excellent! Willis, you are very good at making your analysis understandable to laymen. I hate to say "dumbing down", but you are consistently good at dumbing topics down, which is really necessary to connect with the broad sweep of society, i.e., laypersons. To the community who take the time to read and comment, bravo! But may I suggest you take a lesson from Willis and recognize that your excellent comments need to be "dumbed down" in language only, so that laypersons can reap the value of them. Use small words to reach a larger audience. I love this website! Thank you.

Dr Deanster
July 13, 2017 5:52 am

Willis ….. I think your Figure 1 and Figure 2 are the smoking gun that the temperature records are cooked. They should agree! I remember work by Lindzen, and maybe Spencer, that shows TOA corresponds to SST. Yet ….. F1 and F2 don't agree here.

July 13, 2017 6:00 am

Shouldn’t we treat the TOA data analogously to an exterior calculus integration over the whole planet? If more energy goes in than comes out, the interior will be heating up. That flux imbalance seems to be the case. There is enough weirdness inside that integration to account for the fact that we don’t know exactly how the excess energy is sequestered on the surface – but we know it’s lurking somewhere.

Tom Dayton
Reply to  Jack Davis
July 13, 2017 6:58 am

Jack Davis, exactly so. A more pedestrian example is a bathtub with more water coming in than going out.
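A minimal integrator sketch of the bathtub picture, assuming a constant 0.7 W/m^2 imbalance absorbed by a 700 m ocean layer (both figures are illustrative assumptions, not numbers from this thread):

# Bathtub model: a steady in-minus-out imbalance accumulates as stored heat.
SECONDS_PER_YEAR = 3.156e7
LAYER_DEPTH = 700.0    # m of ocean assumed to absorb the excess
VOL_HEAT_CAP = 4.0e6   # J/m^3/K, volumetric heat capacity of seawater

imbalance = 0.7        # W/m^2, assumed constant TOA imbalance

# Warming of the layer after a decade, per square metre of ocean:
delta_T = imbalance * SECONDS_PER_YEAR * 10 / (LAYER_DEPTH * VOL_HEAT_CAP)
print(f"{delta_T:.2f} K per decade")  # ~0.08 K: small, but it accumulates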

Reply to  Tom Dayton
July 13, 2017 2:49 pm

Yeah, Tom, I should have used that. It was the talk earlier on about quantum mechanics, and how we can make precise predictions of useful outcomes (the phone I'm on, for instance) without knowing what is actually going on down at the base of reality. I tried to apply that to the planet. Once you do that, a lot of the argument here is seen as sophistry – as 'how many angels can dance on the head of a pin' territory.

old construction worker
July 13, 2017 6:06 am

Let me know when you find the "Hot Spot"; then I may start worrying about "CO2-induced global warming".

Scott Scarborough
July 13, 2017 6:13 am

"There are some interesting results there. First, correlation over the land is slightly positive, and over the ocean, it is slightly negative." The quote is from the article. The color-coded plot of the earth above this quote seems to indicate the opposite: correlation over the land is slightly negative, and over the ocean it is slightly positive.

Bill Illis
July 13, 2017 6:29 am

Interesting that if you take the Earth's average surface emission of 390 W/m2 (or 15.0C in temperature) and increase that emission level by 1.0 W/m2, the temperature should increase by 0.18C according to the Stefan-Boltzmann equations. That is exactly what Willis calculated from the CERES TOA radiation.
The issue in climate science is that the theory works not from the surface but from the average emission level that balances incoming solar: 240 W/m2, or -18C. In addition, the theory is that for every 1.0 W/m2 from GHGs, you get feedbacks of another 2.0 W/m2.
Now the combination of these two changes can be calculated as 0.81C per 1.0 W/m2 of GHGs. And they just stick with that. Fundamentally flawed.
Go back to actually measuring what is happening; go back to the surface, which is what we are concerned with, and actually measure how the proposed feedbacks are operating. Use the Stefan-Boltzmann equations for the calculations, because these have been proven to work perfectly everywhere in the universe.
This is what real science would do and what Willis has done here.
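A quick sketch of the two sensitivities quoted above, using the Stefan-Boltzmann derivative dT/dF = T/(4F), with emissivity = 1 assumed; the 3x multiplier is the feedback assumption Bill describes, not a derived number:

# S-B sensitivity: differentiating F = sigma*T^4 gives dT/dF = T / (4F).
def dT_per_Wm2(T_kelvin, flux):
    return T_kelvin / (4.0 * flux)

print(dT_per_Wm2(288.0, 390.0))        # ~0.18 C per W/m^2 at the surface
print(dT_per_Wm2(255.0, 240.0))        # ~0.27 C per W/m^2 at the emission level
print(3.0 * dT_per_Wm2(255.0, 240.0))  # ~0.80 C with the claimed 3x feedback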

Reply to  Bill Illis
July 13, 2017 9:00 am

On average, each square meter gets 3,741 WHr per day, i.e. 155.8 W/m^2, and the average temp goes up 9.8C.
That's 0.06C per W/m^2, measured at the surface with PMOD-averaged TSI.
Based on the Air Force's surface data summary.

Reply to  micro6500
July 13, 2017 9:10 am

This is basically using the seasonal change in forcing in the hemispheres and the resulting change in temps: [chart]
The units are degrees F per WHr/day; to get C per W/m^2, divide by 13.3 (/24 for day to hour, ×5/9 for F to C).

Reply to  micro6500
July 13, 2017 9:11 am

[chart]

RWturner
Reply to  Bill Illis
July 13, 2017 11:25 am

” In addition, the theory is that for every 1.0 W/m2 from GHGs, you get feedbacks of another 2.0 W/m2.”
And the feedbacks are yet more GHGs, which in turn continue the feedback, which we know is pure sophistry. Never once in Earth’s 4 billion year history of climate has a runaway greenhouse effect occurred.

John
July 13, 2017 6:30 am

One of the basic mistakes in this whole CO2 hypothesis is the assumption that it is well mixed. Yes, in a closed container in a lab, it would be well mixed. But most CO2 is generated at the surface and is sunk at the surface. Does ANYONE actually believe that the percentage of CO2 at the surface is, uh, well mixed? If most of the CO2 concentration is at the surface, and it is, then why do the models use the other end of the atmosphere – the top?

Ed Bo
Reply to  John
July 13, 2017 7:50 am

CO2 IS well mixed — it doesn’t have to be perfectly mixed to be “well mixed”. The constant churning of the atmosphere through day/night heating and cooling, winds, and seasons sees to that. The concentrations at high altitudes in the troposphere are not that different from typical concentrations at the surface. This stuff is easy to measure (unlike many other climate-related variables).

Greg Goodman
Reply to  Ed Bo
July 13, 2017 8:42 am

Yes, a variation of a few ppmv around an average of 400 ppmv IS well mixed.

richard verney
Reply to  Ed Bo
July 13, 2017 4:40 pm

But at the surface, i.e., at low altitudes, one can see variations of several hundred ppm.
See for example:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/giessen_2005-07-14.jpg
And
http://www.ferdinand-engelbeen.be/klimaat/klim_img/giessen_background.jpg
CO2 is anything but well mixed at low altitude, and that is why the IPCC rejected the Beck reinstatement of historical chemical analyses of CO2.

Don K
Reply to  John
July 13, 2017 12:14 pm

The Scripps Institution manages the atmospheric CO2 measurement program initiated by Charles Keeling in the 1950s. http://scrippsco2.ucsd.edu/ Not only are "continuous" measurements made at the Mauna Loa Observatory, but weekly flask samples are taken at 11 other stations at varying latitudes. They show that atmospheric CO2 isn't perfectly mixed, but overall it's not far off. The OCO2 satellite shows much the same thing once you get past the false colors used to emphasize small differences in CO2 concentration.

richard verney
Reply to  Don K
July 13, 2017 4:45 pm

See my comment above.
At high altitude, CO2 is a well-mixed gas (i.e., at about +/- 10 ppm around 395 ppm), but at low altitude CO2 is anything but well mixed, and there can be local variations (depending on season, wind speed, temperature, geography, topography, vegetation, etc.) of several hundred ppm.

Editor
July 13, 2017 6:43 am

The scatterplots are good examples of what appear to be chaotic strange attractors. To me this means that the physical system (the climate system itself, which gives the results of ocean-surface and surface-air temperatures) is chaotic [highly non-linear]. For examples of scatterplots that are attractors, see: Chaos & Climate – Part 4: An Attractive Idea, or google "images strange attractors".
It must be remembered that the Stefan-Boltzmann Law, when applied to a non-equilibrium state, is itself highly non-linear, and only its approximations and “linearized” versions produce the straight red line in the essay above. In the real world, where the atmosphere hits space, the red line of S-B is not straight by any means.

Greg Goodman
Reply to  Kip Hansen
July 13, 2017 8:40 am

The red line is not straight; it shows the fourth-power curvature over the limited range of temperatures found in the record.

Editor
Reply to  Greg Goodman
July 13, 2017 1:54 pm

Goodman ==> Perhaps I should have said “narrow” or “regular” or some other word to relate to its visual.
S-B under non-equilibrium does not produce such a line – more probably something far more similar to the blue. That would be, if we could actually solve the thing at all, which I do not believe we can at this time.

Reply to  Kip Hansen
July 13, 2017 8:43 am

It must be remembered that the Stefan-Boltzmann Law, when applied to a non-equilibrium state, is itself highly non-linear, and only its approximations and “linearized” versions produce the straight red line in the essay above. In the real world, where the atmosphere hits space, the red line of S-B is not straight by any means.

And during clear, calm skies at night, as relative humidity goes up and the amount of water vapor condensing goes up, the rate at which temperatures drop is reduced. You can see it in both temp and net radiation in this chart: [chart]
And explained here
https://micro6500blog.wordpress.com/2016/12/01/observational-evidence-for-a-nonlinear-night-time-cooling-mechanism/

Editor
Reply to  micro6500
July 13, 2017 1:57 pm

micro ==> Interesting….

Greg
Reply to  micro6500
July 14, 2017 10:44 am

Hansen => "is itself highly non-linear". T^4 is itself highly non-linear! Perhaps you should have said what you meant.

Reply to  Greg
July 14, 2017 12:46 pm

If you read the link, they, as well as I, show that there are at least two different nonlinear rates. Besides, under clear skies and low humidity the zenith temp will be 80-100F colder than the surface, and it shows the same difference at both cooling rates. You can even see net radiation drop in the middle of the night under clear skies.

July 13, 2017 6:54 am

I’m not sure I understand the following speculation:

Is this relationship how we can expect the globe to respond to long-term changes in forcing? Unknown

Are you speculating about a possible relationship between temperature's spatial response to the spatial variation in radiation imbalance and its temporal response to the temporal variation in forcing? (As I understand it, forcing is not the same as imbalance. Roughly, the forcing associated with a given CO2 concentration is the imbalance that would initially result from a sudden step increase to that concentration from equilibrium at a baseline, presumably pre-industrial, concentration. Imbalance would decay in the fullness of time, but the attributed forcing would not.)
I ask because I wasn’t immediately able to see the logical relationship between the two quantities, and I didn’t want to ponder the question further if you meant something else.

July 13, 2017 6:56 am

Where there is water or ice on a surface, the temperature of that surface will approach the dew-point temperature of the air measured near that surface (ocean, tops of thunderclouds, moist soil, leaves). Those surfaces will radiate at that temperature. Air temperature is a measure of molecular collisions (wet and dry adiabats) and is never colder than the measured dew point. Thus, water vapor in the air is the primary temperature controller. Man-made impervious surfaces do not contain water; hence the heat-island effect.

July 13, 2017 6:57 am

Yet another excellent critique of the climate models, but again on their agenda. Yes, it's a complex, non-linear, stochastic and under-determined statistical model of a poorly understood system, one that rests on decisions about which effects are relevant, made by the "scientists", AKA numerical modellers, who then try to prove them and choose how much gain to apply to them – rather like the economic models used to support or deny Brexit, world growth, etc. They then measure correlation and claim this proves some science, when it can only prove correlation and can never prove any science: no control planet, etc.
As with weather and economic forecasts, it is not deterministic and proves no laws; it's pseudo-science. Further, as anyone familiar with neural nets and slack variables knows, the extrapolation of non-linear data outside its known range in a noisy multivariate system is notoriously unreliable over any significant time, never mind the multi-lifetime natural periodicities of climate change. But that's not the problem I now focus on. What about plant denial?
Modellers' assumptions about plants seem to deny the data record of their effect on CO2 control. Atmospheric CO2 fell from 95% to under 0.20% fairly sharply once the oceans formed, reduced to a level just enough to keep an efficient carbon cycle optimised, for the plants and for us, and it held that very low figure through multiple mass extinctions and real catastrophes over billions of years, through increases and decreases in the amount of plants and the rate of plant photosynthesis. To make the models work, this dynamic response had to be discounted against the evidence. When the plants grew and started absorbing the CO2 we produce, the modellers – I won't call these people scientists, because they deny the basic scientific method – called it an unexpected response that would be "overwhelmed" by our few hundred ppm of CO2, showing how blatantly basic controls were discounted by their models; and they are now again denying that plants will continue to grow wherever it suits them until atmospheric CO2 levels drop. That's my hypothesis re CO2.
Here's Catt's Hypothesis re serious climate change – out of the oceans, by volcanicity. with extremess due to orbital eccentricity. Probaly also accounts for smaler but still significant variations over 100's years.
I suggest significant climate change has more obvious and easy to quantify primary cause. THis is oceanic magma release, the most during maximum gravitational variation extremes of Milankovitch cycles, Simply put, why not, this drags tectonic plates and the core around enough to rattle our thin crust around enough to trigger interglacials from the longer term "relatively steady state" ice age.
Data? 30% gravitational variation pa, actual force is 200 times the Moons, ocean floor is 7km basalt crust on a 12,000Km hot rock pudding that is permanently leaking 1,200 degree rock into the ocean, at seperating plate juntions and hot spots like Hawaii, ring of fire, etc.. I suggest this direct ocean heating simply steps up at an order of magnitude or more every 100K years. Takes about 200,000 Mount Fujis of basalt to raise the oceans 12 degrees, on the back of my envelope. no modellers required, proven physics of if, then.
I have made VERY rough guess at this. The amount of magma arriving into the oceans seems poorly documented. If it's 1×10^13 Tonnes when unstressed – 1,000 undersea Mount Fujis plus 40,000 Km worth of 7Km deep crack filing with no overflow @ 2cm pa tectonic seperation, it would need to increase by 7,000 times to meet the 12 degree heat delivery of an intergacial event. A Milankovitch eccentricity caused volcanic forcing event. AS a practical engineer and phsyicist wh has seen how well simple process models work in Chemical Engineering at Imperial Colege amongst other examples, I like the clear probabiitity of hot rock delivered direct to the oceans as the warming effect that supports both ice age conditions and the short interglacials, and the significant events in between, a lot more than blaming an atmospheric gas at trace levels that created and maintained the stabe atmospheric conditions for life in the carbon cycle, and is probably not guilty anyway.
nb: 3.8×10^11 tonnes basalt Mount Fuji has only been there for one ice age… so is a total amount of 200 Mount Fujis worth of magma emitted into the oceans pa over 1.000 years likely, under these conditions of extreme gravitational stress? That'll do it. No forcing required, just basic heat transfer from our on board nuclear reactor. Simples!
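As a rough check on the Mount Fuji arithmetic, here is a back-of-envelope sketch; the basalt heat content and the ocean heat capacity are assumed round numbers, not values derived in this comment:

# Back-of-envelope: Mount Fujis of fresh basalt needed to warm the oceans 12K.
FUJI_MASS = 3.8e14        # kg (the 3.8x10^11 tonnes quoted above)
BASALT_HEAT = 1.6e6       # J/kg released cooling ~1200C magma
                          # (assumed ~1.2 MJ/kg sensible + ~0.4 MJ/kg latent)
OCEAN_HEAT_CAP = 6.0e24   # J/K for all ocean water (figure quoted above)

energy_needed = 12.0 * OCEAN_HEAT_CAP             # ~7x10^25 J for a 12K rise
fujis = energy_needed / (FUJI_MASS * BASALT_HEAT)
print(f"{fujis:.0f} Mount Fujis")                 # ~120,000: same order as the
                                                  # ~200,000 claimed above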
CONCLUSION: 1. Climate modellers are basically plant deniers. The plants control CO2, always have; ignoring natural responses at the levels we know have occurred in the past is simply establishing bogus hypotheses regarding CO2 that they then have to force to make their "models" correlate as promised. This is just a show trial of CO2 by a religious court. Science abuse.
Hardly the deterministic scientific method of trying to disprove your hypothesis – by doubting, and not testing, the most obvious control of plants, and by assuming low gains for that proven response which don't increase enough as CO2 increases. Why not ask the model how much plant growth is required to control likely CO2 emissions? And this still doesn't prove more CO2 causes anything except more plant growth, or that it is more than a simple consequence of fires, volcanoes, etc. that is absorbed by rocks, plants, etc. and recycled by plate tectonics. Of course the bad news is that if the plants do it without us doing anything, there is no easy climate-change money flowing into ineffective or regressive projects that pretend to solve the "climate change catastrophic disaster".
And, of course, it is more likely in a reasonably quiescent world that natural CO2 increases as a consequence of warming oceans that warm the atmosphere – hence the correlation – not as a cause of any significance. Oceans are where the surface heat of the planet is, over 1,000 times more than the atmosphere, at 6×10^24 joules per degree K or so. And there is a LOT more heat on the inside, being generated all the time and trying to get out – and succeeding, at varying rates, through our very leaky selection of loosely linked crustal plates, especially the thin ocean plates, with very little net mass loss as it is all recycled every 200 million years back into the core.
2. There is at least one more likely and provable cause of warming, via the oceans that drive the atmosphere: one that fits the ice-core evidence, that we can see happening and have documented, and that needs no "forcing". Leaks in the thin crust we live on release massive amounts of heat direct into the oceans, where it can be held and cause serious long-term change to the atmosphere – not the reverse situation, where these "climate pseudo-scientists'" heads are, chasing the government and renewable lobbyists' $$$$$.
I suggest we look under the oceans for the smoking gun that can do the climate-change job as advertised: deliver 70×10^24 joules to the oceans over 1,000 years or so to create the 12-degree rise of an interglacial, for example. I suggest the rather puny atmospheric climate cannot – certainly not through human CO2 emissions. Real climate change, the kind that makes the trip to Europe a walk over Dogger Land and the Great Barrier Reef an interesting white 300-foot rocky ridge a short drive east from Cairns, is a consequence of greater and more powerful controls, primarily solar gravity and radiation variance. CO2 is not the cause; the atmosphere is simply an effect or consequence of the larger controls.
And solar radiation, while powerful, is usually in balance, and orbital eccentricity does not create significant effects in any way I have seen credibly proposed. It is interesting that most of the climate discussion around Milankovitch cycles considers the bizarre fringe effects of obliquity/precession plus eccentricity on the atmosphere, a low-energy-capacity sink, rather than the unbalanced gravitational effect on a serious heat source that can change ocean temperatures by the 12 degrees required for an interglacial – which is what my approach is grounded in. See what Jupiter and its moons do to Io if you doubt the power of gravitational stress.
CO2 is innocent; it was the rocks what done it. So-called scientists like Michael Mann have taken the easy money for supporting political agendas with actual science denial that frames CO2 for climate change using statistical models, not science, picking data that is within the noise of an interglacial in amplitude and period, when CO2 is most probably only a consequence of volcanoes and plants – and also to boost their own egos as high priests of science-become-religion, with its own inquisition and distorted language. Their narrow, presumptive focus on the atmospheric effects of CO2 denies the larger real effects and obvious science facts: the established world of the natural carbon cycle, and the interaction of our mostly hot, soft rock with the oceans that drive the joined-up planetary systems. IMO. Rebuttals/critiques of my data and results, with data I can check, always welcome.
CEng, CPhys, MBA
There are probably a number of typos above, but that's all the time I have for now. The message is clear, I hope.

Reply to  brianrlcatt
July 13, 2017 10:53 am

brianrlcatt July 13, 2017 at 6:57 am

I suggest this direct ocean heating simply steps up at an order of magnitude or more every 100K years. Takes about 200,000 Mount Fujis of basalt to raise the oceans 12 degrees, on the back of my envelope. no modellers required, proven physics of if, then.

Pretty close 😉
The latest provable large magmatic event is the Ontong Java one: possibly 100 million km^3 of magma erupting in the oceans. No surprise the deep oceans were ~18K warmer than today at the end of those eruptions, around 85 mya.
see http://www.sciencedirect.com/science/article/pii/S0012821X06002251
1 million km^3 of magma carries enough energy to warm ALL ocean water 1K.
We need another eruption like the Ontong Java one to lift us out of the current ice age.
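That 1K-per-million-km^3 figure is easy to sanity-check. A sketch, with assumed round numbers for basalt density, magma heat content, and total ocean mass:

# Sanity check: can 1 million km^3 of magma warm the whole ocean ~1K?
MAGMA_VOLUME = 1.0e6 * 1.0e9   # 1 million km^3 expressed in m^3
MAGMA_DENSITY = 2.9e3          # kg/m^3, basalt (assumed)
MAGMA_HEAT = 1.6e6             # J/kg cooling from ~1200C, sensible + latent (assumed)
OCEAN_MASS = 1.4e21            # kg of seawater (assumed)
OCEAN_CP = 4.0e3               # J/kg/K, specific heat of seawater (assumed)

energy = MAGMA_VOLUME * MAGMA_DENSITY * MAGMA_HEAT   # ~4.6e24 J
warming = energy / (OCEAN_MASS * OCEAN_CP)           # ~0.8 K
print(f"{warming:.1f} K")  # close enough to the ~1K quoted above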

Reply to  Ben Wouters
July 15, 2017 12:42 pm

Climate modellers – not real scientists, academic statisticians – are looking the wrong way to prove CO2 guilty on a bum rap, manipulating and withholding evidence like some inquisition court. Heads in the religious clouds when the action is under the oceans. This was not my idea, but it was badly presented by the person who first suggested the principle. This is close to a 1-to-1 fit of magma heat content with ocean temperature change; no forcing required, and CO2 follows as a consequence, not a cause. http://news.nationalgeographic.com/news/2013/09/130905-tamu-massif-shatsky-rise-largest-volcano-oceanography-science/

oppti
July 13, 2017 7:08 am

Nice job!
Can the next analysis detect the ocean currents that distribute heat into the oceans before it can radiate?

K. Kilty
July 13, 2017 7:36 am

I note that the objections that Nick Stokes raises here represent exactly the sort of thinking that makes this post by Eschenbach necessary.
Let's take a reasonably successful, but elementary, engineering model of heat transfer – the lumped-element model, which involves energy balance and rates of assumed transfer mechanisms. The equations that Stokes refers to are repeated here, and represent just such a model:

∆T = λ ∆F
It is that
dH/dt = C dT/dt
where H is heat content. That pushes the question back to the relation between dH/dt and ∆F. And it isn't simply linear – Schwartz gives the T~F relation as
∆T = (F/λ)(1 − e^(−t/τ))
but with the further possibility of multiple time scales.

This may indeed refer possibly to multiple time scales, but that is not sufficient for full representation of the problem as all this refers to a ∆T applying to the system as a whole. If the Biot number for the system is very small, then this solution of homogeneous temperature works just dandy. The Biot number itself is a function of heat transfer mechanisms and scale size, and “smallness” is a function of temperature resolution among other things. If the Biot number is not small the temperature distribution at any point during a transition from one equilibrium state to another is a complex function of time and space. In this case a mean temperature can always be calculated, but depending on how the one monitors the problem (distribution of measuring instruments, scale effects, schedule, instrument resolution) one can arrive at different mean temperatures, and find that a mean value of any sort may have no pertinence to particular locations.
I have used this example for a long time to explain my skepticism about “mean air temperature” or “global mean temperature” (GMT), which a person can calculate in any situation, but which may not be unique and may have little importance in a practical sense. I suppose the counter-argument to what I have just explained is “yes, but it still is a useful monitor of system change”. Yet in view of the GMT being a non-unique and context-dependent entity with time dependence, this counter-argument seems logically doubtful. And it doesn’t even begin to address the “corrections” made to observations to calculate a GMT in the first place.
There are many issues for a Red Team to examine, but the most important are beyond these technical issues, and revolve around costs in relation to benefits, and whether the proposed strategies are practical or even needed.
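For readers who want to see what that lumped-element step response looks like numerically, here is a minimal sketch of the small-Biot-number (homogeneous temperature) case; the feedback parameter, heat capacity, and forcing step are illustrative values, not fitted ones:

```python
import numpy as np

# One-box lumped-element climate model: C dT/dt = F - lam * T
# Illustrative values only: lam ~ feedback parameter (W/m^2/K),
# C ~ effective heat capacity of a ~100 m ocean mixed layer (J/m^2/K).
lam = 1.25          # W m^-2 K^-1
C = 4.2e8           # J m^-2 K^-1  (~100 m of water)
F = 3.7             # W m^-2, step forcing (nominal 2xCO2)

tau = C / lam                      # e-folding time, seconds
years = np.linspace(0, 50, 501)
t = years * 3.155e7                # seconds

# Analytic solution of the step response (the Schwartz form quoted above)
dT = (F / lam) * (1.0 - np.exp(-t / tau))

print(f"tau = {tau / 3.155e7:.1f} years")
print(f"dT after 10 yr = {dT[100]:.2f} K, equilibrium = {F / lam:.2f} K")
```

With these assumed numbers the e-folding time is about ten years and the equilibrium response is F/λ; the Biot-number objection above is precisely that the real system may not be describable by one such homogeneous temperature.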

Nick Stokes
Reply to  K. Kilty
July 13, 2017 10:38 am

“Let’s take a reasonably successful, but elementary, engineering model of heat transfer–the lumped element model, which involves energy balance and rates of assumed transfer mechanisms. The equations that Stokes refers to I repeated here, and represent just such a model:”
Yes. Schwartz is describing a lumped element model, with prospects of reasonable success. But engineers who use such models do not do so claiming that:
“[the] system has a secret control knob with a linear and predictable response”
And neither do climate scientists. Again, it comes back to Willis’ excellent advice
“QUOTE THE EXACT WORDS YOU ARE DISCUSSING”
As far as GMT, or more carefully, global mean surface temperature anomaly, is concerned, it is pretty much unique – that is, it doesn’t depend much on whichever valid set of measurements you use. It’s true that regions may differ from the average, as with many things we observe. But it is a consistent observable pattern. The analogy is the DJIA. It doesn’t predict individual stocks. Different parts of the economy may behave differently. But DJIA is still useful.

Gary Pearse
July 13, 2017 7:40 am

Impressive demonstration, Willis. Are the climate-scientist proponents of excessive global warming not using all the wonderful tools they put up above us?
I realize you are looking at the S-B/global T relationship and not at imbalances in incoming and outgoing radiation. Enthalpy in melting ice at constant temperature and the endothermic rapid greening of the planet, including phytoplankton (a cooling effect), don’t noticeably reduce the fit of the Stefan-Boltzmann/global T relationship, but the imbalance should be a measure of enthalpy changes, I’d imagine.
I recall Hansen using the imbalance as ‘proof’ of serious warming. Perhaps both poles alternately freezing and thawing balance out, but the greening is a long-term issue and must be part of the imbalance. Ferdinand Engelbeen, in a reply to a post of mine, suggested that changes to oxygen, which is recorded to ppmv accuracy, give a fair estimate of greening. Might your excellent facility with global-scale analysis using satellite data be an approach to investigating the imbalance question as a measure of enthalpy in the system? Could the departure of the S-B fit to the warm side at the top be the cooling from greening?

H. D. Hoese
July 13, 2017 7:44 am

All this is very good, but the National Academy of Sciences and the Royal Society have put this overview out as certain evidence of human-caused climate change.
http://nas-sites.org/climate-change/qanda.html#.WWZ4B_WcG1t
This statement from number 18 would seem to settle the matter, even for someone who does not understand anything about climate but has at least a little common sense. I would not walk on bridges built by engineers who made similar statements.
“Nevertheless, understanding (for example, of cloud dynamics, and of climate variations on centennial and decadal timescales and on regional-to-local spatial scales) remains incomplete. ”

Thin Air
Reply to  H. D. Hoese
July 13, 2017 1:54 pm

They go on to say:
“Together, field and laboratory data and theoretical understanding are used to advance models of Earth’s climate system and to improve representation of key processes in them, especially those associated with clouds, aerosols, and transport of heat into the oceans. This is critical for accurately simulating climate change and associated changes in severe weather, especially at the regional and local scales important for policy decisions.”
Without admitting that they understand so little of it. So we can look forward to $Billions more spent for them to tweak their models, without finding the most basic errors in the models, because they refuse to admit the role of the “negative feedbacks” that have stabilized climate, while focusing only on CO2 as the driver of changes.

Bob Weber
July 13, 2017 8:10 am

Willis always makes it interesting, but has he really identified science’s real misunderstanding?
My research shows the temperature range is limited by the range of TSI.

“As evidence of the centrality of this misunderstanding, I offer the fact that the climate model output global surface temperature can be emulated to great accuracy as a lagged linear transformation of the forcings. This means that in the models, everything but the forcing cancels out and the temperature is a function of the forcings and very little else. In addition, the paper laying out those claimed mathematical underpinnings is one of the more highly-cited papers in the field.
To me, this idea that the hugely complex climate system has a secret control knob with a linear and predictable response is hugely improbable on the face of it. Complex natural systems have a whole host of internal feedbacks and mechanisms that make them act in unpredictable ways. I know of no complex natural system which has anything equivalent to that.”

My research shows changes in SST are a simple linear lagged function of TSI forcings.
The effort to use TOA can be misleading. The main action of solar energy is upon the ocean, which then acts on the atmosphere. On any given day the air temperature within the troposphere is a response to the ocean surface temperature (which is a lagged response to TSI) and present-day TOA. You will completely miss the lagged influence of former TSI when you don’t include it, an influence that is more powerful. It’s no wonder you claim there isn’t a lagged or linear response to TSI.
If it were so that 85% or so of the energy needed for warming the ocean did not come from the sun, then where did that other 85% or so of the necessary energy come from, and why is it so widely believed that there is such another, greater source of tangible heat than the sun that no one can identify, measure, or feel? We humans have a pretty good sense of the daily solar heating effect, so why can’t we feel a heat 5X stronger than sunlight, day and night?
NO evidence exists for an energy source 5X more powerful than sunlight!
In my view the IPCC solar POV blue team is defending the indefensible, and has everyone chasing their tail looking for other forcings and feedbacks.
***
Willis has never argued for a real heat source 5X more energetic than sunlight – he couldn’t find one if he ever tried, because there isn’t one. Isn’t that interesting?

July 13, 2017 8:24 am

Two thoughts:
Net TOA Radiative Forcing trend 0.08 +/- 0.24 W/m2; with a signal-to-noise ratio of 1 to 3, stating a “trend” is a stretch.
CERES Surface Temperature ending in March 2016 right before El Nino warming ends and cooling begins, clearly cherry-picked.

Reply to  Michael Moon
July 13, 2017 9:05 am

H.D. Hoese and Michael Moon
Yep. The biggest problems in climate science are:
It’s a hypothetical field of investigation, i.e. theoretical science, yet it has crossed ethical boundaries into being taken as fact and acted upon.
For example, not one of the temperature anomaly data sets is actual data. They all have uncertainties of at least +/- 0.5K or more. We don’t even know if temperatures have decreased from 1910 or remained flat because it’s all in the noise.
Same for other climate related data sets. We don’t have the resolution.
Hence the data uncertainty, and how the data was measured, do not support the conclusions in any real-world application (if that is the way to say it). Results and conclusions are only consistent (or maybe not) with the constructed artifice around AGW.
Or in simple terms: within the set of assumptions they use, climate science has a certain amount of consistency, but very little of it stands up to experimentation.
When people call AGW a scam, it shouldn’t be about the science. It’s the application of shoddy data as if it were platinum-coated and verified.
Advocates of taking action are similar to a hypothetical set of people who would lobby, say New York or London, to force every home and business owner to fit special anti-slip tiles on their roof, costing in the thousands of dollars or pounds, just so that the Health and Safety risk to Santa Claus and his reindeer would be minimised by about 10%.

Reply to  mickyhcorbett75
July 13, 2017 9:22 am

Just to emphasise this:
From CERES itself, the rms error for LW TOA is 2.5 W/m2.
CERES DQ Summary for EBAF_TOA Level 3B – net flux has been energy balanced

H. D. Hoese
Reply to  mickyhcorbett75
July 13, 2017 5:10 pm

Hypotheticals are not just a problem in climate science. This paper, even quoting from Pielke’s book, takes marine science types to task over the problem with advocacy. Cowan is a good biologist (I might argue a couple of points), but what he examines has led, among other worse things, to Google Earth posting fish skeletons all over the world; click Oceans.
https://benthamopen.com/ABSTRACT/TOFISHSJ-2-87

Reply to  Michael Moon
July 14, 2017 3:15 pm

Willis,
I did not say or even imply that you cherry-picked. The CERES bosses did, as they get numbers from their satellite every day. They chose to show the years that gave the largest warming trend, not you.

July 13, 2017 8:28 am

Willis Eschenbach, thank you for another insightful and informative essay.

Greg Goodman
July 13, 2017 8:32 am

Hi Willis, very interesting stuff.

Figure 9 also indicates that other than the Stefan-Boltzmann relationship, the net feedback is about zero. This is what we would expect in a governed, thermally regulated system. In such a system, sometimes the feedback acts to warm the surface, and other times the feedback acts to cool the surface. Overall, we’d expect them to cancel out.

I’m not sure how you manage to see this as confirmation of “a governed, thermally regulated system.” Maybe because you do not define what you mean by that term. But knowing your past posts along those lines, you propose a governor like a switch on an AC unit, which clamps the max temperature and turns on the AC.
What your figure 9 shows is a slightly non-linear feedback: the Planck f/b based on the T^4 S-B relationship. This could be reasonably well approximated by a straight linear negative f/b over the limited range of temps in the dataset.
You may cite the flatter section at the top of the temp range as evidence of a much stronger negative f/b at the top end of the scale. This is presumably the tropics; one of your colour-coded graphs with latitude as the colour-coded variable would confirm this. If you can establish that it is flat you can claim a governor; otherwise I would suggest a much stronger negative f/b is a better description.
That graph, if correct, seems to be formal observational proof that the water vapour feedback is NOT doubling CO2 forcing (over water at least, and roughly over a good proportion of land).
What does need explaining is the orthogonal section of the land data, where a less negative TOA imbalance (i.e. more retained heat) is leading to a drop in surface temperature. That is counter-intuitive on the face of it, though there is little scatter and a very clear, linear relationship.
The first thing to establish is whether this is a particular geographical region. It probably is.
Best regards.
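To put a number on that “slightly non-linear” Planck feedback, here is a quick sketch; the unit emissivity and the temperature points are assumed for illustration:

```python
import numpy as np

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
EPS = 1.0         # assumed emissivity

# Local slope of the S-B curve, dF/dT = 4*eps*sigma*T^3, i.e. the
# linearized Planck feedback, at a few surface temperatures.
for T in (250.0, 288.0, 303.0):
    print(f"T = {T:5.1f} K  ->  dF/dT = {4 * EPS * SIGMA * T**3:.2f} W/m^2/K")

# Over a ~250-305 K range the slope rises by roughly 80%, so a straight
# line is only a rough local approximation to the T^4 curve.
```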

Greg Goodman
Reply to  Greg Goodman
July 13, 2017 1:10 pm

Ah, causality the wrong way around: a drop in temp leading to less outgoing IR, hence the TOA change. The slopes are close to perpendicular, which means the gradients are reciprocals of each other.
It would be enlightening to determine what this division is: night/day ; summer/winter or geographical.
Sea seems a lot simpler. This is probably another reason why simply averaging temperatures of land and sea is physically invalid, as I pointed out on Judith’s blog last year.
https://judithcurry.com/2016/02/10/are-land-sea-temperature-averages-meaningful/

July 13, 2017 8:45 am

Regarding “a doubling of CO2 to 800 ppmv would warm the earth by about two-thirds of a degree”: Also said was “Figure 9 also indicates that other than the Stefan-Boltzmann relationship, the net feedback is about zero”. The zero feedback climate sensitivity figure is 1.1 degree C per 2xCO2, unless one provides a cite for something lower.

Reply to  Donald L. Klipstein
July 13, 2017 9:04 am

OK, I just looked at the math at the end of the article that mentions Figure 9 and the numbers in Figure 9. The math does not consider that as the surface warms, so does the level of the atmosphere that downwelling IR comes from. So that if the surface warms from 290 to 291 K, the amount of radiation leaving it increases from 401 to 406.6 W/m^2. (Numbers chosen because 405 W/m^2 was mentioned for sea surface.) So, radiation from the surface increases by 5.6 W/m^2 from 1 degree K of warming. Dividing 3.7 W/m^2 per 2xCO2 by that does indeed result in a figure of .66 degree K per 2xCO2. But the zero feedback figure is higher, because some of that 5.6 W/m^2 is returned to the surface by increase of downwelling IR from greenhouse gases in the atmosphere. That is not counted as a feedback, but part of the explanation of temperature change from change of CO2 due to radiative transfer alone.
At this rate, the .67 degree C per 2xCO2 mentioned in Figure 9 is not essentially zero feedback, but indicative of negative feedback. I figure about 2 W/m^2-K negative feedback is indicated, using 1.1 degree per 2xCO2 as the figure with no feedbacks due to albedo, lapse rate effects or change of water vapor, etc.
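The arithmetic in that comment is easy to verify (a quick sketch, assuming unit emissivity):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

F290 = SIGMA * 290**4   # ~401 W/m^2, matching the figure quoted above
F291 = SIGMA * 291**4   # ~406.6 W/m^2
dF = F291 - F290        # ~5.6 W/m^2 per kelvin of warming near 290 K

print(f"{F290:.1f} W/m^2, {F291:.1f} W/m^2, slope {dF:.2f} W/m^2/K")
print(f"Implied sensitivity: {3.7 / dF:.2f} K per 2xCO2")  # ~0.66 K
```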

Reply to  Donald L. Klipstein
July 13, 2017 11:43 am

I just noticed something else about Figure 9: TOA radiation imbalance seems to roughly match the difference between surface outgoing radiation at the surface temperature in question and the outgoing surface radiation at 291 K. This seems to mean that everywhere in the world’s oceans has half its radiation imbalance being used to make the temperature of each place in the oceans different from 291 K, and the other half causing heating/cooling advected to somewhere else in the world. (I hope I got this right.) So if a change equivalent to a 2x change of CO2 has half of it used to change the temperature of that location by .67 degree C and the other half used to change the temperature of elsewhere in the world, I think that indicates global climate sensitivity of 1.34 degree C per 2x CO2.

Reply to  Donald L. Klipstein
July 15, 2017 7:08 am

I’m not confident about the half-and-half part that I said above; the amounts may be different. That means global climate sensitivity may be other than 1.34, but more than .67 degrees C per 2xCO2.

July 13, 2017 8:51 am

The temperature is definitely not linear in the forcing. The way forcing is defined by the IPCC is somewhat ambiguous, and this leads to the error. If Pi is the instantaneous power entering the planet and Po is the power leaving, their difference is considered ‘forcing’. This can be expressed as the equation Pi = Po + dE/dt, where E is the energy stored by the planet and dE/dt is the energy flux in and out of the thermal store of the planet, which in the steady state becomes 0 when Pi == Po.
While the relationship between the energy stored, E, and the temperature of the matter storing E is linear (1 calorie increases the temperature of 1 gm of water by 1 C), this doesn’t account for the fact that E is continually decreasing owing to surface emissions; thus dE/dt is not linear in dT/dt. So forcing would be linear in temperature if and only if the matter whose temperature we care about (the ‘surface’) were not also radiating energy into space. The consensus completely disregards the FACT that Po is proportional to T^4, and the satellite data supporting this is absolutely unambiguous; but once this is acknowledged, the high sensitivity they claim becomes absolutely impossible.
Note the ambiguity in the definition of forcing: 1 W/m^2 entering from the top is considered equivalent to an extra 1 W/m^2 being absorbed by the atmosphere. The 1 W/m^2 entering from the top is all received by the surface, while the 1 W/m^2 entering from the bottom is split up so that about half exits into space and half is returned to the surface. The distribution of energy absorbed by the atmosphere, owing to its emission area being twice the area over which energy is absorbed, seems to be widely denied by mainstream climate science, and this alone represents a factor-of-2 error.
BTW, when you look at the math and the energy balance, if more than half of what the atmosphere absorbs is returned to the surface, the resulting sensitivity is reduced, not increased, or else the atmosphere must be absorbing far less than expected.
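Written out, the balance he describes is (a sketch in LaTeX; ε is an effective emissivity):

```latex
P_{in} = P_{out} + \frac{dE}{dt}, \qquad
P_{out} = \epsilon\,\sigma\,T^{4}
\quad\Longrightarrow\quad
\left.\frac{dT}{dP_{in}}\right|_{\mathrm{steady}} = \frac{1}{4\,\epsilon\,\sigma\,T^{3}}
```

At steady state (dE/dt = 0, so Pi = Po) the temperature response to an extra W/m^2 falls off as 1/T^3: the warmer the surface, the more power each further degree requires, which is his point about the T^4 term.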

David L. Hagen
July 13, 2017 8:52 am

Willis
My compliments on very informative graphs and analysis of the data.
Re: “Figure 5. Correlation of TOA forcing and temperature anomalies”
You note: “Antarctica is strongly negatively correlated.” Note a similar effect over Greenland.
Re: “Figure 8. Scatterplot, temperature versus TOA radiation imbalance”
The lower-left land (red) data seems to show a strong anti-Stefan-Boltzmann correlation over the temperature range from -20 deg C to -60 deg C. That appears to be over the polar regions, including Antarctica and Greenland.
Preliminary hypothesis: variations in albedo from changing cloud cover over polar regions might cause this anti-Stefan-Boltzmann correlation between temperature and TOA forcing.
Is there a way to distinguish such albedo variations due to changing cloud cover?
Such cloud and albedo variations might vary with galactic cosmic rays and thus with solar cycles per Svensmark’s model and Forbush evidence.
Best wishes on your explorations

Gary Pearse
Reply to  David L. Hagen
July 13, 2017 11:16 am

Surely albedo from ice and snow in the polar regions is the prime reason for outgoing radiation ‘violating’ the S-B relation.

Don K
Reply to  Gary Pearse
July 13, 2017 1:04 pm

That’s certainly a good bet, Gary. Another possible contribution MIGHT be that CERES instruments seem (mostly?) to have been put on satellites in sun-synchronous orbits. That has a lot of advantages, but it means the satellites never go closer than about 8 degrees of latitude to the poles. And there are other issues, like no solar input for much of the year and low-angle illumination the rest of the year. I’m no longer as smart as I once was and can’t begin to guess the impact of those things.

nobodysknowledge
Reply to  David L. Hagen
July 13, 2017 12:07 pm

It is something called temperature inversion and radiative cooling. From a discussion at Science of Doom about temperature inversion: “The intensity maximum in the CO2 band above Antarctica has been observed in satellite spectra [Thomas and Stamnes, 1999, Figure 1.2c], but its implication for the climate has not been discussed so far.”
“However, if the surface is colder than the atmosphere, the sign of the second term in equation (1) is negative. Consequently, the system loses more energy to space due to the presence of greenhouse gases.”
“This implies that increasing CO2 causes the emission maximum in the TOA spectra to increase slightly, which instantaneously enhances the LW cooling in this region, strengthening the cooling of the planet.”
“This observation is consistent with the finding that in the interior of the Antarctic continent the surface is often colder than the stratosphere; therefore, the emission from the stratospheric CO2 is higher than the emission from the surface.”
And even over Antarctica climate models have systematic bias: “This suggests that current GCMs tend to overestimate the surface temperature at South Pole, due to their difficulties in describing the strong temperature inversion in the boundary layer. Therefore, GCMs might underestimate a cooling effect from increased CO2, due to a bias in the surface temperature.”
So what about the inversion over the Greenland plateau, then?
Citations from: How increasing CO2 leads to an increased negative greenhouse effect in Antarctica. Authors Holger Schmithüsen, Justus Notholt, Gert König-Langlo, Peter Lemke, Thomas Jung, 2015. http://onlinelibrary.wiley.com/doi/10.1002/2015GL066749/full
Other maps show blue colour over Antarctica and Greenland.
https://scienceofdoom.com/2017/02/17/impacts-vii-sea-level-2-uncertainty/

July 13, 2017 9:55 am

Great work, Willis!
“forcings are not inputs but diagnostics”
Say what?
CO2 is a “forcing”, therefore it is a diagnostic as well, and not an input to the system. Atmospheric water is not a forcing (according to the models), but a feedback only. One hesitates to guess the label for a feedback-only quantity; metadiagnostic?
It seems we are in grave danger of losing all the actual inputs…

Nick Stokes
Reply to  gymnosperm
July 13, 2017 10:42 am

“Say what?”
From the post
“This is the mistaken idea that changes in global temperature are a linear function of changes in the top-of-atmosphere (TOA) radiation balance (usually called “forcing”).”
Those “forcings” in W/m2 are not inputs to GCM’s. They are deduced. Yes, CO2 etc are inputs.

Greg Goodman
Reply to  Nick Stokes
July 13, 2017 1:16 pm

What do you mean, “deduced”? Volcanic forcing as scaled AOD is an input. The basic radiative effect of CO2 derived from atmospheric concentration, and the ASSUMED forcing from projected emissions estimates, are inputs.
The only thing which is deduced from models is the overall sensitivity, and that is pre-loaded by ASSUMPTIONS about things like cloud “amount”, the constancy of relative humidity, the scaling applied to volcanic forcing, etc.

Nick Stokes
Reply to  Nick Stokes
July 13, 2017 7:54 pm

“Volcanic forcing as scaled AOD is an input. The basic radiative effect of CO2 derived from atmospheric concentration, and the ASSUMED forcing from projected emissions estimates, are inputs.”
Evidence? It is generally not true. Treatment of volcanoes is variable – often omitted entirely in future years. Radiative effect of CO2 is not an input – GCM’s don’t work that way anyway. What is input is either gas concentration or emissions of gas. The task of figuring the net CO2 radiative forcing from that is a by-product of the whole GCM process.

Greg Goodman
Reply to  Nick Stokes
July 13, 2017 9:49 pm

OK, you meant calculated, not deduced. GCMs don’t do deduction; that would require AI. I thought you meant deduced from model output, as is done to get CS. Just a confusion of wording.
Scaling of volcanic forcing is a fiddle factor used to tweak models to reproduce the recent past climate. The volcanic forcing calculated by Lacis et al from basic physics and El Chichon data in 1992 was 30. This got reduced to 21 by Hansen et al (same group at GISS, different lead author).
The motivation for lowering volcanic forcing was to reconcile model output. This is not science-based; it is fudging. They ended up with models which are too sensitive to all radiative forcings, which balanced out reasonably well when both volcanoes and CO2 were present. There has been negligible AOD since around 1995, and the models run too hot.

Reply to  Nick Stokes
July 16, 2017 11:21 am

So by diagnostic you mean tuning set point.

michel
Reply to  gymnosperm
July 13, 2017 11:32 am

Agreed. The logic of what Nick is saying escapes me. Maybe there is something we are missing here. But it seems like Willis has shown that there is a model with a lot of irrelevant detailed variables in it, but when you come down to it, only one input is needed to duplicate its output.
So then the reply is: the value of this variable is not an input to the model, it’s a result of some model assumptions. So what? Willis’ point still stands. You have a very complicated model apparently taking account of lots of different factors, but when you come down to it none of them matter, because if you just remove them all, you get the same results by using the one variable.
I don’t get it. Nick is a bright and well-informed guy, so please, explain why this is wrong.

Nick Stokes
Reply to  michel
July 13, 2017 8:01 pm

Basically this post comes down to an assertion about a “central misunderstanding” of climate science. Ideally, the way to find out about that central misunderstanding would be to quote the actual words of someone expressing it. We never get that. Instead there is an indirect argument that scientists must believe it because GCM surface temperatures can generally be deduced from published TOA forcings. My point is that that argument fails if the forcings were actually deduced from the GCM output (or from a late stage in GCM processing).

Dave Fair
Reply to  Nick Stokes
July 13, 2017 11:17 pm

Wasn’t it Gavin Schmidt who said they ensemble model outputs to determine the overall models’ internal response to forcings? Or am I mis-remembering?

Greg Goodman
Reply to  michel
July 13, 2017 10:03 pm

The criticism is valid; Willis should have followed his own golden rule and quoted something.
However:

that argument fails if the forcings were actually deduced from the GCM output

This is the problem: they are not “deduced”, they are INDUCED. Model parameters are tweaked to get the desired results. There are dozens of poorly defined parameters that can be freely changed within quite a large range. I’m talking primarily about volcanic forcing, cloud amount, and the questionable assumption that relative humidity is constant.
Through many iterations these fudge factors are juggled to come up with a combination which gets fairly close to reproducing the 1960-1995 climate record. They consistently ignore the fact that it does not reproduce the early 20th-century warming.
This is an ill-conditioned problem, with far too many poorly constrained variables, and a combination which fits (a limited portion of) the record and fails outside it is very likely NOT the right combination.
So your “if” is not satisfied. The forcings are not deduced; they are induced.

Bob Smith
July 13, 2017 10:50 am

“Figure 2. Time series, global average surface temperature. The top panel shows the data. The middle panel shows the seasonal component. The bottom panel shows the residual, what is left over after the seasonal component is subtracted from the data. Note the El Nino-related warming at the end of 2015”
The data/labels for “temperature” and “seasonal component” appear to be reversed. The seasonal component appears to be slightly larger than the temperature.

HankHenry
July 13, 2017 11:00 am

I don’t get why it is that when discussing climate models there is always much talk of Stefan-Boltzmann but nothing about the combined gas laws. It seems the notion of a “surface temperature” for Earth is a difficult simplification to make, with the difficulties glossed over. Even when abstracting out a “top of atmosphere” idea, things seem ill-defined. Maybe I just don’t get the assumptions.

July 13, 2017 1:00 pm

BTW, did you read the small print on climate forcing? All would-be climate forcers please wait in line for 6,500 years, the normal lag between insolation forcing and the resultant change in the climate, as shown (recently by Javier) in this relationship between Milankovitch obliquity forcing and 6,500-year lagged temperature:

Nick Stokes
Reply to  Willis Eschenbach
July 13, 2017 7:44 pm

Willis,
My request is that you quote the words. What did Schwartz actually say that makes it a paradigm? What did the citing papers actually say? Schwartz’ paper was quite controversial.

Greg Goodman
Reply to  Willis Eschenbach
July 13, 2017 10:16 pm

Take for example the volcanic forcing. The models didn’t make that up. It is calculated from the physics of the volcanic ejecta and the measured time that the ash and sulfates remained in the atmosphere.

Willis, while I agree that volcanic forcing is an input (AOD data), it is worse than that. The scaling is one of the principal fudge factors of the models. It USED TO BE calculated from basic physics around 1990, but the GISS team abandoned that idea in favour of tweaking it to reconcile model output with the climate record around Y2K.
Lacis, Hansen & Sato found a scaling of 30 using proper scientific methods. It is now typically 21, per Hansen et al 2002. That is one friggin’ enormous frig factor.
I quote the relevant papers in my article at Judith’s C.Etc.:
https://judithcurry.com/2015/02/06/on-determination-of-tropical-feedbacks/
relevant refs are 2,3,and 4 cited at the end of the article. Search “Lacis” in the article for relevant sections.

Greg Goodman
Reply to  Willis Eschenbach
July 13, 2017 10:25 pm

See my earlier comments on this. They simply chose a combination of fudge factors which produces output that fits a limited portion of the climate record and fails outside that period. It is pretty clear that they have not got the right combination of fiddle factors.

Greg Goodman
Reply to  Willis Eschenbach
July 13, 2017 10:37 pm

From Hansen et al 2002 [4] (emphasis added):

Abstract:
We illustrate the global response to these forcings for the SI2000 model with specified sea surface temperature and with a simple Q-flux ocean, thus helping to characterize the efficacy of each forcing. The model yields good agreement with observed global temperature change and heat storage in the ocean. This agreement does not yield an improved assessment of climate sensitivity or a confirmation of the net climate forcing because of possible compensations with opposite changes of these quantities. Nevertheless, the results imply that observed global temperature change during the past 50 years is primarily a response to radiative forcings.

my bold.

3.3. Model Sensitivity
The bottom line is that, although there has been some narrowing of the range of climate sensitivities that emerge from realistic models [Del Genio and Wolf, 2000], models still can be made to yield a wide range of sensitivities by altering model parameterizations.

Hansen is quite clear about all this in his published work. Shame no one takes any notice.

July 13, 2017 1:51 pm

Willis,
I really like your writing style. You are consistently clear and to the point. Thanks for your hard work.

July 13, 2017 2:14 pm

“Here, it’s after midnight and the fog has come in from the ocean. The redwood trees are half-visible in the bright moonglow. There’s no wind, and the fog is blanketing the sound.”
I expected a discussion of the importance of understanding cloud formation’s effect on temperature to follow this paragraph. Alas, they are only concluding remarks.
This article and the comments are interesting, but as an outsider, I am not interested in getting into the weeds so much. I wonder more about the elephants in the room. Is classical physics the right tool for modeling long-term climate? Will the current modeling techniques ever produce useful information for policymakers?
For some physicists, the answer is no. Dr. Rosenbaum at Caltech posited that nature cannot be modeled with classical physics but theoretically might be modeled with quantum physics. Last year, CERN CLOUD experiments produced data on cloud formation under tropospheric conditions. CERN reports “that global model simulations have not been directly based on experimental data” and that “…the multicomponent inorganic and organic chemical system is highly complex and is likely to be impossible to adequately represent in classical nucleation theories…” They conclude that model calculations should be replaced with laboratory measurements (2 DECEMBER 2016 • VOL 354 ISSUE 6316).
Simple question: are the CERN CLOUD experiments relevant to a discussion of “Temperature and Forcing”?

Germinio
July 13, 2017 4:32 pm

Willis states: “This is the mistaken idea that changes in global temperature are a linear function of changes in the top-of-atmosphere (TOA) radiation balance”
Technically that statement is nothing more than a statement about conservation of energy. If you want to deny conservation of energy then go ahead but you are going to need a lot of evidence before anyone will believe you. If there is an imbalance between the amount of energy received and the amount radiated by the earth then it must be warming (or cooling). The interesting questions are surely whether (a) there is radiation imbalance and (b) what is causing it.

Reply to  Germinio
July 13, 2017 8:29 pm

Absolutely, Germinio. As someone said earlier in this discussion, if you’re running the tub and don’t allow it to drain at the same rate…

Greg Goodman
Reply to  Germinio
July 13, 2017 10:21 pm

… and (c) how does the system change to restore balance.

Dr. S. Jeevananda Reddy
July 13, 2017 5:27 pm

After seeing Figures 1 & 2, I get the impression that the residual pattern at the surface is in opposition to TOA, particularly the peaks [upward and downward]. What does it mean?
Dr. S. Jeevananda Reddy

July 13, 2017 6:27 pm

The K-T power flux balance diagram has 160 W/m^2 net reaching the “surface.” There is exactly zero way 400 W/m^2 can leave.
Over 2,900!! Just a dozen shy of 3,000 (up 1,100 since 6/9) views on my WriterBeat papers, which were also sent to the ME departments of several prestigious universities (as a BSME & PE I felt some affinity) and to a long list of pro/con CAGW personalities and organizations.
NOBODY has responded explaining why my methods, calculations and conclusions in these papers are incorrect. BTW that is called SCIENCE!!
SOMEBODY needs to step up and ‘splain my errors ‘cause if I’m correct (Q=UAdT runs the atmospheric heat engine) – that’s a BIGLY problem for RGHE.
Step right up! Bring science.
http://writerbeat.com/articles/14306-Greenhouse—We-don-t-need-no-stinkin-greenhouse-Warning-science-ahead-
http://writerbeat.com/articles/15582-To-be-33C-or-not-to-be-33C
http://writerbeat.com/articles/16255-Atmospheric-Layers-and-Thermodynamic-Ping-Pong

Walter Sobchak
Reply to  Willis Eschenbach
July 14, 2017 3:06 pm

Just out of curiosity: the typical value given for the flux of solar radiation at the top of the atmosphere is 1361 W/m² (Google “Solar Constant”). The chart above starts with 341.3 W/m^2. What happened to the other 1020 W/m^2?

Walter Sobchak
Reply to  Willis Eschenbach
July 14, 2017 3:13 pm

Maybe this answers my question: “Hence the average incoming solar radiation, taking into account the angle at which the rays strike and that at any one moment half the planet does not receive any solar radiation, is one-fourth the solar constant (approximately 340 W/m²)”
https://en.wikipedia.org/wiki/Solar_constant
But perhaps the averaging conceals more than it reveals, and is not a good basis for modeling, because it does not consider the differences between land and water, and day and night.
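The geometry behind that factor of four is simple to check (a minimal sketch; the albedo is the usual rough figure, and 341.3 in the chart corresponds to an older ~1365 W/m² TSI value):

```python
TSI = 1361.0      # W/m^2, solar constant at 1 AU
ALBEDO = 0.30     # assumed planetary (Bond) albedo

# The Earth intercepts sunlight on a disc (pi r^2) but has surface
# area 4 pi r^2, so the 24/7 global average insolation is TSI / 4.
avg_toa = TSI / 4.0                   # ~340 W/m^2
absorbed = avg_toa * (1.0 - ALBEDO)   # ~238 W/m^2 actually absorbed

print(f"average TOA insolation: {avg_toa:.1f} W/m^2")
print(f"absorbed after albedo:  {absorbed:.1f} W/m^2")
```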

Reply to  Walter Sobchak
July 14, 2017 3:33 pm

But perhaps the averaging conceals more than it reveals, and is not a good basis for modeling, because it does not consider the differences between land and water, and day and night.

I agree. You do not learn how something works by throwing much of your information away, or by treating a dynamic process as if it were static. You have to look from multiple points of view.

Reply to  Willis Eschenbach
July 14, 2017 7:48 pm

I’d love to have hourly or even daily data

PMOD solar data is reported daily.
I use it to calculate flat-surface energy every hour, for every station’s lat and alt, from (I think) the 50s. I calc a relative value, then just multiply by TSI. PMOD starts in ’79, so I average the entire series and use that as well; so I have both the average and the daily value (if it exists), and can select which one to use.

Reply to  Willis Eschenbach
July 15, 2017 3:17 am

Willis Eschenbach July 13, 2017 at 7:36 pm

Actually, the K-T diagram shows ~ 160 W/m^2 of shortwave solar energy entering the surface, along with about 340 W/m2 of longwave downwelling radiation for a total of about half a kilowatt total downwelling radiation on a 24/7 global average basis.

With the backradiation being on average twice the solar radiation, why don’t we see backradiation panels instead of solar panels? Twice the energy, and available 24/7.
An infrared camera can convert longwave easily, so just increase the efficiency and our energy problems are solved.

Nick Stokes
Reply to  Willis Eschenbach
July 16, 2017 2:12 am

“why don’t we see backradiation panels iso solar panels?”
It can’t work. People wrongly say that down IR can’t warm the surface because it comes from a cooler place. It can and does. What it can’t do is do work, thermodynamically. A collector for down IR must be exposed to the source (sky), which is cooler. Net heat is necessarily lost. You can’t get usable energy that way.

Walter Sobchak
July 13, 2017 8:29 pm

“we’re extremely unlikely to ever double the atmospheric CO2 to eight hundred ppmv from the current value of about four hundred ppmv.”
Dear Willis: As usual, you have written a very thought provoking post.
Incidentally, I read a new article about fossil fuel and its effect on CO2 because it was linked by a story in the Wall Street Journal. The article is:
“Will We Ever Stop Using Fossil Fuels?” by Thomas Covert, Michael Greenstone, & Christopher R. Knittel in Journal of Economic Perspectives vol. 30, no. 1, Winter 2016 at 117-38. DOI: 10.1257/jep.30.1.117
https://www.aeaweb.org/articles?id=10.1257/jep.30.1.117
The downloads available at that URL include an Appendix which sets forth estimates of total fossil fuel resources and how much CO2 will be released by their combustion. Table 3 in the Appendix includes an estimate of the eventual concentration of CO2 in the atmosphere. If I am reading the Table correctly, they are estimating eventual concentrations of up to 2400 ppm, 6x the current level.
I know that this differs from your estimate dramatically. If you have a chance to look at it, I would appreciate your thoughts on it.

Mike Wallace
July 14, 2017 8:26 am

I don’t always get the math right away, but perhaps the derivative you use to calculate the slope,
dT/dW = (W / (sigma epsilon))^(1/4) / (4 * W)
could be stated:
dT/dW = W / (4 sigma epsilon)
but if wrong, happy to learn here.
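For reference, differentiating the Stefan-Boltzmann relation directly (a quick derivation, with σ and ε as used in the post):

```latex
T = \left(\frac{W}{\sigma\epsilon}\right)^{1/4}
\quad\Longrightarrow\quad
\frac{dT}{dW}
= \frac{1}{4}\left(\frac{W}{\sigma\epsilon}\right)^{-3/4}\frac{1}{\sigma\epsilon}
= \frac{1}{4W}\left(\frac{W}{\sigma\epsilon}\right)^{1/4}
= \frac{T}{4W}
```

so the form in the post is the correct derivative (it equals T/4W), while W/(4 sigma epsilon) does not have the dimensions of a temperature-per-flux slope.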

LT
July 14, 2017 9:46 am

Willis,
I realize this is a bit off topic, but in my quest to find a solar cycle in the atmosphere I wrote some software to take the binary RSS data of the lower stratosphere and average over longitude for each latitude band and each month. So I have a dataset that is decomposed by latitude and year, which gave me 472 months by 72 latitudes. I then computed the FFT for each latitude and displayed the spectrum. Theoretically I thought I should see some signal around 11 years in the Northern Hemisphere, but to my surprise I found a very strong signal in the Southern Hemisphere as well as a weaker one in the Northern Hemisphere. This only works on the actual temperature data and not the anomaly data. So my question is: in your extensive analysis of solar signals a while back, did you always use anomaly data, or did you ever try your methodology on actual temperature data?
Latitude vs Year
https://photos.google.com/photo/AF1QipNfBeVwrtmsbs-hVgLqf23rimZv4cpUvABoV_C3
Latitude vs Spectrum
https://photos.google.com/photo/AF1QipMn5J7xk6GVX7M7hYmL5iut7uK905W7uCmQOOAH
~LT
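The per-latitude spectrum LT describes can be sketched in a few lines of numpy. The `temps` array here is a random stand-in for the real RSS binaries, and the variable names are mine:

```python
import numpy as np

# Hypothetical zonal-mean temperature array: 472 months x 72 latitude
# bands, already averaged over longitude as described above.
months, nlat = 472, 72
rng = np.random.default_rng(0)
temps = rng.normal(size=(months, nlat))  # stand-in for the real data

# Amplitude spectrum for each latitude band (real-to-complex FFT).
spec = np.abs(np.fft.rfft(temps, axis=0))        # shape (237, 72)
freq = np.fft.rfftfreq(months, d=1.0 / 12.0)     # cycles per year

# Period (years) of the strongest low-frequency peak per latitude:
low = (freq > 0) & (freq < 0.5)                  # periods longer than 2 yr
peak = freq[low][np.argmax(spec[low, :], axis=0)]
print(1.0 / peak)   # peak period in years, one value per latitude band
```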

LT
Reply to  LT
July 14, 2017 10:05 am
LT
Reply to  Willis Eschenbach
July 14, 2017 10:11 am

Well, I used the two RSS datasets; one is anomaly and one is temperature in K, but it is the same set of code that does the analysis. I suppose I could attempt to compute the anomaly myself. Does UAH have a binary dataset that provides the global monthly data?

LT
Reply to  Willis Eschenbach
July 14, 2017 10:20 am

The data is actually 472 months with 72 latitudes and 144 longitudes, so for each month and for each latitude I summed the longitudes so that the resulting temperature dataset will allow you to see common latitudes and how they change with each month.

Reply to  LT
July 14, 2017 11:23 am

The data is actually 472 months with 72 latitudes and 144 longitudes, so for each month and for each latitude I summed the longitudes so that the resulting temperature dataset will allow you to see common latitudes and how they change with each month.

LT, I do something similar in one paper/blog I wrote. I look at the changing day-to-day temp as the length of sunlight changes, and compare the change in surface insolation with the change in temperature at the same location. I calculate this for the extratropics in latitude bands as degrees F per Whr/m^2 of insolation change. You can convert this to C/W/m^2 by dividing by 13.3.
https://micro6500blog.wordpress.com/2016/05/18/measuring-surface-climate-sensitivity/
This is based on NCDC’s/Air Force’s Summary of the Day surface data set, and includes every station that has a set of geo-coordinates, is in the range specified, and has the minimum number of reports per year specified (360).

LT
Reply to  Willis Eschenbach
July 14, 2017 2:39 pm

Ok, so I have both anomaly and actual temperature working now, and there is no doubt that an anomaly dataset is not the most accurate way to analyze a time-variant signal. Essentially, when you remove the seasonal cycle you are imposing a waveform on the signal. The FFT can detect an embedded waveform in a signal, but the relative strength of the waveform can never be determined if you have applied a bias to the signal. Removing a seasonal cycle is also not a zero-phase process, therefore there will be a phase shift imposed on the data that can potentially cause cancellation of certain waveforms, which will not show up accurately in any type of cyclical analysis. It is my opinion that attempting to use anomaly data for anything other than looking at delta amplitudes is plagued with phase errors that will mask the validity of any process that involves correlations. There is indeed a significant signal in both the troposphere and the stratosphere with a cycle of 11.8 years, which happens to be the orbital period of Jupiter. I cannot say if it is a solar cycle, but I can tell you that it is not noise, and it is almost certainly a natural phenomenon that is perturbing Earth’s climate in some way.
~LT

Reply to  Willis Eschenbach
July 15, 2017 4:48 am

Willis writes

Finally, I’d be careful using the RSS data. Given the direction of their changes, those guys seem to be driven in part by climate politics rather than climate science …

From Roy, an early comment on the latest changes to the data

Roy W. Spencer, Ph. D. says:
July 6, 2017 at 9:22 AM
well, it does seem unusual that virtually all temperature dataset updates lead to ever-more warming. Very curious. Must be some law of nature at work here.

As Feynman explains with regard to the Millikan oil drop experiment

Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It’s a little bit off because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of an electron, after Millikan. If you plot them as a function of time, you find that one is a little bit bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.
Why didn’t they discover the new number was higher right away? It’s a thing that scientists are ashamed of–this history–because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong–and they would look for and find a reason why something might be wrong. When they got a number close to Millikan’s value they didn’t look so hard.

One might imagine that every one of our independent measurements of temperature was biased, and only now are we slowly adjusting towards the correct, warmer value.
Except that’s exactly NOT what’s happening here. There have been millions of individual, independent measurements by many thousands of people which, when combined, form those trends. There’s simply no way an individual measurement could be wrongly read so as to be closer to some perceived consensus.
The opposite is true. There is a belief that the “correct” trend is high, originally around 3C/century per the IPCC, and so adjustments have taken us towards, not the truth (because nobody knows that), but instead the belief in high sensitivity that AGW has permeated throughout the scientific world.

Reply to  TimTheToolMan
July 15, 2017 9:36 am

Except that’s exactly NOT what’s happening here.

Well, until they process the data and start changing it.

Greg
Reply to  Willis Eschenbach
July 15, 2017 5:57 am

It is my opinion that attempting to use anomaly data for anything other than looking at delta amplitudes is plagued with phase errors that will mask the validity of any process that involves correlations. There is indeed a significant signal in both the troposphere and the stratosphere with a cycle of 11.8 years, which happens to be the orbital period of Jupiter.

My first response to your comments here was to use UAH and to use daily REAL temps, not “anomalies”.
Glad you noted the Jupiter link; that was my first reaction on seeing your graph. I don’t think this is even close to the mean solar cycle over the period of the data you are using.
Theoretically J is too far away to exert a significant tidal force, but it does affect the distance of perihelion, which may help explain why it is only visible at high latitudes in your periodogram.
I’m very suspicious of the bands at equal intervals in period. This is a frequency analysis and would better be displayed with a “per year” frequency x-axis. I can’t see what would create this kind of repetition in the time-based abscissa of a frequency plot. This is highly suspicious.
I’m also surprised by the lack of other features. I have done lots of frequency spectra of loads of climate variables and have not seen anything so featureless. Again, it’s worrying. Maybe it is the colour-coded amplitude, which the eye is less sensitive to than a y-axis plot of amplitude.
The outstanding feature of your 3D graph is the across-the-board peak at just above 4y. I have often found peaks centred on 4.4 or 4.45 years, which is most likely lunar in origin, but I would not expect it to be so uniform with latitude.
That results are “surprising” is not a reason to reject them, but certainly a reason to double-check what you are doing. A good cross-check would be to do the same thing with UAH monthly averages and see if you get basically the same thing. Does RSS really claim to have continuous, uninterrupted data out to 82.5 degrees N/S? I think UAH have gaps outside the tropics. Have you checked for NaN data flags?
Convert to ASCII and check your dataset by eye.
I’d encourage you to dig deeper; it’s interesting, but several features raise a few red flags for me.
I’d love to dig into this, but sadly sometimes the real world gets in the way of trying to save the planet from those who would save the planet.
Keep us posted.

Greg
July 15, 2017 6:11 am

Another thought: TLS is dominated by two broad bumps followed by a drop of about 0.5 kelvin, around 1982 and 1991; these features will likely dominate any frequency analysis.

LT
Reply to  Greg
July 15, 2017 7:24 am

Hi Greg,
The TLS is dominated by those features, but only in the anomaly dataset; in the real data they are of little consequence, and that is my point. When you subtract the seasonal average for each month you have changed the signal dramatically, and it will have a very different amplitude spectrum. Check out the very busy frequency spectra of the lower stratosphere using the anomaly data.
https://photos.app.goo.gl/hJawFC3jpnnAB7wr1
And also note that people often talk of the 11-year solar cycle; there really is no such thing as an 11-year solar cycle in our lifetime or our grandparents’ lifetime.
cycle 21 – 10.5 years
cycle 22 – 9.9 years
cycle 23 – 12.3 years
cycle 24 – still waiting

Greg
Reply to  LT
July 15, 2017 7:59 am

Thanks LT, that is an average of 10.9y over the last three cycles, which was my point. That is way off 11.8.
I agree that one should not try doing frequency analysis on anomaly data. The only value I can see in that technique is as a crude kind of 12mo filter which runs up to the end of the data (unlike a proper low-pass, where you do not get results for the last few years).
Like I said in an earlier post, monthly means are inappropriate for frequency analysis because they do not anti-alias the data, and this is not a pedantic option, it is a fundamental requirement. Monthly averaging assumes that all sub-60-day variability is totally random and has no cyclic components. That is almost certainly false. This just underlines the basic paradigm of climatology: that climate is assumed to be a long-term rise driven by GHGs plus net-zero, random, natural variability.
Monthly means are used almost universally in climatology and are INVALID data processing. That is why I suggested you start with daily data and do your own LP filtering (if you cannot conveniently handle the data size in your FFT).
UAH provides daily TLS data. The graph above uses the previous version, 5.6. You will be able to find v6, I would imagine. Just walk the ftp tree.
daily global means:
http://www.nsstc.uah.edu/data/msu/t4/tlsday_5.6.txt
daily zonal data: (5.1 Mb ascii )
http://www.nsstc.uah.edu/data/msu/t4/tlsdayamz_5.6.txt
It would be interesting if you get notably different results.

Greg
Reply to  LT
July 15, 2017 8:02 am

“The TLS is dominated by those features, but only in the anomaly dataset”
The graph posted was NOT anomaly data: see the legend.

LT
Reply to  LT
July 15, 2017 8:05 am

Greg, also I am using Version 3 of the RSS data, which everyone used to love. And as the image below shows, the troposphere shows all the El Niños very clearly, so I am comfortable with the data. Now, the frequency spectrum: yes, you are right that looking at the data in color does not highlight all of the features in the spectrum very well. Also, I am just using a simple real-to-complex FFT without applying a Hanning window to the data first, which will cause some small undulations, but the 11.8-year cycle that is showing up is real. Displaying the color of the frequency spectrum with a logarithmic distribution instead of linear will also help highlight more spectral features. Do you do your own coding?
https://photos.app.goo.gl/9B4Kubsxo2TXa7Zq2

LT
Reply to  LT
July 15, 2017 8:19 am

Oh yes, one more graph of interest: when I look at the frequency spectrum of the lower troposphere I find the same cycle in the Northern Hemisphere and just a hint of it at -25 to -50. This is all preliminary and I need to thoroughly check my frequency estimation equations, and I will probably try something different than an FFT to estimate the cycles. I really appreciate your feedback.
https://photos.app.goo.gl/usfDsriMfSZV2wBX2

Greg
Reply to  LT
July 15, 2017 8:23 am

PS: are you using any kind of windowing or “taper” function? TLS is not long-term stationary, as is required for good FT results. You could try using the first difference of the data, which makes it more stationary, or one of a number of window functions which taper the data to zero at each end. (Yes, that is a crude distortion too, but usually better than ignoring the stationarity condition and getting a load of spurious spikes in the spectrogram.)

Greg
July 15, 2017 8:26 am

You may find this spectrogram of trade wind data interesting for comparison to equatorial TLS.
https://climategrog.wordpress.com/wpac850_sqr_chirp/

Greg
Reply to  Greg
July 15, 2017 8:38 am

” Do you do your own coding?”
Mostly. I linked the source of the filters I use in my first comment above. They are written in awk, which is cross-platform. Use them if they are helpful.
I use a “chirp-z” analysis mostly, which gives almost indefinite frequency resolution, rather than latching to multiples of the dataset length as in FFT. I did not code that; it was provided to me by Tim Shannon. I have not had contact with him for a while, since he had serious health probs.
Chirp-z is better for locating peaks accurately because it is freed from the length of the dataset. I don’t know how this compares to padding the dataset with zeros, which a lot of people do after windowing to get around the problem.

Greg
Reply to  Greg
July 15, 2017 8:46 am

I find Kaiser-Bessel better than Hamming/Hanning windows. There are dozens of possibilities, with various amounts of ringing.
K-B tends to smooth the peaks a bit but does not produce much ringing, which can be misleading. If I don’t want to dampen the spectrum too much I use a cosine on the first and last 10% of the data and leave the middle 80% unattenuated.
It’s horses for courses; they give different views into the data, and you need to take account of the fact that it is all distorting in some way.
Hamming etc. are popular in radio and audio work, where they usually have a very long data sample in relation to what they are interested in. Climate data is tricky for FFT because it is typically very short.
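For anyone wanting to compare these tapers, the common ones are built into numpy; a minimal sketch (the 10% cosine taper is the Tukey-style window Greg describes, built by hand):

```python
import numpy as np

N = 472  # length of a monthly climate series

windows = {
    "rectangular": np.ones(N),
    "hanning": np.hanning(N),
    "hamming": np.hamming(N),
    "kaiser (beta=8.6, ~Blackman)": np.kaiser(N, 8.6),
}

# 10% cosine taper at each end, flat in the middle 80%, as described above.
tukey = np.ones(N)
edge = N // 10
ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(edge) / edge))
tukey[:edge] = ramp
tukey[-edge:] = ramp[::-1]
windows["10% cosine taper"] = tukey

for name, w in windows.items():
    # Coherent gain: how much each window attenuates the spectrum overall.
    print(f"{name:30s} gain = {w.mean():.2f}")
```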

LT
Reply to  Greg
July 15, 2017 8:49 am

I’ll check out the chirp-z algorithm. I have written lots of different software in a few languages, and now I am tinkering with C# in Windows Visual Studio; I would be happy to give you this solution if you had any interest in messing with it. My whole point of this exercise is to see if I can use a deconvolution algorithm to remove the annual cycle from a temperature time series; it’s really just a hobby for me. But the deconvolution algorithm has been used commercially for many years in processing seismic data. Currently I am filtering the data to get rid of all the 1 - 3 year stuff.

Greg
Reply to  Greg
July 15, 2017 8:53 am

The vertical bands may be an artefact of the data window, caused by the lack of a window function. They are pretty surely not part of the climate signal.
Why are all the graphs basically flat below 3 years, even when working on actual temperatures?

Greg
July 15, 2017 9:00 am

Ha, you beat me to it. What filter are you using? I’m very interested in the deconvolution idea, but I’m not sure it is really appropriate here. I’d be interested in how you are doing it.
The problem is that there is not a fixed annual cycle. This is the problem with the “anomaly” approach: you remove an AVERAGE annual cycle and end up with lots of annual variability, since the annual cycle is not constant. It seems better to me to use well-designed low-pass filters for climate.
I have done some experimentation with deconvolution to refocus blurred photo images and remove hand shake etc.; very interesting.

Greg
Reply to  Greg
July 15, 2017 9:04 am

Are you retaining the complex result from the deconvolution? If not, this will mess with the phase as badly as anomalies do. Also, dosing the noise injected in deconvolution is very critical and can cause exaggerated swings. The early Pluto photographs during approach had obviously been processed in this way, and they had overdone it a bit. I could see it a mile off, having done this kind of processing.

LT
Reply to  Greg
July 15, 2017 9:11 am

The high pass is just a cutoff with a 12-sample linear taper. The deconvolution algorithm works by the following flow:
Input Time series
Compute the auto correlation
Apply Gaussian taper to autocorr
Get a time advanced version of the autocorr by gaplength
Use the Wiener Levinson algorithm to shape the auto correlation to the time advanced autocorr.
Convolve data with prediction filter to create predicted oscillations
Subtract the predicted time series from the original
Operator length is only as long as the maximum period of reverberations you are attempting to model and remove.
Choosing the right gaplength is more art than science; generally one or two zero crossings in length, as indicated on the autocorrelation of the time series.
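As a rough illustration of that flow (my sketch, not LT's code: I use simple prewhitening where he applies a Gaussian taper to the autocorrelation, and the parameter choices are illustrative), gapped predictive deconvolution can be written with scipy's Toeplitz solver:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def predictive_decon(x, oplen, gap):
    """Gapped predictive (Wiener-Levinson) deconvolution.

    x     : input time series
    oplen : prediction-operator length, samples
    gap   : prediction distance ("gaplength"), samples
    Returns the residual: the input minus its predictable part.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Autocorrelation (biased estimate), lags 0 .. gap+oplen-1
    ac = np.correlate(x, x, mode="full")[n - 1 : n - 1 + gap + oplen] / n
    ac = ac.copy()
    ac[0] *= 1.01  # 1% prewhitening stabilises the Toeplitz solve
    # Normal equations R a = r: R is Toeplitz from lags 0..oplen-1,
    # the right-hand side is the autocorrelation advanced by 'gap'
    a = solve_toeplitz(ac[:oplen], ac[gap : gap + oplen])
    # Predict each sample from the samples 'gap' or more steps earlier
    pred = np.zeros(n)
    for i in range(gap + oplen - 1, n):
        past = x[i - gap - oplen + 1 : i - gap + 1][::-1]
        pred[i] = np.dot(a, past)
    return x - pred

# Toy usage: a drifting annual cycle in monthly data plus trend and noise
t = np.arange(472)
rng = np.random.default_rng(1)
series = np.sin(2 * np.pi * t / 12.0) + 0.002 * t + 0.1 * rng.normal(size=472)
residual = predictive_decon(series, oplen=24, gap=3)
```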

Greg
Reply to  LT
July 15, 2017 9:26 am

Thanks, I follow most of that, I’ll have to look up Wiener Levinson to see how it works.
“Currently I am high pass filtering the data to get rid of all the 1 – 3 year stuff.” I presume you mean high-cut / low-pass in frequency terms. If you are multiplying the freq domain by a linear cut-off ( if I’m following you correctly ) this will cause some rather crude and unnecessary distortions in the time domain. You may like to look into Lanczos filter. I’m sure you will find the theory of it interesting as an optimal way to get a fast transition zone and a flat pass-band.
https://climategrog.wordpress.com/2013/11/28/lanczos-filter-script/
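[Not Greg’s actual script, but a minimal sketch of the Lanczos weights in Python, following the usual sinc-times-sigma-window construction; the cutoff and window values in the usage line are only illustrative.]

```python
import numpy as np

def lanczos_lowpass(cutoff, window):
    """Lanczos low-pass weights: an ideal sinc low-pass tapered by the
    Lanczos (sigma) window, giving a fast transition and flat pass-band.

    cutoff : cutoff frequency in cycles/sample (e.g. 1/24 = 2-year
             cutoff on monthly data)
    window : half-width of the filter in samples
    """
    k = np.arange(-window, window + 1)
    w = 2 * cutoff * np.sinc(2 * cutoff * k) * np.sinc(k / window)
    return w / w.sum()  # normalise to unit gain at zero frequency

# usage sketch: 2-year low-pass on a monthly series `temps`
# smooth = np.convolve(temps, lanczos_lowpass(1 / 24, 36), mode="valid")
```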

Greg
Reply to  LT
July 15, 2017 10:28 am

OK, so if I get the method correctly, it is a way of estimating the annual component without requiring it to be a constant 12-month cycle. So, if all goes well, you may be able to estimate a variable annual cycle.
However, Wiener deconvolution also requires stationarity, so again, working on the first difference of the time series may be helpful.
Since you are using a Gaussian on the autocorrelation anyway, you could use a derivative of Gaussian on the time series to do both in one hit. This also gives a slightly better approximation to the true derivative than using the first difference.
https://climategrog.wordpress.com/2016/09/18/diff-of-gaussian-filter/

I would be happy to give you this solution if you had any interest in messing with it

Yes, drop me a message on my About page at climategrog; it looks like it could be useful.

LT
Reply to  LT
July 15, 2017 10:53 am

Greg,
Yes, I meant low-pass. The time series is not stationary, but the Earth’s orbit and the seasonal oscillation, along with the ~90 W/m² solar variation, are the signal that will show up as the dominant component in the autocorrelation, and that is what I would like to remove so that the residual temperature trend can be analyzed.

Greg
Reply to  LT
July 15, 2017 11:35 am

I suspect that both components need to be stationary, but I cannot be sure since I have not fully gone through the maths of Wiener-Levinson.
People tend to be a bit dismissive of mathematical preconditions and criteria which get in the way, but sometimes they come back and bite. 😉
You could always try using D.o.G. in place of the Gaussian. It will give you another perspective and a cross-check on which parts of the spectrum are consistent between the two methods. Those which disappear are likely artefacts of the processing.

Greg
Reply to  LT
July 15, 2017 11:43 am

Hint: if you code DoG yourself by modifying your Gaussian, you need to make it a bit wider to maintain accuracy. A three-sigma Gaussian will get replaced by a five-sigma kernel in the DoG.
It’s all quite a neat trick, which profits from differentiation and convolution being linear operations and thus commutative: it is identical whether you do the diff or the Gaussian first. The improvement comes from doing an analytic diff of the Gaussian function BEFORE you start working on discrete sampled approximations. The first difference is not identical to the derivative; thus DoG is better than a first difference on top of a Gaussian convolution.
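[A minimal sketch of that trick; the normalisation choice (unit response to a unit-slope ramp) is mine, and note the wider ~5-sigma support Greg mentions.]

```python
import numpy as np

def dog_kernel(sigma, halfwidth_sigmas=5):
    """Derivative-of-Gaussian kernel: differentiate the Gaussian
    analytically FIRST, then sample. Support is ~5 sigma rather than
    the ~3 sigma of a plain Gaussian, to keep the tails accurate."""
    half = int(np.ceil(halfwidth_sigmas * sigma))
    x = np.arange(-half, half + 1, dtype=float)
    dg = -(x / sigma**2) * np.exp(-0.5 * (x / sigma) ** 2)
    return dg / -(x * dg).sum()  # unit response to a unit-slope ramp

# one convolution now smooths AND differentiates in a single hit:
# deriv = np.convolve(series, dog_kernel(12.0), mode="valid")
```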

Reply to  LT
July 18, 2017 9:00 am

The time series is not stationary, but the Earth’s orbit and the seasonal oscillation, along with the ~90 W/m² solar variation, are the signal that will show up as the dominant component in the autocorrelation, and that is what I would like to remove so that the residual temperature trend can be analyzed.

I never got why everyone always wanted to throw this away. You have a nice known signal we can use to check the response of our system; that’s how you test complex systems to find their response. Same with the 24-hour cycle: the day-to-day response alone tells us about the response time of the atmosphere.
If the response shows it’s invariant to changing CO2, trying to make up surface data to come up with some hypothetical hundredth-of-a-degree trend in some GAT is pointless.

Greg
Reply to  Willis Eschenbach
July 15, 2017 11:09 am

“With the backradiation being on average twice the solar radiation”
You are confusing SW and solar as though they were synonymous. The term “downwelling” is also confusing: radiation does not “well” up or down, it radiates. All the SW is solar, but not all solar is SW. Not all the downward IR is “backradiation”.
Part of the solar radiation is long-wave and not all the downwards IR is from thermal emissions in the atmosphere.
Solar PV cannot catch everything. The semiconductors have to be “tuned” to capture certain wavelengths. There is more energy available in the higher frequencies of SW.
If you want to catch LWIR, use a flat absorber to heat water. This is far more efficient than PV. Ideally, you use water to cool the back of your PV, thereby increasing its efficiency and collecting both UV and IR from the same surface area.
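[The “tuning” is the band gap: a photon can only free an electron if its energy h·c/λ exceeds the gap. A quick check, with silicon’s ~1.12 eV gap as the only assumed number:]

```python
H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # J per eV

def cutoff_wavelength_nm(bandgap_ev):
    """Longest wavelength a cell with this band gap can convert."""
    return H * C / (bandgap_ev * EV) * 1e9

# silicon (~1.12 eV) -> ~1107 nm, so thermal LWIR photons
# (roughly 4-100 um) fall far short of freeing an electron
```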

Reply to  Greg
July 16, 2017 1:33 am

Greg July 15, 2017 at 11:09 am

Part of the solar radiation is long-wave and not all the downwards IR is from thermal emissions in the atmosphere.

Funny thing is that CO2 absorbs some of the solar IR, so it actually prevents some solar from warming the surface. See just above 2000 nm.

Greg
Reply to  Greg
July 16, 2017 3:39 am

Yes, it affects both directions, but that absorption band is in the tail of the solar spectrum whilst it is at the peak of surface emissions. That means that it is the outward flow which is larger and has the dominant effect on the TOA budget at those wavelengths.
It gives you an idea how puny CO2 is in relation to all of the much broader and stronger bands controlled by water vapour.
Water vapour is the control knob, not CO2.

Greg
Reply to  Willis Eschenbach
July 15, 2017 11:28 am

“When the orbit falls back to a lower orbit, it produces electricity.”
The photons knock electrons free of atoms in the crystal lattice, leaving behind a positive “hole”. The opposing charges are collected by fine wires, and the voltage difference causes a current to flow in the external circuit.
Electrons can jump from one atom into a “hole” in a neighbouring atom, and thus the holes can be regarded as being mobile. The charge mobility of both holes and electrons is one of the properties which are controlled and engineered in semiconductors.

Reply to  Willis Eschenbach
July 16, 2017 1:25 am

Willis Eschenbach July 15, 2017 at 10:29 am

Unfortunately, at the frequencies of thermal radiation, the individual photons don’t have the energy to knock an electron out of its orbit.
However, it seems (although I may be wrong) that you think that this argues against the existence of ~ 340 W/m2 of downwelling radiation. This radiation is measured, not estimated or modeled but measured, by dozens of scientists around the world every day.

Like anything with a temperature, the atmosphere must radiate according to that temperature. But imo a low-density, low-temperature gas can’t radiate enough energy to increase the temperature of the warmer oceans (or soil).
As you say, the photons just don’t have the energy required.
Pyrgeometers CALCULATE the amount of radiation they receive from the atmosphere.
http://www.kippzonen.com/Product/16/CGR3-Pyrgeometer#.WWsYq9Tyg1I
“The CGR3 is a pyrgeometer, designed for meteorological measurements of downward atmospheric long wave radiation. The CGR3 provides a voltage that is proportional to the net radiation in the far infrared (FIR). By calculation, downward atmospheric long wave radiation is derived. For this reason CGR3 embodies a temperature sensor.”
For an actual MEASUREMENT, I believe the instrument should be cooled to 0 K and then measure how much the temperature rises when in radiative balance with the incoming radiation. I don’t see that temperature rising to ~278 K when the instrument points upward at the night sky.
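[For reference, the derivation the datasheet alludes to is the standard pyrgeometer equation: the thermopile voltage gives the net LW exchange, and the instrument’s own Stefan-Boltzmann emission at body temperature is added back. A sketch, with an illustrative sensitivity value:]

```python
SIGMA = 5.670374e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def downward_longwave(u_volts, sensitivity, t_body_kelvin):
    """Standard pyrgeometer equation: L_down = U/S + sigma * T_body^4,
    where S is the factory calibration in V per (W/m^2)."""
    return u_volts / sensitivity + SIGMA * t_body_kelvin**4

# e.g. a -1.0 mV signal, S = 10 uV/(W/m^2), body at 288 K:
# downward_longwave(-1.0e-3, 10e-6, 288.0) -> ~290 W/m^2
```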
To explain our observed surface temperatures we don’t need any back radiation from the atmosphere.
see https://wattsupwiththat.com/2017/07/13/temperature-and-forcing/comment-page-1/#comment-2552384
For the oceans the mechanism is obviously different, but imo the TEMPERATURE of the deep oceans is completely set and maintained by geothermal energy.
Simply put, Earth has a temperature. The sun is not warming a blackbody from 0K to 255K or so.
It just increases the temperature of a shallow top layer, which in turn warms the atmosphere.

July 15, 2017 12:58 pm

Basic physics is that heat is radiated, conducted, convected. “Welling” is not a physics principle in the context of heat, up or down.
So, what Greg said. Plus, water heating is good because it integrates the heat it acquires over an extended period into the high-capacity energy store of the domestic hot-water system (whether mains pressure or gravity header tank) for later use over a short period, on demand and when required. That is, it is not real-time load balancing, as pure electrical energy must always be; no batteries required, etc. NASA numbers on solar insolation: hope this helps.

July 15, 2017 1:07 pm

As I am here, you might like to consider this, a collection I just made. You decide what is noise, and what happens next:
https://www.dropbox.com/sh/7lfirzox1a0hpq4/AADLsIYVP5aa4iisnQOChI0Ca?dl=0

July 15, 2017 1:15 pm

Try again. I don’t like WordPress, or HTML. Gimme native Mac any day.

Greg
July 15, 2017 2:19 pm

The most interesting part of this article is figure 8, in particular the lower part of the land data, which shows an inverse relationship. This underlines why global average temps are not informative.
Hopefully he will be inspired to do a follow up looking at the geographical regions represented by the various bits of the land data scatter plot.
Most of that lower section must be Antarctica, with changes in TOA being driven by surface temps, not the other way around. The flat top to the red splash is the tropics, where temps are tightly regulated and insensitive to radiation changes.

Reply to  Greg
July 15, 2017 5:19 pm

Anywhere the surface of the earth is covered with water or ice, the surface temperature will be “tightly regulated” by the processes of evaporation/condensation and freezing/sublimation. The temperature will approach the measured dew point or frost point, not the measured air temperature. Air temperature will never go below the dew point as long as there is moisture present.
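[For anyone who wants to check that against station data, the dew point can be estimated from temperature and relative humidity with the Magnus approximation; this constant set is one common choice, not the only one.]

```python
import math

def dew_point_c(temp_c, rh_percent):
    """Magnus approximation for dew point in degrees C."""
    a, b = 17.625, 243.04
    gamma = math.log(rh_percent / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# e.g. 25 C air at 60% relative humidity -> dew point ~16.7 C
```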

Greg
Reply to  fhhaynie
July 15, 2017 11:54 pm

The lower slope of the sea data at the top of figure 8 shows the feedbacks are at least 2 or 3 times stronger in the tropics, and temperature is much more tightly regulated there.
The lower red (land) data shows a totally different regime at work in Antarctica.
It would be worth properly identifying the regions contributing to the flat top in the red data too. Figure 8 is very informative. Good work by Willis.

July 18, 2017 7:00 am

On the large numbers of source energy and heat capacity, I doubt the atmosphere is more than a relatively low-energy symptom/consequence of what the oceans, containing 1,000 times more heat energy, are doing, plus whatever imbalance of solar insolation the atmosphere may transfer. However, this subject ignores one very powerful direct heating mechanism in the oceans, more powerful than radiatists can possibly imagine…
While radiative energy through the crust is small, the effect of petatonnes of magma continuously being reheated by our radioactive interior and recycled through the ocean floors every 200 million years, entering the oceans at a 1,000-degree temperature delta, is NOT. That’s real “potential”, and serious heat capacity, transferred directly to the water, which is being continually warmed this way, more and less.
BTW, there is some very made-up physics above, not even correct at high-school level. Temperature exchange is driven by temperature gradient, not “potential”, which is used in other specialisations (gravity, chemical, electrical) but not thermal. There is no credibility attached to those who make up their own beliefs, language and laws of physics to go with it, as they will not know what is proven science and what is made up/hypothesis, and cannot communicate meaningfully with those who have studied the subject in order to understand it better and discuss it with others sharing this knowledge. The whole point of a universal science language is precise exchange and learning. No point in commenting if you have not first mastered these basics. Science doesn’t care what you believe.
PS: And magma flow through the thin basalt sea floor WILL vary asymmetrically at a Milankovitch maximum, as the wafer-thin 7 km crust on our 12,000 km diameter hot rock pudding gets dragged about over the molten and semi-molten mantle, plates banging into each other for the 1,000 years it takes to kick off an interglacial, creating new leaks along with the existing calderas; the molten core itself sees a 30% annual gravitational variation of a force 200 times that of the moon. There are roughly 1 million active volcanoes under the oceans, 100,000 over 1 km high. Mount Fuji wasn’t born in a day, but it is only one ice age old, 100,000 years.

Reply to  brianrlcatt
July 18, 2017 7:15 am

Temperature exchange is driven by temperature gradient, not “potential”

The gradient between 2 (or multiple) potentials.

Reply to  micro6500
July 18, 2017 7:49 am

Temperature is a measure of the collisions of gaseous molecules: kinetic energy being converted to heat energy. Think PV = nRT when thinking about gradients.

Reply to  fhhaynie
July 18, 2017 8:30 am

You’re going to have to stop explaining proven science established over centuries; people prefer to make up their own to fit whatever they believe at the time. The likes of Michael Mann and Paul Ehrlich, for instance, the Joseph Goebbels of science approach, all with science-denying agendas that put power and money above honest, decent and truthful fact and the laws of physics.
PS: It took a while before I realised, when doing water-bottle rocket science, that PV has the units of energy. We were never taught that. I ended up with PV = mgh for the altitude reached without atmospheric drag and with instantaneous water discharge. People have done PhDs on this… you can get a grant for anything, including science and plant denial as well as data adjusting, as Michael Mann has again demonstrated.
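[A toy check of that PV = mgh result, taking the claim at face value (no drag, instantaneous discharge); every number below is illustrative.]

```python
P = 5.0e5   # 5 bar gauge pressure, Pa
V = 1.0e-3  # 1 litre of water expelled, m^3
m = 0.15    # dry rocket mass, kg
g = 9.81    # m/s^2

h = P * V / (m * g)  # PV = mgh  ->  h ~ 340 m ideal ceiling
```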

Reply to  fhhaynie
July 18, 2017 8:45 am

Temperature is a measure of the collisions of gaseous molecules: kinetic energy being converted to heat energy. Think PV = nRT when thinking about gradients.

PV is a result of gravity, the Pauli exclusion principle, and photons being the force carrier for charged matter.

Reply to  micro6500
July 18, 2017 8:49 am

with instantaneous water discharge.

Imagine how much higher it would go if you could vaporize the water as it left the rocket.
That’s the work you can do with water vapor.

Reply to  micro6500
July 18, 2017 8:57 am

Not if it vapourised outside the rocket; it has to be action and reaction, so the phase change would need to drive the gas mass out of the bottle in the required direction. I do explain that the Saturn V and the Shuttle took off by blowing water vapour out of their exhausts. Quite fast.

Reply to  brianrlcatt
July 18, 2017 9:08 am

Yes, I understand. Consider a clear-sky atmospheric column as the rocket combustion chamber, with the Sun pumping in water vapor during the day; it settles and warms the surface at night, when late at night it starts condensing.

Reply to  brianrlcatt
July 18, 2017 9:09 am

If we could see it, it would look like this
https://micro6500blog.files.wordpress.com/2017/06/20170626_185905.mp4

Wim Röst
July 23, 2017 3:52 am

Willis, I have got a request.
Figure 7 (Long-term average surface temperature, CERES data, Mar 2000 - Feb 2016) is giving some interesting data:
Land: 8.7
Ocean: 17.5
Antarctica: -26.6
‘Ocean’ is supposed to be a better sun collector and a better ‘energy saver’ than ‘land’ is. The above numbers seem to confirm this, but I don’t think they represent the right proportions. For example, because Antarctica (land) is so cold because of latitude, it will bring down the average surface temperature of ‘land’ considerably. The same goes for altitude, as shown by Tibet in your figure: ‘high’ is represented by a low surface temperature because of the altitude. To be able to compare well, we need comparable numbers at ‘ocean level’ and from the same latitude.
Lacking the skills myself to find this out, I want to ask you whether you can produce, for every latitude (or every 10 degrees of latitude), the average surface temperature for land and for ocean grid cells, with the ‘land’ grid cells corrected for altitude. ‘Season’ (month) must have a large influence as well.
I think those data will give a good insight into the different latitudinal effects on the distribution of land and ocean surface temperatures, and so into the role ‘land’ and ‘oceans’ play at every latitude, in the NH and SH. The results could be very interesting.
Zonal differences are very important. For example, the changes in obliquity cause zonal changes, during seasons but also on geological timescales. And those changes could be very similar in effect. Knowing all this will help us understand the climate mechanisms of the Earth better.
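[A sketch of the kind of reduction Wim is asking for, assuming flat arrays of grid-cell latitude, temperature, and elevation, and a fixed 6.5 K/km mean lapse rate; both the correction and the banding are simplifications.]

```python
import numpy as np

LAPSE = 6.5e-3  # K per metre; a standard mean lapse rate (assumption)

def to_sea_level(temp, elev_m):
    """Reduce land temperatures to sea-level equivalents."""
    return temp + LAPSE * elev_m

def zonal_means(lat, temp, band=10):
    """Average temperature in latitude bands `band` degrees wide."""
    edges = np.arange(-90, 91, band)
    idx = np.clip(np.digitize(lat, edges) - 1, 0, len(edges) - 2)
    return edges[:-1], np.array(
        [temp[idx == i].mean() if np.any(idx == i) else np.nan
         for i in range(len(edges) - 1)])
```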

Wim Röst
Reply to  Willis Eschenbach
July 24, 2017 2:54 pm

Willis, thanks for the link; I’ve read your post ‘Temperature Field’. Interesting, nice maps, but not exactly what I wanted. I am very interested in absolute temperatures of ocean and land per latitude (!), corrected for elevation. I suppose (in combination with the maps) they will tell us more about the real role of the oceans.
As an example of the questions that absolute temperatures can raise (questions that can lead to interesting answers), I had a look at the numbers for surface temperatures in both posts. In your last post you included the data for two extra years, from March 2014 to March 2016.
First column: data Mar 2000 - Feb 2014, fig. 1 of the post ‘Temperature Field’
Second column: data Mar 2000 - Feb 2016, fig. 7 of the post ‘Temperature and Forcing’
Avg Globe: 15 (15.1)
NH: 15.7 (15.8)
SH: 14.4 (14.4)
Tropics: 26.5 (26.6)
Arctic: -11.9 (-11.8)
Antarctic: -39.6 (-26.6)
Land: 10.5 (8.7)
Ocean: 16.8 (17.5)
While the average temperature of the Earth as a whole remains nearly the same over the two periods, the last three items show unexpected temperature changes. The Antarctic temperatures over the whole period are so much higher than expected that I think there must be a typing error. The next two items, Land and Ocean, are also remarkable. After two El Nino years, average ocean temperatures rose quite a bit: 0.7 degrees. I think that is too much for averages over the whole period. And land temperatures dropped (averaged over the whole period…) by nearly two degrees (1.8). Given the difference in surface area of ocean and land, the numbers might be consistent with the average globe numbers of 15 (15.1). But if they are correct, what is the explanation for the fact that warm El Nino years lead to lower land temperatures?
With the CERES surface data more things might play a role, technical things, and perhaps you can’t compare the two periods as I’ve done. But my point is that the absolute temperatures per latitude as requested (elevation-corrected) might give some insight into the working of our present Interglacial - Pleistocene climate system. Maps will surely be interesting, but so would a table with the absolute numbers, split for land and ocean per latitude. They will raise interesting questions, I suppose.

Reply to  Willis Eschenbach
July 24, 2017 5:32 am

Willis Eschenbach July 23, 2017 at 7:54 pm

Ben, I don’t understand this. As you point out, the geothermal flux is on the order of a tenth of a watt or so.
Given that … if there were no sun and no atmosphere, please provide an estimate of the surface temperature of the earth.

In that case the geothermal flux would be slightly higher (greater delta T over the crust) and the geothermal gradient consequently a bit steeper.
The surface temperature needed to radiate away this flux is ~35-40 K (emissivity 1.0).
The deep mines I mentioned would now be very cold (< 100 K) instead of ~330 K.
But when applying external warming to the surface, the flux is blocked, and the crust starts to warm, until the gradient is such that energy can be lost again at the surface.
On planet Earth the temperature just below the surface provides the base temperature on top of which the sun does its warming magic. Think solar Joules warming just ~50 cm of soil, or the upper 10 meter of water.
The sun delivers in one day normally 5 or 6 kWh/m^2, just enough to warm 10 meters of water 0.5 K or so.
http://www.pveducation.org/pvcdrom/average-solar-radiation
For land see eg.
http://www.tankonyvtar.hu/hu/tartalom/tamop412A/2011_0059_SCORM_MFGFT5054-EN/content/2/2_1/image099.jpg
With the resulting surface temperatures, Earth loses on average ~240 W/m^2 to space, since the atmosphere slows the energy loss compared to the ~390-400 W/m^2 the surface would radiate directly to space without an atmosphere.
Since the sun delivers ~240 W/m^2 on average, the energy budget is balanced at a much higher surface temperature than would be possible with a purely radiative balance, as e.g. on our moon (average temperature ~197 K).
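[A quick back-of-envelope check of both of those numbers; a sketch assuming emissivity 1 and pure water, as in Ben’s figures.]

```python
SIGMA = 5.670374e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

# temperature needed to radiate away a ~0.1 W/m^2 geothermal flux:
t_geo = (0.1 / SIGMA) ** 0.25   # ~36 K, inside the stated 35-40 K range

# one day's solar energy spread through the top 10 m of water:
energy = 5.5 * 3.6e6            # 5.5 kWh/m^2 in J/m^2
mass = 1000.0 * 10.0            # kg of water under 1 m^2
dT = energy / (mass * 4186.0)   # ~0.47 K, i.e. "0.5 K or so"
```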

Reply to  Ben Wouters
July 24, 2017 1:17 pm

Willis Eschenbach July 24, 2017 at 10:05 am

How does the fact that the earth would be extremely cold if the sun didn’t shine somehow mean that the sun is “able to warm the surface to our observed values”? I don’t see what one has to do with the other. Just because it would be cold without the sun doesn’t mean it would be 15°C with the sun, that makes no sense at all.

The geothermal gradient adjusts very slowly to the average surface temperature.
If e.g. the average surface temperature increases 10 K, the flux can’t reach the surface anymore, and the entire crust begins to warm up until the gradient is re-established, starting at the higher surface temperature, and the flux can flow again. But it does indeed take a lot of time to warm up 25-70 km of rock 😉
see https://en.wikipedia.org/wiki/Geothermal_gradient#Variations
The following GIF is a 24-hour display of temperature readings from 70 cm below the surface up to 200 m in the air on a calm, clear summer night, nicely showing how a nocturnal inversion develops:
http://wtrs.synology.me/photo/share/Su74sMwt/photo_54455354_70726f6674616c6c202831292e676966
Notice how only the upper 20-30 cm of the soil reacts to solar warming and overnight cooling.
The “base temperature” at ~50 cm and below is caused by geothermal energy. (13 centigrade is roughly the yearly average temperature at that location)
I haven’t found any numbers yet for how long it takes the geothermal gradient to re-establish after a 1 K change in average surface temperature. Very interested, especially in the number for oceanic crust.
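[For an order of magnitude, the time for a surface change to diffuse through a rock layer scales as L²/κ; taking κ ≈ 1e-6 m²/s as a typical rock diffusivity (my assumption, not Ben’s):]

```python
kappa = 1.0e-6  # thermal diffusivity of rock, m^2/s (typical value)
year = 3.156e7  # seconds per year

for L_km in (7, 25, 70):  # oceanic vs continental crust thicknesses
    tau = (L_km * 1e3) ** 2 / kappa   # diffusion timescale, seconds
    print(f"{L_km} km -> ~{tau / year / 1e6:.0f} Myr")
# ~2 Myr for 7 km, ~20 Myr for 25 km, ~155 Myr for 70 km
```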

July 25, 2017 6:40 am

Willis Eschenbach July 24, 2017 at 4:47 pm

The part it seems you are missing is that yes, 300 metres down in the earth it’s hot. And 300 metres down in the ocean it’s cold.
So what? That doesn’t make us any warmer or any colder.

Willis, it seems we’re talking past each other.
To me temperatures at ~300 m in the oceans and ~15 m in the soil are a base temperature, not caused by solar energy. The surface temperatures are this base temperature plus what the sun adds to it.
See e.g. http://www.oc.nps.edu/nom/day1/annual_cycle.gif
At ~150 m the temperature does not change throughout the year => no solar influence.
The surface temperature changes from ~4.5C to ~13C and back again in autumn and winter.
In the Cretaceous the deep ocean temperatures were ~20C or even higher, so the surface temperatures were correspondingly higher.
Do you agree with this?
In the energy budget for the continents the geothermal flux can obviously be neglected, relative to the solar flux. What imo can’t be neglected is the geothermally caused temperature just below the surface. Unless of course you claim that it doesn’t matter whether the temperature at 15 m is 0 K, 100 K, 200 K or 290 K, and the surface temperatures will always be the same, caused by solar energy only.

Reply to  Willis Eschenbach
July 26, 2017 12:57 am

Willis Eschenbach July 25, 2017 at 8:36 am

So yes, I agree with you. Base temperature is something like 30-40K “plus what the sun adds to it”.

Don’t think you’ll find a temperature of 30-40 K anywhere between the ~7000 K inner core and the ~290 K surface 😉, but I’ll let the continents rest for now.
Good to see you agree with the base temperature of the oceans at around 4C.
This means that for ~70% of earth’s surface the temperature is already ~277K, before the sun starts to add anything. With an observed surface temperature of ~290K we only have to explain a ~13K difference for the surface layer to arrive at the observed surface temperatures.
Question is how much of this difference the sun delivers, and how much the atmosphere.
I go for 100% solar, but maybe a few percent can be attributed to the atmosphere.
This means that we do not need the 33 K atmospheric warming the GHE claims, to explain the surface temperature of the oceans, and we still have a balanced energy budget for planet Earth.
We do however need to explain how the deep oceans got their temperature in the first place, and how eg they got so much warmer in the Cretaceous. I think I have a solid answer to those questions.

July 27, 2017 1:14 pm

Willis Eschenbach July 26, 2017 at 9:39 am

Willis, it seems we’re talking past each other.
To me temperatures at ~300 m in the oceans and ~15 m in the soil are a base temperature, not caused by solar energy. The surface temperatures are this base temperature plus what the sun adds to it.
Thanks, Ben. Seems to me we’re in agreement. You’ve already said that without the sun, the surface temperature would be about 30°-40°K … which obviously has to be the “base temperature” above which there are further additions from the sun.

The 30-40K for earth is theoretical only, since earth has never been without sun.
It does happen on our moon: craters near the poles that never get any sunshine are at ~25 K.
On the equator the sub-regolith temperature is ~220K, settled on the average surface temperature.
see http://earthguide.ucsd.edu/earthguide/diagrams/woce/
The dark blue 4C layer is roughly the depth where the sun has no more influence.
My point remains that the base temperature of the oceans is ~277K and of the continents around the average surface temperature for a specific location. These temperatures are completely caused by geothermal energy and the sun warms the surface layer on top of these temperatures.
The oceans were created boiling hot, since they sat on more or less bare magma, and the temperature of the DEEP oceans has been maintained by the geothermal flux plus occasional large magma eruptions like the ~100 million km^3 Ontong Java one. For the continents the geothermal flux is sufficient to maintain the temperature of the entire crust as we see it today (increasing 25 K for every km you go down into the crust).
I do understand that the atmosphere radiates in all directions, since the direct radiation from the surface through the atmospheric window is a small part of the total radiation to space. I don’t see the atmosphere increasing the temperature of the surface, since imo solar energy is enough to explain the surface temperatures, given the base temperatures as shown above. A possible exception is (nocturnal) inversions, when part of the atmosphere is a bit warmer than the surface.

Reply to  Ben Wouters
July 27, 2017 2:16 pm

Possible exception (nocturnal) inversions, when part of the atmosphere is a bit warmer than the surface.

This is the “greenhouse effect” in operation at night. The surface cools and air temps drop; as temps near the dew point, more water vapor starts condensing. That releases latent heat (roughly 2,450 J/g at these temperatures), some of which re-evaporates water that just condensed. This creates a lot of DWLR and helps keep surface temps from dropping. It sacrifices evaporated water, and regulates to the dew point temp. But this only starts after relative humidity gets to higher levels, which in the temperate zones is usually late at night.
The effect of this on cooling is that it is nonlinear: cooling is faster at dusk, and slower near the dew point. This is in addition to the slowing from the T^4 drop in radiative loss as the temperature falls.
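[A toy model of that shape, just to make the nonlinearity visible; the sky temperature, areal heat capacity, and the condensation “brake” factor are all invented for illustration, not measured values.]

```python
import numpy as np

SIGMA = 5.670374e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def night_cooling(t_surf, t_dew, t_sky, hours=10.0, c_areal=3.0e5):
    """Net T^4 radiative loss, strongly braked by latent heat release
    once the surface approaches the dew point."""
    dt, temps = 60.0, []
    for _ in range(int(hours * 3600 / dt)):
        net = SIGMA * (t_surf**4 - t_sky**4)  # net LW loss, W/m^2
        if t_surf - t_dew < 1.0:              # near the dew point:
            net *= 0.1                        # condensation offsets ~90%
        t_surf -= net * dt / c_areal
        temps.append(t_surf)
    return np.array(temps)

# fast cooling at dusk, flattening near the dew point:
# night_cooling(295.0, 285.0, 260.0)
```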