Guest Post by Willis Eschenbach
Over at Dr. Curry’s excellent website, she’s discussing the Red and Blue Team approach. If I ran the zoo and could re-examine the climate question, I’d want to look at what I see as the central misunderstanding in the current theory of climate.
This is the mistaken idea that changes in global temperature are a linear function of changes in the top-of-atmosphere (TOA) radiation balance (usually called “forcing”).
As evidence of the centrality of this misunderstanding, I offer the fact that the climate model output global surface temperature can be emulated to great accuracy as a lagged linear transformation of the forcings. This means that in the models, everything but the forcing cancels out and the temperature is a function of the forcings and very little else. In addition, the paper laying out those claimed mathematical underpinnings is one of the more highly-cited papers in the field.
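To make that claim concrete, here is a minimal sketch of the kind of one-box, lagged-linear emulator being described. The parameter values are illustrative assumptions only, not fitted to any particular model:

```python
# Minimal sketch of a "lagged linear transformation of the forcings":
# temperature relaxes toward lam * F(t) with e-folding time tau.
# lam and tau here are illustrative assumptions, not fitted values.
import numpy as np

def emulate(forcing, lam=0.5, tau=4.0, dt=1.0):
    """lam: sensitivity (K per W/m2); tau: time constant (years)."""
    temp = np.zeros(len(forcing))
    for i in range(1, len(forcing)):
        # Euler step of dT/dt = (lam * F - T) / tau
        temp[i] = temp[i - 1] + dt / tau * (lam * forcing[i] - temp[i - 1])
    return temp

# Toy example: a slow forcing ramp with a volcanic-style dip
forcing = 0.04 * np.arange(100.0)
forcing[50:53] -= 2.0
print(emulate(forcing)[-3:])  # a lagged, scaled version of the forcing
```

Feed such an emulator the model forcings and you reproduce the model temperatures; that is the sense in which everything else cancels out.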
To me, this idea that the hugely complex climate system has a secret control knob with a linear and predictable response is wildly improbable on the face of it. Complex natural systems have a whole host of internal feedbacks and mechanisms that make them act in unpredictable ways. I know of no complex natural system which has anything equivalent to such a control knob.
But that’s just one of the objections to the idea that temperature slavishly follows forcing. In my post called “The Cold Equations” I discussed the rickety mathematical underpinnings of this idea. And in “The TAO That Can Be Spoken” I showed that there are times when TOA forcing increases, but the temperature decreases.
Recently I’ve been looking at what the CERES data can tell us about the question of forcing and temperature. We can look at the relationship in a couple of ways, as a time series or a long-term average. I’ll look at both. Let me start by showing how the top-of-atmosphere (TOA) radiation imbalance varies over time. Figure 1 shows three things—the raw TOA forcing data, the seasonal component of the data, and the “residual”, what remains once we remove the seasonal component.
Figure 1. Time series, TOA radiative forcing. The top panel shows the CERES data. The middle panel shows the seasonal component, which is caused by the earth being different distances from the sun at different times of the year. The bottom panel shows the residual, what is left over after the seasonal component is subtracted from the data.
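For anyone who wants to reproduce this kind of decomposition, here is a minimal sketch. I am assuming the usual monthly-climatology method; the post does not state the exact procedure:

```python
# Split a monthly series into a seasonal component and a residual by
# subtracting each calendar month's long-term mean (the climatology).
import numpy as np

def remove_seasonal(series, start_month=3):  # CERES record starts Mar 2000
    series = np.asarray(series, dtype=float)
    month = (np.arange(len(series)) + start_month - 1) % 12
    climatology = np.array([series[month == m].mean() for m in range(12)])
    seasonal = climatology[month]
    return seasonal, series - seasonal  # seasonal part, residual
```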
And here is the corresponding view of the surface temperature.
Figure 2. Time series, global average surface temperature. The top panel shows the data. The middle panel shows the seasonal component. The bottom panel shows the residual, what is left over after the seasonal component is subtracted from the data. Note the El Nino-related warming at the end of 2015.
Now, the question of interest involves the residuals. If there is a month with unusually high TOA radiation, does it correspond with a surface warming that month? For that, we can use a scatterplot of the residuals.
Figure 3. Scatterplot of TOA radiation anomaly (data minus seasonal) versus temperature anomaly (data minus seasonal). Monthly data, N = 192. P-value adjusted for autocorrelation.
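The caption doesn't spell out which autocorrelation adjustment was applied. Here is a sketch of one standard approach, reducing the effective sample size using the lag-1 autocorrelations of both series (after Bretherton et al. 1999); treat the details as my assumption:

```python
# Correlation test with an effective sample size reduced for
# autocorrelation; the exact adjustment used in the post is not stated.
import numpy as np
from scipy import stats

def lag1_autocorr(x):
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

def corr_test_adjusted(x, y):
    r = np.corrcoef(x, y)[0, 1]
    phi = lag1_autocorr(x) * lag1_autocorr(y)
    n_eff = len(x) * (1 - phi) / (1 + phi)  # effective N, less than 192 here
    t = r * np.sqrt((n_eff - 2) / (1 - r ** 2))
    p = 2 * stats.t.sf(abs(t), df=n_eff - 2)
    return r, n_eff, p
```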
From that scatterplot, we’d have to conclude that there’s little short-term correlation between months with excess forcing and months with high temperature.
Now, this doesn’t exhaust the possibilities. There could be a correlation with a time lag between cause and effect. For this, we need to look at the “cross-correlation”. This measures the correlation at a variety of lags. Since we are investigating the question of whether TOA forcing roolz or not, we need to look at the conditions where the temperature lags the TOA forcing (positive lags). Figure 4 shows the cross-correlation.
Figure 4. Cross-correlation, TOA forcing and temperature. Temperature lagging TOA is shown as positive. In no case are the correlations even approaching significance.
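A minimal sketch of the lagged-correlation calculation, using the same sign convention as Figure 4 (positive lag means temperature lags the forcing):

```python
# Cross-correlation of the two residual series at a range of lags.
import numpy as np

def cross_corr(forcing, temp, max_lag=24):
    n = len(forcing)
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:  # temperature lags forcing by `lag` months
            a, b = forcing[:n - lag], temp[lag:]
        else:         # forcing lags temperature
            a, b = forcing[-lag:], temp[:n + lag]
        out[lag] = np.corrcoef(a, b)[0, 1]
    return out
```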
OK, so on average there’s very little correlation between TOA forcing and temperature. There’s another way we can look at the question. This is the temporal correlation of TOA forcing and temperature on a 1° latitude by 1° longitude gridcell basis. Figure 5 shows that result:
Figure 5. Correlation of TOA forcing and temperature anomalies, 1° latitude by 1° longitude gridcells. Seasonal components removed in all cases.
There are some interesting results there. First, correlation over the land is slightly positive, and over the ocean, it is slightly negative. Half the gridcells are in the range ±0.15, very poorly correlated. Nowhere is there a strong positive correlation. On the other hand, Antarctica is strongly negatively correlated. I have no idea why.
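For completeness, here is a minimal sketch of the per-gridcell calculation behind a map like Figure 5, assuming (time, lat, lon) anomaly arrays with the seasonal components already removed:

```python
# Correlation along the time axis for every gridcell at once,
# producing a (lat, lon) map of values in [-1, 1] as in Figure 5.
import numpy as np

def gridcell_corr(toa, temp):
    ta = toa - toa.mean(axis=0)
    sa = temp - temp.mean(axis=0)
    num = (ta * sa).sum(axis=0)
    den = np.sqrt((ta ** 2).sum(axis=0) * (sa ** 2).sum(axis=0))
    return num / den
```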
Now, I said at the outset that there were a couple of ways to look at this relationship between surface temperature and TOA radiative balance—how it evolves over time, and how it is reflected in long-term averages. Above we’ve looked at it over time, seeing in a variety of ways whether monthly or annual changes in one are reflected in the other. Now let’s look at the averages. First, here’s a map of the average TOA radiation imbalances.
Figure 6. Long-term average TOA net forcing. CERES data, Mar 2000 – Feb 2016
And here is the corresponding map for the temperature, from the same dataset.
Figure 7. Long-term average surface temperature. CERES data, Mar 2000 – Feb 2016
Clearly, in the long-term average we can see that there is a relationship between TOA imbalance and surface temperature. To investigate the relationship, Figure 8 shows a scatterplot of gridcell temperature versus gridcell TOA imbalance.
Figure 8. Scatterplot, temperature versus TOA radiation imbalance. Note that there are very few gridcells warmer than 30°C. N = 64,800 gridcells.
Whoa … can you say “non-linear”?
Obviously, the situation on the land is much more varied than over the ocean, due to differences in things like water availability and altitude. To view things more clearly, here’s a look at just the situation over the ocean.
Figure 9. As in Figure 8, but showing just the ocean. Note that almost none of the ocean is over 30°C. N = 43,350 gridcells.
Now, the interesting thing about Figure 9 is the red line. This line shows the variation in radiation we’d expect if we calculate the radiation using the standard Stefan-Boltzmann equation that relates temperature and radiation. (See end notes for the math details.) And as you can see, the Stefan-Boltzmann equation explains most of the variation in the ocean data.
So where does this leave us? It seems that short-term variations in TOA radiation are very poorly correlated with temperature. On the other hand, there is a long-term correlation. This long-term correlation is well-described by the Stefan-Boltzmann relationship, with the exception of the hot end of the scale. At the hot end, other mechanisms obviously come into play which are limiting the maximum ocean and land temperatures.
Figure 9 also indicates that other than the Stefan-Boltzmann relationship, the net feedback is about zero. This is what we would expect in a governed, thermally regulated system. In such a system, sometimes the feedback acts to warm the surface, and other times the feedback acts to cool the surface. Overall, we’d expect them to cancel out.
Is this relationship how we can expect the globe to respond to long-term changes in forcing? Unknown. However, if it is the case, it indicates that other things being equal (which they never are), a doubling of CO2 to 800 ppmv would warm the earth by about two-thirds of a degree …
However, there’s another under-appreciated factor. This is that we’re extremely unlikely to ever double the atmospheric CO2 to eight hundred ppmv from the current value of about four hundred ppmv. In a post called “Apocalypse Cancelled, Sorry, No Ticket Refunds”, I discussed sixteen different supply-driven estimates of future CO2 levels over the 21st century. These peak value estimates ranged from 440 to 630 ppmv, with a median value of 530 ppmv … a long ways from doubling.
So, IF in fact the net feedback is zero and the relationship between TOA forcing and surface temperature is thus governed by the Stefan-Boltzmann equation as Figure 9 indicates, the worst-case scenario of 630 ppmv would give us a temperature increase of a bit under half a degree …
And if I ran the Red Team, that’s what I’d be looking at.
Here, it’s after midnight and the fog has come in from the ocean. The redwood trees are half-visible in the bright moonglow. There’s no wind, and the fog is blanketing the sound. Normally there’s not much noise here in the forest, but tonight it’s sensory-deprivation quiet … what a world.
My best regards to everyone, there are always more questions than answers,
w.
PS—if you comment please QUOTE THE EXACT WORDS YOU ARE DISCUSSING, so we can all understand your subject.
THE MATH: The Stefan-Boltzmann equation is usually written as
W = sigma epsilon T^4
where W is the radiation in W/m2, sigma is the Stefan-Boltzmann constant (5.67e-8 W/m2/K^4), epsilon is the emissivity (usually taken as 1), and T is the temperature in kelvin.
Differentiating, we get
dT/dW = (W / (sigma epsilon))^(1/4) / (4 * W) = T / (4 * W)
This is the equation used to calculate the area-weighted mean slope shown in Figure 9. The radiation imbalance was taken around the area-weighted mean oceanic thermal radiation of 405 W/m2.
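For anyone who wants to check the arithmetic, here is a minimal sketch. The 3.7 W/m2 figure for a CO2 doubling and the simplified forcing formula 5.35 * ln(C/C0) are standard published approximations (Myhre et al.), not numbers taken from the post itself:

```python
# Numeric check of the end-note math at W = 405 W/m2.
import math

sigma, eps, W = 5.67e-8, 1.0, 405.0
T = (W / (sigma * eps)) ** 0.25                # about 290.7 K
slope = T / (4 * W)                            # about 0.18 K per W/m2

print(slope * 3.7)                             # doubling of CO2: ~0.66 K
print(slope * 5.35 * math.log(630 / 400))      # 630 ppmv case:  ~0.44 K
```

Both results land on the figures quoted in the post: about two-thirds of a degree for a full doubling, and a bit under half a degree for the 630 ppmv worst case.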
“First, correlation over the land is slightly positive, and over the ocean, it is slightly negative. Half the gridcells are in the range ±0.15, very poorly correlated. ”
While it’s slightly difficult to see from the graph legend what 0 would be, I get the impression that the sign should be the other way around in the text above?
Troed,
That is my subjective impression as well. Also, it looks like Greenland has a strong negative correlation, like Antarctica. What do they have in common? A large thermal ballast in the form of ice and 6 months without sunlight.
And both at high altitude.
Really great analysis
Thank you
Hope your phone rings
And it’s Scott Pruitt
Bestest
Sensitivity is dictated by anomaly analysis.
As the temps did not materialise, the sensitivity studies got less alarming.
A residue of changes (known and largely unknown) is being used to determine sensitivity to one tiny factor in the system.
Lukewarmers are cargo cult scientists.
No one, NO ONE, can show their model represents the actual real-world mechanisms, and until they can, it’s cargo cult science.
I don’t want to rain on your parade, but that is true of all science…
All science is, is a collection of models based on a more basic model (rational materialism) that seem to work.
So at a stroke you have reduced all science to ‘cargo cult status’.
But actually that may be where it belongs.
If you study what is happening at the unpleasant end of quantum physics, physicists don’t actually even want to talk about what it means, or what reality is, any more. They just want maths that successfully predicts, no matter how weird the implications are…
I have been pondering this a long time, and in the end the only justification we have for thinking that any of our models are close to ‘true’ is the preposterous and illogical statement that ‘because they work, it’s strong evidence that they are almost true’.
I.e., the fact that my typing on this screen, and people replying to it, works is ‘strong evidence’ for my theory that the world consists of people trapped inside my computer screen. It fits the facts. It predicts what will happen.
In short we don’t actually have any understanding about how the world works at all. We have a collection of shorthand imperfect models that allow us to predict its behaviour a little bit – and that’s all.
It’s cargo cult. Except we stopped building mock airfields when it stopped raining cargo.
Well obviously climate scientists did not.
Too much navel-gazing philosophy here. When a theory like quantum mechanics is capable of making repeatable predictions to twenty-decimal-place accuracy with no known exceptions observed over the best part of a century, then we call that a good theory. The fact that it is not classically derivable and is potentially incomplete is irrelevant; clearly the formalism is either capturing some important aspect of reality or is the most monstrously improbable coincidence.
Cargo cult science is when a hypothesis – CAGW is still a long way from even making the grade of theory and ‘climate change’ could not even be graced with the title hypothesis since it is forever and totally unfalsifiable – makes no accurate predictions of any kind and yet is nevertheless hailed as being representative of nature.
There is a distinct difference between good models of reality and bad and in no way is Mark condemning good science as cargo cult when he puts bad models into that category.
Once upon a time, before everyone had a computer on their desks, we used to analyze circuits by hand. To do that at all, for any reasonably complex circuit, it was necessary to make linearizing assumptions. The analysis was easy to confirm by building the circuit. You got good at knowing what worked. In that case, linearizing was fully justified. In any event, you were clear about your assumptions.
Our usual tools are valid if things are, or can be made, linear time-invariant (LTI). It is trivially demonstrable that the climate does not meet that requirement. Any linear climate model is therefore mathematically invalid. Could the linearizing assumptions be somehow justified? No, the system is too complex and there are too many unknowns.
Wheeler’s delayed-choice experiment can be interpreted as showing reality is not real. Those are the kinds of results from QM that are held as speculative. Are there really multiple worlds? Does reality fade into a fog of possibilities a few seconds in the future?
QM has a stellar reputation, yet describes reality as more like a funhouse than the real world.
There is a difference between predicting and explaining. QM is highly successful at predicting, but a complete failure at explaining. I listen to physicists attempting to use math to explain what is going on and it’s laughable – just like the idea that no states exist if I don’t look at them – I know the moon is up there whether or not I am currently looking at it. Entanglement is a great example of knowing how to perform the math while entirely lacking a reason why it works. (Actually some physicists seem to have an understanding, it just isn’t the prevailing view as yet.)
All that said, QM has been the most successful “model” in all of humankind’s existence. It repeatedly works and to a high degree of accuracy.
All climate “science” is nothing but a bunch of over-rated proxy data (with confounding variables and low accuracy) or (mostly tainted) measurements of daily observations thrown together into a bunch of meaningless computer models that predict NOTHING useful. It has been the single worst, most unsuccessful “model” in humankind’s modern existence.
The only successful “prediction” AGW makes is that it will get warmer; however, this is the same prediction the null hypothesis makes, so again, the AGW hypothesis fails, utterly, entirely, period. AGW makes no measurable and useful quantifiable predictions, so it can never be disproved. I.e., it’s religion.
Physics models are not products that tell you HOW the world works. They are models that allow you to make and test assumptions against observations.
Simulators allow you to ask questions about some model. Where people go wrong is assuming you ask the model for an answer the same way you search a pile of real world data for the corresponding answer.
You have to tell a simulator everything. I don’t know how many times I had to explain why the sim results were not wrong, but that the engineer asked the wrong question, and that this is why the results are what they are. And when you ask the question correctly, you get what you’re supposed to get based on the model.
And that still doesn’t prove the model is right, just that you asked the right question, in a way the simulator knew how to answer. And you understood what it told you.
Here is the real deal: Model(s) results were drifting out of realistic ranges, so the outputs were adjusted to more desirable results. [I don’t remember the exact circumstances. Anybody have the quote?]
I do know that AR5 had to adjust model “projections” down for the intermediate future because they were, in the main, running hot.
IPCC climate models are bunk.
Leo Smith concluded:
In short we don’t actually have any understanding about how the world works at all. We have a collection of shorthand imperfect models that allow us to predict its behaviour a little bit – and that’s all.
… to which cephus20 responded, addressing Leo’s entire post:
Too much navel-gazing philosophy here. When a theory like quantum mechanics is capable of making repeatable predictions to twenty-decimal-place accuracy with no known exceptions observed over the best part of a century, then we call that a good theory. The fact that it is not classically derivable and is potentially incomplete is irrelevant; clearly the formalism is either capturing some important aspect of reality or is the most monstrously improbable coincidence.
… to which I now add MY two cents (sense?):
I agree with Leo, further pointing out that without “navel-gazing”, human life would become pretty dull and meaningless. We humans want to harmonize the rest of our primitive senses with our brain functions, and to divorce these senses from the brain functions of the scientific process seems to ignore these senses that are the very basis for finding meaning or purpose in everyday life.
The fact that quantum mechanics can predict to twenty-decimal-place accuracy says nothing of any reality outside of human consciousness. If anything, quantum mechanics is a precise mathematics of human consciousness. Quantum mechanics, thus, is a really good tool, … and that is all. The fact that it is potentially incomplete might be more relevant than we think, because in its current incompleteness, it denies any connection to how humans find meaning everywhere else in existence. “Existence” — QM does not even acknowledge such a thing. “Reality”? — no such thing; no, worse than that: we can neither confirm nor deny reality — it’s not our job to say.
A person could easily argue, from a QM perspective, that believing in “reality” is like believing in angels or Santa, as I understand it. It just seems shallow to me, in this respect. Why not give it some color and relationship with the human domain? But, I digress — this blog is about climate science. Oh wait, isn’t believing in human-caused-CO2-catastrophic-climate-change like believing in Santa? Maybe I haven’t digressed as much as I thought.
Yours truly,
Neville Gazer
There are models and there are models.
In engineering, software design tools let us do things we could never do without them. The thing is that they are reliable because they can be validated and verified. On the other hand, …
It’s pretty simple.
Figure 8 title is confusing.
Nice, tight reasoning. Another good job W.
I wonder just how a formula that results in the plot in Figure 8 would work.
Just change the real data to make it fit better – just like the “climate scientists” do!!
Willis,
“This is the mistaken idea that changes in global temperature are a linear function of changes in the top-of-atmosphere (TOA) radiation balance (usually called “forcing”).”
As I said at Judith’s, the first objection here is that you don’t say who has this mistaken idea, and how exactly they expressed it. I think some of the mistaken ideas are yours. One, that I have railed against over the years, is
“As evidence..I offer the fact that the climate model output global surface temperature can be emulated to great accuracy as a lagged linear transformation of the forcings.”
You don’t seem to be able to shake the notion that the forcings are an input from which the outputs are calculated. As I have noted here and elsewhere, the forcings are not inputs but diagnostics, and are frequently calculated from the outputs by some linear formula. So it is evidence only of the correct application of that formula.
You quote Stephen Schwartz. But his “Ansatz” isn’t the simple formula that you quoted there:
∆T = λ ∆F
It is that
dH/dt = C dT/dt
where H is heat content. That pushes the question back to the relation between dH/dt and ∆F. And it isn’t simply linear – Schwartz gives the T~F relation as
∆T = (F/λ) (1 − e^(−t/τ))
but with the further possibility of multiple time scales.
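For readers following the algebra, the step response quoted just above drops out of the energy-balance ansatz in one line. Writing the balance for a constant forcing step F (a sketch in my notation, not Schwartz’s exact development):

$$C\,\frac{d\,\Delta T}{dt} = F - \lambda\,\Delta T,\qquad \Delta T(0)=0 \;\Rightarrow\; \Delta T(t) = \frac{F}{\lambda}\left(1 - e^{-t/\tau}\right),\qquad \tau = \frac{C}{\lambda}.$$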
But even with that more complex relation, Schwartz has nothing that corresponds to
“But that’s just one of the objections to the idea that temperature slavishly follows forcing.”
He’s developing an energy balance model he’s using to try to tease out the time constants. That doesn’t imply anything slavish. It just means that you’ll find an element in the response that corresponds to that constant. A rough analogy is that an opera house may have a resonant frequency (it probably shouldn’t). That doesn’t mean that the lady singing the aria is slavishly following the resonance. It probably does color the way that you’ll hear her song.
If you want a red team to pursue the blues on this, you need to establish first what the blue team is actually saying.
Nick, what the blue team keeps saying is that sensitivity is likely > 2.0C, when it’s likely well below 1.1C.
They do that by treating CO2 as an additive forcing, which it is not.
It displaces nearly identical amounts of forcing from the natural water vapor feedback that regulates Min T.
Micro,
“They do that by treating CO2 as an additive forcing, which it is not.
It displaces nearly identical amounts of forcing from the natural water vapor feedback that regulates Min T.”
That is not correct. Increased CO2 traps heat and the warmer air holds more moisture than before. CO2 forcing pulls water vapor up to join it, and the water vapor provides more heat trapping – it is additive.
Measurements disagree
Micro, your graph proves nothing. Logic should tell you that if the physical chemistry of both CO2 and H2O means they absorb and retain thermal radiation, and if you add more of one of them to the atmosphere without removing any of the other, then more heat absorption will occur.
They are additive.
No, they are independent. And it’s a matter of physics, not chemistry.
They are independent in the individual ways their physical chemistries trap heat. They are not independent in their abundance in the air, as the more heat CO2 traps, the more water vapor can be supported by the air. The more there is of either one of them, the more heat can be trapped.
Except that isn’t what the measurements show. Water vapor and the water cycle have a nonlinear effect on the rate temps drop at night. There’s an almost 98% correlation between min temp and dew point. Neither is affected by CO2.
Jack Davis – “Increased CO2 traps heat and the warmer air holds more moisture than before.” Excellent point. Just about all agree with that statement, and of course we skeptics maintain that nobody really knows to what degree extra heat retention will result from that. The physics and chemistry I understand indicate that more atmospheric CO2 will not create any problems.
Micro,
“There’s an almost 98% correlation between min temp and dew point. Neither is affected by CO2.”
Of course – that’s tautological.
Obviously the minimum overnight temperature will see dew drop out of the atmosphere if it’s that kind of night. At that stage CO2 is kicking back paring its nails – it has nothing to do with that process.
I’ll give it one more go as simply as I can, then I’m out of here:
CO2 absorbs heat radiated by Earth, causing the air to heat up (long and continuous process).
Warmer air holds more water vapor (humidity).
H2O vapor also absorbs heat radiated from Earth’s surface – independently!
When it gets cold at night, water will drop out of the air (dew) because the gas to liquid transition temperature is in the range of overnight temperatures.
CO2 will not drop out because its transition temperature is far lower – it has its nails to attend to!
I’m out!
First, in the middle of the night the optical window, the main radiative hole, is just as much colder than the surface as it was at dusk, yet cooling rates drop by half to three-quarters. Did you read the linked paper?
Nick Stokes…your comment doesn’t seem related to Willis’s statement. He said: “As evidence..I offer the fact that the climate model output global surface temperature can be emulated to great accuracy as a lagged linear transformation of the forcings.” It doesn’t matter if the forcings are an ‘input’ or a ‘diagnostic’. The relationship that Willis is referring to is still evident; if the forcing increases in the models, the temperature (output) will increase as well, and the relationship is very, very close to linear over periods of time greater than a few years.
As the models are set up, they will never project global cooling for many years while the concentration of CO2 in the atmosphere is increasing. And the only reason why they would project any cooling at all under increasing CO2 conditions is because of the ‘volcano’ variable popping up every now and then. Or perhaps an El Nino/La Nina variable would allow for a brief cooling now and then. Outside of the few ‘natural’ variables that are acknowledged by the IPCC, the increased forcing associated with increasing CO2 will always produce warming in the models. Always.
That is not true in nature and that is the whole point of this post. It seems to me that if you want to debate something with Willis, you need to realize first what he is actually saying.
Taken at face value, he seems to be saying that the pretty forcings graph of the IPCC is nothing but the fevered (opaque) imaginings of climate modelers.
So, increasing CO2 has no effect on the temperature of the earth? And I have fairies at the bottom of my garden. lalalalala.
Not much; water compensates by condensing less water vapor, by about the same amount CO2 increased. Nighttime cooling rate is controlled by relative humidity, which slows cooling when humidity is high. If it’s warmer during the day, it just cools longer at the high cooling rate of low humidity, and after the heat has been radiated, cooling slows.
It’s regulated out by water vapor.
You don’t seem to be able to shake the notion that the forcings are an input from which the outputs are calculated. As I have noted here and elsewhere, the forcings are not inputs but diagnostics, and are frequently calculated from the outputs by some linear formula. So it is evidence only of the correct application of that formula.
Nick, surely the point he is making is that you have models with lots of apparent inputs which yield a given output. He claims that you get essentially the same output with only one input, call it X.
To this you reply that he is mistaken about the nature of X. It is not, you say, an independently measured input; it is a calculated quantity. The way it is calculated, you say, is to assume a certain level of output and a certain relationship between output and X, and then to reason that, given this relationship, X MUST be at a certain level.
So, you argue, it is more or less true by definition that you can get all the outputs from X alone. X was made up to a quantity where exactly that would be possible.
I have no idea whether or not this is true. But if it’s true, you are actually agreeing with his point. His point is that the very elaborate model with lots of different variables actually works the same as one with only X as an input, and you seem to be agreeing with that. The only qualification you have is that you say this was arrived at by assumption and not by experimentally finding values for X and then seeing how they relate to the outputs.
Well, maybe, but it changes nothing in his argument. His argument is still that the models are absurdly overcomplicated and have lots of unnecessary variables, when all they require is X. He is not saying anything about whether the values they assume for X are valid, nor is he saying how they arrived at them. He is just saying that a great mass of complications come down to something very simple in the end.
It is a bit like someone plotting cholera incidence. He includes a whole bunch of variables in a model, like ethnic groups, age of infection, season of the year, country… and also water contamination. How he arrives at his calculation of how contaminated the water is, makes no difference. Someone points out that you do not need any of the other variables.
To which his reply is, ah, water contamination is not an input, it’s a calculated factor.
To which the reply is, fine, but that is in fact the only thing that’s driving your models. And by the way, have you checked to see if the contamination you are calculating is found in the real world?
“His point is that the very elaborate model with lots of different variables actually works the same as one with only X as an input, and you seem to be agreeing with that.”
My point is that X wasn’t an input, but was calculated from the output. You can check the algebra here, where Forster et al made the process quite explicit. So saying that X, deduced from the GCM output, enables you to deduce that output, isn’t telling you anything. The “model” with X as only input, in fact can’t work without the GCM that provided X.
Nick writes
Nick, this is fundamentally incorrect. You’re letting the complexity of the model’s calculation cloud your understanding. The forcing is attributable to the TOA imbalance. And the TOA imbalance is set to an “appropriate” value by tuning model parameters (primarily cloud related, for most models).
You appear to be arguing that there is no function in the GCM that takes X as a parameter to deduce T, and you’re right about that, but it totally misses the point.
Nick “If you want a red team to pursue the blues on this, you need to establish first what the blue team is actually saying.”
Exactly, I couldn’t say it better myself. What exactly is the blue team saying? I have tried for years to get a definitive answer to that question.
“What exactly is the blue team saying?”
You could read what they say to find out. I have long commended Willis’ advice, given here in caps: “QUOTE THE EXACT WORDS YOU ARE DISCUSSING”. We need to hear what the blue team actually said that is characterised as the “central misunderstanding”.
Nick Stokes: You don’t seem to be able to shake the notion that the forcings are an input from which the outputs are calculated. As I have noted here and elsewhere, the forcings are not inputs but diagnostics, and are frequently calculated from the outputs by some linear formula.
What does that mean? Everywhere we are warned that increasing the CO2 concentration by continuing to burn fossil fuel will cause an increase in global mean surface temperature; i.e. that CO2 is for sure a forcing.
“What does that mean?”
Willis’ contention is that you can take some published forcing numbers and derive GCM output surface temperatures by simple formulae. My objection is that those forcing numbers were not the input from which the GCM output was calculated, but were in fact deduced from the output (and in some cases other data). So all the correspondence tells you is about the deduction process.
GCMs do take in CO2 concentrations and much other atmosphere information, and do indicate that GHGs cause warming. That is usually shown by running them with and without CO2 increase. But the quantitative estimate of GHGs as forcing is derived from other information, including GCM output. So the argument that outputs are simply related to forcings is circular.
Nick Stokes: Willis’ contention is that you can take some published forcing numbers and derive GCM output surface temperatures by simple formulae. My objection is that those forcing numbers were not the input from which the GCM output was calculated, but were in fact deduced from the output (and in some cases other data). So all the correspondence tells you is about the deduction process.
That isn’t what you wrote and you mischaracterize what Willis did: he showed by statistical analysis that the GCM output changes are nearly linear effects of the CO2 forcing changes, despite the complexity of the models.
This comment by Nick jogged an insight, or maybe a confusion (you decide; sorry, Nick, if I confuse your clarification):
So the argument that outputs are simply related to forcings is circular.
Isn’t this one of the accusations made by some skeptics? – that forcings are derived in such a way that they support certain outputs? – that certain forcings are “anticipated” by the inputs (hence, the people “inputters”) in order to arrive at those forcings?
Nick writes
And they do it using an imperfectly modelled atmosphere. They can’t model lapse rate accurately so they can’t hope to model changes to the lapse rate as a result of the GHG concentration changes. Consequently the effective forcing is more “set” than you think it is.
matthewrmarler,
“you mischaracterize what Willis did”
I’m not trying to characterise the work he did with CERES data. I’m talking about what he says earlier, which is the blue/red moral that he wants to draw from it. On its own, the CERES analysis does not appear to conflict greatly with any “blue” theory. My issue is the set of statements about someone believing that “temperature slavishly follows forcing”. What he has said in support of that in earlier posts is based, not on someone saying it, but on the correspondence between TOA forcing related to GHG and temperature in GCM output. And that is what he refers to here. My point is that that link is weak. What is needed to establish that the blue team has that belief is a quote of someone actually saying it.
Nick Stokes July 13, 2017 at 5:06 pm
Nick, I gave you a link to one hundred and fifty papers citing the Schwartz work linked to in the head post. That is a hundred and fifty papers, call it at least three scientists per paper, four hundred fifty scientists actually saying it.
I know it’s shocking, Nick, but sometimes you’re just wrong …
w.
Please, once and for all, inform us all WHAT THE BLUE TEAM IS SAYING. You have all the time in the world to write lengthy diatribes but don’t want to answer this very basic question.
RW, who are you addressing? Also, who are the members of the “Blue Team”?
w.
Nick Stokes, Willis Eschenbach etc are the reason people hear the term climatology and become disgusted at what fakes can do to a branch of science.
I have wondered what happened at TOA when the surface temperature dropped so drastically in 2008 – I can’t see anything very obvious here. The temperature dropped by about 0.7 C over the year. Any ideas why?
Willis, as always, a complicated and well-thought-out post … BUT, the simple question is: why the %$^# does anyone want to save the ice in the North?? If I want ice, I’ll open my freezer door!
P.S. Nothing much that lives in the frozen North or in my freezer depends on ICE!!
That’s far simpler, Butch2, but I suspect yours is a rhetorical question.
Can I have a go at answering his rhetorical then?
Earth’s troposphere as it is today is something like a giant Stirling engine – the cold ends are necessary to keep the engine pumping.
Get rid of the ice and not only do you halt the engine, but you have overheated it. A stagnant hot steamy world we wouldn’t want to live in will be the result.
You are right, and completely wrong at the same time. A Stirling engine is a good analogy (though I’d have to ponder the operation to decide if it’s representative or not); it’s just that the warm end is the surface, and the cold end is space, and it’s always cold.
I used the term ‘something like’ advisedly. A Stirling engine works on heat rejection – space is where the heat is rejected to; the cold ends of the engine are the poles.
Small point – glad you liked the analogy.
Jack, the poles are radiators due to geometry. As far as I know no one has demonstrated how CO2 changes geometry.
RWTurner,
“Jack, the poles are radiators due to geometry.”
Exactly – we have the good fortune to live in a well organised engine. Geometry is not changing CO2 levels, we are. We are raising the octane rating of the fuel to a level the engine cannot handle – or rather, produces an output we don’t want.
Jack, you are forgetting that without ice at the North Pole more heat is lost to space. The ice acts as insulation for the ocean beneath.
Bitchilly, that’s not so either. Just as in your gin and tonic, more ice is better if you want it chilled, and the driver of Earth’s powerful circulation system is the heat difference between lower latitudes and higher latitudes. The alternating annual melt and freeze at the poles is also part of the delicate dance. By heating the poles, we’re stuffing that up.
Jack, bad (second) analogy. In my Gin and Tonics, there is nowhere in the glass the melted ice can get to where it will refreeze.
Dixon, yes you’re right – but I enjoyed the G&T. What I should have said is yes, Bitchilly, the sea will radiate more energy, but at the same time it is absorbing far more than would the ice, which is reflective. As too much of the radiated energy is trapped in the greenhouse, the overall effect of losing sea ice is an increase in the rate of heat gain – which we don’t want.
At low incident angles open water has an albedo in the same range as ice. In summer, 3/4 of Arctic open water radiates far more than it receives, except around solar noon, and only for a couple of months.
Open arctic water cools the planet.
The observed saturation effect above 28 °C (this happens in the WPWP) is explainable by the “Iris” effect, which is real in the observations; see http://onlinelibrary.wiley.com/doi/10.1002/2016JD025827/abstract
The “knee” in the land data is really interesting. Latitude related?
The other thing I noticed was that you pointed out the El Nino at the end of the data. IIRC there was an El Nino about 2010 but it was much smaller and it doesn’t seem to be reflected in the TOA. Seems to imply there might be a threshold for the TOA to be affected?
The first thing is to ask the right question.
While I’m not a Will Smith fan, and I would have preferred a faithful screenplay of “Caves of Steel”, that was a pretty good movie all the same.
Awesome job Willis! Kind of reminiscent of Spencer & Braswell.
Percy W.Bridgman the Harvard physicist and winner of the 1946 Nobel Prize in the field of high pressure physics reminds everyone of the importance of verification in science and of the danger of talking about the future.
He believed in the “inscrutable character of the future”. He thought that statements about the future belonged in the category of pseudo-statements.
“I personally do not think that one should speak of making statements about the future. For me, a statement implies the possibility of verifying its truth, and the truth of a statement about the future cannot be verified.”
Verification is important, as he says, because “Where there is no possibility of verification, there can be no error and ‘truth’ becomes meaningless”. (“The Way Things Are”, P. W. Bridgman, 1959.)
Global Warming issues involve the projection of increasing levels of CO2 into the inscrutable future.
It is not possible to determine global temperature in advance by reference solely to the laws of chemistry and physics.
(h/t ” The Age of Global Warming-A History”, Rupert Darwall.)
Very perceptive point from Mr. Bridgman. There are parts of the universe that we simply cannot know.
To me the central mistake in current thinking about climate is the idea that the atmosphere can somehow INCREASE the temperature of the surface, or even worse the deep oceans.
The atmosphere merely reduces energy loss to space
A few meters under our feet the temperature is completely set by geothermal energy. Same for the deep oceans. The sun only warms a few (centi)meters of the soil, and the upper 200 meters or so of the oceans.
The temperature of deeper soil/water is completely caused by the enormous amount of heat inside planet Earth.
Think of solar Joules INCREASING the temperature of pre-heated soil/water, instead of solar W/m^2 being in radiative balance using S-B, and the whole climate system makes perfect sense.
Ben Wouters,
Sorry, that’s all arse about face. Being more thermally dense than air, soil and sea actively suck the heat trapped by CO2 out of the air. Soil radiates the heat back at night, but conduction and convection spread the heat throughout the ocean, to surprising depth.
We don’t live several meters below our feet, we live in the region between our feet and 2 meters above them.
Care to explain why the temperature increases ~25 K for every km you go down below the surface?
(geothermal gradient)
Miracle CO2 at work?
Ben – it’s under immense pressure and it’s radioactive. There’s heat remaining from Earth’s original thermal collapse. Nobody’s disputing it’s hot.
Jack Davis July 13, 2017 at 3:13 pm
Great. Then why is the 255K radiative balance temperature used, which assumes a body that would be at 0K without incoming radiation?
More relevant is the average surface temperature of our moon: 197K.
Do you actually believe that backradiation of the atmosphere is the explanation for the over 90K higher average temperatures on Earth?
If so, where are the backradiation panels? Almost twice the average radiation the sun delivers, according to K&T, available 24/7. It would be a perfect energy source if it were physical reality.
Ben Wouters,
You say: “To me the central mistake in current thinking about climate is the idea that the atmosphere can somehow INCREASE the temperature of the surface, or even worse the deep oceans.”
Then in the next sentence you contradict yourself when you say: “The atmosphere merely reduces energy loss to space”
Well, exactly right. The introduction of an atmosphere to an atmosphere-less rocky planet REDUCES the rate at which energy can flow to space, thus causing the surface temperature to INCREASE to a higher level in order to maintain radiative balance. So no “central mistake” there…
Then you go off into an irrelevant spiral of nonsense about geothermal heat. You seem to misunderstand the difference between the QUANTITY of heat held by a body (a function of its thermal capacity) and the RATE at which the heat can flow away from that body (a function of its conductivity). The centre of the earth is very hot indeed. However the mean rate at which heat flows up to the surface is estimated at 0.087 watt/square metre. That’s 0.03 percent of the solar power absorbed by the Earth. So forget it. Please…
Finally, you ask Jack Davis: “Do you actually believe that backradiation of the atmosphere is the explanation for the over 90K higher average temperatures on Earth? If so, where are the backradiation panels? Almost twice the average radiation the sun delivers, according to K&T, available 24/7. It would be a perfect energy source if it were physical reality.”
This is a howler of the utmost naivety. It has been corrected by me and others on WUWT and elsewhere countless times. The well-known K&T energy balance diagram does NOT, repeat NOT, imply a downward flow of “almost twice the average radiation as the sun delivers”. That would be a violation of the second law of thermodynamics!
Just google “K&T diagram” and take another much closer look. According to their figures, the Sun delivers 161W/m2 radiation downwards to the earth’s surface and the earth’s surface radiates upwards just 63W/m2 (the other balancing upward flows are due to convection and evaporation). Your confusion may have arisen from the fact that the diagram shows radiative potentials, not radiative energy flows. You need to subtract the K&T downward radiative potential of 333W/m2 from the upward radiative potential of 396W/m2 to get the true energy flow figure of 63W/m2. Please go study the physics of radiation…
David Cosserat July 15, 2017 at 4:57 am
What happens on some rocky planet is not really relevant here. With an average surface temperature of ~290K and assuming emissivity = 1.0, Earth would radiate ~400 W/m^2 directly to space. Due to the atmosphere, Earth only emits ~240 W/m^2. Where I live this is called “reducing energy loss”.
On Earth the surface temperatures are NOT in radiative balance with incoming solar. Daytime temperatures on the moon, however, come close; nighttime temps there are much too high.
On planet Earth we have an ENERGY balance between incoming solar radiation and outgoing longwave radiation.
For continental crust the average is more like 65 mW/m^2, but you don’t seem to understand how conduction works. For a flux to exist we must have a temperature difference. The flux is from the hot mantle through the crust to the surface. So the entire crust is heated by the hot mantle.
The sun only warms the upper (centi)meters of the soil a few degrees. The base temperature of the soil just below our feet is roughly equal to the average surface temperature at that location, and is COMPLETELY caused by geothermal energy.
Interested to hear your explanation for the over 90K higher average surface temperature on Earth compared to that of the moon. (albedo is also lower on the moon!)
Radiative POTENTIAL versus Radiative ENERGY FLOW
Unfortunately Ben Wouters’ reply (July 15, 2017 at 10:19am) is as incoherent as his original comment (July 13, 2017 at 4:30am) to which I had responded (July 15, 2017 at 4:57am). He has not directly addressed the points I made, instead just generating additional confusion. So I fear that further communication with him is unlikely to be productive.
However in the interests of others here who may have been puzzled by his ramblings…
1. The mean surface temperature of the earth has been estimated at about 288K (~15degC). So, using the Stefan-Boltzmann formula with emissivity = 1, we can calculate that the surface will assert a mean radiative POTENTIAL of about 390W/m2. This calculation concurs very closely with the K-T energy balance diagram value of 396W/m2.
2. The earth’s air is also warm. It contains radiative gases that collectively assert a downward radiative POTENTIAL towards the earth’s surface. Most of these downward contributions come from the region in the lower atmosphere close to the surface where emissions can potentially get through to the surface without being absorbed by other molecules on the way. The K-T diagram’s estimate for this downward radiative POTENTIAL is 333W/m2. This figure is similar to but somewhat less than the upward radiative POTENTIAL of 390W/m2. This is because the contributing atmospheric molecules are not all at the surface temperature of 288K. They are ranged at various heights, and, in accordance with the atmospheric lapse rate, temperatures are lower with increasing distance above the ground. Using the Stefan-Boltzmann formula, we find that a 333W/m2 downward radiative POTENTIAL would be asserted by a solid body having a temperature of 277K. So this figure can be regarded as the atmosphere’s effective surface temperature.
3. Now for the key point. Standard textbook thermodynamics (as opposed to Wouters in Wonderland Physics) tells us that, when two bodies assert radiative POTENTIALS towards one another, the rate at which radiative ENERGY is transferred between them is simply equal to the difference between the POTENTIALS.
4. So in the case of the earth’s surface-atmosphere interface, the radiative ENERGY transferred is 396 – 333 = 63W/m2 (as opposed to the colossal 333W/m2 downward radiative ENERGY FLOW that Wouters erroneously claims the K&T diagram implies). Also note that the direction of the 63W/m2 radiative ENERGY FLOW is upwards not downwards, from the warmer earth at 288K to the slightly cooler atmosphere at its effective surface temperature of 277K. This, of course, is in full conformance with the Second Law of Thermodynamics.
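A quick numeric check of point 4, assuming emissivity 1; the small shortfall against K&T’s 396 W/m2 comes from their rounding and emissivity choices:

```python
# Net radiative exchange as the difference of two Stefan-Boltzmann
# "potentials": 288 K surface vs 277 K effective atmosphere.
sigma = 5.67e-8              # W m-2 K-4
up = sigma * 288 ** 4        # ~390 W/m2 (K&T's figure: 396)
down = sigma * 277 ** 4      # ~334 W/m2 (K&T's figure: 333)
print(up - down)             # net upward flow, ~56 W/m2
print(396 - 333)             # with K&T's own figures: 63 W/m2
```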
Like most of us here, Ben Wouters is sceptical of the case for CAGW, and undoubtedly his heart is in the right place. But, sadly, his enthusiasm for the cause is completely undermined by getting his physics so horribly wrong. His muddled approach is in danger of damaging the sceptical cause. I think this is dangerous to the extent that it can so easily give succour to climate alarmists.
_____________
P.S. He is also entirely wrong about geothermal heat. It contributes around 0.087W/m2 to the incoming energy FLOW to the earth’s surface from below, compared with the Sun’s incoming energy FLOW of 161W/m2 from above. Go figure…
David Cosserat July 16, 2017 at 7:59 am
David, this might make sense but only if you give us a clear definition of just what you mean by “radiative POTENTIAL”. I’ve never seen the term used in that manner, and Google appears to be equally clueless.
w.
Except that isn’t what it is radiating to. The optical window is 70F or over 100F colder than the ground, depending on absolute humidity, all day long as long as there are no clouds, which can reduce the difference to as little as 10F.
The rest of the spectrum also changes during the day, mostly because water vapor is storing energy in the day and releasing it at night to limit the drop in surface temps. That is your GH effect, and because the release of the stored energy depends on air temps dropping near dew point temps, and doesn’t release much when temps are not near dew point, this is a negative feedback on CO2. And this shows it in action.
And when you look at the overall impact on surface stations, you see that it, not CO2, controls daily Min T.
Since you don’t seem to understand the difference between the geothermal FLUX through the crust and the geothermal TEMPERATURE of that crust, let’s have a look at the deep ocean floor.
Geothermal flux through the oceanic crust is ~101 mW/m^2.
http://onlinelibrary.wiley.com/doi/10.1029/93RG01249/abstract
For energy to flow from the crust to the deep ocean bottom water the TEMPERATURE of the ocean floor has to be slightly higher than the temperature of that water. So the ENTIRE oceanic crust is warmer than the deep ocean water, otherwise the conductive flux would not exist.
Same for the continental crust. What is apparently confusing is that the sun warms the upper (centi)meters of that crust, but the crust just below our feet is ENTIRELY warmed from below. The sun only increases the temperature of the topsoil a bit above the geothermal temperature.
If we remove the atmosphere of planet Earth, the surface would radiate directly to space and lose ~400 W/m^2 (and obviously start to cool down rapidly, since the sun does not provide this kind of energy).
WITH atmosphere Earth emits only ~240 W/m^2 to space which the sun can match, so we have a balanced ENERGY budget. Normally this is called “reducing energy loss” by the atmosphere.
If you are unable to understand that solar radiation can increase the temperature of the geothermally pre-heated soil a few degrees to the observed surface temperatures, I’m afraid nothing will make you understand.
Willis,
Re. your query of July 16, 2017 at 8:58 am, I’m very glad to have this discussion with you and others. It is something I have been banging on about for a long time without much response. I think it is the definitional key to stopping some earnest well-meaning sceptics falling into the trap of looking ridiculous in the eyes of the CAGW crowd, thereby endangering all our sceptical contributions to the climate debate.
I do believe the term ‘back radiation’ is a useful way of characterising a phenomenon in the real physical world but only if it means the POTENTIAL to radiate energy (in W/m2 as calculated by the S-B equation R = kT^4) from a cooler body in the direction of a warmer body, and not the ACTUAL transfer of radiative energy. Likewise for consistency, one can define ‘forward radiation’ to mean the POTENTIAL to radiate from a warmer body towards a cooler body but again, not the actual transfer of radiative energy.
You showed both these POTENTIALs in your famous ‘steel greenhouse’ articles so many years ago, perhaps without realising at the time, as I did not, that they could best be considered as potentials, not flows.
Given these definitions, the ACTUAL radiative ENERGY FLOW (also in W/m2) that takes place between two opposing bodies is then simply the difference between the two independently calculated radiative POTENTIALS:
I = (k1.T1^4 – k2.T2^4)
and the direction of the resultant energy flow is (by definition) always from the warmer to the cooler body, thus satisfying the 2LT.
There is nothing remotely revolutionary about this. The equations are standard and can be found in every thermodynamics textbook. The R = kT^4 equation is actually an explanatory abstraction representing a non-physical situation. It is akin, say, to the theoretical concept of a magnetic monopole. But in the real world, all bodies exert radiative POTENTIALs towards one (or more*) other bodies, which in turn exert radiative POTENTIAL(s) back. This is even true in space where a body might be exerting its radiative POTENTIAL only towards the cosmic microwave background – but the latter is equivalent to an extremely cold body exerting back a radiative POTENTIAL of around 0.000003W/m2 corresponding to a temperature of 2.7K.
So my concept of a radiative POTENTIAL is simply a definitional approach, a reminder for climate sceptics everywhere to help prevent them making arses of themselves, as I am afraid they still do every now and then, by attempting to ridicule warmists such as Trenberth, or whoever, for saying (which they certainly do not) that the surface of our planet is being kept warm by energy flowing from a magical ‘back radiation’ source of 333W/m2; or that pyrgeometer measuring instruments are fakes; and so on, and on, in a vain effort to over-complicate what is actually an elegant and simple situation.
As we know, the surface is actually being kept warm by 161W/m2 of incoming solar radiation, balanced by outgoing energy flows of 63W/m2 radiation + 80W/m2 evapotranspiration + 17W/m2 thermals, or thereabouts; 63 + 80 + 17 = 160W/m2, leaving a 1W/m2 discrepancy. (K&T say that this 1W/m2 discrepancy is heating the planet, which sounds to me like fiddling with the homework.)
[*Note1: The mathematics for a body that asserts a radiative POTENTIAL towards more than one other body in fractional proportions such that, consequently, those other bodies assert radiative potentials towards the body in the same proportions, is dealt with using fractional multipliers called View Factors. It all fits together with what I have said here but it is not a relevant complication in the case of the K-T diagram issue where only 2 bodies – atmosphere and surface – are involved. I only mention it because somebody is bound to bring it up as a killer spoiler argument against what I am saying.]
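As a concrete illustration of Note 1’s simple two-body case (view factor = 1), here is a sketch of the standard textbook grey-body exchange formula for facing parallel surfaces; the emissivity values in the second call are purely illustrative, not taken from the K-T diagram:

```python
# Standard two-surface grey-body exchange (infinite parallel plates,
# view factor = 1). With both emissivities = 1 this reduces to the
# simple difference-of-potentials result discussed above.
SIGMA = 5.67e-8  # W/m^2/K^4

def net_exchange(t_hot, t_cold, eps_hot=1.0, eps_cold=1.0):
    """Net radiative transfer (W/m^2) between two facing grey surfaces."""
    return SIGMA * (t_hot**4 - t_cold**4) / (1/eps_hot + 1/eps_cold - 1)

print(net_exchange(288.0, 277.0))             # ~56 W/m^2, black-body case
print(net_exchange(288.0, 277.0, 0.95, 0.8))  # ~43 W/m^2, illustrative grey case
```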
[Note 2: Modern physicists don’t need all this stuff because they deal in photon streams. So they are quite happy to think of back radiation and forward radiation as real physical flows of energy and can’t understand what all the fuss is about. But since one flow is always netted off against the other, they come to no different conclusions. However I am not interested in that debate because modern physicists don’t (generally) make arses of themselves over this issue.]
All the best,
David
David, happily you have articulated a concept I had struggled with since viewing Trenberth’s diagram. There is no “flow” from the atmosphere to the surface. Your description of “potentials” neatly describes the physics of energy transfer from warmer to cooler.
Within the margins of error of total energy flow in the real atmosphere, CO2 plays an unmeasurable role. 0.6 +/- 17 W/m^2 is laughable “science.” Please note the most recent diagrams don’t have the +/- 17 listed anywhere.
David Cosserat July 16, 2017 at 11:57 am
The POTENTIAL to radiate? I asked what that means. You’ve given me nothing. How does one measure such a POTENTIAL, and what does it mean?
I fear that when you invent a brand new concept, as you have done with the “POTENTIAL to radiate”, you need to define it unequivocally … which you absolutely have NOT done.
Give it another shot. Start by saying “Radiation potential is …” and go on from there.
w.
I read it as the equivalent of a voltage potential, ie an S-B flux potential. Now make what you will of that, and/or wait for David.
This wouldn’t be too unusual a use in an antenna field where you’re looking at field strength, and such.
Dave Fair July 16, 2017 at 5:54 pm
There is indeed a flow from the atmosphere to the surface. It is smaller than the flow from the surface to the atmosphere. The fact that it is real is obvious from the fact that downwelling radiation is MEASURED EVERY DAY ALL AROUND THE WORLD.
Truly, Dave, you should research before writing … grab any decent thermo textbook and they’ll tell you the same. The flows in both directions are REAL.
Sheesh … it’s taking longer than I thought …
w.
Willis Eschenbach July 16, 2017 at 6:41 pm
The pyrgeometers used are basically IR thermometers, with a filter on top that allows only certain IR bands to pass. From the measured temperature a flux is CALCULATED.
Ever been to a sauna? Air temperature 90 centigrade or so. Most people do survive saunas very well.
Jump in a pool of water at 90 centigrade and your chances of survival are negligible.
Air (even with a lot of water vapor) has a much lower energy density than eg water at the same temperature.
I don’t have a pyrgeometer, but I’m pretty sure that when you point one to some air and then to some water at the same temperature the reading would be the same as well.
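Ben is right that the reported flux is calculated rather than read directly. A minimal sketch of the kind of conversion involved, assuming the common thermopile-pyrgeometer form L_down = U/S + sigma*T_body^4 (the sensitivity S and voltage U below are assumed, illustrative values, not from any particular instrument):

```python
# Sketch of how a thermopile pyrgeometer turns its raw measurements into
# a downwelling longwave flux. S and U are assumed, illustrative values.
SIGMA = 5.67e-8   # W/m^2/K^4
S = 1.0e-5        # thermopile sensitivity, V per (W/m^2) -- assumed
U = -4.0e-4       # thermopile voltage, V (negative: sky colder than body)
T_BODY = 288.0    # instrument body temperature, K

l_down = U / S + SIGMA * T_BODY**4
print(f"calculated downwelling LW flux: {l_down:.1f} W/m^2")  # ~350 W/m^2
```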
Willis,
Thanks for responding.
The concept of a ‘potential’ in physics is well founded – e.g. the potential of a battery to pass energy to another system (unit: volt); the potential of a wound-up spring to do work (unit: joule); the potential for the water in a reservoir to flow down to a turbine (unit: metre of head). Engineers find it useful to discuss and measure all these potentials and many others.
In the textbooks, the standard formula Qdot[Watts] = kAT^4 is introduced to students to specify the maximum radiative energy flow rate from a body. This is for the theoretical case where there is no other body radiating back. In other words it is the potential to radiate into a hypothetical black universe at 0K, which does not exist. Beyond that, for practical applications students are taught that they must offset each body’s radiation against the other to obtain the net energy transfer.
I do share your frustration that some people will not buy the idea that, in a radiative interaction between two bodies, the ‘photonic’ paradigm is correct, namely that there is a two way energy transfer (where the hotter body always wins, so no violation of the 2nd law occurs). But the problem remains that some people forget (or have never learned) that radiation is an interaction between two bodies and that the cooler body is not just a passive recipient of whatever is thrown at it. On the contrary, the magnitude of its own radiative potential is absolutely pivotal in determining the RATE at which the transfer takes place. A failure to appreciate this leads to the claim that radiative gases in the atmosphere ‘do not act like a blanket slowing down heat loss to space’. Whereas that is EXACTLY what they do.
Consequently, people have put an enormous amount of effort into offering bizarre alternative theories, such as Ben Wouters’ nonsense theory that geothermal energy (0.087W/m2) is the source of the Earth’s surface warming (boosted presumably just a little by the Sun’s 161W/m2), and his claim that the K-T diagram is crazily wrong because it depicts 333W/m2 of radiative energy coming from back radiation panels in the sky.
He is not alone. Hence my effort to find a way of explaining to my fellow sceptics that the K-T diagram is not conceptually wrong and that they must stop knocking it with baseless objections that just display their ignorance. Doing so does harm to the credibility of the climate sceptic cause, which I believe, despite the Wouters of this world, is growing stronger every day.
Cheers
David
David Cosserat July 17, 2017 at 4:55 pm
David, thanks for your reply. What you say above is true. So what? The fact that a “potential” exists in some situations means nothing about some imagined “potential to radiate”.
I fear you need to read the textbooks again. The formula is for the amount that the body radiates at a given temperature, period. At that temperature, it radiates the same whether there is a body radiating back or not. What, do you think a body stays the same temperature but changes the amount it radiates if we remove objects from its vicinity? SHOW ME THE TEXTBOOK THAT SAYS THAT!
I certainly agree that many people do not understand that there is a flow of photons going in both directions, one from the atmosphere to the surface and the other one from the surface to the atmosphere. And I applaud your efforts to fix that.
However, introducing a new and unknown concept such as “radiative potential” makes things worse, not better.
Regards,
w.
David Cosserat July 17, 2017 at 4:55 pm
Apparently still clueless about the difference between TEMPERATURE and FLUX.
This is stuff we learn in high school over here.
quick google: http://www.ewp.rpi.edu/hartford/~ernesto/F2014/MMEES/Papers/ENERGY/7AlternativeEnergy/Ground/Florides-GroundTemperatureMeasurement.pdf
Care to explain why the temperatures in deep mines are so much higher than the surface?
https://en.wikipedia.org/wiki/TauTona_Mine
Sun is not warming a blackbody from 0K to 255K establishing radiative balance.
It just increases the temperature of the surface a bit above the GEOTHERMALLY caused base temperature.
Problem is the people who believe the thin, cold, low density, low energy content atmosphere can somehow INCREASE the surface temperature of soil and oceans.
The atmosphere just reduces the energy loss to space. Period.
Hi micro6500,
At July 17, 2017 at 7:22am you said to Willis: “I read [radiative potential] as the equivalent of a voltage potential, ie an S-B flux potential. Now make what you will of that, and/or wait for David. This wouldn’t be too unusual a use in an antenna field where you’re looking at field strength, and such.”
Right on the money! As an electrical engineer I applaud your example.
Another electrical example is the concept of ‘back emf’. The term ’emf’ (electromotive force) is a synonym for ‘voltage’, and so is also measured in volts. In practice, the term is typically used for the counter-voltage, called the ‘back emf’ that is asserted with opposite polarity by a recipient of electrical energy, such as an electric motor, towards its electrical energy source. This effective reduction in net voltage (not measurable on the wire) limits the rate of energy transfer from the source to the sink:
effective voltage = source emf – back emf
What a wonderful analogy to the ‘back radiation’ effect.
All the best,
David
In motors, it’s the back voltage generated as one of the magnetic fields collapses. So while it repeats, it is short lived. And this kind of fits what the atm does, it self regulates temperatures at the surface late at night by stealing energy from water vapor.
The surface is the regulated side of a heat engine using water as the working fluid that cycles once a day.
Think about that 🙂
Oh, and because it’s temperature regulated by dew point, changes to CO2 just change “when” water vapor warming turns on at night. Since it’s nonlinear, as the days get a little longer CO2 no longer has an effect on min T.
I’m going to keep trying to get people to understand this. Water vapor regulates air temps; CO2, while a radiative gas, has little to no effect because water vapor just cancels out any changes.
I know some of you can get this if you just think about it.
There is no global warming from CO2, and there’s no need to try and average 140 million temps to see what they are doing. It’s a simple logic problem, where everyone already knows what the results are; they just don’t realize it’s actively regulated.
And David, it goes to your potentials: the optical window, when it’s clear out, is open to space all day long, and it’s cold! I’ve seen it over 100F colder than the ground on a sunny day.
There’s always a cold sink, so why does it nearly stop cooling in the middle of a clear night?
David Cosserat July 18, 2017 at 2:50 am
If that is the case, I wouldn’t hire either of you for electrical work. Look, the term “potential” means there is an OPTION. In an antenna, it can transmit or not transmit, so it has a potential to transmit. Or “potential energy”, which means there is an OPTION for it to be changed into some kind of other energy.
But where is the option with something radiating according to the S/B formula? Can you make it stop radiating that amount at that temperature? No. Can you make it radiate more or less at that temperature? No.
Heck, you can look at the dictionary definition of “potential” to see that it doesn’t apply:
Just exactly what is it that you expect a radiating object to “become or develop into something in the future”?
So no, there’s no “potential” in thermal radiation. It just radiates according to the formula, period. Nothing in the slightest about it that says “potential”.
Regards,
w.
Of course there’s potential, Willis. First, a better definition:
https://isaacphysics.org/s/Xlrabk
And my IR thermometer measures temperature field potential. And when your temperature potential between 2 objects is large enough, you can turn that potential into work.
Willis,
Terminology is only useful if it is not misleading. In electrical engineering it is common and perfectly sensible to talk about the ‘potential difference’ (measured in volts), between the plus and minus terminals of a source of electrical power such as a battery, irrespective of whether those terminals are (or are not) connected to a circuit. In the one case energy flows. In the other case it does not.
Despite the above, I do understand the point you are making in the particular case of electromagnetic radiation where the modern photonic theory of EMR assumes there are real flows of energy-carrying photons in both directions between two radiating bodies, with only the difference resulting in net energy transfer, always from hotter to cooler. I subscribe to that theory too, as do most professional engineers and physicists.
But I think we are both equally sick and tired of climate sceptics who bang on about back radiation (meaning energy flow from a cooler to a hotter body) being unreal ‘because it violates the 2LT’. For some reason they simply cannot grasp the concept that the back and forth radiation flows between two bodies are inextricably interlinked. Yet this is geometrically undeniable because they are, to use the jargon, ‘in the view’ of each other. Given this reality, the net flow of energy is inevitably from the hotter to the cooler surface, and there is no violation of the 2LT.
So they look at the K-T diagram, and see lots of energy flows with numbers on them. In particular their eyes alight on the ‘huge’ 333W/m2 back radiation figure from atmosphere to surface and treat it as if it were a stand-alone independent flow that they can cast mighty scorn upon. In their fury they seem blinded to the greater (and inextricably interlinked) figure of 396W/m2 of forward radiation from the surface, thus resulting in a modest net radiation flow upwards of only 63W/m2.
Having, as they think, demolished K-T, they then proceed to provide crazy alternative explanations for why the surface is warmer than it would be with no atmosphere, such as Wouters’ ludicrous idea that geothermal heat (at 0.087W/m2) is the true cause of the earth’s elevated atmospheric surface temperature.
There’s probably no hope of changing such people’s minds but, in a modest attempt to stop others falling into the same intellectual trap, I was simply offering an alternative way of looking at the issue for people who are unconvinced by, or indeed unaware of, statistical thermodynamics.
All the best
David
David Cosserat July 20, 2017 at 7:22 am
I suggest you go back to school and study conduction. The HOT mantle loses energy by conduction through the crust, and at the surface a flux remains of ~65 mW/m^2 (continental crust). But the ENTIRE crust is warmed from below. So the soil just below our feet is WARMER than the surface due to GEOTHERMAL ENERGY.
Consequently the whole idea that the sun is unable to warm the surface to our observed values is wrong.
The atmosphere does NOT need to warm the surface above what the sun has already done, it merely reduces energy loss to space.
Still awaiting your explanation for the 25K/km temperature increase when going down into the crust, or the 330K-plus temperatures of the rock face in deep mines.
Since you dismiss geothermal, what is the real cause?
Backconductive potential perhaps, or deep penetrating backradiation from the atmosphere?
The formulas used in the pyrgeometers that measure backradiation don’t seem to have a factor for emissivity. Is the emissivity of the atmosphere 1.0? Higher than eg ocean water?
PS posting the same nonsense twice as you do regularly doesn’t make it any more credible.
It is important to note that, because the system is inhomogeneous, and the radiation depends on the average of T^4 while the mean temperature depends on the average of T, one can easily show with area-weighted averages for a large inhomogeneous system that it is possible to change the average T in one direction and simultaneously have the average T^4 go in the other direction, within a few kelvin. This can be shown with a simple Excel spreadsheet. The large disparity in temperatures around the surface matters quite a bit when discussing total radiation from the surface.
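A minimal sketch of that spreadsheet demonstration, with illustrative equal-area cell temperatures; it also does the convert-to-flux, average, convert-back step (the effective radiating temperature):

```python
# With an inhomogeneous surface, the area-weighted mean of T can rise
# while the mean of T^4 (which sets emission) falls. Equal-area cells,
# illustrative temperatures.
SIGMA = 5.67e-8

state_a = [250.0, 310.0]  # K: one cold cell, one hot cell
state_b = [270.0, 291.0]  # K: cold cell warms a lot, hot cell cools a bit

def mean(xs):
    return sum(xs) / len(xs)

for name, state in (("A", state_a), ("B", state_b)):
    t_bar = mean(state)
    flux = SIGMA * mean([t**4 for t in state])
    t_eff = mean([t**4 for t in state]) ** 0.25  # effective radiating temperature
    print(f"state {name}: mean T = {t_bar:6.2f} K, "
          f"emission = {flux:5.1f} W/m^2, T_eff = {t_eff:6.2f} K")

# Output: mean T rises from 280.00 K to 280.50 K, yet emission falls
# from ~372.6 to ~354.0 W/m^2 (T_eff drops from ~284.7 K to ~281.1 K).
```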
Exactly. Average temperature is a poor metric for climate change because it assumes the spatial distribution of temperature will remain unchanged as the climate changes, which is a nonsense.
For comparisons with the means calculated from, eg, ToA power measurements, temperatures should be converted to energies, averaged, then converted back.
This is what I started doing with my last run.
But, after spending 10 years looking at surface data, it was sort of a waste of time. It clearly shows that the temperature record is not being forced by a slight, increasing forcing; it shows that when water vapor levels drop, air temps drop like a rock; and relative humidity is actually going down slightly, and this alone is proof there is little warming from the increases in CO2.
You only need to look at the temperature drop and radiation at night under clear calm skies to prove CO2 has little to no effect on min T (and surface data confirms it follows dew point).
At night water vapor actively tries to reduce how cold it gets. Deserts and tropical jungles are extreme examples of this in operation. That warm muggy feel is water vapor condensing and liberating its stored heat of evaporation, which helps warm the air and changes the fundamental cooling rate; this is on top of the fourth-power reduction. Plus, most people actually use the wrong equilibrium temp: the optical window, half of the spectrum space, is clear to space for all bands except one water vapor line, and when it’s clear and low humidity it’s 100F or more colder than the surface, and it tracks air temps, so it’s still 100F colder at 5am just as it was at 6pm.
You can’t see this process optically (and no it is not fog!!!!), but I captured a short video of the road in the afternoon after a short shower.
https://micro6500blog.files.wordpress.com/2017/06/20170626_185905.mp4
Well, at night as the atmospheric column cools, water vapor will start to sink, condense, and re-evaporate, and you can see it on this hot asphalt after rain. And that same asphalt is still a lot warmer than grass at 5am.
I don’t think you can convert temperatures to energy. Using S-B, you can convert them to power, but we all know power and energy are two different things.
No, it’s not converted to power as such; it’s converted to an instantaneous flux (W/m^2, a power per unit area). To get energy you multiply that power by time.
Surely the genius that built all the sophisticated global climate models (and spent many $100 M doing so over the last 40 years), did include that very basic algebraic knowledge. Surely !!!
But does anyone here among WUWT comments know if that is the case?
And if they did, they surely have NOT done it at sufficient granularity (e.g., for the hundreds of individual cumulonimbus clouds from the tropics up to the mid-latitudes, on any given afternoon). I am looking at about 10 of them from my window now.
I meant “hundreds of thousands” of cumulonimbus clouds, wherever a summer afternoon is happening on the planet (and that happens constantly, across some large regions, over land, and parts of ocean).
Keep in mind that the delta T under consideration is roughly 1 K out of 288 K. Any calculation of the IR spectrum of CO2 in the atmosphere has to be good to 3 9’s, or it isn’t applicable to the problem.
Add to this the spatial time and temperature inhomogeneities already mentioned. Now consider that T only applies to the kinetic energy component of energy. Energy comes in as kinetic, but is partitioned into kinetic and potential once it gets to the planet. All by itself this partitioning will cause a drop in observed radiant T. We know it partitions because coal is chemically stored solar energy, and we have quite a bit of it. To calculate temperature changes on the order that is important to the “global warming” problem, we need to understand the spatial distribution of T and the changes in spatial distribution and the partition of KE into KE and PE within the system and how the partitioning changes with time, and we have to know all of this at least well enough to calculate a T change (KE only) to about 1 part per thousand. That’s for a coarse guess. To really nail it down it would be good to have another decimal place on that calculation. It looks like the entire effect is in the noise of the calculation. If so, that would explain the variation seen in the calculations. Within a few kelvin you could end up anywhere.
It also means that if you do not measure the entire planet 24×7, you do not really know what the outgoing radiation is, certainly not to 0.1 W/m^2.
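To put rough numbers on that precision requirement, a quick sketch using the linearised Stefan-Boltzmann relation dF = 4*sigma*T^3*dT:

```python
# Scale of the precision argument: at T = 288 K, how much flux does a
# 1 K temperature error represent, and what temperature error does a
# 0.1 W/m^2 flux target imply?
SIGMA = 5.67e-8
T = 288.0

df_per_k = 4 * SIGMA * T**3  # ~5.4 W/m^2 per K of warming
print(f"flux change per 1 K at 288 K: {df_per_k:.2f} W/m^2")
print(f"temperature equivalent of 0.1 W/m^2: {0.1 / df_per_k * 1000:.0f} mK")
print(f"1 K as a fraction of 288 K: {1 / T:.2%}")  # ~0.35%, the rough basis
                                                   # of the "3 9's" claim above
```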
“…a doubling of CO2 to 800 ppmv would warm the earth by about two-thirds of a degree …”
Isn’t that what Lindzen has been saying for a long time? Is it for the same reason?
Lindzen and Choi came at this from a slightly different perspective, if I remember correctly. I think this is more akin to Spencer and Braswell. Both came up with climate sensitivities less than 1°C. But it’s been a while since I read either paper… And I’m not sure I understood Spencer’s methodology very well.
Interestingly, Trenberth et al’s rebuttal to Lindzen and Choi only managed to push the climate sensitivity up to 2.3°C. Both Lindzen’s and Trenberth’s sensitivity estimates were sensitive to the time range of the analysis.
I studied both papers, and think both are flawed. They rely on arbitrary lags. Both papers have also been pretty much refuted by later papers.
Yet observations seem to support the lower climate sensitivity estimates. I’ll go with observations over other papers.
All day long. Observations deliver a climate sensitivity from 0.5 to 1.75 C (2.35 in the case of Trenberth’s cherry picking). Models deliver >3 C.
I’m not sure if it’s the source of the hook in the land data, but land temps are actively regulated over the 24 hour solar cycle, and cooling is very nonlinear.
During the night, sensible heat from condensing water vapor in the collapsing atm column at the surface slows cooling, regulating min T to dew point.
Yes, I experienced this when I lived in Bahrain, which has an oppressive summer climate. High temps then average 99 F, but the humidity is very high, and with a dew point of 78 F, the night time low only gets down to 88F. Not very comfortable.
Interesting, as usual, Willis.
For fig 5 you said “First, correlation over the land is slightly positive, and over the ocean, it is slightly negative.”
Did you mean “First, correlation over the ocean is slightly positive, and over the land, it is slightly negative.”?
R
“If you want a red team to pursue the blues on this, you need to establish first what the blue team is actually saying.”
Of course. If someone criticizes what you say, just say you didn’t say that. You’ll never be wrong, Racehorse.
Andrew
Excellent! Willis, you are very good at making your analysis understandable to laymen. I hate to say “dumbing down”, but you are consistently good at dumbing topics down, which is really necessary to connect with the broad stroke of society, ie., laypersons. To the community who take the time to read and comment, bravo! But may I suggest you take a lesson from Willis and recognize that your excellent comments need to be “dumbed down” in language only so that laypersons can reap the value of them. Use small words to reach a larger audience. I love this website! Thank you.
Willis ….. I think your Figure 1 and Figure 2 are the smoking gun that the temperature records are cooked. They should agree! I remember work by Lindzen and maybe Spencer that shows TOA corresponds to SST. Yet ….. Figures 1 and 2 don’t agree here.
Shouldn’t we treat the TOA data analogously to an exterior calculus integration over the whole planet? If more energy goes in than comes out, the interior will be heating up. That flux imbalance seems to be the case. There is enough weirdness inside that integration to account for the fact that we don’t know exactly how the excess energy is sequestered on the surface – but we know it’s lurking somewhere.
Jack Davis, exactly so. A more pedestrian example is a bathtub with more water coming in than going out.
Yeah Tom, I should have used that. It was the talk earlier on about quantum mechanics and how we can make precise predictions of useful outcomes (the phone I’m on for instance) without knowing what is actually going on down at the base of reality. I tried to apply that to the planet. Once you do that, a lot of the argument here is seen as sophistry – as ‘how many angels can dance on the head of a pin’ territory.
Let me know when you find the “Hot Spot”; then I may start worrying about “CO2-induced global warming”.
“There are some interesting results there. First, correlation over the land is slightly positive, and over the ocean, it is slightly negative.” Quote is from the article. The color-coded plot of the earth above this quote seems to indicate the opposite: correlation over the land is slightly negative, and over the ocean it is slightly positive.
Interesting that if you take the Earth’s average surface emission of 390 W/m2 (or 15.0C in temperature) and you increase that emission level by 1.0 W/m2, the temperature should increase by 0.18C according to the Stefan Boltzmann equations. That is exactly what Willis calculated from Ceres TOA radiation.
The issue in climate science is that the theory works not from the surface but from the average emission level balancing incoming solar of 240 W/m2, or -18C. In addition, the theory is that for every 1.0 W/m2 from GHGs, you get feedbacks of another 2.0 W/m2.
Now the combination of these two changes can be calculated as 0.81C per 1.0 W/m2 of GHGs. And they just stick with that. Fundamentally flawed.
Go back to actually measuring what is happening; go back to the surface, which is what we are concerned with, and measure how the proposed feedbacks are actually operating. Use the Stefan-Boltzmann equations for the calculations, because they have been proven to work perfectly everywhere in the universe.
This is what real science would do and what Willis has done here.
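A quick numerical check of the arithmetic above, using dT/dF = T/(4F), the slope of the Stefan-Boltzmann relation; the factor of 3 stands in for the claimed 1 W/m2 of GHG forcing plus 2 W/m2 of feedbacks:

```python
# Planck-only sensitivity of a Stefan-Boltzmann emitter: dT/dF = T / (4F).
SIGMA = 5.67e-8

def planck_sensitivity(temp_k):
    """Warming (K) per 1 W/m^2 of extra emission, at temperature temp_k."""
    flux = SIGMA * temp_k**4
    return temp_k / (4 * flux)

surface = planck_sensitivity(288.0)         # at ~390 W/m^2
emission_level = planck_sensitivity(255.0)  # at ~240 W/m^2

print(f"surface (288 K):        {surface:.2f} K per W/m^2")         # ~0.18, as stated
print(f"emission level (255 K): {emission_level:.2f} K per W/m^2")  # ~0.27
print(f"with 3x feedback:       {3 * emission_level:.2f} K per W/m^2")  # ~0.80
```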
On average each sq meter gets 3,741 WHr/day, an average of 155.8 W/m^2, and average temp goes up 9.8C.
That’s 0.06C/W/m^2 measured at the surface with PMOD averaged TSI.
Based on the Air Force’s surface data summary.
This is basically using the seasonal change in forcing in the hemispheres and the resulting change in temps.
The units are degrees F per WHr/day; to get C/(W/m^2), multiply by 13.3 (x 24 for WHr/day to W, x 5/9 for F to C).
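A quick consistency check of those numbers (the 9.8 C, 3,741 WHr/day, and 0.06 figures above), using 24 x 5/9 = 13.33 as the conversion factor:

```python
# Degrees F per (WHr/day/m^2) to degrees C per (W/m^2):
# 1 W/m^2 = 24 WHr/day/m^2, and a Fahrenheit difference is 5/9 of a
# Celsius difference, so multiply by 24 * 5/9 = 13.33.
warming_f = 9.8 * 9 / 5             # 9.8 C expressed as a Fahrenheit difference
sens_f = warming_f / 3741           # ~0.0047 F per (WHr/day/m^2)
sens_c = sens_f * (24 * 5 / 9)      # convert to C per (W/m^2)
print(f"{sens_c:.3f} C per W/m^2")  # ~0.063, matching the 0.06 above
```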
” In addition, the theory is that for every 1.0 W/m2 from GHGs, you get feedbacks of another 2.0 W/m2.”
And the feedbacks are yet more GHGs, which in turn continue the feedback, which we know is pure sophistry. Never once in Earth’s 4 billion year history of climate has a runaway greenhouse effect occurred.
One of the basic mistakes in this whole CO2 hypothesis is that it is well mixed. Yes, in a closed container in a lab, it would be well mixed. But most CO2 is generated at the surface and is sunk at the surface. Does ANYONE actually believe that the percentage of CO2 at the surface is, uh, well mixed? If most of the CO2 concentration is at the surface, and it is, then why do the models use the other end of the atmosphere – the top?
CO2 IS well mixed — it doesn’t have to be perfectly mixed to be “well mixed”. The constant churning of the atmosphere through day/night heating and cooling, winds, and seasons sees to that. The concentrations at high altitudes in the troposphere are not that different from typical concentrations at the surface. This stuff is easy to measure (unlike many other climate-related variables).
Yes, a variation of a few ppmv around an average of 400 ppmv IS well mixed.
But at the surface, i.e., at low altitudes, one can see variations of several hundred ppm.
See for example:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/giessen_2005-07-14.jpg
And
http://www.ferdinand-engelbeen.be/klimaat/klim_img/giessen_background.jpg
CO2 is anything but well mixed at low altitude and that is why the IPCC rejected the Beck reinstatement of historical chemical analysis of CO2
The Scripps Institute manages the atmospheric CO2 measurement program initiated by Charles Keeling in the 1950s. http://scrippsco2.ucsd.edu/ Not only are “continuous” measurements made at the Mauna Loa Observatory, but weekly flask samples are taken at 11 other stations at varying latitudes. They show that atmospheric CO2 isn’t perfectly mixed, but overall it’s not far off. The OCO2 satellite shows much the same thing once you get past the false colors used to emphasize small differences in CO2 concentration.
See my comment above.
At high altitude, CO2 is a well-mixed gas (i.e., sitting about +/- 10 ppm around 395 ppm), but at low altitude CO2 is anything but well mixed and there can be local variations (depending on season, windspeed, temperature, geography, topography, vegetation etc.) of several hundred ppm.
The scatterplots are good examples of what appear to be chaotic strange attractors. To me this means that the physical system (the climate system itself that gives the results of ocean surface and surface air temperatures) is chaotic [highly non-linear]. For examples of scatterplots that are attractors, see: Chaos & Climate – Part 4: An Attractive Idea or google “images strange attractors”.
It must be remembered that the Stefan-Boltzmann Law, when applied to a non-equilibrium state, is itself highly non-linear, and only its approximations and “linearized” versions produce the straight red line in the essay above. In the real world, where the atmosphere hits space, the red line of S-B is not straight by any means.
The red line is not straight; it shows the fourth-power curvature over the limited range of temperatures found in the record.
Goodman ==> Perhaps I should have said “narrow” or “regular” or some other word to relate to its visual.
S-B under non-equilibrium does not produce such a line… more probably something far more similar to the blue; that is, if we could actually solve the thing at all, which I do not believe we can at this time.
And during clear calm skies at night, as rel humidity goes up, and the amount of water vapor condensing goes up, this reduces the rate temperatures drop. You can see it in both temp and net radiation in this chart.
And explained here
https://micro6500blog.wordpress.com/2016/12/01/observational-evidence-for-a-nonlinear-night-time-cooling-mechanism/
micro ==> Interesting….
Hansen => “is itself highly non-linear”: T^4 is itself highly non-linear! Perhaps you should have said what you meant.
If you read the link, they as well as I show that there are at least 2 different nonlinear rates. Besides, under clear skies and low humidity the zenith temp will be 80-100F colder than the surface, and it has the same difference at both cooling rates. You can even see net radiation drop in the middle of the night under clear skies.
I’m not sure I understand the following speculation:
Are you speculating about a possible relationship between temperature’s spatial response to the spatial variation in radiation imbalance and its temporal response to the temporal variation in forcing? (As I understand it, forcing is not the same as imbalance. Roughly, the forcing associated with a given CO2 concentration is the imbalance that would initially result from a sudden step increase to that concentration from equilibrium at a baseline, presumably pre-industrial, concentration. Imbalance would decay in the fullness of time, but the attributed forcing would not.)
I ask because I wasn’t immediately able to see the logical relationship between the two quantities, and I didn’t want to ponder the question further if you meant something else.
Where there is water or ice on a surface, the temperature of that surface will approach the dew point temperature of the air measured near that surface (ocean, top of thunderclouds, moist soil, leaves). Those surfaces will radiate at that temperature. Air temperature is a measure of molecular collisions (wet and dry adiabats) that is never colder than the measured dew point. Thus, water vapor in the air is the primary temperature controller. Man made impervious surfaces do not contain water, thus, the heat island effect.
Yet another excellent critique of the climate models, but again on their agenda. Yes, it’s a complex, non-linear, stochastic, and undersubscribed statistical model of a poorly modelled system, one that uses decisions regarding which effects are relevant that the “scientists”, AKA numerical modellers, try to prove, and how much gain to apply to them, rather like the economic models used to support or deny Brexit, world growth, etc. They then measure correlation and claim this proves some science, when it can only prove correlation, and can never prove any science; no control planet, etc.
As with weather and economic forecasts, it is not deterministic and proves no laws, it’s pseudo science. Further, as anyone familiar with Neural Nets and slack variables knows, the extrapolation of non linear data outside its known range in a noisy multivariate system is notoriously unreliable over any significant time, never mind the multi lifetime natural periodicities of climate change. But that’s not the problem I now focus on. What about plant denial?
Modellers' assumptions about plants seem to deny the data record of their effect on CO2 control. CO2 fell from 95% to <0.20% fairly sharpish once the oceans formed, reduced to a level just enough to keep an efficient carbon cycle optimised, for them and us, and plants held that very low figure through multiple mass extinctions and real catastrophes over billions of years, by increasing and decreasing the amount of plants and the rate of plant photosynthesis. To make the models work, this dynamic response had to be discounted against the evidence. When the plants grew and started reducing the CO2 we produce, this was said by modellers (I won't call these people scientists, because they deny the basic scientific method) to be an unexpected response that would be "overwhelmed" by our few hundred ppm of CO2, showing how blatantly basic controls were discounted by their models; and now they are again denying that plants will continue to grow wherever it suits them until the atmospheric CO2 levels drop. That's my hypothesis re CO2.
Here's Catt's hypothesis re serious climate change: out of the oceans, by volcanicity, with extremes due to orbital eccentricity. It probably also accounts for smaller but still significant variations over hundreds of years.
I suggest significant climate change has a more obvious and easier-to-quantify primary cause. This is oceanic magma release, greatest during the maximum gravitational variation extremes of Milankovitch cycles. Simply put, why not: this drags tectonic plates and the core around enough to rattle our thin crust and trigger interglacials from the longer-term "relatively steady state" ice age.
Data? 30% gravitational variation pa; the actual force is 200 times the Moon's; the ocean floor is 7km of basalt crust on a 12,000km hot-rock pudding that is permanently leaking 1,200-degree rock into the ocean at separating plate junctions and hot spots like Hawaii, the ring of fire, etc. I suggest this direct ocean heating simply steps up by an order of magnitude or more every 100K years. It takes about 200,000 Mount Fujis of basalt to raise the oceans 12 degrees, on the back of my envelope; no modellers required, just the proven physics of if, then.
I have made a VERY rough guess at this. The amount of magma arriving in the oceans seems poorly documented. If it's 1x10^13 tonnes when unstressed (1,000 undersea Mount Fujis plus 40,000km worth of 7km-deep crack filling with no overflow at 2cm pa tectonic separation), it would need to increase by 7,000 times to meet the 12-degree heat delivery of an interglacial event: a Milankovitch-eccentricity-caused volcanic forcing event. As a practical engineer and physicist who has seen how well simple process models work in chemical engineering at Imperial College, amongst other examples, I like the clear probability of hot rock delivered direct to the oceans as the warming effect that supports both ice-age conditions and the short interglacials, and the significant events in between, a lot more than blaming an atmospheric gas at trace levels that created and maintained the stable atmospheric conditions for life in the carbon cycle, and is probably not guilty anyway.
nb: the 3.8x10^11-tonne basalt Mount Fuji has only been there for one ice age… so is a total of 200 Mount Fujis' worth of magma emitted into the oceans pa, over 1,000 years, likely under these conditions of extreme gravitational stress? That'll do it. No forcing required, just basic heat transfer from our on-board nuclear reactor. Simples!
CONCLUSION: 1. Climate modellers are basically plant deniers. The plants control CO2, and always have; ignoring natural responses at the levels we know have occurred in the past is simply establishing bogus hypotheses regarding CO2 that they then have to force to make their "models" correlate as promised. This is just a show trial of CO2 by a religious court. Science abuse.
Hardly the deterministic scientific method of trying to disprove your hypothesis: doubting and not testing the most obvious control, plants, and assuming low gains for that proven response that don't increase enough as CO2 increases. Why not ask the model how much plant growth is required to control likely CO2 emissions? And this still doesn't prove more CO2 causes anything except more plant growth, or is more than a simple consequence of fires, volcanoes, etc. that is absorbed by rocks, plants, etc. and recycled by plate tectonics. Of course the bad news is that if the plants do it without us doing anything, there is no easy climate-change money flowing into ineffective or regressive projects that pretend to solve the "climate change catastrophic disaster".
And, of course, it is more likely in a reasonably quiescent world that natural CO2 increases as a consequence of warming oceans that warm the atmosphere (hence the correlation), not as a cause of any significance. The oceans are where the surface heat of the planet is, over 1,000 times more than the atmosphere, at 6x10^24 Joules per degree K or so. And there is a LOT more heat on the inside, being generated all the time and trying to get out. And succeeding, all the time, at varying and renewable rates, through our very leaky selection of loosely linked crustal plates, especially the thin ocean plates, with very little net mass loss as it's all recycled every 200 million years back into the core.
2. There is at least one more likely and provable cause of warming, via the oceans that drive the atmosphere, that fits the ice-core evidence, that we can see happening and have documented, and that needs no "forcing". Leaks in the thin crust of the earth we live on release massive amounts of heat direct into the oceans, where it can be held and cause serious long-term change to the atmosphere; not the reverse situation where these "climate pseudo-scientists'" heads are, chasing the government and renewable lobbyists' $$$$$.
I suggest we look under the oceans for the smoking gun that can do the climate-change job as advertised: deliver 70x10^24 Joules to the oceans over 1,000 years or so to create a 12-degree rise for an interglacial, for example. I suggest the rather puny atmospheric climate cannot, certainly not through human CO2 emissions. Real climate change, the kind that makes the trip to Europe a walk over Dogger Land and the Great Barrier Reef an interesting white 300-foot rocky ridge a short drive east from Cairns, is a consequence of greater and more powerful controls, primarily solar gravity and radiation variance. CO2 is not the cause; the atmosphere is simply an effect or consequence of the larger controls.
And solar radiation, while powerful, is usually in balance, and orbital eccentricity does not create significant effects in any way I have seen credibly proposed. It is interesting that most of the climate discussion around Milankovitch cycles considers the bizarre fringe effects of obliquity/precession plus eccentricity on the atmosphere, a low-energy-capacity sink, rather than the unbalanced gravitational effect on a serious heat source that can change ocean temperatures the 12 degrees required for an interglacial, which is what my approach is grounded in. See what Jupiter and its moons do to Io if you doubt the power of gravitational stress.
CO2 is innocent; it was the rocks what done it. So-called scientists like Michael Mann have taken the easy money for supporting political agendas with actual science denial that frames CO2 for climate change using statistical models, not science, picking data that is in the noise of an interglacial in amplitude and period, when CO2 is most probably only a consequence of volcanoes and plants; also to boost their own egos as high priests of science-become-religion, with its own inquisition and distorted language. Their narrow, presumptive focus on the atmospheric effects of CO2 denies the larger real effects and obvious science facts: the established world of the natural carbon cycle and the interaction of our mostly hot, soft rock with the oceans that drive the joined-up planetary systems. IMO. Rebuttals/critiques of my data and results, with others I can check, always welcome.
CEng, CPhys, MBA
There are probably a number of typos above, but that's all the time I have for now. The message is clear, I hope.
brianrlcatt July 13, 2017 at 6:57 am
Pretty close 😉
Latest provable large magmatic event is the Ontong Java one, possibly 100 million km^3 of magma erupting in the oceans. No surprise the deep oceans were ~18K warmer than today at the end of those eruptions, around 85 mya.
see http://www.sciencedirect.com/science/article/pii/S0012821X06002251
1 million km^3 magma carries enough energy to warm ALL ocean water 1K.
We need another eruption like the Ontong Java one to lift us out of the current ice age.
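The "1 million km^3 warms all ocean water ~1K" figure checks out to order of magnitude. Here is a back-of-envelope sketch; all the property values are round, assumed numbers, not anything from the linked paper:

```python
# Rough check: energy released by 1 million km^3 of basaltic magma
# cooling and crystallising, versus the heat needed to warm the whole
# ocean by 1 K. All property values are round, assumed figures.
MAGMA_VOLUME = 1e6 * 1e9   # m^3 (1 million km^3)
MAGMA_DENSITY = 2800.0     # kg/m^3 (assumed)
COOLING = 1200.0           # K, eruption temp down to deep-ocean temp (assumed)
MAGMA_CP = 1200.0          # J/kg/K (assumed)
LATENT = 4e5               # J/kg, heat of crystallisation (assumed)

OCEAN_MASS = 1.4e21        # kg of ocean water
OCEAN_CP = 4000.0          # J/kg/K

energy = MAGMA_VOLUME * MAGMA_DENSITY * (MAGMA_CP * COOLING + LATENT)
warming = energy / (OCEAN_MASS * OCEAN_CP)
print(f"energy released: {energy:.1e} J")   # ~5e24 J
print(f"ocean warming  : {warming:.2f} K")  # ~0.9 K, i.e. order 1 K
```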
Climate modellers (not real scientists, but academic statisticians) are looking the wrong way to prove CO2 guilty on a bum rap, manipulating and withholding evidence like some inquisition court. Heads in the religious clouds when the action is under the oceans. This was not my idea, but it was badly presented by the person who first suggested the principle. This is close to a 1:1 fit of magma heat content with ocean temperature change, no forcing required; CO2 follows as a consequence, not a cause. http://news.nationalgeographic.com/news/2013/09/130905-tamu-massif-shatsky-rise-largest-volcano-oceanography-science/
Nice job!
Can the next analysis detect the ocean currents that distribute heat into the oceans before it can radiate?
I note that the objections that Nick Stokes raises here represent exactly the sort of thinking that makes this post by Eschenbach necessary.
Let’s take a reasonably successful, but elementary, engineering model of heat transfer: the lumped element model, which involves energy balance and rates of assumed transfer mechanisms. The equations that Stokes refers to, which I repeated here, represent just such a model.
This may indeed refer to multiple time scales, but that is not sufficient for a full representation of the problem, as all of this refers to a ∆T applying to the system as a whole. If the Biot number for the system is very small, then this solution of homogeneous temperature works just dandy. The Biot number itself is a function of heat transfer mechanisms and scale size, and “smallness” is a function of temperature resolution among other things. If the Biot number is not small, the temperature distribution at any point during a transition from one equilibrium state to another is a complex function of time and space. In this case a mean temperature can always be calculated, but depending on how one monitors the problem (distribution of measuring instruments, scale effects, schedule, instrument resolution) one can arrive at different mean temperatures, and find that a mean value of any sort may have no pertinence to particular locations.
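For concreteness, a minimal sketch of the kind of lumped-element (one-box) model being described, C dT/dt = F(t) − lambda·T; the parameter values below are illustrative, not Schwartz’s:

```python
# One-box (lumped-element) energy balance model: C dT/dt = F(t) - lam*T.
# Valid only in the small-Biot-number limit discussed above, where a
# single temperature can represent the whole system.
C = 8.0      # effective heat capacity, W yr m^-2 K^-1 (illustrative)
LAM = 1.25   # feedback parameter, W m^-2 K^-1 (illustrative)
DT = 0.1     # time step, years

def step(temp, forcing):
    """Advance the box one time step (explicit Euler)."""
    return temp + DT * (forcing - LAM * temp) / C

temp = 0.0
for _ in range(int(100 / DT)):  # 100 years under a constant 3.7 W/m^2 forcing
    temp = step(temp, 3.7)

print(f"T after 100 yr: {temp:.2f} K")
print(f"equilibrium = {3.7 / LAM:.2f} K, e-folding time = {C / LAM:.1f} yr")
```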
I have used this example for a long time to explain my skepticism about “mean air temperature” or “global mean temperature” (GMT), which a person can calculate in any situation, but which may not be unique and may have little importance in a practical sense. I suppose the counter-argument to what I have just explained is “yes, but it still is a useful monitor of system change”. Yet in view of the GMT being a non-unique and context-dependent entity with time dependence, this counter-argument seems logically doubtful. And it doesn’t even begin to address the “corrections” made to observations to calculate a GMT in the first place.
There are many issues for a Red Team to consider, but the most important are beyond these technical ones, and revolve around costs in relation to benefits, and what strategies are likely to be practical, or whether such strategies are even needed.
“Let’s take a reasonably successful, but elementary, engineering model of heat transfer–the lumped element model, which involves energy balance and rates of assumed transfer mechanisms. The equations that Stokes refers to I repeated here, and represent just such a model:”
Yes. Schwartz is describing a lumped element model, with prospects of reasonable success. But engineers who use such models do not do so claiming that:
“[the] system has a secret control knob with a linear and predictable response”
And neither do climate scientists. Again, it comes back to Willis’ excellent advice
“QUOTE THE EXACT WORDS YOU ARE DISCUSSING”
As far as GMT, or more carefully, global mean surface temperature anomaly, is concerned, it is pretty much unique – that is, it doesn’t depend much on whichever valid set of measurements you use. It’s true that regions may differ from the average, as with many things we observe. But it is a consistent observable pattern. The analogy is the DJIA. It doesn’t predict individual stocks. Different parts of the economy may behave differently. But DJIA is still useful.
Impressive demonstration, Willis. Are the climate-scientist proponents of excessive global warming not using all the wonderful tools they put up above us?
I realize you are looking at the S-B / global T relationship and not at imbalances in incoming and outgoing radiation. Enthalpy in melting ice at constant temperature and the endothermic rapid greening of the planet, including phytoplankton (a cooling effect), don’t reduce the fit of the Stefan-Boltzmann / global T relationship noticeably, but the imbalance should be a measure of enthalpy changes, I’d imagine.
I recall Hansen using imbalance as ‘proof’ of serious warming. Perhaps both poles alternately freezing and thawing balance out, but the greening is a long-term issue and must be part of the imbalance. Ferdinand Engelbeen, in a reply to a post of mine, suggested that changes to oxygen, which is recorded to ppmv accuracy, give a fair estimate of greening. Might your excellent facility with global-scale analysis using satellite data be an approach to investigating the imbalance question as a measure of enthalpy in the system? Could the departure of the S-B fit to the warm side at the top be the cooling from greening?
All this is very good, but the National Academy of Sciences and the Royal Society have put this overview out as certain evidence of human-caused climate change.
http://nas-sites.org/climate-change/qanda.html#.WWZ4B_WcG1t
This statement from number 18 would seem to settle the matter for anyone who, even without understanding anything about climate, has at least a little common sense. I would not walk on bridges built by engineers who made similar statements.
“Nevertheless, understanding (for example, of cloud dynamics, and of climate variations on centennial and decadal timescales and on regional-to-local spatial scales) remains incomplete. ”
They go on to say:
“Together, field and laboratory data and theoretical understanding are used to advance models of Earth’s climate system and to improve representation of key processes in them, especially those associated with clouds, aerosols, and transport of heat into the oceans. This is critical for accurately simulating climate change and associated changes in severe weather, especially at the regional and local scales important for policy decisions.”
Without admitting that they understand so little of it. So we can look forward to $Billions more being spent for them to tweak their models, without finding the most basic errors in those models, because they refuse to admit the role of the “negative feedbacks” that have stabilized climate, while focusing only on CO2 as the driver of changes.
Willis always makes it interesting, but has he really identified science’s real misunderstanding?
My research shows the temperature limits are set by the range of TSI.
My research shows changes in SST are a simple linear lagged function of TSI forcings.
The effort to use TOA can be misleading. The main action of solar energy is upon the ocean, which then acts on the atmosphere. On any given day the air temperature within the troposphere is a response to the ocean surface temperature (which is a lagged response to TSI) and present day TOA. You will completely miss the lagged influence of former TSI when you don’t include it, an influence that is more powerful. It’s no wonder you claim there isn’t a lagged or linear response to TSI.
If it were so that 85% or so of the energy needed for warming the ocean did not come from the sun, where did that other 85% or so of the necessary energy come from, and why is it so widely believed that there is such another source of tangible heat, greater than the sun, that no one can identify, measure, or feel? We humans have a pretty good sense of the solar daily heating effect, so why can’t we feel a heat 5X stronger than sunlight, day and night?
NO evidence exists for an energy source 5X more powerful than sunlight!
In my view the IPCC solar POV blue team is defending the indefensible, and has everyone chasing their tail looking for other forcings and feedbacks.
***
Willis has never argued for a real heat source 5X more energetic than sunlight – he couldn’t find one if he ever tried, because there isn’t one. Isn’t that interesting?
Two thoughts:
Net TOA Radiative Forcing trend 0.08 +/- 0.24 W/m2, a signal-to-noise ratio of 1 to 3; stating a “trend” is a stretch.
CERES Surface Temperature ending in March 2016 right before El Nino warming ends and cooling begins, clearly cherry-picked.
H.D. Hoese and Michael Moon
Yep. The biggest problems in climate science are:
It’s a hypothetical field of investigation, i.e. theoretical science, yet it has crossed ethical boundaries into being taken as fact and acted upon.
For example, not one of the temperature anomaly data sets is actual data. They all have uncertainties of at least +/- 0.5K or more. We don’t even know if temperatures have decreased from 1910 or remained flat because it’s all in the noise.
Same for other climate related data sets. We don’t have the resolution.
Hence the data uncertainty, and how the data was measured, do not support the conclusions in any real-world application (if that is the way to say it). Results and conclusions are only consistent (or maybe not) against the constructed artifice around AGW.
Or in simple terms: within the set of assumptions they use, climate science has a certain amount of consistency, but very little of it stands up to experimentation.
When people call AGW a scam, it shouldn’t be about the science. It’s the application of shoddy data as if it were platinum-coated and verified.
Advocates of taking action are similar to a hypothetical set of people who would lobby, say New York or London, to force every home and business owner to fit special anti-slip tiles on their roof, costing in the thousands of dollars or pounds, just so that the Health and Safety risk to Santa Claus and his reindeer would be minimised by about 10%.
Just to emphasise this:
From CERES itself, the rms error for LW TOA is 2.5W/m2.
CERES DQ Summary for EBAF_TOA Level 3B – net flux has been energy balanced
Hypotheticals are not just a problem in climate science. This paper, even quoting from Pielke’s book, takes marine-science types to task over the problem with advocacy. Cowan is a good biologist; one might argue a couple of points, but what he examines has led, among other worse things, to Google Earth posting fish skeletons all over the world (click Oceans).
https://benthamopen.com/ABSTRACT/TOFISHSJ-2-87
Michael Moon July 13, 2017 at 8:24 am
To the contrary, Michael. I add a complete year (12 months) of new data to the existing CERES data as soon as it becomes available. The data I used ends at the end of the last complete year in the CERES record. As soon as the complete data for the 2016 year becomes available I will use it.
In other words, your claim of cherry picking is just a reflection of your preconceptions, unburdened by reality. Please go project them on someone else. I have always used the full amount of CERES data available and I will continue to do so.
w.
Willis,
I did not say or even imply that you cherry-picked. The CERES bosses did, as they get numbers from their satellite every day. They chose to show the years that gave the largest warming trend, not you.
Willis Eschenbach, thank you for another insightful and informative essay.
Hi Willis, very interesting stuff.
I’m not sure how you manage to see this as confirmation of “a governed, thermally regulated system.” Maybe because you do not define what you mean by that term. But knowing your past posts along those lines, you propose a governor like a switch on an AC unit which clamps the max temperature and turns on the AC.
What your figure 9 shows is slightly non-linear feedback: the Planck f/b based on the T^4 Stefan-Boltzmann relationship. This could be reasonably well approximated by a straight linear negative f/b over the limited range of temps in the dataset.
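(For concreteness, the linearization works like this, taking an illustrative base temperature of T0 = 288 K: W = sigma*T^4 ≈ sigma*T0^4 + 4*sigma*T0^3*(T − T0), and the slope 4*sigma*T0^3 ≈ 5.4 W/m^2 per kelvin, so over a few tens of degrees a straight line is a fair stand-in for the T^4 curve.)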
You may cite the flatter section at the top of the temp range as evidence of a much stronger negative f/b at the top end of the scale. This is presumably the tropics. One of your colour-coded graphs, with latitude as the colour-coded variable, would confirm this. If you can establish that it is flat you can claim a governor; I would suggest a much stronger neg. f/b is a better description.
That graph, if correct, seems to be formal observational proof that the water vapour feedback is NOT doubling CO2 forcing ( over water at least and roughly over a good proportion of land. ).
What does need explaining is the orthogonal section of the land data where a less negative TOA imbalance ( ie more retained heat ) is leading to a drop in surface temperature. That is counter intuitive on the face of it though there is little scatter and a very clear, linear relationship.
The first thing to establish is whether this is a particular geographical region. It probably is.
Best regards.
Ah, causality the wrong way around. Drop in temp leading to less outgoing IR, hence the TOA change. The slopes are close to perpendicular which means gradients are the reciprocal of each other.
It would be enlightening to determine what this division is: night/day ; summer/winter or geographical.
Sea seems a lot simpler. This is probably another reason why simply averaging temperatures of land and sea is physically invalid, as I pointed out on Judith’s blog last year.
https://judithcurry.com/2016/02/10/are-land-sea-temperature-averages-meaningful/
Regarding “a doubling of CO2 to 800 ppmv would warm the earth by about two-thirds of a degree”: Also said was “Figure 9 also indicates that other than the Stefan-Boltzmann relationship, the net feedback is about zero”. The zero feedback climate sensitivity figure is 1.1 degree C per 2xCO2, unless one provides a cite for something lower.
OK, I just looked at the math at the end of the article that mentions Figure 9 and the numbers in Figure 9. The math does not consider that as the surface warms, so does the level of the atmosphere that downwelling IR comes from. So that if the surface warms from 290 to 291 K, the amount of radiation leaving it increases from 401 to 406.6 W/m^2. (Numbers chosen because 405 W/m^2 was mentioned for sea surface.) So, radiation from the surface increases by 5.6 W/m^2 from 1 degree K of warming. Dividing 3.7 W/m^2 per 2xCO2 by that does indeed result in a figure of .66 degree K per 2xCO2. But the zero feedback figure is higher, because some of that 5.6 W/m^2 is returned to the surface by increase of downwelling IR from greenhouse gases in the atmosphere. That is not counted as a feedback, but part of the explanation of temperature change from change of CO2 due to radiative transfer alone.
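(Checking the arithmetic: sigma*(291^4 − 290^4) = 5.67e-8 * 9.8e7 ≈ 5.6 W/m^2, which is the 401-to-406.6 step quoted above.)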
At this rate, the .67 degree C per 2xCO2 mentioned in Figure 9 is not essentially zero feedback, but indicative of negative feedback. I figure about 2 W/m^2-K negative feedback is indicated, using 1.1 degree per 2xCO2 as the figure with no feedbacks due to albedo, lapse rate effects or change of water vapor, etc.
I just noticed something else about Figure 9: TOA radiation imbalance seems to roughly match the difference between surface outgoing radiation at the surface temperature in question and the outgoing surface radiation at 291 K. This seems to mean that everywhere in the world’s oceans has half its radiation imbalance being used to make the temperature of each place in the oceans different from 291 K, and the other half causing heating/cooling advected to somewhere else in the world. (I hope I got this right.) So if a change equivalent to a 2x change of CO2 has half of it used to change the temperature of that location by .67 degree C and the other half used to change the temperature of elsewhere in the world, I think that indicates global climate sensitivity of 1.34 degree C per 2x CO2.
I’m not confident about the half-and-half part that I said above; the amounts may be different. That means global climate sensitivity may be other than 1.34, but more than .67 degrees C per 2xCO2.
The temperature is definitely not linear to forcing. The way forcing is defined by the IPCC is somewhat ambiguous and this leads to the error. If Pi is the instantaneous power entering the planet and Po is the power leaving, their difference is considered ‘forcing’. This can be expressed as the equation, Pi = Po + dE/dt, where E is the energy stored by the planet and dE/dt is the energy flux in and out of thermal store of the planet which in the steady state becomes 0 when Pi == Po.
While the energy stored, E, and the temperature of the matter storing E are linearly related (1 calorie increases the temperature of 1 g of water by 1C), this doesn’t account for the fact that E is continually decreasing owing to surface emissions, thus dE/dt is not linear in dT/dt. So, forcing would be linear with temperature if and only if the matter whose temperature we care about (the ‘surface’) were not also radiating energy into space. The consensus completely disregards the FACT that Po is proportional to T^4, and the satellite data supporting this is absolutely unambiguous; but once this is acknowledged, the high sensitivity they claim becomes absolutely impossible.
Note the ambiguity in the definition of forcing. 1 W/m^2 entering from the top is equivalent to an extra 1 W/m^2 being absorbed by the atmosphere. The 1 W/m^2 entering from the top is all received by the surface while the 1 W/m^2 entering from the bottom is split up so about half exits into space and half is returned to the surface. The distribution of energy absorbed by the atmosphere owing to its emission area being twice the area over which energy is absorbed seems to be widely denied by main stream climate science and this alone represents a factor of 2 error.
BTW, when you look at the math and the energy balance, if more than half of what the atmosphere absorbs is returned to the surface, the resulting sensitivity is reduced, not increased or the atmosphere must be absorbing far less than expected.
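To see the nonlinearity described above in action, here is a minimal zero-dimensional sketch in Python (my own toy illustration with an assumed mixed-layer heat capacity, not anyone’s model): it integrates dE/dt = Pi − Po with Po = sigma*T^4 and shows the warming per extra W/m^2 shrinking slightly as T rises.

```python
# Toy zero-dimensional energy balance: dT/dt = (Pi - sigma*T^4) / C.
# C is an assumed mixed-layer heat capacity; nothing here is tuned to Earth.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
C = 4.0e8         # assumed heat capacity of the thermal store, J/m^2/K

def equilibrate(pi_in, t0=288.0, years=200):
    """Step the balance forward with a one-day timestep until ~equilibrium."""
    dt = 86400.0
    temp = t0
    for _ in range(int(years * 365)):
        temp += (pi_in - SIGMA * temp**4) / C * dt
    return temp

po_288 = SIGMA * 288.0**4                # ~390 W/m^2, balances at 288 K
base = equilibrate(po_288)
print(equilibrate(po_288 + 1.0) - base)  # ~0.185 K for the first extra W/m^2
print(equilibrate(po_288 + 2.0) - base)  # slightly less than 2 x 0.185 K
```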
Willis
My compliments on very informative graphs and analysis of the data.
Re: “Figure 5. Correlation of TOA forcing and temperature anomalies”
You note: “Antarctica is strongly negatively correlated.” Note a similar effect over Greenland.
Re: “Figure 8. Scatterplot, temperature versus TOA radiation imbalance”
The lower left land (red) data seems to show a strong anti-Stefan-Boltzmann correlation over the temperature range from -20 deg C to -60 deg C. That appears to be over the polar regions including Antarctica and Greenland.
Preliminary Hypothesis: Variations in albedo from changing cloud cover over polar regions might cause this anti-Stefan-Boltzmann correlation between temperature and TOA forcing.
Is there a way to distinguish such albedo variations due to changing cloud cover?
Such cloud and albedo variations might vary with galactic cosmic rays and thus with solar cycles per Svensmark’s model and Forbush evidence.
Best wishes on your explorations
Surely albedo from ice and snow in the polar regions is the prime reason for outgoing radiation ‘violating’ the S-B relation. B
That’s certainly a good bet Gary. Another possible contribution MIGHT be that CERES instruments seem (mostly?) to have been put on satellites in sun-synchronous orbits. That has a lot of advantages, but it means the satellites never go closer than about 8 degrees of latitude to the poles. And there are other issues, like no solar input for much of the year and low-angle illumination the rest of the year. I’m no longer as smart as I once was and can’t begin to guess the impact of those things.
It is something called temperature inversion and radiative cooling. From a discussion at Science of Doom: “About temperature inversion: “The intensity maximum in the CO2 band above Antarctica has been observed in satellite spectra [Thomas and Stamnes, 1999, Figure 1.2c], but its implication for the climate has not been discussed so far.”
“However, if the surface is colder than the atmosphere, the sign of the second term in equation (1) is negative. Consequently, the system loses more energy to space due to the presence of greenhouse gases.”
“This implies that increasing CO2 causes the emission maximum in the TOA spectra to increase slightly, which instantaneously enhances the LW cooling in this region, strengthening the cooling of the planet.”
“This observation is consistent with the finding that in the interior of the Antarctic continent the surface is often colder than the stratosphere; therefore, the emission from the stratospheric CO2 is higher than the emission from the surface.”
And even over Antarctica climate models have systematic bias: “This suggests that current GCMs tend to overestimate the surface temperature at South Pole, due to their difficulties in describing the strong temperature inversion in the boundary layer. Therefore, GCMs might underestimate a cooling effect from increased CO2, due to a bias in the surface temperature.”
So what about the inversion over Greenland plateau then?”
Citations from: How increasing CO2 leads to an increased negative greenhouse effect in Antarctica. Authors Holger Schmithüsen, Justus Notholt, Gert König-Langlo, Peter Lemke, Thomas Jung, 2015. http://onlinelibrary.wiley.com/doi/10.1002/2015GL066749/full
Other maps with blue colour over Antarctica and Greenland.
https://scienceofdoom.com/2017/02/17/impacts-vii-sea-level-2-uncertainty/
Great work, Willis!
@Nick “forcings are not inputs but diagnostics”
Say what?
CO2 is a “forcing”, therefore it is a diagnostic as well, and not an input to the system. Atmospheric water is not a forcing (according to the models), but a feedback only. One hesitates to guess the label for feedback only, metadiagnostic?
It seems we are in grave danger of losing all the actual inputs…
“Say what?”
From the post
“This is the mistaken idea that changes in global temperature are a linear function of changes in the top-of-atmosphere (TOA) radiation balance (usually called “forcing”).”
Those “forcings” in W/m2 are not inputs to GCM’s. They are deduced. Yes, CO2 etc are inputs.
What do you mean “deduced”? Volcanic forcing as scaled AOD is an input. The basic radiative effect of CO2 derived from atmospheric concentration, and ASSUMED forcing from projected emissions estimates, are inputs.
The only thing which is deduced from models is overall sensitivity and that is pre-loaded by ASSUMPTIONS about things like cloud “amount”, constancy of rel. humidity, the scaling needed to volcanic forcing. etc.
” Volcanic forcing as scaled AOD is an input. The basic radiative effect of CO2 derived from atmospheric concentration, and ASSUMED forcing from projected emissions estimates, are inputs.”
Evidence? It is generally not true. Treatment of volcanoes is variable – often omitted entirely in future years. Radiative effect of CO2 is not an input – GCM’s don’t work that way anyway. What is input is either gas concentration or emissions of gas. The task of figuring the net CO2 radiative forcing from that is a by-product of the whole GCM process.
OK, you meant calculated, not deduced. GCMs don’t do deductions; that would require AI. I thought you meant deduced from model output, as is done to get CS. Just a confusion of wording.
Scaling of volcanic forcing is a fiddle factor used to tweak models to reproduce recent past climate. Volcanic forcing calculated in Lacis et al from basic physics and El Chichon data in 1992 was 30. This got reduced to 21 by Hansen et al ( same group at GISS different lead author ) .
The motivation for lowering volcanic forcing was to reconcile model output. This is not science-based, it is fudging. They ended up with models which are too sensitive to all radiative forcings, which balanced out reasonably well when both volcanoes and CO2 were present. There has been negligible AOD since around 1995, and the models run too hot.
So by diagnostic you mean tuning set point.
Agreed. The logic of what Nick is saying escapes me. Maybe there is something we are missing here. But it seems like Willis has shown that there is a model with a lot of irrelevant detailed variables in it, but when you come down to it, only one input is needed to duplicate its output.
So then the reply is, the value of this variable is not an input to the model, its a result of some model assumptions. So what? Willis’ point still stands. You have a very complicated model apparently taking account of lots of different factors, but when you come down to it they none of them matter, because if you just remove them all, you get the same results by using the one variable.
I don’t get it. Nick is a bright and well informed guy, so please, explain why this is wrong.
Basically this post comes down to an assertion about a “central misunderstanding” of climate science. Ideally, the way to find out about that central misunderstanding would be to quote the actual words of someone expressing it. We never get that. Instead there is an indirect argument that scientists must believe it because GCM surface temperatures can generally be deduced from published TOA forcings. My point is that that argument fails if the forcings were actually deduced from the GCM output (or from a late stage in GCM processing).
Wasn’t it Gavin Schmidt that said they ensemble model outputs to determine the overall models’ internal response to forcings? Or am I mis-remembering?
The criticism is valid, Willis should have followed his own golden rule and quoted something.
However:
This is the problem: they are not “deduced”, they are INDUCED. Model parameters are tweaked to get the desired results. There are dozens of poorly defined parameters that can be freely changed within quite a large range. I’m talking primarily about volcanic forcing and cloud amount, and the questionable assumption that rel. humidity is constant.
Through many iterations these fudge factors are juggled to come up with a combination which gets fairly close to reproducing 1960-1995 climate record. They consistently ignore the fact that it does not reproduce the early 20th c. warming.
This is an ill-conditioned problem, with far too many poorly constrained variables and a combination which fits ( a limited portion ) of the record and fails outside that is very likely NOT the right combination.
So your “if” is not satisfied. The forcings are not deduced, they are induced.
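As a toy illustration of that ill-conditioning (entirely synthetic numbers, not any GCM): a “record” generated with one sensitivity/volcanic-scaling pair is fitted nearly as well by quite different pairs, so a good hindcast fit does not identify the right combination.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1960, 1996)
ghg = 0.02 * (years - 1960)        # toy GHG forcing ramp, W/m^2
aod = np.zeros(len(years))
aod[years == 1982] = 0.10          # toy El Chichon aerosol spike
aod[years == 1991] = 0.15          # toy Pinatubo aerosol spike

def toy_model(sens, vscale):
    """Lag-free linear toy: temperature = sensitivity * (GHG - scaled volcanic)."""
    return sens * (ghg - vscale * aod)

record = toy_model(0.8, 21.0) + rng.normal(0.0, 0.05, len(years))

# Pairs with (nearly) the same product sens*vscale fit the volcanic dips
# identically; only the slow ramp distinguishes them, and noise hides that.
for sens, vscale in [(0.8, 21.0), (0.7, 24.0), (0.9, 18.7)]:
    rmse = np.sqrt(np.mean((toy_model(sens, vscale) - record) ** 2))
    print(sens, vscale, round(float(rmse), 3))
```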
“Figure 2. Time series, global average surface temperature. The top panel shows the data. The middle panel shows the seasonal component. The bottom panel shows the residual, what is left over after the seasonal component is subtracted from the data. Note the El Nino-related warming at the end of 2015”
The data/labels for “temperature” and “seasonal component” appear to be reversed. The seasonal component appears to be slightly larger than the temperature.
I don’t get why it is that when discussing climate models there is always much talk of Stefan-Boltzmann but nothing about the combined gas laws. It seems the notion of a “surface temperature” for Earth is a difficult simplification to make, with the difficulties glossed over. Even when abstracting out a “top of atmosphere” idea, things seem ill defined. Maybe I just don’t get the assumptions.
BTW did you read the small print on climate forcing? All would-be climate forcers please wait in line for 6,500 years, the normal lag between insolation forcing and resultant change in the climate, as shown (recently by Javier) in this relationship between Milankovitch obliquity forcing and 6,500-year lagged temperature:
Nick Stokes July 13, 2017 at 10:05 am
First, this is not generally true. Take for example the volcanic forcing. The models didn’t make that up. It is calculated from the physics of the volcanic ejecta and the measured time that the ash and sulfates remained in the atmosphere. There’s a good description here:
Note that they do NOT say “forcing datasets OUTPUT BY the GISS global climate models”. They say “forcing datasets USED by the GISS global climate models”. On my planet that means they are inputs, not outputs.

There is also this:
Lots of those are observation based.
In short, forcings are indeed described by both GISS and Miller as INPUTS to the climate model, not outputs.
However, suppose that you are right, and that the forcing is the result of a climate model and not an input. You say above that they are calculated from the model outputs.
But if that is the case, my results mean the same thing as your claim—FORCING AND TEMPERATURE IN THE MODELS ARE LINEARLY RELATED.
So in fact, as others have pointed out above, you’re making my argument for me.
w.
Also, Nick, you’ve asked why I call the equation a central climate paradigm. In addition to evidence from the models, there’re a hundred and fifty peer-reviewed papers citing the Schwartz paper linked to in the head post here …
w.
Willis,
My request is that you quote the words. What did Schwartz actually say that makes it a paradigm? What did the citing papers actually say? Schwartz’ paper was quite controversial.
Nick Stokes July 13, 2017 at 7:44 pm
Nick, what I’d said was that the idea that changes in temperature are a constant “lambda” times the changes in forcing is a central paradigm of current climate science. The constant “lambda” is usually called the “climate sensitivity”, and is the subject of endless discussion in the climosphere.
Your claim that the idea of climate sensitivity is NOT central to our current climate paradigm doesn’t pass the laugh test. And I’m not going to quote what dozens and dozens of people including yourself have said about climate sensitivity. That’s dumb as a bag of ball bearings. Google “climate sensitivity” if you’re unfamiliar with the term.
w.
Willis, while I agree that volcanic forcing is an input ( AOD data ) it is worse than that. The scaling is one of the principal fudge factors of the models. It USED TO BE calculated from basic physics around 1990 but the GISS team abandoned that idea in favour of tweaking it to reconcile model output with the climate record around Y2K.
Lacis, Hansen & Sato found a scaling of 30 using proper science methods. It is now typically 21 per Hansen et al 2002. That is one friggin enormous frig factor.
I quote the relevant papers in my article at Judith’s C.Etc. ;
https://judithcurry.com/2015/02/06/on-determination-of-tropical-feedbacks/
relevant refs are 2,3,and 4 cited at the end of the article. Search “Lacis” in the article for relevant sections.
See my earlier comments on this. They simply chose a combination of fudge factors which produces output that fits a limited portion of the climate record and fails outside that period. It is pretty clear that they have not got the right combination of fiddle factors.
From Hansen et al 2002 [4] ( Emphasis added. )
my bold.
Hansen is quite clear about all this in his published work. Shame no one takes any notice.
Willis,
I really like your writing style. You are consistently clear and to the point. Thanks for your hard work.
Thanks for your kind words, bsmith. My intention is to write for the scientifically interested lay person.
w.
“Here, it’s after midnight and the fog has come in from the ocean. The redwood trees are half-visible in the bright moonglow. There’s no wind, and the fog is blanketing the sound.”
I expected a discussion of the importance of understanding cloud formation on temperature to follow this paragraph. Alas, they are only concluding remarks.
This article and the comments are interesting, but as an outsider, I am not interested in getting into the weeds so much. I wonder more about the elephants in the room. Is classical physics the right tool for modeling long-term climate? Will the current modeling techniques ever produce useful information for policymakers?
For some physicists, the answer is no. Dr. Rosenbaum at Caltech posited that nature cannot be modeled with classical physics but theoretically might be modeled with quantum physics. Last year, CERN CLOUD experiments produced data on cloud formation under tropospheric conditions. CERN reports “that global model simulations have not been directly based on experimental data” and that “…the multicomponent inorganic and organic chemical system is highly complex and is likely to be impossible to adequately represent in classical nucleation theories…” They conclude that model calculations should be replaced with laboratory measurements (2 DECEMBER 2016 • VOL 354 ISSUE 6316).
Simple question: Are the CERN CLOUD experiments relevant to a discussion of “Temperature and Forcing.”
Willis states: “This is the mistaken idea that changes in global temperature are a linear function of changes in the top-of-atmosphere (TOA) radiation balance”
Technically that statement is nothing more than a statement about conservation of energy. If you want to deny conservation of energy then go ahead but you are going to need a lot of evidence before anyone will believe you. If there is an imbalance between the amount of energy received and the amount radiated by the earth then it must be warming (or cooling). The interesting questions are surely whether (a) there is radiation imbalance and (b) what is causing it.
Absolutely Geminio. As someone said earlier on in this discussion, if you’re running the tub and don’t allow it to drain at the same rate…
… and (c) how does the system change to restore balance.
After seeing Figures 1 & 2, it gives me the impression that the residual pattern at the surface is in opposition to TOA, particularly the peaks [upward and downward]. What does it mean?
Dr. S. Jeevananda Reddy
The K-T power flux balance diagram has 160 W/m^2 net reaching the “surface.” There is exactly zero way 400 W/m^2 can leave.
Over 2,900!! Just a dozen shy of 3,000 (up 11 hundred since 6/9) views on my WriterBeat papers which were also sent to the ME departments of several prestigious universities (As a BSME & PE felt some affinity.) and a long list of pro/con CAGW personalities and organizations.
NOBODY has responded explaining why my methods, calculations and conclusions in these papers are incorrect. BTW that is called SCIENCE!!
SOMEBODY needs to step up and ‘splain my errors ‘cause if I’m correct (Q=UAdT runs the atmospheric heat engine) – that’s a BIGLY problem for RGHE.
Step right up! Bring science.
http://writerbeat.com/articles/14306-Greenhouse—We-don-t-need-no-stinkin-greenhouse-Warning-science-ahead-
http://writerbeat.com/articles/15582-To-be-33C-or-not-to-be-33C
http://writerbeat.com/articles/16255-Atmospheric-Layers-and-Thermodynamic-Ping-Pong
nickreality65 July 13, 2017 at 6:27 pm
Actually, the K-T diagram shows ~160 W/m^2 of shortwave solar energy entering the surface, along with about 340 W/m^2 of longwave downwelling radiation, for a total of about half a kilowatt of downwelling radiation on a 24/7 global-average basis.

This is balanced by the thermal radiation from the surface, which is about 400 W/m2, along with about a hundred W/m2 in parasitic losses (sensible and latent heat).
Regards,
w.
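(In round numbers, the balance Willis describes is 160 SW down + 340 LW down ≈ 400 LW up + 100 sensible-plus-latent, all in W/m^2.)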
Just out of curiosity, the typical value given for the flux of solar radiation at the top of the atmosphere is 1361 W/m². (Google “Solar Constant”). The chart above starts with 341.3 W/m^2. What happened to the other 1020 W/m^2?
Maybe this answers my question: “Hence the average incoming solar radiation, taking into account the angle at which the rays strike and that at any one moment half the planet does not receive any solar radiation, is one-fourth the solar constant (approximately 340 W/m²)”
https://en.wikipedia.org/wiki/Solar_constant
But perhaps the averaging conceals more than it reveals, and is not a good basis for modeling, because it does not consider the differences between land and water, and day and night.
I agree. You do not learn how something works by throwing much of your information away, or by treating a dynamic process as if it were static. You have to look from multiple points of view.
Walter Sobchak July 14, 2017 at 3:06 pm
Walter Sobchak July 14, 2017 at 3:13 pm
Good question, Walter. Here’s the problem. Say we want to know how much solar energy a certain area gets. To do that we have to account for the fact that part of the time it gets none, and part of the time it gets some.
The incoming solar is ~ 1360 W/m2. But that is on a flat plane perpendicular to the solar rays. We’re not interested in that. We need to know how much on average hits the surface of a sphere.
And since the area of a sphere is four times the area of the great circle of the sphere, to get that average we need to divide the 1360 by four.
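(The arithmetic: a sphere’s surface area 4*pi*r^2 is four times the pi*r^2 of the disc it presents to the sun, so 1361 / 4 ≈ 340 W/m^2; the chart’s 341.3 W/m^2 presumably reflects the older solar-constant value of about 1365 W/m^2.)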
I’d love to have hourly or even daily data … but it’s not available at present. Sadly, we always have to work with the data we have, not the data we wish we had …
Regards,
w.
PMOD solar data is reported daily.
I use it to calculate flat-surface energy every hour, for every station’s lat and alt, from (I think) the ’50s. I calc a relative value, then just multiply by TSI. PMOD starts in ’79, so I average the entire series and use that as well; that way I have both the average and the daily value if it exists, and can select which one to use.
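Something like this minimal sketch, perhaps (the solar-position formulas are standard textbook approximations; the function name and workflow are my guesses, not the actual code described above):

```python
import numpy as np

def hourly_insolation(lat_deg, day_of_year, tsi=1361.0):
    """Top-of-atmosphere flux on a flat surface for each hour of one day, W/m^2."""
    lat = np.radians(lat_deg)
    # Solar declination, simple sinusoidal approximation
    decl = np.radians(23.44) * np.sin(2 * np.pi * (day_of_year - 81) / 365.0)
    hours = np.arange(24)
    hour_angle = np.radians(15.0 * (hours - 12))   # 15 degrees per hour from noon
    cos_zen = (np.sin(lat) * np.sin(decl)
               + np.cos(lat) * np.cos(decl) * np.cos(hour_angle))
    rel = np.clip(cos_zen, 0.0, None)              # relative value, zero at night
    return rel * tsi                               # scale the relative value by TSI

print(hourly_insolation(45.0, 172).round(1))       # June solstice at 45 N
```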
Willis Eschenbach July 13, 2017 at 7:36 pm
With the backradiation being on average twice the solar radiation, why don’t we see backradiation panels instead of solar panels? Twice the energy, and available 24/7.
An infrared camera can convert longwave easily, so just increase the efficiency and our energy problems are solved.
“why don’t we see backradiation panels instead of solar panels?”
It can’t work. People wrongly say that down IR can’t warm the surface because it comes from a cooler place. It can and does. What it can’t do is do work, thermodynamically. A collector for down IR must be exposed to the source (sky), which is cooler. Net heat is necessarily lost. You can’t get usable energy that way.
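(In symbols: the net radiative flux into a collector at temperature T_coll facing a sky at T_sky is sigma*(T_sky^4 − T_coll^4), which is negative whenever T_coll > T_sky, so such a panel loses heat to the sky rather than harvesting energy from it.)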
“we’re extremely unlikely to ever double the atmospheric CO2 to eight hundred ppmv from the current value of about four hundred ppmv.”
Dear Willis: As usual, you have written a very thought provoking post.
Incidentally, I read a new article about fossil fuel and its effect on CO2. because it was linked by a story in the Wall Street Journal. The article is:
“Will We Ever Stop Using Fossil Fuels?” by Thomas Covert, Michael Greenstone, & Christopher R. Knittel in Journal of Economic Perspectives vol. 30, no. 1, Winter 2016 at 117-38. DOI: 10.1257/jep.30.1.117
https://www.aeaweb.org/articles?id=10.1257/jep.30.1.117
The downloads available at that URL include an Appendix which sets forth estimates of total fossil fuel resources and how much CO2 will be released by their combustion. Table 3 in the Appendix includes an estimate of the eventual concentration of CO2 in the atmosphere. If I am reading the Table correctly, they are estimating eventual concentrations of up to 2400 ppm, 6x the current level.
I know that this differs from your estimate dramatically. If you have a chance to look at it, I would appreciate your thoughts on it.
I just took a look at Figure 3, Walter. They estimate the total potential fossil fuel resource at ten times the total amount burnt since 1870 … not sure how they got there, but that seems way out of line in terms of size. Not buying it.
w.
I don’t always get the math right away but perhaps the derivative you use to calculate the slope
dT/dW = (W / (sigma epsilon))^(1/4) / (4 * W)
could be stated:
dT/dW = W / (4 sigma epsilon)
but if wrong happy to learn here
Mike, I’m not seeing it. Perhaps the simplest way we can check it is by substituting the actual numbers. Let’s assume that W = 390 (average surface radiation). Sigma is the Stefan-Boltzmann constant (5.67e-8) and epsilon is generally taken as 1.
The first equation gives us
(390 / 5.67e-8)^(1/4) / (4 * 390) = 0.18
This agrees with the values in the head post
On the other hand, the second equation gives us
390 / (4 * 5.67e-8) = 1,719,576,720
Regards,
w.
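For anyone following along, both formulas can also be checked against a finite difference (same assumptions as above: W = 390 W/m^2, epsilon = 1):

```python
SIGMA, EPS, W = 5.67e-8, 1.0, 390.0

def t_of_w(w):
    """Invert Stefan-Boltzmann: T = (W / (sigma * epsilon))^(1/4)."""
    return (w / (SIGMA * EPS)) ** 0.25

dW = 0.01
finite_diff = (t_of_w(W + dW) - t_of_w(W)) / dW   # numerical dT/dW
print(finite_diff)                                # ~0.185 K per W/m^2
print(t_of_w(W) / (4 * W))                        # head post formula, ~0.185
print(W / (4 * SIGMA * EPS))                      # proposed formula, ~1.7e9 (wrong units)
```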
Willis,
I realize this is a bit off topic, but in my quest to find a solar cycle in the atmosphere I wrote some software to take the binary RSS data of the lower stratosphere and average over the longitudes in each latitude band for each month. So I have a dataset that is decomposed by latitude and year, which gave me 472 months by 72 latitudes. I then computed the FFT for each latitude and displayed the spectrum. Theoretically I thought I should see some signal around 11 years in the Northern Hemisphere, but to my surprise I found a very strong signal in the Southern Hemisphere as well as a weaker one in the Northern Hemisphere. This only works on the actual temperature data and not the anomaly data. So my question is, in your extensive analysis a while back of solar signals, did you always use anomaly data or did you ever try your methodology on actual temperature data?
Latitude vs Year
https://photos.google.com/photo/AF1QipNfBeVwrtmsbs-hVgLqf23rimZv4cpUvABoV_C3
Latitude vs Spectrum
https://photos.google.com/photo/AF1QipMn5J7xk6GVX7M7hYmL5iut7uK905W7uCmQOOAH
~LT
Oops, Bad links
Latitude vs Year
https://photos.app.goo.gl/c3uI5EovWpn7o11J3
Latitude vs Spectrum
https://photos.app.goo.gl/HyPYFtJjgKmhC8Zy1
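For readers curious about the mechanics, the per-latitude spectrum LT describes can be sketched in a few lines (array shapes and names are assumptions from the description, not LT’s actual code):

```python
import numpy as np

months, nlat = 472, 72
rng = np.random.default_rng(0)
zonal = 250.0 + rng.normal(size=(months, nlat))   # stand-in for zonal-mean temps, K

zonal = zonal - zonal.mean(axis=0)                # remove each band's mean first
power = np.abs(np.fft.rfft(zonal, axis=0)) ** 2   # power spectrum per latitude band
freqs = np.fft.rfftfreq(months, d=1.0 / 12.0)     # cycles per year (monthly data)

# Inspect power near an 11-year cycle (~0.09 cycles/year) in every band
k = int(np.argmin(np.abs(freqs - 1.0 / 11.0)))
print(freqs[k], power[k, :].max())
```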
LT, if you’ve gotten a sunspot signal in the actual temperature data and not the anomaly data, you need to look at how you’ve calculated the anomaly.
The problem is that when you calculate an anomaly, you are taking out a regular annual cycle which doesn’t change over time. As a result, there SHOULD be no way that you are removing, e.g., an ~11-year sunspot cycle when you calculate an anomaly.
So I suspect the problem is in your anomaly calculations.
Finally, I’d be careful using the RSS data. Given the direction of their changes, those guys seem to be driven in part by climate politics rather than climate science …
Let me know what you find out.
w.
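Willis’s point is easy to demonstrate with synthetic data (everything below is illustrative): a fixed monthly climatology removes a stationary annual cycle exactly, while a ~11-year cycle passes through the anomaly step essentially untouched.

```python
import numpy as np

m = np.arange(472)
series = 280.0 + 5.0 * np.sin(2 * np.pi * m / 12.0)        # annual cycle, K
series += 0.3 * np.sin(2 * np.pi * m / 132.0)              # ~11-year cycle, K

clim = np.array([series[i::12].mean() for i in range(12)]) # monthly climatology
anom = series - clim[m % 12]                               # anomaly series

spec = np.abs(np.fft.rfft(anom))
freqs = np.fft.rfftfreq(len(m), d=1.0)                     # cycles per month
print(spec[np.argmin(np.abs(freqs - 1 / 12))])             # annual line: gone
print(spec[np.argmin(np.abs(freqs - 1 / 132))])            # 11-year line: survives
```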
LT, I forgot to add that if you are looking at 72 datasets you are almost guaranteed to find something that is significant at the 0.05 level … you need to look at the Bonferroni correction.
w.
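(For concreteness: with 72 latitude bands tested, the Bonferroni-adjusted threshold is 0.05 / 72 ≈ 0.0007, so an individual band’s peak has to be far stronger before it counts as significant.)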
Well, I used the two RSS datasets, one is anomaly and one is temperature in K, but it is the same set of code that does the analysis. I suppose I could attempt to compute the anomaly myself. Does UAH have a binary dataset that provides the global monthly data?
The data is actually 472 months with 72 latitudes and 144 longitudes, so for each month and for each latitude I summed the longitudes so that the resulting temperature dataset will allow you to see common latitudes and how they change with each month.
LT, I do something similar in one paper/blog I wrote. I look at the changing day-to-day temp as the length of sunlight changes, and compare the change in surface insolation with the change in temperature at the same location. I calculate this for the extratropics in latitude bands as degrees F per Whr/m^2 of insolation change. You can convert this to C/W/m^2 by dividing by 13.3.
https://micro6500blog.wordpress.com/2016/05/18/measuring-surface-climate-sensitivity/
This is based on NCDC’s/Air Force’s Summary of Days surface data set, and includes every station that has a set of geo-coordinates, is in the range specified, and has the minimum number of reports/year specified (360).
OK, so I have both anomaly and actual temperature working now, and there is no doubt that an anomaly dataset is not the most accurate way to analyze a time-variant signal. Essentially, when you remove the seasonal cycle you are imposing a waveform on the signal. The FFT can detect an embedded waveform in a signal, but the relative strength of the waveform can never be determined if you have applied a bias to the signal. Removing a seasonal cycle is also not a zero-phase process; therefore there will be a phase shift imparted to the data that can potentially cause cancellation of certain waveforms, which will not show up accurately in any type of cyclical analysis. It is my opinion that attempting to use anomaly data for anything other than looking at delta amplitudes is plagued with phase errors that will mask the validity of any process that involves correlations. There is indeed a significant signal in both the troposphere and the stratosphere with a cycle of 11.8 years, which happens to be the orbital period of Jupiter. I cannot say if it is a solar cycle, but I can tell you that it is not noise, and it is almost certainly a natural phenomenon that is perturbing Earth’s climate in some way.
~LT
Willis writes
From Roy, an early comment on the latest changes to the data
As Feynman explains with regard to the Millikan oil drop experiment
One might imagine that every one of our independent measurements for temperature was biased and only now are we slowly adjusting towards the correct warmer value.
Except that’s exactly NO