Guest Post by Willis Eschenbach
Over at Dr. Curry’s excellent website, she’s discussing the Red and Blue Team approach. If I ran the zoo and could re-examine the climate question, I’d want to look at what I see as the central misunderstanding in the current theory of climate.
This is the mistaken idea that changes in global temperature are a linear function of changes in the top-of-atmosphere (TOA) radiation balance (usually called “forcing”).
As evidence of the centrality of this misunderstanding, I offer the fact that the global surface temperature output of the climate models can be emulated to great accuracy as a lagged linear transformation of the forcings. This means that in the models, everything but the forcing cancels out and the temperature is a function of the forcings and very little else. In addition, the paper laying out those claimed mathematical underpinnings is one of the more highly-cited papers in the field.
To me, this idea that the hugely complex climate system has a secret control knob with a linear and predictable response is hugely improbable on the face of it. Complex natural systems have a whole host of internal feedbacks and mechanisms that make them act in unpredictable ways. I know of no complex natural system which has anything equivalent to that.
But that’s just one of the objections to the idea that temperature slavishly follows forcing. In my post called “The Cold Equations” I discussed the rickety mathematical underpinnings of this idea. And in “The TAO That Can Be Spoken” I showed that there are times when TOA forcing increases, but the temperature decreases.
Recently I’ve been looking at what the CERES data can tell us about the question of forcing and temperature. We can look at the relationship in a couple of ways, as a time series or a long-term average. I’ll look at both. Let me start by showing how the top-of-atmosphere (TOA) radiation imbalance varies over time. Figure 1 shows three things—the raw TOA forcing data, the seasonal component of the data, and the “residual”, what remains once we remove the seasonal component.

Figure 1. Time series, TOA radiative forcing. The top panel shows the CERES data. The middle panel shows the seasonal component, which is caused by the earth being different distances from the sun at different times of the year. The bottom panel shows the residual, what is left over after the seasonal component is subtracted from the data.
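For anyone who wants to replicate this kind of decomposition, here is a minimal sketch in Python. This is not the code used for the figures; it assumes you already have a monthly series as a plain array, and it removes the seasonal component simply by subtracting each calendar month's long-term mean:

```python
import numpy as np

def remove_seasonal(monthly, period=12):
    """Split a monthly series into a repeating seasonal component
    (the long-term mean for each calendar month) and a residual."""
    monthly = np.asarray(monthly, dtype=float)
    n = monthly.size
    # long-term mean of each calendar month across all years
    climatology = np.array([monthly[m::period].mean() for m in range(period)])
    seasonal = np.tile(climatology, n // period + 1)[:n]
    residual = monthly - seasonal
    return seasonal, residual

# toy example: 16 years of monthly data, an annual cycle plus a small trend
t = np.arange(192)
series = 10.0 * np.sin(2 * np.pi * t / 12) + 0.01 * t
seasonal, residual = remove_seasonal(series)
```

By construction, the seasonal and residual components add back up to the original data, which is the sanity check to run on any such decomposition.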
And here is the corresponding view of the surface temperature.

Figure 2. Time series, global average surface temperature. The top panel shows the data. The middle panel shows the seasonal component. The bottom panel shows the residual, what is left over after the seasonal component is subtracted from the data. Note the El Nino-related warming at the end of 2015.
Now, the question of interest involves the residuals. If there is a month with unusually high TOA radiation, does it correspond with a surface warming that month? For that, we can use a scatterplot of the residuals.

Figure 3. Scatterplot of TOA radiation anomaly (data minus seasonal) versus temperature anomaly (data minus seasonal). Monthly data, N = 192. P-value adjusted for autocorrelation.
From that scatterplot, we’d have to conclude that there’s little short-term correlation between months with excess forcing and months with high temperature.
Now, this doesn’t exhaust the possibilities. There could be a correlation with a time lag between cause and effect. For this, we need to look at the “cross-correlation”. This measures the correlation at a variety of lags. Since we are investigating the question of whether TOA forcing roolz or not, we need to look at the conditions where the temperature lags the TOA forcing (positive lags). Figure 4 shows the cross-correlation.

Figure 4. Cross-correlation, TOA forcing and temperature. Temperature lagging TOA is shown as positive. In no case are the correlations even approaching significance.
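A cross-correlation of this sort is straightforward to compute directly. The sketch below is a generic illustration on synthetic data (not the CERES series), where the second series is the first delayed by three months, so the correlation peaks at lag +3:

```python
import numpy as np

def cross_correlation(x, y, max_lag=12):
    """Pearson correlation of x with y at each lag.
    A positive lag means y lags (responds after) x."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    corr = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            a, b = x[:-lag], y[lag:]
        elif lag < 0:
            a, b = x[-lag:], y[:lag]
        else:
            a, b = x, y
        corr[lag] = np.corrcoef(a, b)[0, 1]
    return corr

# synthetic demonstration: y is x delayed by three steps
rng = np.random.default_rng(42)
x = rng.normal(size=300)
y = np.zeros(300)
y[3:] = x[:-3]
lags = cross_correlation(x, y, max_lag=6)
best = max(lags, key=lags.get)   # lag with the maximum correlation
```

With real data you would also need significance bounds adjusted for autocorrelation, which this sketch leaves out.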
OK, so on average there’s very little correlation between TOA forcing and temperature. There’s another way we can look at the question. This is the temporal trend of TOA forcing and temperature on a 1° latitude by 1° longitude gridcell basis. Figure 5 shows that result:

Figure 5. Correlation of TOA forcing and temperature anomalies, 1° latitude by 1° longitude gridcells. Seasonal components removed in all cases.
There are some interesting results there. First, correlation over the land is slightly positive, and over the ocean, it is slightly negative. Half the gridcells are in the range ±0.15, very poorly correlated. Nowhere is there a strong positive correlation. On the other hand, Antarctica is strongly negatively correlated. I have no idea why.
Now, I said at the outset that there were a couple of ways to look at this relationship between surface temperature and TOA radiative balance—how it evolves over time, and how it is reflected in long-term averages. Above we’ve looked at it over time, seeing in a variety of ways whether monthly or annual changes in one are reflected in the other. Now let’s look at the averages. First, here’s a map of the average TOA radiation imbalances.

Figure 6. Long-term average TOA net forcing. CERES data, Mar 2000 – Feb 2016
And here is the corresponding map for the temperature, from the same dataset.

Figure 7. Long-term average surface temperature. CERES data, Mar 2000 – Feb 2016
Clearly, in the long-term average we can see that there is a relationship between TOA imbalance and surface temperature. To investigate the relationship, Figure 8 shows a scatterplot of gridcell temperature versus gridcell TOA imbalance.

Figure 8. Scatterplot, temperature versus TOA radiation imbalance. Note that there are very few gridcells warmer than 30°C. N = 64,800 gridcells.
Whoa … can you say “non-linear”?
Obviously, the situation on the land is much more varied than over the ocean, due to differences in things like water availability and altitude. To view things more clearly, here’s a look at just the situation over the ocean.

Figure 9. As in Figure 8, but showing just the ocean. Note that almost none of the ocean is over 30°C. N = 43,350 gridcells.
Now, the interesting thing about Figure 9 is the red line. This line shows the variation in radiation we’d expect if we calculate the radiation using the standard Stefan-Boltzmann equation that relates temperature and radiation. (See end notes for the math details.) And as you can see, the Stefan-Boltzmann equation explains most of the variation in the ocean data.
So where does this leave us? It seems that short-term variations in TOA radiation are very poorly correlated with temperature. On the other hand, there is a long-term correlation. This long-term correlation is well-described by the Stefan-Boltzmann relationship, with the exception of the hot end of the scale. At the hot end, other mechanisms obviously come into play which are limiting the maximum ocean and land temperatures.
Figure 9 also indicates that other than the Stefan-Boltzmann relationship, the net feedback is about zero. This is what we would expect in a governed, thermally regulated system. In such a system, sometimes the feedback acts to warm the surface, and other times the feedback acts to cool the surface. Overall, we’d expect them to cancel out.
Is this relationship how we can expect the globe to respond to long-term changes in forcing? Unknown. However, if it is the case, it indicates that other things being equal (which they never are), a doubling of CO2 to 800 ppmv would warm the earth by about two-thirds of a degree …
However, there’s another under-appreciated factor. This is that we’re extremely unlikely to ever double the atmospheric CO2 to eight hundred ppmv from the current value of about four hundred ppmv. In a post called “Apocalypse Cancelled, Sorry, No Ticket Refunds” I discussed sixteen different supply-driven estimates of future CO2 levels over the 21st century. These peak value estimates ranged from 440 to 630 ppmv, with a median value of 530 ppmv … a long way from doubling.
So, IF in fact the net feedback is zero and the relationship between TOA forcing and surface temperature is thus governed by the Stefan-Boltzmann equation as Figure 9 indicates, the worst-case scenario of 630 ppmv would give us a temperature increase of a bit under half a degree …
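For the record, here is that back-of-envelope arithmetic in Python. Note that the CO2 forcing uses the common simplified expression 5.35 × ln(C/C0) W/m2, which is my own interpolation rather than anything from the CERES analysis above, and the slope is the Stefan-Boltzmann dT/dW evaluated at the 405 W/m2 oceanic mean from the end notes:

```python
import math

SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W/(m^2 K^4)
W_MEAN = 405.0      # area-weighted mean oceanic thermal radiation, W/m^2

# dT/dW = (W / sigma)^(1/4) / (4 W), about 0.18 K per W/m^2 at 405 W/m^2
slope = (W_MEAN / SIGMA) ** 0.25 / (4.0 * W_MEAN)

def warming(c_new, c_old=400.0):
    """Warming from a CO2 change, assuming zero net feedback and the
    simplified 5.35 * ln(ratio) forcing expression (an assumption here)."""
    forcing = 5.35 * math.log(c_new / c_old)   # W/m^2
    return forcing * slope

# warming(800) gives about two-thirds of a degree for a doubling;
# warming(630) gives a bit under half a degree for the worst-case peak.
```

Under those assumptions the two headline numbers in the post drop straight out of the arithmetic.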
And if I ran the Red Team, that’s what I’d be looking at.
Here, it’s after midnight and the fog has come in from the ocean. The redwood trees are half-visible in the bright moonglow. There’s no wind, and the fog is blanketing the sound. Normally there’s not much noise here in the forest, but tonight it’s sensory-deprivation quiet … what a world.
My best regards to everyone, there are always more questions than answers,
w.
PS—if you comment please QUOTE THE EXACT WORDS YOU ARE DISCUSSING, so we can all understand your subject.
THE MATH: The Stefan-Boltzmann equation is usually written as
W = sigma epsilon T^4
where W is the radiation, sigma is the Stefan-Boltzmann constant 5.67e-8, epsilon is emissivity (usually taken as 1) and T is temperature in kelvin.
Differentiating, we get
dT/dW = (W / (sigma epsilon))^(1/4) / (4 * W)
This is the equation used to calculate the area-weighted mean slope shown in Figure 9. The radiation imbalance was taken around the area-weighted mean oceanic thermal radiation of 405 W/m2.
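As a numerical check on the end-note formula, the analytic slope can be compared against a simple finite-difference estimate. This is a sketch, with emissivity taken as 1 as in the text:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
EPS = 1.0         # emissivity, taken as 1

def temperature(w):
    """Invert W = sigma * eps * T^4 for T in kelvin."""
    return (w / (SIGMA * EPS)) ** 0.25

def slope(w):
    """dT/dW = (W / (sigma * eps))^(1/4) / (4 * W), from the end note."""
    return (w / (SIGMA * EPS)) ** 0.25 / (4.0 * w)

w = 405.0   # area-weighted mean oceanic thermal radiation, W/m^2
# central finite difference over a 1 W/m^2 window
numeric = (temperature(w + 0.5) - temperature(w - 0.5)) / 1.0
# slope(405) is about 0.18 K per W/m^2, matching the finite difference
```

The agreement between the analytic derivative and the finite difference confirms the differentiation.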
Interesting that if you take the Earth’s average surface emission of 390 W/m2 (or 15.0C in temperature) and you increase that emission level by 1.0 W/m2, the temperature should increase by 0.18C according to the Stefan Boltzmann equations. That is exactly what Willis calculated from Ceres TOA radiation.
The issue in climate science is that the theory does not work from the surface but from the average emission level balancing incoming solar of 240 W/m2, or -18C. In addition, the theory is that for every 1.0 W/m2 from GHGs, you get feedbacks of another 2.0 W/m2.
Now the combination of these two changes can be calculated as 0.81C per 1.0 W/m2 of GHGs. And they just stick with that. Fundamentally flawed.
Go back to actually measuring what is happening, go back to the surface, which is what we are concerned with, and actually measure how the proposed feedbacks are actually operating. Use the Stefan-Boltzmann equations for the calculations, because these have been proven to work everywhere in the universe.
This is what real science would do and what Willis has done here.
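If I've understood the comment correctly, the 0.81C per W/m2 figure can be roughly reproduced like this. This is a sketch: the 240 W/m2 emission level and the 2 W/m2 of assumed feedback are the conventional values quoted in the comment above, not independent estimates:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def sb_slope(w):
    """dT/dW for W = sigma * T^4 (emissivity 1)."""
    return (w / SIGMA) ** 0.25 / (4.0 * w)

# at the 240 W/m^2 effective emission level (about 255 K, i.e. -18C)
per_watt = sb_slope(240.0)          # about 0.27 K per W/m^2
with_feedback = per_watt * 3.0      # 1 W/m^2 of GHG plus 2 W/m^2 of feedback
# with_feedback comes out at roughly 0.8 K per W/m^2 of GHG forcing
```

So the commonly quoted sensitivity is just the S-B slope at the emission level multiplied by an assumed feedback factor of three.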
On average each square meter gets 3,741 Whr/day (155.8 W/m^2), and the average temperature goes up 9.8C.
That’s 0.06C/W/m^2 measured at the surface with PMOD averaged TSI.
Based on the Air Force's surface data summary.
This is basically using the seasonal change in forcing in the hemispheres and the resulting change in temps.
The units are degrees F per Whr/day; to get C per W/m^2, divide by 13.3 (24 (day to hour) × 5/9 (F to C) ≈ 13.3).
” In addition, the theory is that for every 1.0 W/m2 from GHGs, you get feedbacks of another 2.0 W/m2.”
And the feedbacks are yet more GHGs, which in turn continue the feedback, which we know is pure sophistry. Never once in Earth’s 4 billion year history of climate has a runaway greenhouse effect occurred.
One of the basic mistakes in this whole CO2 hypothesis is that it is well mixed. Yes, in a closed container in a lab, it would be well mixed. But most CO2 is generated at the surface and is sunk at the surface. Does ANYONE actually believe that the percentage of CO2 at the surface is, uh, well mixed? If most of the CO2 concentration is at the surface, and it is, then why do the models use the other end of the atmosphere – the top?
CO2 IS well mixed — it doesn’t have to be perfectly mixed to be “well mixed”. The constant churning of the atmosphere through day/night heating and cooling, winds, and seasons sees to that. The concentrations at high altitudes in the troposphere are not that different from typical concentrations at the surface. This stuff is easy to measure (unlike many other climate-related variables).
Yes, a variation of a few ppmv around an average of 400 ppmv IS well mixed.
But at the surface, i.e. low altitudes, one can see variations of several hundred ppm.
See for example:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/giessen_2005-07-14.jpg
And
http://www.ferdinand-engelbeen.be/klimaat/klim_img/giessen_background.jpg
CO2 is anything but well mixed at low altitude and that is why the IPCC rejected the Beck reinstatement of historical chemical analysis of CO2
The Scripps Institute manages the atmospheric CO2 measurement program initiated by Charles Keeling in the 1950s. http://scrippsco2.ucsd.edu/ Not only are “continuous” measurements made at the Mauna Loa Observatory, but weekly flask samples are taken at 11 other stations at varying latitudes. They show that atmospheric CO2 isn’t perfectly mixed, but overall it’s not far off. The OCO2 satellite shows much the same thing once you get past the false colors used to emphasize small differences in CO2 concentration.
See my comment above.
At high altitude, CO2 is a well mixed gas (i.e., at about ±10 ppm around 395 ppm), but at low altitude CO2 is anything but well mixed, and there can be local variations (depending on season, windspeed, temperature, geography, topography, vegetation etc.) of several hundred ppm.
The scatterplots are good examples of what appear to be chaotic strange attractors. To me this means that the physical system (the climate system itself that gives the results of ocean surface and surface air temperatures) is chaotic [highly non-linear]. For examples of scatterplots that are attractors, see: Chaos & Climate – Part 4: An Attractive Idea or google “images strange attractors”.
It must be remembered that the Stefan-Boltzmann Law, when applied to a non-equilibrium state, is itself highly non-linear, and only its approximations and “linearized” versions produce the straight red line in the essay above. In the real world, where the atmosphere hits space, the red line of S-B is not straight by any means.
The red line is not straight; it shows the fourth-power curvature over the limited range of temperatures found in the record.
Goodman ==> Perhaps I should have said “narrow” or “regular” or some other word to relate to its visual.
S-B under non-equilibrium does not produce such a line … more probably something far more similar to the blue. That would be, if we could actually solve the thing at all — which I do not believe we can at this time.
And during clear calm skies at night, as relative humidity goes up and the amount of water vapor condensing goes up, this reduces the rate at which temperatures drop. You can see it in both temperature and net radiation in this chart.
And explained here
https://micro6500blog.wordpress.com/2016/12/01/observational-evidence-for-a-nonlinear-night-time-cooling-mechanism/
micro ==> Interesting….
Hansen => ” is itself highly non-linear,” T^4 is itself highly non-linear ! Perhaps you should have said what you meant.
If you read the link, they as well as myself show where it’s at least 2 different nonlinear rates. Besides under clear skies, low humidity the zenith temp will be 80-100F colder than the surface. And has the same difference at both cooling rates. You can even see net radiation drop in the middle of the night under clear skies.
I’m not sure I understand the following speculation:
Are you speculating about a possible relationship between temperature’s spatial response to the spatial variation in radiation imbalance and its temporal response to the temporal variation in forcing? (As I understand it, forcing is not the same as imbalance. Roughly, the forcing associated with a given CO2 concentration is the imbalance that would initially result from a sudden step increase to that concentration from equilibrium at a baseline, presumably pre-industrial concentration. Imbalance would decay in the fullness of time, but the attributed forcing would not.)
I ask because I wasn’t immediately able to see the logical relationship between the two quantities, and I didn’t want to ponder the question further if you meant something else.
Where there is water or ice on a surface, the temperature of that surface will approach the dew point temperature of the air measured near that surface (ocean, top of thunderclouds, moist soil, leaves). Those surfaces will radiate at that temperature. Air temperature is a measure of molecular collisions (wet and dry adiabats) that is never colder than the measured dew point. Thus, water vapor in the air is the primary temperature controller. Man made impervious surfaces do not contain water, thus, the heat island effect.
Yet another excellent critique of the climate models, but again on their agenda. Yes, it's a complex, non-linear, stochastic, and undersubscribed statistical model of a poorly modelled system, that uses decisions regarding which effects are relevant that the "scientists", AKA numerical modellers, try to prove, and how much gain to apply to them, rather like the economic models used to support or deny Brexit, world growth, etc. They then measure correlation and claim this proves some science, when it can only prove correlation, and can never prove any science; no control planet, etc.
As with weather and economic forecasts, it is not deterministic and proves no laws, it’s pseudo science. Further, as anyone familiar with Neural Nets and slack variables knows, the extrapolation of non linear data outside its known range in a noisy multivariate system is notoriously unreliable over any significant time, never mind the multi lifetime natural periodicities of climate change. But that’s not the problem I now focus on. What about plant denial?
Modellers' assumptions about plants seem to deny the data record of their effect on CO2 control. From 95% to <0.20% fairly sharply once the oceans formed, reduced to a level just enough to keep an efficient carbon cycle optimised, for them and us, and holding that very low figure through multiple mass extinctions and real catastrophes over billions of years, by increasing and decreasing the amount of plants and the rate of plant photosynthesis. To make the models work, dynamic response had to be discounted vs the evidence. When the plants grew and started reducing the CO2 we produce, they were said by modellers (I won't call these people scientists, because they deny the basic scientific method) to be an unexpected response that would be "overwhelmed" by our few hundred ppm of CO2, showing how blatantly basic controls were discounted by their models, and now again denying that plants will continue to grow wherever it suits them until the atmospheric CO2 levels drop. That's my hypothesis re CO2.
Here's Catt's Hypothesis re serious climate change: out of the oceans, by volcanicity, with extremes due to orbital eccentricity. Probably also accounts for smaller but still significant variations over 100s of years.
I suggest significant climate change has a more obvious and easy to quantify primary cause. This is oceanic magma release, greatest during the maximum gravitational variation extremes of Milankovitch cycles. Simply put, why not? This drags tectonic plates and the core around enough to rattle our thin crust enough to trigger interglacials from the longer term "relatively steady state" ice age.
Data? 30% gravitational variation pa, actual force 200 times the Moon's; the ocean floor is 7 km of basalt crust on a 12,000 km hot rock pudding that is permanently leaking 1,200 degree rock into the ocean, at separating plate junctions and hot spots like Hawaii, the ring of fire, etc. I suggest this direct ocean heating simply steps up by an order of magnitude or more every 100K years. Takes about 200,000 Mount Fujis of basalt to raise the oceans 12 degrees, on the back of my envelope. No modellers required, proven physics of if, then.
I have made a VERY rough guess at this. The amount of magma arriving into the oceans seems poorly documented. If it's 1×10^13 tonnes when unstressed (1,000 undersea Mount Fujis plus 40,000 km worth of 7 km deep crack filling with no overflow at ~2 cm pa tectonic separation), it would need to increase by 7,000 times to meet the 12 degree heat delivery of an interglacial event. A Milankovitch eccentricity caused volcanic forcing event. As a practical engineer and physicist who has seen how well simple process models work in Chemical Engineering at Imperial College, amongst other examples, I like the clear probability of hot rock delivered direct to the oceans as the warming effect that supports both ice age conditions and the short interglacials, and the significant events in between, a lot more than blaming an atmospheric gas at trace levels that created and maintained the stable atmospheric conditions for life in the carbon cycle, and is probably not guilty anyway.
nb: the 3.8×10^11 tonne basalt Mount Fuji has only been there for one ice age … so is a total of 200 Mount Fujis' worth of magma emitted into the oceans pa over 1,000 years likely, under these conditions of extreme gravitational stress? That'll do it. No forcing required, just basic heat transfer from our on-board nuclear reactor. Simples!
CONCLUSION: 1. Climate modellers are basically plant deniers. The plants control CO2, always have; ignoring natural responses at the levels we know have occurred in the past is simply establishing bogus hypotheses regarding CO2 that they then have to force to make their "models" correlate as promised. This is just a show trial of CO2 by a religious court. Science abuse.
Hardly the deterministic scientific method of trying to disprove your hypothesis: doubting but not testing the most obvious control of plants, assuming low gains for that proven response that don't increase enough as CO2 increases. Why not ask the model how much plant growth is required to control likely CO2 emissions? And this still doesn't prove more CO2 causes anything except more plant growth, or is more than a simple consequence of fires, volcanoes, etc. that is absorbed by rocks, plants, etc. and recycled by plate tectonics. Of course the bad news is that if the plants do it without us doing anything, there is no easy climate change money flowing into ineffective or regressive projects that pretend to solve the "climate change catastrophic disaster".
And, of course, it is more likely in a reasonably quiescent world that natural CO2 increases as a consequence of warming oceans that warm the atmosphere; hence the correlation, not a cause of any significance. The oceans are where the surface heat of the planet is, over 1,000 times more than the atmosphere, at 6×10^24 Joules per degree K or so. And there is a LOT more heat on the inside, being generated all the time and trying to get out. And succeeding all the time, at varying rates, through our very leaky selection of loosely linked crustal plates, especially the thin ocean plates, with very little net mass loss as it's all recycled every 200 million years back into the core.
2. There is at least one more likely and provable cause of warming, via the oceans that drive the atmosphere: one that fits the ice core evidence, that we can see happening and have documented, and that needs no "forcing". Leaks in the thin crust we live on release massive amounts of heat direct into the oceans, where it can be held and cause serious long term change to the atmosphere; not the reverse situation, where these "climate pseudo scientists'" heads are, chasing the government and renewable lobbyists' $$$$$.
I suggest we look under the oceans for the smoking gun that can do the climate change job as advertised: deliver 70×10^24 Joules to the oceans over 1,000 years or so to create a 12 degree rise for an interglacial, for example. I suggest the rather puny atmospheric climate cannot, certainly not due to human CO2 emissions. Real climate change, the kind that makes the trip to Europe a walk over Dogger Land and the Great Barrier Reef an interesting 300 foot rocky white ridge a short drive east from Cairns, is a consequence of greater and more powerful controls, primarily solar gravity and radiation variance. CO2 is not the cause; the atmosphere is simply an effect or consequence of the larger controls.
And solar radiation, while powerful, is usually in balance, and orbital eccentricity does not create significant effects in any way I have seen credibly proposed. It is interesting that most of the climate discussion around Milankovitch cycles considers the bizarre fringe effects of obliquity/precession plus eccentricity on the atmosphere, a low energy capacity sink, rather than the unbalanced gravitational effect on a serious heat source that can change ocean temperatures the 12 degrees required for an interglacial, which is what my approach is grounded in. See what Jupiter and its moons do to Io if you doubt the power of gravitational stress.
CO2 is innocent. It was the rocks what done it. The so-called scientists like Michael Mann have taken the easy money for supporting political agendas with actual science denial that frames CO2 for climate change using statistical models, not science, picking data that is in the noise of an interglacial on amplitude and period, when CO2 is most probably only a consequence of volcanoes and plants; also to boost their own egos as high priests of science become religion, with its own inquisition and distorted language. Their narrow presumptive focus on atmospheric effects of CO2 denies the larger real effects and obvious science facts: the established world of the natural carbon cycle and the interaction of our mostly hot soft rock with the oceans that drive the joined-up planetary systems. IMO. Rebuttals/critiques of my data and results, with others I can check, always welcome.
CEng, CPhys, MBA
There are probably a number of typos above, but that's all the time I have for now. The message is clear, I hope.
brianrlcatt July 13, 2017 at 6:57 am
Pretty close 😉
Latest provable large magmatic event is the Ontong Java one, possibly 100 million km^3 of magma erupting in the oceans. No surprise the deep oceans were ~18K warmer than today at the end of those eruptions, around 85 mya.
see http://www.sciencedirect.com/science/article/pii/S0012821X06002251
1 million km^3 magma carries enough energy to warm ALL ocean water 1K.
We need another eruption like the Ontong Java one to lift us out of the current ice age.
Climate modellers (not real scientists, academic statisticians) are so looking the wrong way to prove CO2 guilty on a bum rap, manipulating and withholding evidence like some inquisition court. Heads in the religious clouds when the action is under the oceans. This was not my idea, but it was badly presented by the person who first suggested the principle. This is close to a 1:1 fit of magma heat content with ocean temperature change; no forcing required, and CO2 follows as a consequence, not a cause. http://news.nationalgeographic.com/news/2013/09/130905-tamu-massif-shatsky-rise-largest-volcano-oceanography-science/
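The "1 million km^3 of magma warms all ocean water 1K" figure above is easy to sanity-check. The sketch below uses round numbers I have assumed (basalt density ~2,900 kg/m3, ~1,200 K of cooling at ~1,200 J/kg·K plus ~4×10^5 J/kg of latent heat, ocean mass ~1.4×10^21 kg at ~4,000 J/kg·K); none of these values come from the comment itself:

```python
MAGMA_VOLUME = 1.0e6 * 1.0e9   # 1 million km^3 expressed in m^3
RHO_BASALT = 2900.0            # kg/m^3 (assumed)
DELTA_T_MAGMA = 1200.0         # K of cooling from eruption to ocean temp (assumed)
CP_ROCK = 1200.0               # J/(kg K) (assumed)
LATENT_HEAT = 4.0e5            # J/kg released on crystallisation (assumed)
OCEAN_MASS = 1.4e21            # kg
CP_SEAWATER = 4000.0           # J/(kg K)

# total heat released by the magma as it cools and crystallises
energy = MAGMA_VOLUME * RHO_BASALT * (CP_ROCK * DELTA_T_MAGMA + LATENT_HEAT)
# spread that heat through the whole ocean
ocean_warming = energy / (OCEAN_MASS * CP_SEAWATER)
# ocean_warming comes out on the order of 1 K, consistent with the claim
```

Under those assumed values the order of magnitude checks out, though the assumptions themselves are the whole argument.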
Nice job!
Can the next analysis detect the ocean currents that distribute heat into the oceans before it can radiate?
I note that the objections that Nick Stokes raises here represent exactly the sort of thinking that makes this post by Eschenbach necessary.
Let’s take a reasonably successful, but elementary, engineering model of heat transfer: the lumped element model, which involves energy balance and rates of assumed transfer mechanisms. The equations that Stokes refers to, which I repeated here, represent just such a model:
This may indeed refer possibly to multiple time scales, but that is not sufficient for full representation of the problem, as all this refers to a ∆T applying to the system as a whole. If the Biot number for the system is very small, then this solution of homogeneous temperature works just dandy. The Biot number itself is a function of heat transfer mechanisms and scale size, and “smallness” is a function of temperature resolution among other things. If the Biot number is not small, the temperature distribution at any point during a transition from one equilibrium state to another is a complex function of time and space. In this case a mean temperature can always be calculated, but depending on how one monitors the problem (distribution of measuring instruments, scale effects, schedule, instrument resolution), one can arrive at different mean temperatures, and find that a mean value of any sort may have no pertinence to particular locations.
I have used this example for a long time to explain my skepticism about “mean air temperature” or “global mean temperature” (GMT), which in any situation may not be unique and may have little importance in a practical sense. I suppose the counter argument to what I have just explained is “yes, but it still is a useful monitor of system change”. Yet in view of the GMT being a non-unique and context-dependent entity with time dependence, this counter argument seems logically doubtful. And it doesn’t even begin to discuss the “corrections” made to observations to calculate a GMT in the first place.
There are many issues for a Red Team to calculate, but the most important are beyond these technical issues, and revolve around costs in relation to benefits, and what strategies are likely to be practical or whether such strategies are even needed.
“Let’s take a reasonably successful, but elementary, engineering model of heat transfer–the lumped element model, which involves energy balance and rates of assumed transfer mechanisms. The equations that Stokes refers to I repeated here, and represent just such a model:”
Yes. Schwartz is describing a lumped element model, with prospects of reasonable success. But engineers who use such models do not do so claiming that:
“[the] system has a secret control knob with a linear and predictable response”
And neither do climate scientists. Again, it comes back to Willis’ excellent advice
“QUOTE THE EXACT WORDS YOU ARE DISCUSSING”
As far as GMT, or more carefully, global mean surface temperature anomaly, is concerned, it is pretty much unique – that is, it doesn’t depend much on whichever valid set of measurements you use. It’s true that regions may differ from the average, as with many things we observe. But it is a consistent observable pattern. The analogy is the DJIA. It doesn’t predict individual stocks. Different parts of the economy may behave differently. But DJIA is still useful.
Impressive demonstration, Willis. Are climate scientist proponents of excessive global warming not using all the wonderful tools they put up above us?
I realize you are looking at the S-B/global T relationship and not imbalances in incoming and outgoing radiation. Enthalpy in melting ice at constant temperature and endothermic rapid greening of the planet including phytoplankton (a cooling effect) don’t reduce the fit of the Stefan-Boltzmann / global T noticeably, but the imbalance should be a measure of enthalpy changes, I’d imagine.
I recall Hansen using imbalance as ‘proof’ of serious warming. Perhaps both poles alternately freezing and thawing balance out, but the greening is a long term issue and must be part of the imbalance. Ferdinand Engelbeen, in a reply to a post of mine, suggested that changes to oxygen, which is recorded to ppmv accuracy, give a fair estimate of greening. Might your excellent facility with global scale analysis using satellite data be an approach to investigating the imbalance question as a measure of enthalpy in the system? Could the departure of the S-B to the warm side at the top be the cooling from greening?
All this is very good, but the National Academy of Sciences and the Royal Society have put this overview out as certain evidence of human-caused climate change.
http://nas-sites.org/climate-change/qanda.html#.WWZ4B_WcG1t
This statement from number 18 would seem to settle the matter even if one did not understand anything about climate, but has at least a little common sense. I would not walk on bridges built by engineers who made similar statements.
“Nevertheless, understanding (for example, of cloud dynamics, and of climate variations on centennial and decadal timescales and on regional-to-local spatial scales) remains incomplete. ”
They go on to say:
“Together, field and laboratory data and theoretical understanding are used to advance models of Earth’s climate system and to improve representation of key processes in them, especially those associated with clouds, aerosols, and transport of heat into the oceans. This is critical for accurately simulating climate change and associated changes in severe weather, especially at the regional and local scales important for policy decisions.”
Without admitting that they understand so little of it. So we can look forward to $Billions more spent for them to tweak their models, without finding the most basic errors in those models, because they refuse to admit the role of the “negative feedbacks” that have stabilized the climate, while focusing only on CO2 as the driver of changes.
Willis always makes it interesting, but has he really identified science’s real misunderstanding?
My research shows the temperature limits are limited by the range of TSI.
My research shows changes in SST are a simple linear lagged function of TSI forcings.
The effort to use TOA can be misleading. The main action of solar energy is upon the ocean, which then acts on the atmosphere. On any given day the air temperature within the troposphere is a response to the ocean surface temperature (which is a lagged response to TSI) and present day TOA. You will completely miss the lagged influence of former TSI when you don’t include it, an influence that is more powerful. It’s no wonder you claim there isn’t a lagged or linear response to TSI.
If it were so that 85% or so of the energy needed for warming the ocean did not come from the sun, where did that other 85% or so of the necessary energy come from? And why is it so widely believed that there is another, greater source of tangible heat than the sun, one that no one can identify, measure, or feel? We humans have a pretty good sense of the solar daily heating effect, so why can’t we feel a heat source 5X stronger than sunlight, day and night?
NO evidence exists for an energy source 5X more powerful than sunlight!
In my view the IPCC solar POV blue team is defending the indefensible, and has everyone chasing their tail looking for other forcings and feedbacks.
***
Willis has never argued for a real heat source 5X more energetic than sunlight – he couldn’t find one if he ever tried, because there isn’t one. Isn’t that interesting?
Two thoughts:
Net TOA radiative forcing trend 0.08 ± 0.24 W/m2; with a signal-to-noise ratio of 1 to 3, stating a “trend” is a stretch.
CERES Surface Temperature ending in March 2016 right before El Nino warming ends and cooling begins, clearly cherry-picked.
H.D. Hoese and Michael Moon
Yep. The biggest problems in climate science are:
It’s a hypothetical field of investigation i.e. theoretical science, yet it has crossed ethical boundaries into being taken as fact and acted upon.
For example, not one of the temperature anomaly data sets is actual data. They all have uncertainties of at least +/- 0.5K or more. We don’t even know if temperatures have decreased from 1910 or remained flat because it’s all in the noise.
Same for other climate related data sets. We don’t have the resolution.
Hence the data uncertainty and how the data was measured, does not support the conclusions in any real-world applications (if that is the way to say it). Results and conclusions are only consistent (or maybe not) against the constructed artifice around AGW.
Or in simple terms: within the set of assumptions they use, climate science has a certain amount of consistency. But very little of it stands up to experimentation.
When people call AGW a scam, it shouldn’t be the science. It’s the application of shoddy data as if it were platinum-coated and verified.
Advocates of taking action are similar to a hypothetical set of people who would lobby, say New York or London, to force every home and business owner to fit special anti-slip tiles on their roof, costing in the thousands of dollars or pounds, just so that the Health and Safety risk to Santa Claus and his reindeer would be minimised by about 10%.
Just to emphasise this:
From CERES itself, the rms error for LW TOA is 2.5 W/m2.
CERES DQ Summary for EBAF_TOA Level 3B – net flux has been energy balanced
Hypotheticals are not just a problem in climate science. This paper, even quoting from Pielke’s book, takes marine science types to task over the problem with advocacy. Cowan is a good biologist; I might argue a couple of points, but what he examines has led, among others worse, to Google Earth posting fish skeletons all over the world (click Oceans).
https://benthamopen.com/ABSTRACT/TOFISHSJ-2-87
Michael Moon July 13, 2017 at 8:24 am
To the contrary, Michael. I add a complete year (12 months) of new data to the existing CERES data as soon as it becomes available. The data I used ends at the end of the last complete year in the CERES record. As soon as the complete data for the 2016 year becomes available I will use it.
In other words, your claim of cherry picking is just a reflection of your preconceptions, unburdened by reality. Please go project them on someone else. I have always used the full amount of CERES data available and I will continue to do so.
w.
Willis,
I did not say or even imply that you cherry-picked. The CERES bosses did, as they get numbers from their satellite every day. They chose to show the years that gave the largest warming trend, not you.
Willis Eschenbach, thank you for another insightful and informative essay.
Hi Willis, very interesting stuff.
I’m not sure how you manage to see this as confirmation of “a governed, thermally regulated system.” Maybe because you do not define what you mean by that term. But knowing your past posts along those lines, you propose a governor like a switch on an AC unit, which clamps the max temperature and turns on the AC.
What your Figure 9 shows is a slightly non-linear feedback: the Planck feedback based on the T^4 S-B relationship. This could be reasonably well approximated by a straight linear negative feedback over the limited range of temperatures in the dataset.
You may cite the flatter section at the top of the temperature range as evidence of a much stronger negative feedback at the top end of the scale. This is presumably the tropics; one of your colour-coded graphs, with latitude as the colour-coded variable, would confirm this. If you can establish that it is flat you can claim a governor; I would suggest a much stronger negative feedback is a better description.
That graph, if correct, seems to be formal observational proof that the water vapour feedback is NOT doubling CO2 forcing ( over water at least and roughly over a good proportion of land. ).
What does need explaining is the orthogonal section of the land data where a less negative TOA imbalance ( ie more retained heat ) is leading to a drop in surface temperature. That is counter intuitive on the face of it though there is little scatter and a very clear, linear relationship.
The first thing to establish is whether this is a particular geographical region. It probably is.
Best regards.
Ah, causality the wrong way around: a drop in temperature leading to less outgoing IR, hence the TOA change. The slopes are close to perpendicular, which means the gradients are the reciprocal of each other.
It would be enlightening to determine what this division is: night/day ; summer/winter or geographical.
Sea seems a lot simpler. This is probably another reason why simply averaging temperatures of land and sea is physically invalid, as I pointed out on Judith’s blog last year.
https://judithcurry.com/2016/02/10/are-land-sea-temperature-averages-meaningful/
Regarding “a doubling of CO2 to 800 ppmv would warm the earth by about two-thirds of a degree”: Also said was “Figure 9 also indicates that other than the Stefan-Boltzmann relationship, the net feedback is about zero”. The zero feedback climate sensitivity figure is 1.1 degree C per 2xCO2, unless one provides a cite for something lower.
OK, I just looked at the math at the end of the article that mentions Figure 9 and the numbers in Figure 9. The math does not consider that as the surface warms, so does the level of the atmosphere that downwelling IR comes from. So that if the surface warms from 290 to 291 K, the amount of radiation leaving it increases from 401 to 406.6 W/m^2. (Numbers chosen because 405 W/m^2 was mentioned for sea surface.) So, radiation from the surface increases by 5.6 W/m^2 from 1 degree K of warming. Dividing 3.7 W/m^2 per 2xCO2 by that does indeed result in a figure of .66 degree K per 2xCO2. But the zero feedback figure is higher, because some of that 5.6 W/m^2 is returned to the surface by increase of downwelling IR from greenhouse gases in the atmosphere. That is not counted as a feedback, but part of the explanation of temperature change from change of CO2 due to radiative transfer alone.
At this rate, the .67 degree C per 2xCO2 mentioned in Figure 9 is not essentially zero feedback, but indicative of negative feedback. I figure about 2 W/m^2-K negative feedback is indicated, using 1.1 degree per 2xCO2 as the figure with no feedbacks due to albedo, lapse rate effects or change of water vapor, etc.
I just noticed something else about Figure 9: TOA radiation imbalance seems to roughly match the difference between surface outgoing radiation at the surface temperature in question and the outgoing surface radiation at 291 K. This seems to mean that everywhere in the world’s oceans has half its radiation imbalance being used to make the temperature of each place in the oceans different from 291 K, and the other half causing heating/cooling advected to somewhere else in the world. (I hope I got this right.) So if a change equivalent to a 2x change of CO2 has half of it used to change the temperature of that location by .67 degree C and the other half used to change the temperature of elsewhere in the world, I think that indicates global climate sensitivity of 1.34 degree C per 2x CO2.
I’m not confident about the half-and-half part that I said above; the amounts may be different. That means global climate sensitivity may be other than 1.34, but more than .67 degrees C per 2xCO2.
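The Stefan-Boltzmann arithmetic in the comment above (290 K to 291 K, 401 to 406.6 W/m^2, and 3.7/5.6 giving roughly two-thirds of a degree per doubling) can be checked with a few lines of Python. This is only a verification sketch of that back-of-envelope calculation, not anything from the post itself:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def sb_flux(t_kelvin):
    """Blackbody emission per the Stefan-Boltzmann law."""
    return SIGMA * t_kelvin ** 4

f290 = sb_flux(290.0)   # ~401 W/m^2, matching the comment
f291 = sb_flux(291.0)   # ~406.6 W/m^2
dF = f291 - f290        # ~5.6 W/m^2 of extra emission per 1 K of warming

# Dividing the canonical 3.7 W/m^2 per doubling of CO2 by that increment
# reproduces the roughly two-thirds-of-a-degree figure discussed above.
sensitivity = 3.7 / dF
print(f290, f291, sensitivity)
```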
The temperature is definitely not linear to forcing. The way forcing is defined by the IPCC is somewhat ambiguous and this leads to the error. If Pi is the instantaneous power entering the planet and Po is the power leaving, their difference is considered ‘forcing’. This can be expressed as the equation, Pi = Po + dE/dt, where E is the energy stored by the planet and dE/dt is the energy flux in and out of thermal store of the planet which in the steady state becomes 0 when Pi == Po.
While the relationship between the energy stored, E, and the temperature of the matter storing E is linear (1 calorie increases the temperature of 1 gm of water by 1 C), this doesn’t account for the fact that E is continually decreasing owing to surface emissions, thus dE/dt is not linear in dT/dt. So forcing would be linear in temperature if and only if the matter whose temperature we care about (the ‘surface’) were not also radiating energy into space. The consensus completely disregards the FACT that Po is proportional to T^4 (the satellite data supporting this is absolutely unambiguous), but once this is acknowledged, the high sensitivity they claim becomes absolutely impossible.
Note the ambiguity in the definition of forcing. 1 W/m^2 entering from the top is equivalent to an extra 1 W/m^2 being absorbed by the atmosphere. The 1 W/m^2 entering from the top is all received by the surface while the 1 W/m^2 entering from the bottom is split up so about half exits into space and half is returned to the surface. The distribution of energy absorbed by the atmosphere owing to its emission area being twice the area over which energy is absorbed seems to be widely denied by main stream climate science and this alone represents a factor of 2 error.
BTW, when you look at the math and the energy balance, if more than half of what the atmosphere absorbs is returned to the surface, the resulting sensitivity is reduced, not increased or the atmosphere must be absorbing far less than expected.
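The T^4 nonlinearity described in the comments above can be illustrated numerically: because emission goes as the fourth power of temperature, equal increments of absorbed power produce successively smaller temperature increments. A minimal sketch (mine, with arbitrary illustrative flux values, not figures from the post):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def temp_for_flux(p):
    """Equilibrium temperature of a blackbody surface emitting p W/m^2."""
    return (p / SIGMA) ** 0.25

# Step the emitted power up in equal 10 W/m^2 increments:
fluxes = [390.0, 400.0, 410.0]
temps = [temp_for_flux(p) for p in fluxes]
deltas = [t2 - t1 for t1, t2 in zip(temps, temps[1:])]

# Each equal forcing step warms slightly less than the previous one,
# i.e. temperature is not a linear function of forcing.
print(deltas)
```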
Willis
My compliments on very informative graphs and analysis of the data.
Re: “Figure 5. Correlation of TOA forcing and temperature anomalies”
You note: “Antarctica is strongly negatively correlated.” Note a similar effect over Greenland.
Re: “Figure 8. Scatterplot, temperature versus TOA radiation imbalance”
The lower-left land (red) data seem to show a strong anti-Stefan-Boltzmann correlation over the temperature range from -20 deg C to -60 deg C. That appears to be over the polar regions, including Antarctica and Greenland.
Preliminary hypothesis: variations in albedo from changing cloud cover over polar regions might cause this anti-Stefan-Boltzmann correlation between temperature and TOA forcing.
Is there a way to distinguish such albedo variations due to changing cloud cover?
Such cloud and albedo variations might vary with galactic cosmic rays and thus with solar cycles per Svensmark’s model and Forbush evidence.
Best wishes on your explorations
Surely albedo from ice and snow in the polar regions is the prime reason for outgoing radiation ‘violating’ the S-B relation. B
That’s certainly a good bet, Gary. Another possible contribution MIGHT be that CERES instruments seem (mostly?) to have been put on satellites in sun-synchronous orbits. That has a lot of advantages, but it means the satellites never go closer than about 8 degrees of latitude to the poles. And there are other issues, like no solar input for much of the year and low-angle illumination the rest of the year. I’m no longer as smart as I once was and can’t begin to guess the impact of those things.
It is something called temperature inversion and radiative cooling. From a discussion at Science of Doom, about temperature inversion: “The intensity maximum in the CO2 band above Antarctica has been observed in satellite spectra [Thomas and Stamnes, 1999, Figure 1.2c], but its implication for the climate has not been discussed so far.”
“However, if the surface is colder than the atmosphere, the sign of the second term in equation (1) is negative. Consequently, the system loses more energy to space due to the presence of greenhouse gases.”
“This implies that increasing CO2 causes the emission maximum in the TOA spectra to increase slightly, which instantaneously enhances the LW cooling in this region, strengthening the cooling of the planet.”
“This observation is consistent with the finding that in the interior of the Antarctic continent the surface is often colder than the stratosphere; therefore, the emission from the stratospheric CO2 is higher than the emission from the surface.”
And even over Antarctica climate models have systematic bias: “This suggests that current GCMs tend to overestimate the surface temperature at South Pole, due to their difficulties in describing the strong temperature inversion in the boundary layer. Therefore, GCMs might underestimate a cooling effect from increased CO2, due to a bias in the surface temperature.”
So what about the inversion over the Greenland plateau, then?
Citations from: How increasing CO2 leads to an increased negative greenhouse effect in Antarctica. Authors Holger Schmithüsen, Justus Notholt, Gert König-Langlo, Peter Lemke, Thomas Jung, 2015. http://onlinelibrary.wiley.com/doi/10.1002/2015GL066749/full
Other maps with blue colour over Antarctica and Greenland.
https://scienceofdoom.com/2017/02/17/impacts-vii-sea-level-2-uncertainty/
Great work, Willis!
@Nick “forcings are not inputs but diagnostics”
Say what?
CO2 is a “forcing”, therefore it is a diagnostic as well, and not an input to the system. Atmospheric water is not a forcing (according to the models), but a feedback only. One hesitates to guess the label for feedback only, metadiagnostic?
It seems we are in grave danger of losing all the actual inputs…
“Say what?”
From the post
“This is the mistaken idea that changes in global temperature are a linear function of changes in the top-of-atmosphere (TOA) radiation balance (usually called “forcing”).”
Those “forcings” in W/m2 are not inputs to GCMs. They are deduced. Yes, CO2 etc. are inputs.
What do you mean, “deduced”? Volcanic forcing as scaled AOD is an input. The basic radiative effect of CO2 derived from atmospheric concentration, and ASSUMED forcing from projected emissions estimates, are inputs.
The only thing which is deduced from models is the overall sensitivity, and that is pre-loaded by ASSUMPTIONS about things like cloud “amount”, constancy of relative humidity, the scaling applied to volcanic forcing, etc.
“Volcanic forcing as scaled AOD is an input. The basic radiative effect of CO2 derived from atmospheric concentration, and ASSUMED forcing from projected emissions estimates, are inputs.”
Evidence? It is generally not true. Treatment of volcanoes is variable – often omitted entirely in future years. The radiative effect of CO2 is not an input – GCMs don’t work that way anyway. What is input is either gas concentration or emissions of gas. The task of figuring the net CO2 radiative forcing from that is a by-product of the whole GCM process.
OK, you meant calculated, not deduced. GCMs don’t do deductions; that would require AI. I thought you meant deduced from model output, as is done to get CS. Just a confusion of wording.
Scaling of volcanic forcing is a fiddle factor used to tweak models to reproduce the recent past climate. The volcanic forcing calculated in Lacis et al. from basic physics and El Chichon data in 1992 was 30. This got reduced to 21 by Hansen et al. (same group at GISS, different lead author).
The motivation for lowering the volcanic forcing was to reconcile model output. This is not science based; it is fudging. They ended up with models which are too sensitive to all radiative forcings, which balanced out reasonably well when both volcanoes and CO2 were present. There has been negligible AOD since around 1995, and the models run too hot.
So by diagnostic you mean tuning set point.
Agreed. The logic of what Nick is saying escapes me. Maybe there is something we are missing here. But it seems like Willis has shown that there is a model with a lot of irrelevant detailed variables in it, but when you come down to it, only one input is needed to duplicate its output.
So then the reply is: the value of this variable is not an input to the model; it’s a result of some model assumptions. So what? Willis’ point still stands. You have a very complicated model apparently taking account of lots of different factors, but when you come down to it none of them matter, because if you just remove them all, you get the same results by using the one variable.
I don’t get it. Nick is a bright and well informed guy, so please, explain why this is wrong.
Basically this post comes down to an assertion about a “central misunderstanding” of climate science. Ideally, the way to find out about that central misunderstanding would be to quote the actual words of someone expressing it. We never get that. Instead there is an indirect argument that scientists must believe it because GCM surface temperatures can generally be deduced from published TOA forcings. My point is that that argument fails if the forcings were actually deduced from the GCM output (or from a late stage in GCM processing).
Wasn’t it Gavin Schmidt who said they ensemble model outputs to determine the models’ overall internal response to forcings? Or am I mis-remembering?
The criticism is valid, Willis should have followed his own golden rule and quoted something.
However:
This is the problem: they are not “deduced”, they are INDUCED. Model parameters are tweaked to get the desired results. There are dozens of poorly defined parameters that can be freely changed within quite a large range. I’m talking primarily about volcanic forcing, cloud amount, and the questionable assumption that relative humidity is constant.
Through many iterations these fudge factors are juggled to come up with a combination which gets fairly close to reproducing the 1960-1995 climate record. They consistently ignore the fact that it does not reproduce the early-20th-century warming.
This is an ill-conditioned problem, with far too many poorly constrained variables; a combination which fits (a limited portion of) the record and fails outside it is very likely NOT the right combination.
So your “if” is not satisfied. The forcings are not deduced; they are induced.
“Figure 2. Time series, global average surface temperature. The top panel shows the data. The middle panel shows the seasonal component. The bottom panel shows the residual, what is left over after the seasonal component is subtracted from the data. Note the El Nino-related warming at the end of 2015”
The data/labels for “temperature” and “seasonal component” appear to be reversed. The seasonal component appears to be slightly larger than the temperature.
I don’t get why, when discussing climate models, there is always much talk of Stefan-Boltzmann but nothing about the combined gas laws. It seems the notion of a “surface temperature” for Earth is a difficult simplification to make, with the difficulties glossed over. Even when abstracting out a “top of atmosphere” idea, things seem ill defined. Maybe I just don’t get the assumptions.
BTW, did you read the small print on climate forcing? All would-be climate forcers please wait in line for 6,500 years, the normal lag between insolation forcing and the resultant change in the climate, as shown (recently by Javier) in this relationship between Milankovitch obliquity forcing and 6,500-year-lagged temperature:
Nick Stokes July 13, 2017 at 10:05 am
First, this is not generally true. Take for example the volcanic forcing. The models didn’t make that up. It is calculated from the physics of the volcanic ejecta and the measured time that the ash and sulfates remained in the atmosphere. There’s a good description here:
Note that they do NOT say “forcing datasets OUTPUT BY the GISS global climate models”. They say “forcing datasets USED by the GISS global climate models”. On my planet that means they are inputs, not outputs.

There is also this:
Lots of those are observation based.
In short, forcings are indeed described by both GISS and Miller as INPUTS to the climate model, not outputs.
However, suppose that you are right, and that the forcing is the result of a climate model and not an input. You say above that they are calculated from the model outputs.
But if that is the case, my results mean the same thing as your claim—FORCING AND TEMPERATURE IN THE MODELS ARE LINEARLY RELATED.
So in fact, as others have pointed out above, you’re making my argument for me.
w.
Also, Nick, you’ve asked why I call the equation a central climate paradigm. In addition to the evidence from the models, there are a hundred and fifty peer-reviewed papers citing the Schwartz paper linked to in the head post here …
w.
Willis,
My request is that you quote the words. What did Schwartz actually say that makes it a paradigm? What did the citing papers actually say? Schwartz’s paper was quite controversial.
Nick Stokes July 13, 2017 at 7:44 pm
Nick, what I’d said was that the idea that changes in temperature are a constant “lambda” times the changes in forcing is a central paradigm of current climate science. The constant “lambda” is usually called the “climate sensitivity”, and is the subject of endless discussion in the climosphere.
Your claim that the idea of climate sensitivity is NOT central to our current climate paradigm doesn’t pass the laugh test. And I’m not going to quote what dozens and dozens of people including yourself have said about climate sensitivity. That’s dumb as a bag of ball bearings. Google “climate sensitivity” if you’re unfamiliar with the term.
w.
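For readers following this exchange: the paradigm under dispute is the one-line relation ΔT = λ·ΔF, where λ is the climate sensitivity parameter. A toy sketch of just that relation (the λ value here is purely illustrative and not taken from the post or any model):

```python
def delta_t(delta_f, lam):
    """The lagged-linear paradigm in one line: dT = lambda * dF."""
    return lam * delta_f

F_2XCO2 = 3.7   # canonical forcing for a doubling of CO2, W/m^2
LAM = 0.8       # illustrative sensitivity parameter, K/(W/m^2)

# Under this assumed lambda, a doubling of CO2 maps to about 3 K:
print(delta_t(F_2XCO2, LAM))
```

The whole dispute is over whether any single constant λ like this can capture the behaviour of the real climate system.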
Willis, while I agree that volcanic forcing is an input (AOD data), it is worse than that. The scaling is one of the principal fudge factors of the models. It USED TO BE calculated from basic physics around 1990, but the GISS team abandoned that idea in favour of tweaking it to reconcile model output with the climate record around Y2K.
Lacis, Hansen & Sato found a scaling of 30 using proper science methods. It is now typically 21 per Hansen et al 2002. That is one friggin enormous frig factor.
I quote the relevant papers in my article at Judith’s C.Etc. ;
https://judithcurry.com/2015/02/06/on-determination-of-tropical-feedbacks/
relevant refs are 2,3,and 4 cited at the end of the article. Search “Lacis” in the article for relevant sections.
See my earlier comments on this. They simply chose a combination of fudge factors which produces output that fits a limited portion of the climate record and fails outside that period. It is pretty clear that they have not got the right combination of fiddle factors.
From Hansen et al 2002 [4] ( Emphasis added. )
my bold.
Hansen is quite clear about all this in his published work. Shame no one takes any notice.
Willis,
I really like your writing style. You are consistently clear and to the point. Thanks for your hard work.
Thanks for your kind words, bsmith. My intention is to write for the scientifically interested lay person.
w.
“Here, it’s after midnight and the fog has come in from the ocean. The redwood trees are half-visible in the bright moonglow. There’s no wind, and the fog is blanketing the sound.”
I expected a discussion of the importance of understanding cloud formation on temperature to follow this paragraph. Alas, they are only concluding remarks.
This article and the comments are interesting, but as an outsider, I am not interested in getting into the weeds so much. I wonder more about the elephants in the room. Is classical physics the right tool for modeling long-term climate? Will the current modeling techniques ever produce useful information for policymakers?
For some physicists, the answer is no. Dr. Rosenbaum at Caltech posited that nature cannot be modeled with classical physics but theoretically might be modeled with quantum physics. Last year, CERN CLOUD experiments produced data on cloud formation under tropospheric conditions. CERN reports “that global model simulations have not been directly based on experimental data” and that “…the multicomponent inorganic and organic chemical system is highly complex and is likely to be impossible to adequately represent in classical nucleation theories…” They conclude that model calculations should be replaced with laboratory measurements (2 DECEMBER 2016 • VOL 354 ISSUE 6316).
Simple question: are the CERN CLOUD experiments relevant to a discussion of “Temperature and Forcing”?