Global Energy Balances … Except When It Doesn’t

Guest Post by Willis Eschenbach.

I came across an interesting 2014 paper called The energy balance over land and oceans: an assessment based on direct observations and CMIP5 climate models. In it, they make a number of comparisons between observational data and 43 climate models regarding the large-scale energy flows of the planet. Here’s a typical graphic:

Figure 1. ORIGINAL CAPTION: “Fig. 7 Average biases (model—observations) in downward solar radiation at Earth’s surface calculated in 43 CMIP5 models at 760 sites from GEBA. Units Wm−2”. “CMIP5” is the “Coupled Model Intercomparison Project Phase 5”, the fifth iteration of a project which compares the various models and how well they perform.

Now, what this is showing is how far the forty-three models are from the actual observations of the amount of solar energy that hits the surface. Which observations are they comparing to? In this case, it’s the observations stored in the “Global Energy Balance Archive” (GEBA). Per the paper:

Observational constraints for surface fluxes primarily stem from two databases for worldwide measurements of radiative fluxes at the Earth surface, the global energy balance archive (GEBA) and the database of the Baseline Surface Radiation Network (BSRN).

GEBA, maintained at ETH Zurich, is a database for the worldwide measured energy fluxes at the Earth’s surface and currently contains 2500 stations with 450'000 monthly mean values of various surface energy balance components. By far the most widely measured quantity is the solar radiation incident at the Earth’s surface, with many of the records extending back to the 1950s, 1960s or 1970s. This quantity is also known as global radiation, and is referred to here as downward solar radiation. Gilgen et al. (1998) estimated the relative random error (root mean square error/mean) of the downward solar radiation values at 5 % for the monthly means and 2 % for yearly means.

So downwelling solar radiation at the surface is very well measured at a number of sites over decades. And surprisingly, or perhaps unsurprisingly given their overall poor performance, the climate models do a really, really bad job of emulating even this most basic of variables—how much sunshine hits the surface.

Now, bear in mind that for these models to be even remotely valid, the total energy entering the system must balance the energy leaving the system. And if the computer models find a small imbalance between energy arriving and leaving, say half a watt per square metre or so, they claim that this is due to increasing “net forcing, including CO2 and other GHGs” and it is going to slowly heat up the earth over the next century.

So their predictions of an impending Thermageddon are based on half a watt or a watt of imbalance in global incoming and outgoing energy … but even after years of refinement, they still can’t get downwelling sunlight at the surface even roughly correct. The average error at the surface is seven watts per square metre, and despite that, they want you to believe that they can calculate the energy balance, which includes dozens of other energy flows, to the nearest half a watt per square metre?

Really?

Now, I wrote my first computer program, laboriously typed into Hollerith cards, in 1963. And after more than half a century of swatting computer bugs, I’ve learned a few things.

One thing I learned is the mystic power that computers have over people’s minds. Here’s what I mean by “mystic power”—if you take any old load of rubbish and run it through a computer, when it comes out the other end, there will be lots of folks who will believe it is absolutely true.

For example, if I were to tell you “I say that in the year 2100 temperatures will average two degrees warmer than today”, people would just point and laugh … and rightly so. All I have to back it up are my assumptions, claims, prejudices, and scientific (mis)understandings. Anybody who tells you they know what the average temperature will be in eighty years is blowing smoke up your astral projection. Nobody can tell you with any degree of certainty what the average temperature will be in two years, so how can they know what the temperature will be in eighty years?

But when someone says “Our latest computer model, which contains over a hundred thousand lines of code and requires a supercomputer to run it, says that in the year 2100 temperatures will be two degrees warmer than today”, people scrape and bow and make public policy based on what is nothing more than the physical manifestation of the programmers’ same assumptions, claims, prejudices, and scientific (mis)understandings made solid.

And how do we know that is a fact, rather than just a claim that I’m making based on a half-century of experience programming computers?

Because despite the hundred thousand lines of code, and despite the supercomputers, and despite the inflated claims, the computer models can’t even calculate how much sunshine hits the surface … and yet people still believe them.


Here, after a week of rain, we had sunshine today and we’re looking at a week more. And if you think the models are bad at figuring out the sunshine, you really don’t want to know how poorly they do regarding the rain …

My best to everyone,

w.

Nick Schroeder
January 24, 2019 6:16 pm

So what, exactly, do they presume to measure?

Emissivity & the Heat Balance

Emissivity is defined as the ratio of radiative heat leaving a surface to the theoretical maximum, or BB radiation, at the surface temperature. The heat balance defines what enters and leaves a system, i.e.
Incoming = outgoing, W/m^2 = radiative + conductive + convective + latent

Emissivity = radiative / total W/m^2 = radiative / (radiative + conductive + convective + latent)

In a vacuum (conductive + convective + latent) = 0 and emissivity equals 1.0.

In open air full of molecules other transfer modes reduce radiation’s share and emissivity, e.g.:
conduction = 15%, convection = 35%, latent = 30%, radiation & emissivity = 20%

The Instruments & Measurements

But wait, you say, upwelling (& downwelling) LWIR power flux is actually measured.

Well, no it’s not.

IR instruments, e.g. pyrgeometers, radiometers, etc. don’t directly measure power flux. They measure a relative temperature compared to heated/chilled/calibration/reference thermistors or thermopiles and INFER a power flux using that comparative temperature and ASSUMING an emissivity of 1.0. The Apogee instrument instruction book actually warns the owner/operator about this potential error, noting that ground/surface emissivity can be less than 1.0.

That this warning went unheeded explains why SURFRAD upwelling LWIR with an assumed and uncorrected emissivity of 1.0 measures TWICE as much upwelling LWIR as incoming ISR, a rather egregious breach of energy conservation.

This also explains why USCRN data shows that the IR (SUR_TEMP) parallels the 1.5 m air temperature (T_HR_AVG) and not the actual ground (SOIL_TEMP_5). The actual ground is warmer than the air temperature with few exceptions, contradicting the RGHE notion that the air warms the ground.
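For reference, this is roughly how such an instrument converts its readings into a reported flux. A minimal sketch of the basic pyrgeometer relation (L_down = U/S + sigma*T_body^4); the sensitivity and readings below are made-up illustrative values, not taken from any particular instrument or dataset:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def pyrgeometer_flux(thermopile_uV, sensitivity_uV_per_Wm2, body_temp_K):
    """Simplified pyrgeometer equation: L_down = U/S + sigma*T_body^4.
    Real instruments add dome/temperature corrections; this is only the basic form."""
    return thermopile_uV / sensitivity_uV_per_Wm2 + SIGMA * body_temp_K ** 4

# Hypothetical reading: -500 microvolts on a 10 uV/(W/m^2) sensor with a 288 K body
print(f"{pyrgeometer_flux(-500.0, 10.0, 288.0):.0f} W/m^2 inferred downwelling longwave")
```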

https://www.linkedin.com/feed/update/urn:li:activity:6494216722554458112

https://www.linkedin.com/feed/update/urn:li:activity:6457980707988922368

commieBob
Reply to  Nick Schroeder
January 24, 2019 6:51 pm

As you demonstrate, this really is a place where errors are at least as systematic as they are random. The accuracy of a global figure may be improved if you have a lot of data and the errors are precisely random plus a bunch of other constraints. That is absolutely not the case when there are systematic errors. In that case, the accuracy of the global figure is at least as bad as the systematic errors.

It always blows my mind that ‘they’ take data that sometimes isn’t even measured within plus or minus twenty percent and calculate results to within a tenth of a percent.
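A minimal Python sketch of that random-versus-systematic point (the numbers are illustrative only, not from the paper): averaging many noisy readings shrinks the random error roughly as 1/sqrt(N), but it cannot touch a shared systematic bias.

```python
import numpy as np

rng = np.random.default_rng(42)

true_value = 184.0      # e.g. a surface solar flux in W/m^2 (illustrative only)
random_sd = 9.0         # random (uncorrelated) measurement noise, W/m^2
systematic_bias = 7.0   # a shared calibration offset, W/m^2

for n in (10, 1_000, 100_000):
    readings = true_value + systematic_bias + rng.normal(0.0, random_sd, size=n)
    err = readings.mean() - true_value
    print(f"N={n:>7}: mean error = {err:+6.2f} W/m^2")

# The mean error converges toward the 7 W/m^2 bias, not toward zero:
# more data beats random noise, but it cannot remove a systematic offset.
```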

IMHO, the only way ‘they’ produce plausible looking energy balances is by applying Cook’s constant.

Greg
Reply to  commieBob
January 25, 2019 2:08 am

A quick scan of the paper seems to show that they are not using just ground-based data.

The first paragraph of the section entitled “Observational data and models” states they are using CERES, which IIRC Willis has already pointed out has a massive 5 W/m^2 energy imbalance. (A net warming imbalance so huge that we know it is wrong, because we would already be living the thermageddon.)

So it is unclear what fig 1 is showing us. Are the models exaggerating the 5 W/m^2 energy imbalance, or does that get swept under the carpet and reset to zero before comparing to the models? Is that graph regional or global? I really do not know what we are looking at here.

Greg Goodman
Reply to  commieBob
January 25, 2019 2:43 am

The uncertainty in the solar reflected TOA fluxes from CERES due to uncertainty in absolute calibration is ~2 % (2σ), or equivalently 2 Wm−2. The uncertainty of the outgoing thermal flux at the TOA as measured by CERES due to calibration is ~3.7 Wm−2 (2σ) (Loeb et al. 2009). In the CERES energy balanced and filled (EBAF) dataset, solar and thermal TOA fluxes are adjusted within their range of uncertainty to be consistent with independent estimates of the global heating rate based upon in situ ocean observations, and are made available on a 1° grid (Loeb et al. 2009).

So it seems the cumulative 5.7 W/m^2 uncertainty has been used to allow “correction” of the CERES flux to match ocean observations.

Willis’ post here only concerns the individual land based sites, it seems. So that is a side issue to what he posted about here.

We apply a linear regression between the model biases and their respective land means shown in Fig. 12 (significant at the 95 % level). We use the orthogonal regression method that minimizes the distances orthogonal to the regression line, in contrast to the standard least squares regression that only minimizes the distances along the vertical axes.

Wow, at last someone in climatology who is at least aware of the pitfalls of OLS. Although orthogonal regression is not a panacea, since it effectively assumes similar magnitudes of errors in both variables, it is a big step in the right direction.
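For readers unfamiliar with the distinction, a minimal sketch (synthetic data, not the paper’s) contrasting ordinary least squares, which minimizes vertical distances only, with orthogonal (total least squares) regression, which minimizes perpendicular distances:

```python
import numpy as np

rng = np.random.default_rng(0)
x_true = np.linspace(0, 10, 50)
y_true = 2.0 * x_true + 1.0
x = x_true + rng.normal(0, 0.5, x_true.size)   # errors in x as well as y
y = y_true + rng.normal(0, 0.5, y_true.size)

# Ordinary least squares: minimizes vertical (y) distances only
ols_slope, ols_intercept = np.polyfit(x, y, 1)

# Orthogonal (total least squares): minimizes perpendicular distances.
# Fit via SVD of the centered data; the first principal direction is the line.
xc, yc = x - x.mean(), y - y.mean()
_, _, vt = np.linalg.svd(np.column_stack([xc, yc]), full_matrices=False)
dx, dy = vt[0]                       # direction of largest variance
tls_slope = dy / dx
tls_intercept = y.mean() - tls_slope * x.mean()

print(f"OLS slope: {ols_slope:.3f}   TLS slope: {tls_slope:.3f}")
# With noise in x, OLS tends to be biased low (attenuation); TLS is not,
# provided the error magnitudes in x and y are comparable.
```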

Michael S. Kelly, LS, BSA, Ret.
Reply to  commieBob
January 25, 2019 3:57 am

Wow, that’s precariously close to infringing on my own dimensionless number, the Kelly Number (Nk). It is defined as: the right answer divided by the answer I got (the dimension of “answer” cancelling), and though ideally Nk = 1.0, it can take on any real or complex value. Multiplying by the answer I got then in all cases yields the right answer.

It got me through my Master’s in Mechanical Engineering.

Greg
Reply to  Michael S. Kelly, LS, BSA, Ret.
January 25, 2019 6:21 am

Wow, a frig factor with an imaginary part, pure genius.

If you could expand it to 26 dimensions and apply it to string theory you could probably save modern physics and win a Nobel prize.

commieBob
Reply to  Greg
January 25, 2019 7:25 am

I think, but can’t prove, that these guys beat you to it.

DD More
Reply to  Michael S. Kelly, LS, BSA, Ret.
January 25, 2019 3:06 pm

So the Nk is not to be confused with Flannegan’s Finangling Fudge Factor?

From my high school math teacher’s post board –
Flannegan’s Finangling Fudge Factor – “That quantity which, when multiplied by, divided by, added to, or subtracted from the answer you get, gives you the answer you should have gotten.”

Also known as SKINNER’S CONSTANT
Only successfully used when the correct answer is known.

John
Reply to  commieBob
January 25, 2019 5:54 pm

Commie Bob, how do they explain tropical caps and ice ages, no humans then. It’s a scam that man has any influence on climate.

Ed Bo
Reply to  Nick Schroeder
January 25, 2019 12:44 pm

Nick:

You’ve been told time after time that your definitions and analysis are completely wrong. Yet you keep repeating the same fundamental mistakes again and again.

Your definition of emissivity is ridiculously wrong. Any textbook or reference will explain that emissivity is the ratio of thermal radiation actually emitted to the maximum possible (“blackbody”) thermal radiation that could be emitted at that temperature. Period.

It has nothing to do with conductive or convective transfers that may or may not be occurring at the same time, as you wrongly assert. You have been challenged before to find any reference that uses your definition, and you have not done so.

You claim to have formally studied these topics, but you continually get the most basic concepts of thermodynamics and heat transfer totally wrong. The pink unicorn brigade at PSI may fall for your fantasies, but not here.

Reply to  Ed Bo
January 25, 2019 6:43 pm

That’s right Ed. Emissivity is for radiation. Thermal conductivity is for conduction. Coefficient of heat convection is for convection (this is used in engineering but not much in meteorology)

Moreover, the vacuum does not have emissivity because it does not radiate. Or theoretically the vacuum field radiates according to quantum mechanics but this is Hawking radiation due to gravitational field

Dan DaSilva
January 24, 2019 6:21 pm

Does wattsupwiththat have a virus? Why is Google Safe Browsing giving a warning when I enter the site? Is this real or a trick?

Latitude
Reply to  Dan DaSilva
January 24, 2019 6:31 pm

does it direct you to Skeptical Science?

Dan DaSilva
Reply to  Latitude
January 24, 2019 6:34 pm

Ha, ha

Frank Mansillas
Reply to  Dan DaSilva
January 24, 2019 6:54 pm

Same happened to me, thought I had malware.

Reply to  Frank Mansillas
January 24, 2019 8:07 pm

This is coming from an embedded advertisement. I’ve turned off an ad on the right sidebar. Please advise if you see it again.

Reply to  Dan DaSilva
January 24, 2019 7:04 pm

I assume you are using Chrome?

I quit using Chrome over 2 years ago because of Alphabet’s/Google’s policies, hidden search bias, and URL re-direct manipulations.

Amazingly, MS’s Bing and MS Explorer are coming back into vogue again because of some shady shit that Google is doing.

Dan DaSilva
Reply to  Joel O'Bryan
January 24, 2019 7:28 pm

Yes, Chrome. Thanks

RicDre
Reply to  Joel O'Bryan
January 24, 2019 7:49 pm

“I quit using Chrome over 2 years ago …”

I quit using Chrome recently because I happened to be watching the data Sent and Received numbers on my internet link and noticed that Chrome was sending an unusual amount of data somewhere. When I went back to Internet Explorer the problem disappeared so I deleted Chrome.

Ian W
Reply to  RicDre
January 25, 2019 7:26 am

You may need to avoid MS browsers in future too, as I have read that the plan by MS is to use Chrome as the engine for Edge replacements.

https://www.theverge.com/2018/12/4/18125238/microsoft-chrome-browser-windows-10-edge-chromium

RicDre
Reply to  Dan DaSilva
January 24, 2019 7:41 pm

“Does wattsupwiththat have virus?”

I think Griff was having a similar problem. Griff, if you are out there, did you ever resolve the problem you were having with getting a warning from Norton about WUWT having a virus?

Patrick MJD
Reply to  Dan DaSilva
January 24, 2019 7:46 pm

I was getting the same issue, but seems to have “gone” now.

Hivemind
Reply to  Dan DaSilva
January 25, 2019 12:33 am

I get the same thing on my laptop with Firefox trying to access DuckDuckGo.

Jaap Titulaer
Reply to  Dan DaSilva
January 25, 2019 1:35 am

There is a new MS thingy for which they hired some lefty external company. That rates sites as to whether they are left or right, and anything to the right of center gets flagged as BAD.
Propaganda at its best, Dr. G. would be proud.
I kid you not.

Carbon Bigfoot
Reply to  Dan DaSilva
January 25, 2019 4:09 am

Periodically I get the “Can’t Display This Page” BS—in fact this AM. I know that clicking the fix button will resolve the issue, but new folks probably move on.

Editor
January 24, 2019 6:35 pm

Willis, thank you for finding and commenting on Wild et al. 2014.

Regards,

Bob

Terry Jay
January 24, 2019 6:44 pm

The old Garbage in, Gospel out ploy

Steven Mosher
January 24, 2019 6:50 pm

“square metre or so, they claim that this is due to increasing CO2 and it is going to slowly heat up the earth over the next century.”

no they dont argue it is due to co2.

it is due to the sume of all net forcing, including c02 and other GHGs.

you cant attack a theory you mispresent.

basics

Reply to  Steven Mosher
January 24, 2019 7:29 pm

Steven,

When will you learn? The ONLY forcing influence is the Sun. Anything else is represented by the change in solar forcing that has the same effect on the surface temperature as some change to the system keeping the solar forcing constant.

The total solar forcing is 240 W/m^2, which results in about 390 W/m^2 of net average surface emissions representative of the average temperature. Assuming all Joules are equivalent, this is about 1.62 W/m^2 of surface emissions per W/m^2 of forcing. It’s unambiguously clear that the next W/m^2 of solar input (or the last one) cannot have changed the surface emissions by 4.3 W/m^2 as a consequence of the presumed 0.8 C temperature change. The only possible sensitivity is about 0.3 C per W/m^2, which is less than the lower limit presumed by the IPCC!

Feedback doesn’t amplify the Planck sensitivity as the ‘consensus’ incorrectly presumes. The best you can claim is that feedback increases the power sensitivity of the surface from 1 W/m^2 of surface emissions per W/m^2 of forcing e=1 (ideal black body) up to the 1.62 W/m^2 observed for the Earth (e=0.62).

You still haven’t answered the question about how the planet can tell the next Joule (or the last one) from all the others so that it can be so much more powerful at warming (or cooling) the surface than any other?

Unless you have a coherent answer to this question, you simply can not support any ECS greater than about 0.35C per W/m^2.
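The arithmetic in that comment can be followed in a few lines; this just transcribes the numbers stated above together with the Stefan-Boltzmann relation, without taking a side on the argument:

```python
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/m^2/K^4

solar_forcing = 240.0     # W/m^2, post-albedo average solar input (as stated above)
surface_emission = 390.0  # W/m^2, average surface emission (as stated above)

gain = surface_emission / solar_forcing
print(f"surface emission per W/m^2 of forcing: {gain:.2f}")     # ~1.62

# Planck-only sensitivity: T = (E/sigma)^(1/4), so dT/dE = T / (4E)
T_surface = (surface_emission / SIGMA) ** 0.25
dT_per_Wm2_emission = T_surface / (4.0 * surface_emission)
print(f"surface temperature: {T_surface:.1f} K")                 # ~288 K
print(f"dT per W/m^2 of extra surface emission: {dT_per_Wm2_emission:.2f} K")  # ~0.18

# Per W/m^2 of forcing, scaled by the 1.62 ratio the comment uses:
print(f"dT per W/m^2 of forcing: {gain * dT_per_Wm2_emission:.2f} K")          # ~0.30
```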

Reply to  co2isnotevil
January 24, 2019 8:44 pm

Even Christopher Monckton has gone along lately with feedbacks being slightly net positive. One of the positive feedbacks is the surface albedo feedback, whose magnitude varies with global temperature. I see it as having been more strongly positive during the comings and goings of ice age glaciations, at times even making global climate unstable until it changes quickly to a state that has some stability. And I see the surface albedo feedback as less strongly positive during the interglacial periods.

Note recent studies by Nick Lewis, and Lewis & Curry, supporting ECS being around 1.5 degrees C/K per 2xCO2, which is about .4 degree C/K per W/m^2.

Reply to  Donald L. Klipstein
January 24, 2019 9:50 pm

Donald,

The concept of feedback, relative to the way it was applied to the climate system, is irrelevant. Bode’s linear feedback amplifier model that it was based on has no correspondence to the actual climate system.

That the surface is 620 mW per W/m^2 warmer than it would be based on the forcing alone indicates an apparent gain > 1, while the open loop gain (i.e. the gain of an ideal BB) is exactly 1, so the net ‘feedback’ must be positive. However, since the open loop gain is only 1 and there is no source of output Joules beyond the forcing Joules, positive feedback has no possibility of causing runaway or any of the other scary things often associated with positive feedback.
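For readers keeping track of the feedback bookkeeping being argued about, a minimal sketch of the textbook closed-loop gain formula with the numbers used above (an illustration of the terminology only, not an endorsement of either side):

```python
def closed_loop_gain(open_loop_gain: float, feedback_fraction: float) -> float:
    """Standard feedback bookkeeping: G = A / (1 - A*f) for positive feedback f."""
    return open_loop_gain / (1.0 - open_loop_gain * feedback_fraction)

# The comment's numbers: open-loop gain of 1 (ideal blackbody response) and an
# observed overall ratio of ~1.62 W/m^2 of surface emission per W/m^2 of forcing.
A = 1.0
for f in (0.0, 0.383, 0.9, 0.99):
    print(f"feedback fraction {f:.3f} -> closed-loop gain {closed_loop_gain(A, f):.2f}")

# f ~= 0.383 reproduces the 1.62 ratio; the gain only diverges ("runaway")
# as A*f approaches 1, which is the point being made above.
```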

David Stone
Reply to  co2isnotevil
January 25, 2019 2:54 am

In fact the important point is that positive feedback in an amplifier depends on there being an external power supply (energy source) to provide the amplified output! All of these models which assume “gain” cannot work unless the external source of energy is defined, which it is not, they just assume that it is free. An amplifier (gain) is also inefficient in the process, so the energy input is always more than the sum of the input and output power.

Phil.
Reply to  co2isnotevil
January 25, 2019 8:05 am

A fact that Monckton refuses to address in connection with his model.

Reply to  co2isnotevil
January 25, 2019 8:58 pm

Positive feedback doesn’t necessarily cause runaway or other scary things, that requires positive feedback sufficient to make gain infinite. Then again, the climate did become unstable at times during ice sheet advances and retreats when such advances and retreats made major changes in surface absorption of sunlight.

As for power supply to an amplifier: Consider the tunnel diode oscillator, and an amplifier that can be made with a similar circuit using a tunnel diode. Sunlight is the analog of the power supply in a tunnel diode oscillator. (Yes, I know the tunnel diode is an obsolete component, but they were made and they are great examples for amplifier theory.)

As for Christopher Monckton’s slightly positive feedback: It is a serious oversimplification that does not consider global temperature feedback varying with global temperature, and his oversimplification does not consider albedo feedbacks; those would add to the positivity he found. Notably, Christopher Monckton seems to like finding climate feedbacks being more-negative, according to how I see his postings to WUWT. In a similar manner, he even claimed multiple times that The Pause started in 1996.

Reply to  co2isnotevil
January 25, 2019 9:40 pm

Phil.,
Monckton is trying to explain things within the context of what the IPCC’s self-serving consensus claims. My position is that the IPCC is so wrong it’s embarrassing, and there’s no sense in trying to be consistent with what they claim.

Alan D. McIntire
Reply to  Donald L. Klipstein
January 25, 2019 11:12 am

As opposed to the REAL world, where feedbacks are NEGATIVE.
See figure 2 here:

http://www-eaps.mit.edu/faculty/lindzen/235-LindzenGRL.pdf

As the “Sesame Street” song goes, “One of these things is not like the others..” And that’s the graph based on the real world rather than models.

Reply to  co2isnotevil
January 26, 2019 2:37 am

Ha ha, Steve Mosher is struggling here! THIS is all he could come up with? A weak technicality? :

“…no they don’t argue it is due to co2.
it is due to the sum of all net forcing, including CO2 and other GHGs.
You can’t attack a theory you misrepresent…”

RicDre
Reply to  Steven Mosher
January 24, 2019 7:37 pm

Mr. Mosher said: “it is due to the sume of all net forcing, including c02 and other GHGs”

Ok, that makes sense. So approximately what percentage of the non-natural atmospheric warming does the theory say is the result of CO2 emitted into the atmosphere by human activity?

MarkW
Reply to  Steven Mosher
January 24, 2019 7:40 pm

You are misrepresenting how much misrepresentation is going on.
The claim has always been that any changes in other GHGs are due directly to changes in CO2.

PS: It really is funny for you, of all people to whine about others misrepresenting the arguments of those they are disagreeing with, in light of the whoppers you were telling this morning.

Dave Fair
Reply to  Willis Eschenbach
January 24, 2019 9:19 pm

+ a lot, then more alot, then we go back to a lot.

Reply to  Steven Mosher
January 25, 2019 4:18 am

@Steven – regardless of the replies below, you know quite well that you are “misrepresenting the theory.” Hypothesis, actually, and a disproven one, that forcing by increased CO2 concentration raises the temperature, which raises the other GHG (primarily H2O) concentrations, which further raises the temperature in a high-gain (which is where the hypothesis is wrong) positive feedback.

No serious poster or commenter here “misrepresents” your hypothesis – we merely abbreviate it.

tom0mason
Reply to  Steven Mosher
January 26, 2019 11:32 am

By their ideas all this ‘extra’ solar energy stays as heat.
What utter prefossilized coprolite!

Nature in all its abundance would be hastily exchanging most of that ‘extra’ solar energy for chemical energy in organic bonds, especially as the atmospheric CO2 level rises.
Energy balance is complete BS, a myth, something to tell aspiring ‘climate modelers’.
This energy balance is a moving target …
This planet naturally takes in the solar energy, and just as naturally ejects some — it does NOT have to balance at any moment!
The ‘balance’ is a dynamic dependent on the conditions both on the planet (all those weather/climate variables, and the prevailing overall chemistry in the oceans, land, and atmosphere), and on what the sun emits and all the changes over time.

To view it any other way is to absurdly reduce all this planet’s many natural processes to nought.

David Young
January 24, 2019 6:52 pm

Willis, I think the models are generally tuned to get the top of atmosphere radiation imbalance correct (at least in so far as we can measure it). That is probably achieved by arranging cancellation of much larger errors as you show here. That doesn’t build any confidence for me however.

Reply to  David Young
January 24, 2019 7:10 pm

As reported in various journal admissions, when most modellers try to close the TOA energy imbalance in their model, it runs way, way too hot, something like 6K to 11 K per CO2 doubling… way too hot. So they have to let the model’s energy imbalances exist based on their subjective judgements (hand tuning puppetry). They also tune in higher than observed amounts of aerosols (for cooling) for 20th Century calibration runs as well — Something Trenberth has lamented about.

David Young
Reply to  Joel O'Bryan
January 24, 2019 8:11 pm

Joel, Do you have a reference for the TOA imbalance point?

Joel O'Bryan
Reply to  David Young
January 24, 2019 9:00 pm

Yes David,
Start down that Climate Rabbit Hole with:

Climate scientists open up their black boxes to scrutiny
URL:
http://science.sciencemag.org/content/354/6311/401

“Recently, while preparing for the new model comparisons, MPIM modelers got another chance to demonstrate their commitment to transparency. They knew that the latest version of their model had bugs that meant too much energy was leaking into space. After a year spent plugging holes and fixing it, the modelers ran a test and discovered something disturbing: The model was now overheating. Its climate sensitivity—the amount the world will warm under an immediate doubling of carbon dioxide concentrations from preindustrial levels—had shot up from 3.5°C in the old version to 7°C, an implausibly high jump.

MPIM hadn’t tuned for sensitivity before—it was a point of pride—but they had to get that number down. Thorsten Mauritsen, who helps lead their tuning work, says he tried tinkering with the parameter that controlled how fast fresh air mixes into clouds.

Increasing it began to ratchet the sensitivity back down. “The model we produced with 7° was a damn good model,” Mauritsen says.

Did you get that?: “7° was a damn good model” !!!! LOL!! It literally doesn’t get any more junk science than that. The climateers are very right in that exposing their junk models would lead to ridicule. Because they know what they do ***IS*** junk science, but it pays the bills and brings in the paychecks. So they keep doing it. It really is all they can do.

Then (if you are brave enough to handle the ugly truth) proceed to:
“THE ART AND SCIENCE OF CLIMATE MODEL TUNING”
https://journals.ametsoc.org/doi/10.1175/BAMS-D-15-00135.1

Go to their Figure 2 in that article (and text therein). The ice crystal fall velocity (a parameter hand-tuned in the models that use it) shows that even small tweaks of a parameter very poorly constrained by observation greatly affect the TOA energy balance outcome in the model (GFDL CM3) they highlight.

If you can get through that BAMS article on climate modeling in its entirety and still believe Climate Modeling has anything to do with objective science…. then you are a True Believer and beyond help.

Greg Goodman
Reply to  Joel O'Bryan
January 25, 2019 3:00 am

Thanks for that info Joel. It’s worse than we thought.

There is also Hansen et al. 2002, which states quite openly that you can get whatever climate sensitivity you wish by your choice of values for unconstrained parameters.

There is a link in my article on tropical climate sensitivity:
https://judithcurry.com/2015/02/06/on-determination-of-tropical-feedbacks/

I find sensitivity to volcanic events has been exaggerated by tweaking the scaling of AOD to W/m^2. My result agrees with earlier physics-based estimations by the same group, which were abandoned in favour of arbitrary values in an attempt to reconcile model outputs with the climate record.

Reply to  David Young
January 24, 2019 9:19 pm

Yes.
My response is long and stuck somewhere in moderation (likely because of html formats and URL’s for your verification). So as to demonstrate I’m not making this stuff up.
Climate Models are junk science.
Be patient. Check back here sometime in the next 2 days. Maybe it’ll get through.

January 24, 2019 7:00 pm

The very fact that the Climate Model Intercomparison Project exists at all as the accepted validation method for the models (an intercomparison of models), rather than comparison to observation for validation, should clue even the dimmest of bulb science-engineering college majors that climate modelling today is Cargo Cult Science, (a.k.a. junk science).

And then to take the ensemble mean of all the junk outputs and try to apply a confidence interval to that mean … well that defies any hope of actually knowing anything at all about our climate and CO2.

DMA
January 24, 2019 7:15 pm

This miscalculation by the models is similar to the cloud formation uncertainties that were the basis for Pat Frank’s error propagation analysis. It looked like good work to me, based on my scant knowledge of measurement theory and error analysis, but he never found a journal that would print it. The result was that, with the known uncertainty in the initialization, the uncertainty propagated very rapidly and rendered the results completely unreliable.
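The general idea (not Pat Frank’s specific analysis, which uses its own emulation and uncertainty terms) is that an uncertainty fed into an iterated calculation compounds with the number of steps. A minimal sketch with a made-up per-step uncertainty:

```python
import math

# Illustrative only: if each simulation step inherits the previous state and adds
# an independent +/- u uncertainty, the accumulated uncertainty grows as sqrt(n).
# (This is the generic random-walk result, not any particular published analysis.)
u_per_step = 0.5   # hypothetical per-step uncertainty, in whatever units apply

for n_steps in (1, 10, 100, 1000):
    accumulated = u_per_step * math.sqrt(n_steps)
    print(f"{n_steps:>5} steps: accumulated uncertainty ~ {accumulated:.1f}")

# After enough steps the uncertainty envelope dwarfs the signal being sought,
# which is the sense in which the projection becomes uninformative.
```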

Darcy
January 24, 2019 7:21 pm

Curious as to why the IPCC does not cull out models that cannot replicate the basic driving parameters of the system being modeled, versus quoting an average of the output. Or, even better, require model updates until a match is achieved (within a range that is small enough not to affect the parameter variation under study). Model verification/validation 101. Such is required and SOP for robust decision making in the DoD and commercial world with $ an order of magnitude lower than the investments being called for.

Dave Fair
Reply to  Darcy
January 24, 2019 9:37 pm

The CMIP3 models vastly underestimated the reduction in Arctic ice. In CMIP5 they tried to get greater ice reductions, and succeeded to a limited extent. CMIP5 Arctic ice was still too high, but the attempt to get it right negatively affected other aspects of the models, making those metrics more inaccurate.

Additionally, every effort to downscale model results to regional hindcast/projection has been unsuccessful. They get the different land regions and individual ocean basins/sub-basins grossly wrong. They get some global success through tuning to the late 20th Century. Otherwise, they resemble shotgun blasts in their accuracy.

Dave Fair
Reply to  Darcy
January 24, 2019 9:45 pm

We cannot measure the net energy balance to the accuracy required to verify the models’ assumptions/output. IIRC, the net flow increase was calculated at about 1% +/- 10% of the total energy flows. Cargo Cult Science.

icisil
Reply to  Darcy
January 25, 2019 5:28 am

Because they are not interested in accuracy. Imagine if they did hurricane forecasting like they do CMIP; it would be useless.

MarkW
Reply to  icisil
January 25, 2019 9:33 am

The really bad models are needed to pull the average of the ensemble in the direction they need it to go.

bit chilly
Reply to  Darcy
January 25, 2019 6:37 pm

Darcy, if they did that they wouldn’t have any models to keep the funding coming in.

Mr.
January 24, 2019 7:24 pm

Maybe the solar detectors were facing the wrong way Wills?
After all, for eight long years it was shining out of Barack’s ar5e.

January 24, 2019 7:37 pm

The graph comparing the models regarding solar SW at the surface doesn’t show the uncertainties of the measurements. In Wild’s paper he mentions 184 W/m2 with errors ranging from 5 to 10 W/m2. In AR5 WG1, the comparable estimate is 176 to 192 W/m2, (+/- 8 W/m2) combining SW absorbed and SW reflected at the surface. So even when a model correctly approximates the measurement central tendency, they are still looking for a change in energy flux amounting to a fraction of the measurement errors. The diversity of the models is one problem, measurement noise is perhaps worse.

Matthew R Marler
January 24, 2019 7:42 pm

Thanks for the essay and the link to the paper.

Not to be missed, my favorites: “Considerable uncertainties remain also in the magnitudes of the non-radiative fluxes of sensible and latent heat and their partitioning over both land and oceans.”

Patrick MJD
January 24, 2019 7:50 pm

“But when someone says “Our latest computer model, which contains over a hundred thousand lines of code and requires a supercomputer to run it, says that in the year 2100 temperatures will be two degrees warmer than today”, people scrape and bow and make public policy based on what is nothing more than the physical manifestation of the programmers’ same assumptions, claims, prejudices, and scientific (mis)understandings made solid.”

That’s because most people, and especially politicians and people in positions of power and influence, do not know how computers work, let alone how to program them. Computer modelling is used almost universally and accepted as Gospel.

ChasTas
January 24, 2019 8:22 pm

“All models are wrong, some are useful”

Dave Fair
Reply to  ChasTas
January 24, 2019 9:51 pm

“Useful” as in supporting more government grants and crony, rent seeking handouts?

MarkW
Reply to  Dave Fair
January 25, 2019 9:34 am

Useful as in making short range weather forecasts, helping you design airplanes, or electronic circuits, keep the power grid stable, etc.

MarkW
Reply to  Dave Fair
January 25, 2019 9:36 am

Even the GCMs “could” be useful if they were being used for their original purpose of helping scientists understand what it is they don’t know regarding weather and climate.

whiten
Reply to  MarkW
January 25, 2019 12:17 pm

Exactly…

January 24, 2019 8:52 pm

Is there any reason given in the subject paper why it reviewed 43 climate models and not all (what 105?) of them?

January 24, 2019 9:18 pm

Are there separate Energy Budget charts for Daytime (12 hours) and Nighttime (12 hours), similar to the Kiehl-Trenberth Energy Budget chart?

January 24, 2019 9:20 pm

Regarding “say half a watt per square metre or so” imbalance, said also as “So their predictions of an impending Thermageddon are based on half a watt or a watt [I assume this is per square meter] of imbalance in global incoming and outgoing energy”, with regard also to “average error at the surface is seven watts per square metre”:

The small imbalance figure of .5-1 watt per square meter, multiplied by actual ECS climate sensitivity in terms of degrees per W/m^2, is how much future global warming we will have if we freeze atmospheric concentrations of all GHGs other than water vapor to where they are now.

The larger error (“average error at the surface is seven watts per square metre”) is an error on an income of over 200 W/m^2 from sunlight alone, or nearly 400 W/m^2 of total surface radiation income (sunlight/daylight plus IR radiated downward from greenhouse gases and clouds), and sounds to me like it is probably close to the W/m^2 errors in radiation outgo in the cited climate models. I suspect the error these models have for income minus outgo is, on average, income exceeding outgo by about 1 watt per m^2 more than is actually the case.
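The committed-warming arithmetic in that first paragraph is just imbalance times sensitivity; a minimal sketch using the roughly 0.4 C per W/m^2 figure cited elsewhere in this thread (illustrative only, not a prediction):

```python
# Back-of-envelope version of the point above: committed warming from a standing
# imbalance is roughly imbalance x sensitivity.  The 0.4 C per W/m^2 sensitivity
# is the Lewis & Curry-style figure cited elsewhere in this thread (illustrative).
sensitivity = 0.4                  # deg C per W/m^2

for imbalance in (0.5, 1.0):       # W/m^2
    print(f"imbalance {imbalance:.1f} W/m^2 -> ~{imbalance * sensitivity:.1f} C further warming")
```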

Lasse
January 25, 2019 12:23 am

10% more sunshine since 1983 (SMHI)
Fewer clouds and more sun hours.

Could it be solar brightening due to less sulfur in the air?
https://journals.ametsoc.org/doi/pdf/10.1175/BAMS-D-11-00074.1

Jaap Titulaer
Reply to  Lasse
January 25, 2019 2:11 am

Ah no.
Sulfur reductions did not even start in most of the EU until the early ’90s (after the Wall). The Eastern Bloc used a lot of brown coal and had zero measures in place to reduce sulfur emissions. And they only started after large money grants from the EU.
Only a few countries (W-EU) did anything in the ’80s; Portugal and Spain had just joined, so they had other things to focus on. Most of the South (except maybe France) did not even start until this millennium, same for most of the Eastern Bloc.
China did not do anything positive (have they even now?), Chinese sulfur has only been on the increase IMHO.
Japan was early as were some W-EU and Nordics. But that is really a drop in a bucket compared to everything in between.
So you could only notice the reductions in sulfur in Eurasia sometime well after 2005.

The re-greening of the European forests was mostly driven by increased CO2, not by timely reductions in sulfur; that came later. Woods in Czechia or Poland were already getting greener when the acidity of their rain was still high enough to damage things like marble statues.

Greg Goodman
Reply to  Jaap Titulaer
January 25, 2019 3:11 am

The brightening happened in the stratosphere, in the decades after El Chichon and Mt. Pinatubo.

This is reflected in cooling of the lower stratosphere.

After the initial TLS warming (cf. surface cooling), both events were followed by a persistent drop of 0.5 deg C in TLS. This indicates a more transparent stratosphere and more solar energy reaching the lower climate system.

Lasse
Reply to  Jaap Titulaer
January 25, 2019 6:02 am

Still we, in Sweden at least, have had fewer clouds since 1983, and 10% more sunshine (W/m^2).

Reply to  Lasse
January 25, 2019 9:35 pm

One thing I see in what IPCC reports of climate models is high positivity of water vapor feedback around or even slightly over that expected from constant relative humidity, along with positivity of cloud albedo feedback.

The way I see these things, global warming is making clouds more effective/efficient at moving heat (and associated upward moving air), which means that a warmer world has less coverage by clouds and updrafts as these get more efficient, and greater coverage by downdrafting clear air. I see this as meaning that the water vapor positive feedback is less positive than according to constant relative humidity, and I suspect the combined positive feedbacks from water vapor and cloud albedo are limited to what the water vapor feedback alone would be if clouds were frozen/unchanging in their coverage and ability to move heat.

January 25, 2019 12:24 am

But when someone says “Our latest computer model…

I noticed this a long time ago when personal computers first started to make their appearance. Brash young business grads with a bit of savvy started showing up with pretty graphs and pie charts, saying it was from a computer program, and senior management of major corporations would swoon. After a while the people with real life experience who knew what they were doing either retired or learned to make pretty pictures of their own and level the playing field. But the gullibility of experienced senior management because the pie chart was from a computer, with no thought at all to the validity of the data, was astounding… and led to a lot of really bad decisions.

peterg
January 25, 2019 12:59 am

Some computer models are very accurate. SPICE models electronic circuits quite successfully. So if you are developing a circuit, first you run it in SPICE if at all possible, then breadboard it, then develop the PCB, etc.

A model that is the sum of accurate sub-models has a possibility of being accurate. However a model that is the sum of inaccurate sub-models has no chance.

A computer model in the end is only a calculation. The calculation may be correct or incorrect. Until verified experimentally, it is only an hypothesis.

MarkW
Reply to  peterg
January 25, 2019 9:42 am

I believe the point is that you don’t go from a SPICE run directly to production. SPICE is used to narrow down which designs you want to take to breadboarding etc.

The GCMs are trying to go directly from model runs to full scale production.

tom0mason
Reply to  peterg
January 26, 2019 12:13 pm

However, the parameters used in SPICE are relatively few and very well understood.
Not quite the same with climate models, where there are a huge number of interacting parameters that are not all well understood.

A C Osborn
January 25, 2019 1:54 am

Mr Eschenbach, is this database available to the general public?

Greg
January 25, 2019 3:23 am

There is a link for BSRN in the paper, but it seems that you have to apply for FTP read access and I have not looked into the conditions. It does not appear to be openly available. There does not seem to be a link for GEBA, which is maintained at ETH Zurich. The Swiss are not generally that open either.

https://bsrn.awi.de/data/data-retrieval-via-ftp/

meiggs
January 25, 2019 4:18 am

As one who does power plant energy balances, I can tell you for certain that once one resorts to “tuning” a model, it means they lack the ability to grasp and simulate a simple system. A good model can make accurate predictions about off-design modes of operation. A “tuned” model cannot, period. Meanwhile, none of the quantum folks have dreamt up a way to directly measure LWIR??

http://www.nusod.org/2016/nusod16paper20.pdf

icisil
January 25, 2019 4:46 am

“…take any old load of rubbish and run it through a computer, when it comes out the other end, there will be lots of folks who will believe it is absolutely true.”

Sounds like science crime syndicate money laundering.

Steve O
January 25, 2019 4:50 am

It seems like a stretch to say that you can be confident about an effect of 0.5 W/m2 if you have an imbalance of ten times that. I’d be interested in hearing what the scientists say and why they believe that they’ve overcome that.

icisil
January 25, 2019 4:56 am

““Our latest computer model, which contains over a hundred thousand lines of code and requires a supercomputer to run it, says that in the year 2100 temperatures will be two degrees warmer than today””

I can’t help but think those monstrosities are full of Medusa code – one look and it turns you to stone. I’ve quit (contract) jobs before after I saw what was under the hood.

michael hart
January 25, 2019 5:16 am

I think we live in changing times with respect to people’s attitudes towards computer models. Once the entire population is familiar from childhood with what they can do, and what their failings are, then I think we shall see this deference to computer output decline.

January 25, 2019 6:05 am

I really like what this guy, Erl Happ, has done over a ten-year period. https://reality348.wordpress.com/2015/12/20/how-do-we-know-things/
There are about 40 chapters, but each is quite short, with very clear data.

The best part….no modeling apart from the use of some nullschoolearth pictures.
Great stuff, lots about ozone too.

Dan Hughes
January 25, 2019 7:12 am

Earth’s climate systems chaotically evolve in space and time driven by the imposed energy input at the ToA. Mathematical models of climate’s evolution in space and time differ with respect to the continuous equations (PDEs, ODEs, algebraic, and parameterizations) that make up the models. Calculated results indicate that the numerical solutions of the models give chaotic trajectories. That being the case, it is highly unlikely that the set of parameterizations and their numerical values, including those chosen for tuning/calibration, will be the same among the various models.

Beyond the parameterizations, their numerical values, and tuning, the discrete approximations to the continuous equations, and the numerical solution methods applied to these, are different among the various models. Each different approximation, each different solution method, each different parameterization and associated tuning, and each change in parameter numerical values ensures that none of the models will produce the same trajectories. A given model will not produce the same trajectories if even one numerical value is changed.

Under these conditions, it seems that parameter estimation can never be carried out in a manner that gives unique values for any parameters. All trajectories will be different. Averages of the chaotic trajectories remain chaotic; the amplitude and frequency are changed. It is especially unfortunate that parameterizations and tuning are necessary for those aspects of Earth’s climate systems that are critical to getting a ‘correct’ response relative to the physical domain.

Weather is the result of the thermally-driven hydrodynamics and the thermodynamics of phase change in Earth’s atmosphere and certain other sub-systems. All GCMs and ESMs and associated numerical methods produce different weather, and averages of the calculated weather produce different calculated climates.

A focus on the parameterizations and tuning is a welcome development in climate science. Hopefully, as time goes on, the focus areas will eventually turn to the critically important aspects of the discrete approximations and numerical solutions of these. After all, that’s where the numbers come from.
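The classic toy illustration of this point is the Lorenz system rather than a GCM, but it makes the parameter-sensitivity argument concrete: change one parameter value slightly and the trajectory ends up somewhere entirely different. A minimal sketch (crude Euler integration, illustration only):

```python
def lorenz_trajectory(r, n_steps=5000, dt=0.005, state=(1.0, 1.0, 1.0),
                      sigma=10.0, b=8.0 / 3.0):
    """Integrate the Lorenz system with simple Euler steps (illustration only)."""
    x, y, z = state
    for _ in range(n_steps):
        dx = sigma * (y - x)
        dy = x * (r - z) - y
        dz = x * y - b * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    return x, y, z

# Two runs differing only in one parameter value, by about 0.1%:
print(lorenz_trajectory(r=28.0))
print(lorenz_trajectory(r=28.028))
# The end states bear no resemblance to each other, which is the point above:
# change one numerical value and the trajectory (the "weather") is different,
# even though both runs are "the same model".
```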

Paul Linsay
Reply to  Dan Hughes
January 25, 2019 9:15 am

In my experience, complicated nonlinear systems have multiple attractors. The one you’re on depends on what the initial conditions are, plus you don’t know which one you’re on until you’ve run a simulation a long time. The modelers have no idea. From a former colleague of Schmidt’s

“This explanation conveniently ignores the fact that there is no way to ever know if a climate model’s attractor is the same as nature’s. When I (Duane Thresher) was at NASA GISS I pointed this out to Dr. Gavin Schmidt, current head of NASA GISS (anointed by former head Dr. James Hansen, the father of global warming) and leading climate change spokesperson. His response was, “We just have to hope they are on the same attractor”, literally using the word “hope”. They are almost certainly not so a climate model can’t predict nature’s climate.”

http://realclimatologists.org/Articles/2017/07/27/More_Reasons_To_Doubt_Climate_Models_Can_Predict_Climate/index.html

Paul Linsay
Reply to  Dan Hughes
January 25, 2019 9:24 am

A second comment about the coding of the climate models by GISS. It’s as bad as Harry Read Me on the data.

“NASA GISS used to have its own supercomputer to run its climate model but they were so IT incompetent that they had it taken away from them and had to use the supercomputers at NASA’s Goddard Space Flight Center (GSFC). While at NASA GISS, I spent a summer at GSFC, outside Washington D.C., attending NASA’s supercomputing school. After I left NASA GISS, I was talking to a programmer from GSFC and he said they referred to NASA GISS’s climate model as “The Jungle” because it was so badly coded. The results of NASA GISS’s climate model, oft-cited as proof of global warming, are thus still questionable since the model is almost certainly full of bugs.”

http://realclimatologists.org/Articles/2019/01/03/Climate_of_Incompetence/index.html

Max
January 25, 2019 7:41 am

“Now, bear in mind that for these models to be even remotely valid, the total energy entering the system must balance the energy leaving the system.”

It seems to me, the reason the models don’t work is that they are not taking the largest variable into consideration. For some reason, I rarely hear anyone talk about the heating of our atmosphere being caused by “friction”. (engineers know, all heat is friction)
The downward force in the column of air in a gravity well, creates heat at the bottom of the column. That’s why most heating occurs at the surface. (Day or night)
If the sun is the source of all energy (heat and light), then the laws of thermodynamics must apply. The heating would be from the top, with the closest point to the sun (Mount Everest) warmest, getting colder the further you get from the sun (Death Valley). The opposite is true, indicating a different mechanism at play. (UV and x-rays are consistent with the laws of thermodynamics: the higher you go, the more intense the radiation.)
This mechanism is best illustrated by the “Chinook winds”. The dry wind flowing over the mountain heats up 5.4° for every thousand feet it falls in altitude! (day or night).
The higher you go in the atmosphere, the cooler it becomes as the air pressure drops. Camping in the mountains at 8000 feet is 20° colder than the valley below it, at 4000 feet.
You do not need to send up a weather balloon, or an airplane, for this effect to occur. It also happens during weather events, like the air pressure drop in the eye of a hurricane, or the cooling effect of a “low pressure system” dropping the barometric pressure as the weather moves over.
To calculate the sun’s influence in real time, it is as simple as subtracting the low of the night from the high of the day, to give you the amount of heat that occurs from just the sun.
There is no better demonstration of how much heat comes from the sun than Antarctica. For six months it receives no sunlight, causing the temperatures to plummet to 70° below zero on average. In the summer, with three months of 24-hour sunlight, does the temperature soar to the hottest on earth? NO!
The temperature average is 40° below zero. That is a 30° difference that usually only takes a few hours every morning anywhere else on earth.
Now compare the North Pole to the South. Under the same conditions, why do the Arctic temperatures rise above freezing every summer? It is at sea level.
Antarctica’s height averages near 10,000 feet altitude. Less air pressure generates less heat.
Does carbon dioxide cause any of this heat? Yes. In relation to its molecular “weight” and abundance. About 4/100 of 1%. (400 ppm)
Additional proof that air pressure generates frictional heat to keep our planet warm is to look at the other planets in our solar system. Hottest to coldest is Jupiter, Saturn, Neptune (the furthest from the sun) Uranus, Venus, Mercury, Earth, then Mars. (The gas giants are hotter than the surface of the sun) In short, the thicker the atmosphere, the hotter the planet under its atmosphere.
A comparison between earth and its moon, both in the green zone, reveals that the moon varies 550° (-300° to 250° in the sun), with an average temperature of -50° at the equator. Earth’s average temperature is near 50°, that’s 100° warmer than the moon even though half of the sun’s energy does not penetrate our atmosphere!
Does any of this information appear in any climate models?

MarkW
Reply to  Max
January 25, 2019 9:51 am

Friction is the action of turning momentum into heat. That heat then radiates away.
The problem is that unless you have something to keep these molecules in motion, friction will eventually cause them to stop moving altogether. No movement, no friction, no heat.

What you are proposing is indistinguishable from perpetual motion.

All heat is friction????? I’m an engineer, and I don’t know that. Radiation is heat. Chemical reactions are heat.

Compression causes heating, pressure doesn’t.

“The heating would be from the top, to the closest point to the sun”

This would only be true if the sun was heating the earth via conduction. It isn’t. It heats the earth via radiation, and what gets heated is what absorbs the radiation. The atmosphere doesn’t. The ground and water do.

That’s about all the nonsense I can deal with for now.

Vicus
Reply to  MarkW
January 26, 2019 11:13 am

MarkW: “Compression causes heating, pressure doesn’t.”

Incorrect, see:

Kinetic pressure
and
Ideal gas law

Clyde Spencer
Reply to  Max
January 25, 2019 9:52 am

Max
While I’m sure that there is some frictional heating, the process you are describing is explained by the gas laws. Air going up over a mountain range cools, and then heats as it goes back down the leeward side.
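A minimal sketch of that gas-law explanation: the dry adiabatic lapse rate g/cp reproduces the “5.4 degrees per thousand feet” figure quoted above without invoking friction (standard constants, nothing model-specific):

```python
# Dry adiabatic lapse rate from the gas-law/energy argument described above:
# rising air expands and cools, descending air is compressed and warms, at
# roughly g / cp.
g = 9.81      # m/s^2
cp = 1004.0   # J/(kg K), dry air

lapse_K_per_km = g / cp * 1000.0
lapse_F_per_kft = lapse_K_per_km * 1.8 * 0.3048   # convert K/km to deg F per 1000 ft

print(f"dry adiabatic lapse rate: {lapse_K_per_km:.1f} K/km "
      f"= {lapse_F_per_kft:.1f} F per 1000 ft")   # ~9.8 K/km, ~5.4 F per 1000 ft
```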

Reply to  Max
January 25, 2019 9:48 pm

Max — “The downward force in the column of air in a gravity well, creates heat at the bottom of the column.”

At a sea level measurement of 14.7 pounds per square inch, the weight of that column of air generates very little heat.

Gunnar Sttrandell
January 25, 2019 8:26 am

Willis,
In my department we made fatigue testing, measurements and simulations of heavy trucks.
And we had a saying:
“When test or measure results are presented no one believes in them except from the person that made them, and when simulation results are presented every one believes in them except the person who did them.”

It took us a huge amount of work to solve the problem by getting the different approaches into agreement.

Joe Campbell
January 25, 2019 8:49 am

Hey, Willis – or anybody:
Looking at the outputs of the many climate models, I generally see that all of them, but one, deviate more and more from the observations as calculations proceed from about 1990 on. The one that best matches data appears to be INM-CM4, which, I think, is a model produced in Russia. However, looking at the material in the reference you used in your post, that model has biases that are quite large in a number of areas. Got any thoughts on that? Luck?

Ferdberple
January 25, 2019 8:55 am

Willis you have hit the nail on the head. The models are back casting temperature while ignoring equally important climate metrics.

In many places temperature is less important than rainfall, for example. By tuning the models to a single metric the models know more and more about less and less.

January 25, 2019 1:43 pm

So downwelling solar radiation at the surface is very well measured at a number of sites over decades. And surprisingly, or perhaps unsurprisingly given their overall poor performance, the climate models do a really, really bad job of emulating even this most basic of variables—how much sunshine hits the surface.

Downwelling radiation from an object of low radiance to one of high radiance does not exist. It is akin to saying objects move uphill against the influence of the gravity field.

Measuring downwelling radiation is peak delusion as it does not exist. It is simply the response of a thermopile emitting EMR from its junctions to the cooler target object, based on the Stefan-Boltzmann equation. They are not measuring radiation, but inferring it from a junction cooling and the electrical potential difference to a reference junction:
https://www.sensorsmag.com/components/demystifying-thermopile-ir-temp-sensors

People who think that EMR will transfer energy against the electric field potential have no clue about the electric field or magnetic field we exist in. There is a kindergarten level explanation here:
http://www.irregularwebcomic.net/1420.html
The key clue is the derivation of the speed of energy transfer in a vacuum or other medium being a simple function of the permittivity and permeability (or electrical and magnetic properties) of the transfer medium. All matter affects the electric field and magnetic field at that speed; commonly known as the speed of light.

For those who have a better grasp of maths there is a mathematical proof that EMR energy can only flow in one direction at any point in time and space:
https://pdfs.semanticscholar.org/c03b/2b493f57e13d3c3e2b58d17c9656d2dee978.pdf

Anything that discusses downwelling radiation from a cool atmosphere to a warmer surface is pure drivel. It is unphysical claptrap.


Ed Bo
January 25, 2019 7:51 pm

RickWill:

Attempts at pedantry that stem from fundamental ignorance are especially irritating and only serve to make the attempter look foolish.

First of all, the post is about downwelling SOLAR radiation, which is most certainly NOT “radiation from an object of low radiance to one of high radiance”. So you have beclowned yourself in the very first sentence.

Your second paragraph just sinks you deeper. It is the logical equivalent of saying a mercury thermometer does not measure temperature; it is simply measuring the height of a column of mercury. (Hint: Essentially ALL measurements are indirect in this way.)

But your basic claim, irrelevant as it is to this post that you think you are responding to, is still fundamentally wrong.

You remind me of the kids in my science and engineering classes who would choose the most complex possible method of examining a problem, get lost in the math, and not come close to understanding the underlying physical phenomena.

First example: Let’s say you and I are in a dark room and we shine flashlights such that the beams cross each other at right angles. Your approach would be to calculate the electromagnetic field values and Poynting vectors where the beams intersect to try to predict the follow-on behavior of the beams.

Someone who really understands what is going on would realize that the nature of electromagnetic radiation is such that the two beams will simply pass right through each other, and that computing those properties where the beams intersect is a waste of time for understanding what happens to the beams afterwards.

Second example: In the same dark room, we now shine highly collimated flashlight beams directly at each other. One flashlight has low batteries, so has a filament of lower temperature and therefore lower “radiance” as you put it.

Once again you apply Maxwell’s equations and calculate Poynting vectors, concluding from these calculations that the resulting energy flow is from the brighter flashlight to the dimmer flashlight.

But what you don’t realize is that you have simply performed very difficult calculations to get to the same resulting net transfer as people who use the much simpler “radiative exchange” analysis using Einstein’s “photon gas” paradigm.

EVERY heat transfer textbook I have ever seen uses this “radiative exchange” analysis to introduce radiative heat transfer. Here’s one such textbook available free on-line. It is used to teach mechanical engineering students at MIT.

http://www.mie.uth.gr/labs/ltte/grk/pubs/ahtt.pdf

The quick explanation of exchange is on p32 in the overview chapter. A more detailed explanation is on p487 at the very start of the chapter dedicated to radiation heat transfer.

Both are very clear that radiation “from an object of low radiance to one of high radiance” does indeed exist, but it is just less than the radiation from an object of high radiance to one of low radiance.

Because you get lost in the complex math, and do not have a fundamental understanding of the underlying physical mechanisms, you do not realize that this method is just a different way of calculating the net energy transfer, and it is every bit as valid as your approach (and far superior in utility).
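If it helps to see the exchange bookkeeping in numbers, here is a minimal sketch (illustrative round-number temperatures of my own, not taken from the textbook): each surface radiates according to εσT^4, and only the net transfer runs from warm to cool.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def exchange(T_warm, T_cool, emissivity=1.0):
    """Two-surface radiative exchange per unit area (idealized, unit view factor)."""
    warm_to_cool = emissivity * SIGMA * T_warm**4   # e.g. surface toward atmosphere
    cool_to_warm = emissivity * SIGMA * T_cool**4   # e.g. atmosphere toward surface ("downwelling")
    return warm_to_cool, cool_to_warm, warm_to_cool - cool_to_warm

up, down, net = exchange(288.0, 255.0)   # illustrative temperatures only
print(f"up {up:.0f} W/m2, down {down:.0f} W/m2, net {net:.0f} W/m2")
# Both directional fluxes exist; the NET flow is from the warmer body to the cooler one.
```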

Vicus
Reply to  Ed Bo
January 26, 2019 11:19 am

You haven’t demonstrated where RickWill is wrong; you just wrote paragraphs about why you feel he is. You didn’t refute him.

January 26, 2019 7:13 am

For climate to be stable (not change):

“the total energy entering the system must balance the energy leaving the system”

Surely I’m right in thinking that this is the key to man-made climate change: the models say CO2 causes less OLR to be emitted. More CO2 in the atmosphere means a longer retention time for “trapped” LWIR, which causes less OLR to leave Earth (escape to space). Why does the data show the opposite?

https://ibb.co/hdcX8fZ

tom0mason
January 26, 2019 11:02 am

So the energy from the sun balances the energy radiated from the planet, eh?
So back in the 1600s that still held true(?), even though there were only about half a billion people on the planet, with all their agriculture.
Today there are 7¼ billion people with everything they need (and that’s expanded a lot!) and still it balances. Hey, that means the expansion of humanity (and everything it does) requires no energy from the sun.
And all those peat bogs, and all that organic matter that falls to the deep ocean abysses, matter that exchanges solar energy for chemical energy for many millennia, do not count?
The expansion and contraction of the totality of life on this planet (which expands and contracts mostly due to weather/climate effects) does not affect this wonderful energy balance either, as again it exchanges solar energy for chemical energy stored in organic chemical bonds.

Clever stuff this energy balance thing!

Frank
January 27, 2019 1:54 am

Willis: These errors in downward solar radiation could be unimportant, given how AOGCMs are used.

Step 1. The model is spun up (for ~150 years?) until incoming and outgoing radiation are in equilibrium and temperature is stable. In a zero-dimensional model:

(S/4)*(1-a) = eσT^4

So the model will contain compensating errors in albedo (a), effective emissivity (e), and T. We know that different models initialize to different temperatures (+/-2 K, or +/-6 W/m2 in eσT^4). This error is compensated by errors in a and e as the model is spun up.

To predict warming at equilibrium, we need to know how the planet’s radiative imbalance (Ri) changes with surface temperature – the climate feedback parameter (dRi/dT). In a zero-dimensional model:

Ri = (S/4)*(1-a) - eσT^4

dRi/dT = -(S/4)*(da/dT) - 4eσT^3 - (σT^4)*(de/dT)

-4eσT^3 is the Planck feedback. -(σT^4)*(de/dT) is the sum of the other LWR feedbacks: decreased emissivity due to rising water vapor, increased emissivity due to a falling lapse rate, and changing cloud-top altitude changing the effective emissivity. -(S/4)*(da/dT) is the sum of all SWR feedbacks. On paper, a model that gets the forcing correct and dRi/dT correct will calculate the correct amount of warming even if it starts with compensating errors in albedo, emissivity and temperature.
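To make that concrete, here is a toy spin-up of the zero-dimensional model (a sketch of my own, not an AOGCM; the values of S, a and e are round-number assumptions, and the feedback derivatives are set to zero so only the Planck term remains):

```python
SIGMA = 5.670374419e-8          # Stefan-Boltzmann constant, W/m^2/K^4
S, a, e = 1361.0, 0.30, 0.61    # assumed solar constant, albedo, effective emissivity

# Spin-up: solve (S/4)*(1-a) = e*SIGMA*T^4 for the equilibrium temperature
T_eq = ((S / 4) * (1 - a) / (e * SIGMA)) ** 0.25
print(f"Equilibrium T ~ {T_eq:.1f} K")

# Climate feedback parameter dRi/dT with assumed (illustrative) feedback derivatives
da_dT, de_dT = 0.0, 0.0                 # zero -> Planck-only response
planck = -4 * e * SIGMA * T_eq**3       # Planck feedback, W/m^2/K
dRi_dT = -(S / 4) * da_dT + planck - (SIGMA * T_eq**4) * de_dT
print(f"dRi/dT ~ {dRi_dT:.2f} W/m2/K")
```

Different compensating choices of a and e shift T_eq, but the warming response is set by the derivative terms.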

Frank
Reply to  Willis Eschenbach
January 27, 2019 4:49 pm

Thanks for the reply, Willis. Thinking simply, warming is controlled by two factors: forcing, which is moderately well understood for GHGs, and the climate feedback parameter – how much more LWR is emitted to space and how much more SWR is reflected to space per degK of surface warming. I’m happy to admit that it is very hard for climate models to correctly calculate a climate feedback parameter, but in my opinion that is unrelated to a model’s ability to spin up perfectly to the correct initial conditions. It is the derivatives (feedbacks) in my equations that are critical to warming in a simple ZDM.

Willis writes: “Not only that, but it is an iterated model, where the output at time t is used as the input for time t+1. And those are a bitch to even balance and keep from running off the rails, much less to simulate something.”

When discussing warming at steady state (ECS), it doesn’t make any difference what happened in all of those iterated steps. All you need to know is the temperature that restores a steady state between incoming and outgoing radiation. There are many problems you can solve in physics without calculating all of the intermediate steps along the way. For an object moving along some path in a gravitational field, you can integrate acceleration over time along the whole path to get a final velocity, or simply convert potential energy into kinetic energy and skip all of the intermediate steps. In thermodynamics, the fact that some quantities are state functions means you know how they will change regardless of the path taken between two states.

The rate of ocean heat uptake has a tremendous influence on how long it takes to reach a steady state, but not how much warming is needed. The same thing applies to melting of sea ice. Melting of ice caps and outgassing of CO2 from the deep ocean are problems because they won’t be complete before radiative balance has been restored.

Frank
Reply to  Willis Eschenbach
January 29, 2019 12:54 pm

Willis asked: “First, please point out to me in your simple (or complex) model the effect of the Constructal Law, which governs all flow systems that are far from equilibrium.”

Constructal Law began with calculations about heat transfer and has expanded into a philosophy for explaining a surprisingly large and diverse number of phenomena we observe. I personally have no idea where this stops being science and begins being philosophy. Based on your comments, I spent some time looking into Bejan with Google Scholar and decided to wait until Bejan is awarded a Nobel Prize before I use Constructal Law to reject otherwise reasonable science. That, of course, may turn out to be too conservative a position.

Frank
Reply to  Willis Eschenbach
January 29, 2019 2:51 pm

Willis also asked: “Second, please point out to me in your simple (or complex) model the effect of the emergence of thermoregulatory phenomena like say thunderstorms.”

I used to ask myself why warming in the tropics couldn’t be suppressed by running the Hadley cycle a little faster – carrying heat from the surface to the upper atmosphere, where it could easily escape to space. The thunderstorms of the ITCZ that form the heart of your Thermostat Hypothesis are the ascending branch of the Hadley circulation. Some of the air that rises descends locally and some makes the full Hadley circuit. Whether one views individual thunderstorms or the entire Hadley circulation, the fundamental question is: what controls the rate of the vertical heat flux that “air conditions” the tropics? Greater flux, lower surface temperature.

Your answer appears to be that surface temperature controls the amount of vertical heat flux. Eventually I decided this couldn’t be right, because vertical convection requires an unstable lapse rate. In that case, the rate at which heat escapes to space from the upper troposphere limits the amount of heat that can be convected upward.

We can express this idea in terms of a global or a local climate feedback parameter: IF 1 K of surface warming results in X W/m2 more LWR escaping to space, then X W/m2/K is the additional amount of heat that must also leave the surface. You can’t have more heat entering the atmosphere from below than leaves from the top indefinitely! Given that latent heat carries about 80 W/m2, a 1.25%/K increase in latent heat flux (and eventually in precipitation) would be consistent with sending an additional 1 W/m2/K to space as LWR.
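Spelling out that arithmetic (rough round numbers of my own, nothing more):

```python
latent_heat_flux = 80.0   # W/m2, approximate global-mean latent heat flux
extra_lwr_per_K  = 1.0    # W/m2 per K of additional LWR escaping to space

fraction_per_K = extra_lwr_per_K / latent_heat_flux
print(f"Implied latent-heat increase ~ {fraction_per_K:.2%} per K")   # ~1.25 % per K
```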

However, I’ve oversimplified the problem by ignoring reflection of SWR. The climate feedback parameter is based on the sum of the increased emission of LWR and reflection of SWR per degK of surface temperature rise. Surface warming can result in the vertical convection of 10 W/m2/K of latent heat if 9 W/m2/K of additional SWR is reflected back to space. According to your paper, there is a 60 W/m2 change in SWR in the ITCZ every day! That can allow a massive amount of latent heat to move upward – in a small unique area.

However, once you are above the cloud tops, the only thing left moving heat upward is LWR. At steady state, GLOBALLY, that flux must be about 240 W/m2 and currently is about 0.7 W/m2 short of balance according to ARGO. So it looks like warming over the past half century is sending 2 W/m2/K more OLR+OSR to space, not 1 W/m2/K. The former is equivalent to an ECS of about 3.6 K/doubling and the latter about 1.8 K/doubling. This is simply expressing the output from energy balance models in the terminology of a climate feedback parameter.
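The conversion I am using there is the usual back-of-envelope relation ECS = F_2x / lambda, with F_2x taken as roughly 3.7 W/m2 per CO2 doubling (an assumed standard value, not something from this thread):

```python
F_2X = 3.7   # assumed radiative forcing per CO2 doubling, W/m2

def ecs(feedback_param_w_m2_per_K):
    """Equilibrium warming per doubling for a given climate feedback parameter."""
    return F_2X / feedback_param_w_m2_per_K

for lam in (1.0, 2.0):
    print(f"lambda = {lam:.1f} W/m2/K  ->  ECS ~ {ecs(lam):.1f} K/doubling")
# lambda = 1.0 -> ~3.7 K and lambda = 2.0 -> ~1.9 K, i.e. the ~3.6 and ~1.8 quoted above
```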

Due to the low heat capacity of the NH, it warms and cools much more with the seasons than the SH, producing a 3.5 K seasonal warming in GMST. CERES shows a perfectly linear 2.2 W/m2/K increase in LWR accompanying this seasonal warming. Unfortunately, global warming is not seasonal warming, and the 2.2 W/m2/K mostly reflects changes from outside the tropics. Lindzen and Choi, and now Mauritsen and Stevens, have shown that the tropics show an LWR feedback of about 4 W/m2/K. Unfortunately, in all cases the SWR response is not linear with surface temperature; some components appear lagged by several months. AOGCMs don’t do a good job of reproducing the seasonal or tropical feedbacks we observe from space, so there is no reason to believe the ECS values they project.

The weakness of your thermostat hypothesis – as I understand it – is that it doesn’t take into account what happens to surface heat after it is convected up from the surface. Where does the descending air come from? When I take into account the need for an unstable lapse rate, I personally envision heat convected upward in response to a cool upper atmosphere, not pushed upward by a hot surface while ignoring the local lapse rate. Tropical islands certainly develop a hot surface during the daytime, but tropical oceans do not. And from what I read, there is more precipitation over tropical oceans at night than during the day. And a molecule of water remains in the atmosphere for an average of about 5 days between evaporation and precipitation, not less than a day as one might expect from a thermostat hypothesis limiting maximum temperature every day. In the ITCZ, however, the precipitation rate may indeed be high enough to remove all of the moisture in the column overhead every day, though most of that moisture appears to be swept into the ITCZ by trade winds after evaporating elsewhere. Obviously I don’t have a very clear idea of how individual intense thunderstorms are integrated into a larger picture of tropical climate as seen from the TOA.

You, of course, have a vast amount of personal experience with tropical weather, both on islands and in the open ocean – below the ITCZ with its deep convection, in other places where shallow convection predominates, and in places where it is dry for half the year. Is the temperature in all of these different locations controlled by your Thermostat Hypothesis, or does it apply to only a small fraction of the tropics?

Respectfully, Frank

Neogene Geo
January 29, 2019 1:59 am

I spent considerable time quantifying all of the Wild et al. fluxes in terms of the solar constant, atmospheric emissivity (both shortwave and longwave), albedo and non-radiative heat transfers. To solve for atmospheric longwave emissivity, we really need the value of the atmospheric window, and we need time-series observations to see changes in the atmospheric longwave emissivity, which is what the fuss is all about. No such observations exist, just a rough estimate currently around 20 W/m2. That is why you never see an estimate for this rather fundamental value in any of the Wild et al. papers of 2013, 2014 or 2015. Still, I have a lot of respect for Wild et al.’s attempts to quantify the magnitude of the fluxes. You should see in their 2015 paper that there is no attempt to hide the uncertainties in the fluxes, Willis. Why don’t YOU solve for the fluxes, if you want to dive in?

Frank
January 29, 2019 11:11 am

Willis wrote: “Finally, the climate models do NOT even have the ability to “spin up perfectly to the correct initial conditions”. Instead, they vary by as much as 3°C in the temperature that they spin up to … which is a difference in emitted radiation worldwide of about SIXTEEN WATTS PER SQUARE METRE!”

Thanks for posting the above Figure showing the range/error in PI temperature exhibited by AOGCMs: 13.5 +/- 1.5 degC. In terms of eσT^4 for a graybody model, that is roughly +/-5 W/m2, a ~2% error in the 240 W/m2 the Earth actually emits to space. That is about the same size as the error in the ability of CERES to measure the emission of 240 W/m2 of LWR! However, we are interested in measuring the change in flux, not the absolute flux, which is why you happily post CERES data despite absolute uncertainties this big.
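For the flux conversion, here is the linearization I have in mind (my own back-of-envelope, with representative temperatures and fluxes assumed, not values from the models):

```python
def flux_change(T, flux, dT):
    """Linearized change in e*sigma*T^4 for a temperature error dT, given the flux emitted at T."""
    return 4 * flux / T * dT   # since d(e*sigma*T^4)/dT = 4*e*sigma*T^3 = 4*flux/T

# +/-1.5 K spread at the ~255 K effective emission level (~240 W/m2 to space)
print(f"{flux_change(255.0, 240.0, 1.5):.1f} W/m2")   # ~5.6 W/m2
# 3 K spread at a ~288 K surface emitting ~390 W/m2
print(f"{flux_change(288.0, 390.0, 3.0):.1f} W/m2")   # ~16 W/m2, Willis's figure
```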

Let’s imagine a graph for RCP6.0, where radiative forcing reaches a plateau (in 2060?) and the climate then approaches steady-state warming over the next century or so. (ARGO shows us that we are about 70% of the way to steady state: the current forcing is 2.5 W/m2 and the current imbalance 0.7 W/m2.) What will this graph look like? Each of these curves should have reached a plateau, a new steady state. The only thing important to me is how much higher those plateaus will be – the steady-state WARMING. Think of this as a linear fit: can you obtain a reasonable slope if the y-intercept in your data is wrong? Yes; if the y-intercept is consistently wrong, you can still get the right slope.
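A trivial demonstration of the y-intercept point (synthetic numbers, purely illustrative):

```python
import numpy as np

x = np.arange(10.0)
y_true   = 2.0 * x + 1.0    # "correct" series: slope 2, intercept 1
y_biased = y_true + 5.0     # same series with a consistent offset error

slope_true,   _ = np.polyfit(x, y_true,   1)
slope_biased, _ = np.polyfit(x, y_biased, 1)
print(slope_true, slope_biased)   # both ~2.0: a constant bias leaves the slope untouched
```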

Do the errors in predicting PI conditions say anything definitive about our ability to predict WARMING – to predict the climate feedback parameter? As I showed above, the climate feedback parameter arises from de/dT and da/dT, the LWR and SWR feedbacks. These derivatives are not directly involved in spinning up to the right PI temperature. Observations prove to me that AOGCMs get the feedbacks wrong, so I’m not going to waste my time focusing on less definitive issues.