The Trouble with Global Climate Models

Guest essay by Rud Istvan

The IPCC’s Fifth Assessment Report, Working Group 1 (AR5 WG1) Summary for Policy Makers (SPM) was clear about the associated Coupled Model Intercomparison Project (CMIP5) archive of atmosphere/ocean general circulation models (AOGCMs, hereafter just GCM). CMIP5 results are available via the Royal [Koninklijk] Netherlands Meteorological Institute (KNMI). The SPM said about CMIP5:

§D.1 Climate models have improved since the AR4. Models reproduce observed continental-scale surface temperature patterns and trends over many decades, including the more rapid warming since the mid-20th century and the cooling immediately following large volcanic eruptions (very high confidence).

§D.2 Observational and model studies of temperature change, climate feedbacks and changes in the Earth’s energy budget together provide confidence in the magnitude of global warming in response to past and future forcing.

 

Neither statement is true, as the now infamous CMIP5/pause divergence proves (illustrated below). CO2 continued to increase; temperature didn’t.

The interesting question is why. One root cause is so fundamentally intractable that one can reasonably ask how the multibillion-dollar climate model ‘industry’ ever sprang up unchallenged.[1]

GCMs are the climate equivalent of engineering’s familiar finite element analysis (FEA) models, used these days to help design nearly everything, from bridges to airplanes to engine components (solving for stress, strain, flexure, heat, fatigue, and so on).

[Figure]

In engineering FEA, the input parameters are determined with laboratory precision by repeatedly measuring actual materials. Even non-linear ‘unsolvables’ like Navier-Stokes fluid dynamics (aircraft airflow and drag, modeled using the CFD subset of FEA) are parameter-verified in wind tunnels, as car and airplane designers actually do with full-scale and scale models.

[Figure]

That is not possible for Earth’s climate.

GCMs cover the world in stacked grid cells (engineering’s finite elements). Each cell has some set of initial values. Then a change (like IPCC RCP8.5’s increasing CO2) is introduced (no different in principle from the way increased traffic loading increases bridge component stress, or increased aircraft speed increases frictional heating), and the GCM calculates how each cell’s values change over time.[2] The calculations are based on established physics like the Clausius-Clapeyron equation for water vapor, radiative transfer by frequency band (aka the greenhouse effect), or the Navier-Stokes fluid dynamics equations for convection cells.
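To make the per-cell, per-time-step mechanics concrete, here is a deliberately tiny sketch in Python. It is purely illustrative: a single scalar field on a one-dimensional ring of cells, advanced with a simple upwind-advection-plus-diffusion update. The grid, constants, and update rule are toy choices, not any actual GCM code.

```python
# Toy illustration (not actual GCM code): explicit time steps updating a single
# scalar field on a ring of grid cells, the same pattern a GCM applies, cell by
# cell and step by step, to temperature, humidity, winds, etc.
import numpy as np

ncells = 72                    # toy 1-D "equatorial ring" of cells
dx = 40_000e3 / ncells         # cell width in metres (~555 km here)
dt = 1800.0                    # 30-minute time step, as in CMIP5
u = 10.0                       # advection speed (m/s), e.g. a zonal wind
kappa = 1.0e5                  # eddy diffusivity (m^2/s): a crude "parameterization"

# Initial values: a warm/cool pattern around the ring.
T = 288.0 + 5.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, ncells, endpoint=False))

def step(T):
    """One explicit upwind-advection + diffusion update with periodic boundaries."""
    left, right = np.roll(T, 1), np.roll(T, -1)
    advection = -u * (T - left) / dx
    diffusion = kappa * (right - 2.0 * T + left) / dx ** 2
    return T + dt * (advection + diffusion)

for _ in range(48):            # one model day of 30-minute steps
    T = step(T)
print(T.round(2))
```

A real GCM does this kind of update for dozens of coupled fields in three dimensions, which is where the computational cost discussed below comes from.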

The CMIP5 archive used up to 30 vertically stacked atmospheric cells, up to 30 stacked ocean cells, and time steps as fine as 30 minutes according to UCAR.

[Figure]

CMIP5 horizontal spatial resolution was typically ~2.5° of latitude/longitude at the equator (about 280 km). The finest CMIP5 horizontal resolution was ~1.1°, or about 110 km. That limit was imposed by computational constraints. Doubling resolution by halving the grid cell in both x and y quadruples the number of cells. It also roughly halves the time step, because of the Courant-Friedrichs-Lewy (CFL) condition. (Explaining CFL for numerically solved partial differential equations is beyond the scope of this post.) Doubling resolution to a ≈55 km grid is therefore roughly 4 x 2 = 8 times as computationally intensive. The University Corporation for Atmospheric Research (UCAR) says the GCM rule of thumb for 2x spatial resolution is 10x the computational requirement: one order of magnitude per doubling of resolution.
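A quick back-of-envelope check of that rule of thumb, restated in code (illustrative only; the function and numbers below are just the scaling argument above):

```python
# Halving the grid spacing quadruples the cell count (x and y) and roughly
# halves the time step (CFL), i.e. ~8x work per doubling; UCAR's rule of thumb
# rounds this to ~10x once extra vertical levels and model physics are included.
import math

def cost_factor(coarse_km, fine_km, per_doubling=10.0):
    """Relative compute cost of refining from coarse_km to fine_km resolution."""
    doublings = math.log2(coarse_km / fine_km)
    return per_doubling ** doublings

print(cost_factor(110, 55))    # one doubling    -> ~10x
print(cost_factor(110, 25))    # ~2.1 doublings  -> ~10^2 (two orders of magnitude)
print(cost_factor(110, 1.5))   # ~6.2 doublings  -> ~10^6 to 10^7
```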

The spatial resolution of modern weather models is necessarily much finer. The newest UK Met Office weather supercomputer (installed in 2012) and its associated models use a rough NAE scale of 25 km for things like pressure (wind) gradients and frontal boundaries, and a fine UKV scale of 1.5 km for things like precipitation (local flood warnings). As their website proudly portrays:

[Figure: UK Met Office model resolution graphic]

This is possible because UK Met Office weather models simulate only the UK region out to a few days, not the planet out to many decades. Simulating ΔT out to 2100 on the ‘coarse’ 25 km Met weather grid is about two grid halvings away, roughly two orders of magnitude beyond present capabilities by the UCAR rule of thumb. Simulating out to 2100 at a 1.5 km resolution fine enough to resolve tropical convection cells (and their ‘Eschenbach’ consequences) is six to seven halvings away (110 → 55 → 27 → 14 → 7 → 3.4 → 1.7 km), roughly seven orders of magnitude beyond present computational capabilities. Today’s best supercomputers take about two months for a single GCM run (fifty continuous days is typical, per UCAR). A single 1.5 km run would take on the order of 1.4 million years. That is why AR5 WG1 Chapter 7 said (concerning clouds, at §7.2.1.2):

“Cloud formation processes span scales from the submicron scale of cloud condensation nuclei to cloud system scales of up to thousands of kilometres. This range of scales is impossible to resolve with numerical simulations on computers, and is unlikely to become so for decades if ever.”
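Turning the resolution arithmetic above into a rough wall-clock estimate, using the ~50-day single-run figure (illustrative; the exact number depends on whether the ~6.2 doublings from 110 km to 1.5 km are counted strictly or rounded up to seven full orders of magnitude, but it is utterly infeasible either way):

```python
# Rough wall-clock sketch from the ~50-day-per-run figure and UCAR's
# ~10x-per-doubling rule of thumb.  Illustrative arithmetic only.
import math

BASE_RUN_DAYS = 50.0         # typical CMIP5-class run at ~110 km (per UCAR)
PER_DOUBLING = 10.0

def run_years(target_km, base_km=110.0):
    """Estimated wall-clock time, in years, for one run at target_km resolution."""
    doublings = math.log2(base_km / target_km)
    return BASE_RUN_DAYS * PER_DOUBLING ** doublings / 365.25

print(f"25 km grid:  ~{run_years(25):,.0f} years per run")
print(f"1.5 km grid: ~{run_years(1.5):,.0f} years per run")
# Rounding the ~6.2 doublings from 110 km to 1.5 km up to a full seven orders
# of magnitude gives the ~1.4 million year figure cited above.
```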

The fundamentally intractable GCM resolution problem is nicely illustrated by a thunderstorm weather system moving across Arizona. Cells of 110 km × 110 km are the finest resolution computationally feasible in CMIP5, and they are useless for resolving convection processes.

[Figure: thunderstorm weather system moving across Arizona]

Essential climate processes like tropical convection cells (thunderstorms), which release latent heat of evaporation into the upper troposphere (where it has an easier time escaping to space) and whose precipitation removes water vapor and lowers that feedback, simply cannot be simulated by GCMs. Sub-grid-cell climate phenomena cannot be simulated from the physics. They have to be parameterized.

And that is a second intractable problem. It is not possible to parameterize correctly without knowing attribution: how much of observed past change is due to greenhouse gases, and how much to some ‘natural’ variation. IPCC’s AR5/CMIP5 parameter attribution was mainly to AGW (per the SPM):

§D.3 This evidence for human influence has grown since AR4. It is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century.

CMIP5 parameterizations were determined in two basic ways. Since 2002, the DoE has sponsored the CAPT program, which runs GCMs for a few days (at coarse resolution) and makes multiple short-term comparisons against their numerical weather prediction brethren and against actual observed weather. The premise is that short-term GCM divergence from the weather models must be due to faulty parameterization, which the weather models don’t need as much.[3] This works well for ‘fast’ phenomena like a GCM mistakenly splitting the ITCZ in two within two days (the illustration in the cited paper), but not for ‘slow’ phenomena like changes in upper-troposphere humidity or cloud cover as CO2 rises over time.

The second way is to compare parameterization results to longer-term observational data at various time scales and ‘tune’ the parameters to reproduce the observations. This was the approach taken by the NOAA MAPP CMIP5 Task Force.[4] It is very difficult to tune for factors like changes in cloud cover, albedo, SST, or summer Arctic sea ice, for which there is little good long-term observational data to compare against. And the tuning still requires assuming some attribution linkage between the process (model), its target phenomenon output (e.g. cloud cover, Arctic ice), and the observations.
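A minimal, hypothetical sketch of what ‘tuning’ means in practice, and of the attribution assumption it embeds (a one-parameter toy in Python, not the NOAA MAPP procedure; the forcing ramp and ‘observations’ below are made up for illustration):

```python
# Hypothetical one-parameter "tuning" illustration (not the NOAA MAPP procedure).
# The least-squares fit attributes ALL of the observed 1975-2005 trend to the
# assumed forcing F(t), which is exactly the attribution assumption at issue.
import numpy as np

years = np.arange(1975, 2006)
forcing = 0.03 * (years - 1975)          # assumed GHG forcing ramp (W/m^2)
rng = np.random.default_rng(0)
observed = 0.018 * (years - 1975) + rng.normal(0.0, 0.05, years.size)  # toy "observations" (K)

def model(lam):
    """Toy zero-dimensional response: delta-T = lambda * F."""
    return lam * forcing

# "Tune" lambda by least squares against the toy observations.
lam_best = np.dot(forcing, observed) / np.dot(forcing, forcing)
print(f"tuned sensitivity parameter: {lam_best:.2f} K per (W/m^2)")
print(f"rms misfit: {np.sqrt(np.mean((observed - model(lam_best)) ** 2)):.3f} K")

# If part of the 1975-2005 warming was natural, this lambda is biased high, and
# the tuned model will run hot once the natural component swings the other way.
```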

[Figure]

CMIP5 parameterizations were tuned to hindcast temperature as well as possible from 2005 back to about 1975 (the mandatory three-decade hindcast), as explained by the CMIP5 experimental design itself.[5] This is circumstantially evident from the ‘goodness of fit’.

[Figure: CMIP5 hindcast versus observed temperatures]

Assuming mainly anthropogenic attribution means GCMs were (with pause hindsight) incorrectly parameterized. So they now run hot as the assumed-away ‘natural’ variation swings toward cooling, as it did from about 1945 to about 1975. This was graphically summarized by Dr. Akasofu, former head of the International Arctic Research Center, in 2010, and ignored by IPCC AR5.[6]
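Akasofu’s schematic amounts to a slow, roughly linear recovery from the Little Ice Age plus a multidecadal (~60-year) oscillation. In toy form (the coefficients below are illustrative placeholders, not Akasofu’s fitted values):

```python
# Illustrative form of Akasofu's schematic: a slow linear recovery from the
# Little Ice Age plus a ~60-year multidecadal oscillation.  The coefficients
# are placeholders for illustration, not Akasofu's fitted values.
import numpy as np

def akasofu_like(year, trend=0.005, amp=0.1, period=60.0, peak_year=2000.0):
    """Temperature anomaly (K): slow linear recovery + multidecadal oscillation."""
    return trend * (year - 1880) + amp * np.cos(2.0 * np.pi * (year - peak_year) / period)

# Oscillation peaks near 1940 and 2000 and bottoms near 1970 and 2030: it adds
# to the warming during roughly 1970-2000 and subtracts from it afterwards.
print(round(float(akasofu_like(2000.0)), 2))   # near a peak
print(round(float(akasofu_like(2030.0)), 2))   # near a trough: the "pause" region
```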

Akasofu’s simple idea also explains why Arctic ice is recovering, to the alarm of alarmists. DMI ice maps and Larsen’s 1944 Northwest Passage transit suggest a natural cycle in Arctic ice, with a trough in the 1940s and a peak in the 1970s. Yet Arctic ice extent was not well observed until satellite coverage began in 1979, around a probable natural peak. The entire observational record until 2013 may be just the decline phase of some natural ice variation. The recovery in extent, volume, and multiyear ice since 2012 may be the beginning of a natural 35-year-or-so ice buildup. But the GCM attribution is quite plainly to AGW.

[Figure]

Almost nobody wants to discuss these fundamentally intractable problems with GCMs. Climate models unfit for purpose would be very off-message for those who believe climate science is settled.


References:

[1] According to the congressionally mandated annual FCCE report to Congress, the US alone spent $2.66 billion in 2014 on climate change research. By comparison, the 2014 NOAA NWS budget for weather research was $82 million, only about three percent of what was spent on climate change. FUBAR.

[2] What is actually calculated are values at cell corners (nodes), based on the cell’s internals plus the node’s adjacent cells internals.

[3] Phillips et al., Evaluating Parameterizations in General Circulation Models, BAMS 85: 1903-1915 (2004).

[4] NOAA MAPP CMIP5 Task Force white paper, available at cpo.NOAA.gov/sites.cop/MAPP/

[5] Taylor et al., An Overview of CMIP5 and the Experimental Design, BAMS 93: 485-498 (2012).

[6] Akasofu, On the recovery from the Little Ice Age, Natural Science 2: 1211-1224 (2010).

Comments
Mike from the cooler side of the Sierra
August 9, 2015 6:41 am

and then the weather changed….

RickA
August 9, 2015 6:45 am

Very nice and clear explanation. Thank you.

John
Reply to  RickA
August 9, 2015 10:38 am

I agree, great article.
This paper evaluates the 24 models from worst to best and indicates that none of the models or processes are being quantitatively evaluated.
Advances in Climate Change Research
Volume 4, Issue 3
A Review on Evaluation Methods of Climate Modeling
Zhao Zong-Ci, et al
Open access: found on sciencedirect.com

Bill Illis
August 9, 2015 6:51 am

The Earth receives …
– 16,029,162,711,741,600,000,000,000,000,000,000,000,000
photons of energy from the Sun every day; and,
emits back to space …
– 80,145,813,558,708,100,000,000,000,000,000,000,000,000
IR photons of energy every day.
These photons are moving through,
– 108,980,000,000,000,000,000,000,000,000,000,000,000,000,000.000
molecules in the air each day, and even more than that on the ground surface each day.
You can pretend that 5,000 grid boxes are capable of simulating how all those energy packets move through all those molecules every 24 hours but, more than likely, the simulation would be completely wrong.

TonyL
Reply to  Bill Illis
August 9, 2015 7:04 am

I think you miscounted, could you recount and make sure you did not miss any?

MarkW
Reply to  TonyL
August 9, 2015 8:01 am

There’s one he missed.

Being and Time
Reply to  Bill Illis
August 9, 2015 7:29 am

The term “photons of energy” has no meaning. A photon is not a unit of energy, it is a quantum of electromagnetic radiation that may be equivalent to an arbitrarily high (or low) amount of energy.

Pamela Gray
Reply to  Being and Time
August 9, 2015 8:38 am

Yes. I hate it when skeptics go off the rails to attempt a counterpoint worth a damn. And the photon counterpoint ain’t worth a plugged nickel.

MikeB
Reply to  Being and Time
August 9, 2015 9:01 am

Well Pamela, it is important to get details right, if you can. Unfortunately ‘Being and Time’ jumped in and got it wrong.

MikeB
Reply to  Being and Time
August 9, 2015 9:14 am

Oh my God. Now hockeyschtick has jumped in with some sky dragon nonsense.
A blackbody absorbs ALL radiation falling on it!!!!!! You could say that is part of the definition of a blackbody.
Electromagnetic radiation transports energy. When radiation is absorbed the energy it carries is also absorbed. The blackbody will absorb it all, regardless of where it came from – BY DEFINITION.

Being and Time
Reply to  Being and Time
August 9, 2015 9:22 am

What are you talking about, MikeB? I didn’t get anything wrong.

Harold
Reply to  Being and Time
August 9, 2015 9:55 am

Nope. You flunk HS physics.

Reply to  Being and Time
August 9, 2015 10:00 am

MikeB you clearly don’t understand that the Pauli exclusion principle COMPLETELY EXCLUDES a lower frequency/quantum-energy photon from being thermalized by the completely saturated lower quantum energy states. Please do yourself a huge favor and read chapter 4 in this textbook, as well as the chapter on thermodynamics. It is a very well-written text which confirms a cut-off frequency for thermalization and Pauli excl principle multiple times and just as I’ve stated above.
The Pauli exclusion principle of quantum mechanics prevents any quantum energy transfer/thermalization/increased frequency/temperature/energy from a lower-quantum-energy /frequency photon to a higher frequency/temperature/energy blackbody because ALL of those lower E microstates (eg vibration of bonds) and molecular and atomic orbitals are already completely saturated with a maximum of two electrons each. Any photon with insufficient quantum E is reflected or absorbed with simultaneous re-emission of an exactly-equivalent photon with no gain in energy by the hotter blackbody.

Sturgis Hooper
Reply to  Being and Time
August 9, 2015 10:01 am

Pamela,
You and Being are both wrong.
As usual (I’d say always, but haven’t read everything by him), Bill is right.

MikeB
Reply to  Being and Time
August 9, 2015 10:04 am

Being and Time
Maybe I was a little bit unkind, but there is nothing wrong with Bill’s ‘photons of energy’.
Photons carry energy! That is the point. The energy each one carries is not arbitrary. It is precise and is equal to the frequency of the radiation (v) multiplied by Planck’s constant (h). Thus each photon carries energy ‘hv’.
I thought complaining about Bill’s ‘photons of energy’ was unnecessary semantics, especially when the meaning was clear. So perhaps I am agreeing with Pamela after all.

Sturgis Hooper
Reply to  Being and Time
August 9, 2015 10:09 am

Bill specifies IR energy.

Reply to  Being and Time
August 9, 2015 10:34 am

“How many overtones does a CO2 molecule have?”
CO2 is a linear symmetric triatomic molecule with two double bonds, thus, unlike the bent triatomic molecule H2O, does not have any bending transitions. With respect to the ~15um IR emission/absorption that comes from CO2, it is entirely from vibrational microstates, i.e. the distance between nuclei. Each bond consists of a shared molecular orbital which can contain a maximum of 2 entangled electrons of opposite spin (Pauli exclusion principle). Since there are two double bonds that would be a total of 8 electrons sharing the molecular orbitals, but they all have to be of the same energy since they are double bonds and CO2 is a symmetric, linear, non-polar molecule.

Bob Weber
Reply to  Being and Time
August 9, 2015 11:01 am

Did you miss a few physics classes? The energy and momentum of a photon [the quantum of EM radiation] depend only on its frequency (ν) or inversely, its wavelength (λ): E=hv=hc/λ
To the contrary, nothing arbitrary about that!
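For reference, the E = hν = hc/λ relation quoted in the comments above is easy to evaluate numerically; a quick sketch (the two example wavelengths are illustrative choices):

```python
# Quick numerical check of E = h*nu = h*c/lambda for the photons discussed here.
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s

def photon_energy_joules(wavelength_m):
    """Energy of a single photon of the given wavelength."""
    return h * c / wavelength_m

print(photon_energy_joules(0.5e-6))   # ~4.0e-19 J, a visible solar photon
print(photon_energy_joules(15e-6))    # ~1.3e-20 J, a 15-micron IR photon
```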

Being and Time
Reply to  Being and Time
August 9, 2015 2:45 pm

MikeB, Sturgis Hooper, and Bob Weber,
You guys are either deliberately misreading what I wrote, or are engaging in useless pettifogging, or are being ridiculous.
There is no such thing as a “photon of energy.” Period. I know that photons are quanta of electromagnetic radiation and that they consequently carry energy, but to say “photons of energy,” and to give a coefficient along with it, is a construction similar to saying newtons of force or volts of potential. Since a photon is not a unit or a measurement, this construction makes no sense. Specifying a number of photons tells us nothing about energy.
Furthermore (Bob Weber), I never said that the energy of a particular photon (de re) was arbitrary. I know how the energy of a photon is determined, and so does everyone else who can type the words “wikipedia” and “photon.” But “the energy of a photon” (de dicto) is an undetermined quantity, since we are not talking about a particular photon in that case.
All of this is pretty simple; a child could understand it. All of this is abundantly clear from the context and from what I originally wrote. So why are you picking a fight with me? Do you think you possess some superior learning that you’re anxious to show off? If you have nothing better to do than to split hairs like a pedant, then do it amongst yourselves. Do not use me as a foil for your asinine charades, and do not call me wrong when I am not.

rogerknights
Reply to  Being and Time
August 9, 2015 8:07 pm

“As usual (I’d say always, but haven’t read everything by him), Bill [Illis] is right.”
I’ve noticed that too.

Reply to  Being and Time
August 10, 2015 2:12 am

“Oh my God. Now hockeyschtick has jumped in with some sky dragon nonsense.
A blackbody absorbs ALL radiation falling on it!!!!!! You could say that is part of the definition of a blackbody.”
~ MikeB,
The fellow who posts here and uses the nickname “hockeyschtick” has no relationship to the group who wrote the book about killing off that dragon. On the contrary, he expounds the physics and understandings that was the consensus in science from at least the 50s to the 70s. The US Standard Atmosphere Model developed by the best and brightest during the space race was crucial to get right for aviation and space exploration. See here: http://hockeyschtick.blogspot.com/2014/12/why-us-standard-atmosphere-model.html
And please stop with the strawman crap. No one said that there were molecules that don’t radiate. And, no one agreed that the earth is a black body. It is not ya know — cause I can see the darn thing. So, try to argue your side correctly please.

Reply to  Being and Time
August 11, 2015 9:58 am

micro6500 August 10, 2015 at 1:10 pm
The 15 μm band of CO2 is the 𝜈2, or bending mode so as usual ‘the schtick’ is wrong again!
Phil, thanks! I learned something new.

Good, glad to be of help.
But actually I think HS is sort of right, there’s 2 orbitals at 15u, and once those energy levels are full without emitting a 15u photon, it’s not absorbing any more 15u photons. That’s why there are 2 identical energy levels associated with CO2, one for each C-O bond.
No he’s not, also you appear to have picked up some of his errors.
As said above a triatomic like CO2 has 4 vibrational modes. Assume the CO2 molecule is in the plane of your screen.
In the first case the O atoms move away from the central C atom simultaneously in the plane of the screen, this is the symmetrical vibrational mode. Because it maintains symmetry it has no dipole and therefore is not IR active.
In the second case the O atoms move in the same direction, so as the C-O bond on one side shortens the other one lengthens, this is the asymmetrical mode. Because symmetry is not preserved there is a dipole and the mode is IR active at around 2349 cm-1 (4.26 micron) with its associated rotational fine structure.
The remaining two modes are the bending modes (which ‘the schtick’ mistakenly asserts don’t exist!)
One mode has the two O atoms moving vertically wrt the central C atom, the other mode has them moving in and out of the screen. Energetically both modes are identical which is why they are termed ‘degenerate’. Both have a dipole and therefore are IR active at around 667 cm-1 (14.98 micron) with their associated fine structure.
Once the ground state absorbs a 15 micron photon it’s quite long lived and can absorb another photon and be promoted to yet another energy level. So for example the gs absorbs a 667.4 cm-1 photon it can then absorb a 667.8 cm-1 photon (Q-branch) or a 618.0 cm-1 or a 720.8 cm-1 (you’ll see those two strong lines on either side of the 15 micron band). Those are only the pure vibrational lines, there are many associated rotational lines which comprise the whole band.
Hope that helps.
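For reference, the band positions cited above can be checked with the standard wavenumber-to-wavelength conversion, λ in microns = 10,000 / wavenumber in cm⁻¹; a quick sketch:

```python
# Wavenumber <-> wavelength check for the CO2 band positions cited above.
def wavenumber_to_micron(wavenumber_cm1):
    """Convert a wavenumber in cm^-1 to a wavelength in microns."""
    return 1.0e4 / wavenumber_cm1

for nu in (667.0, 2349.0, 618.0, 720.8):
    print(f"{nu} cm^-1 -> {wavenumber_to_micron(nu):.2f} micron")
# 667 cm^-1 is ~15.0 micron (bending mode) and 2349 cm^-1 is ~4.26 micron
# (asymmetric stretch), matching the figures quoted in this thread.
```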

Reply to  Bill Illis
August 9, 2015 8:59 am

Simply counting photons without knowing their quantum energy content and the quantum energy content of their target doesn’t tell you anything about the energy that will be transferred. Photons have a wavelength & frequency (v) and thus a discrete quantum energy content E=hv. A low frequency/energy photon (eg 15um CO2 photons) cannot transfer any quantum energy to a higher frequency/energy/temperature blackbody because all of those lower frequency/energy microstates & orbitals are already completely filled/saturated in the hotter body. This fact alone from quantum mechanics falsifies CAGW.

MikeB
Reply to  hockeyschtick
August 9, 2015 9:19 am

Posted in the wrong place, but it’s worth repeating.
A blackbody absorbs ALL radiation falling on it!!!!!! You could say that is part of the definition of a blackbody.
Electromagnetic radiation transports energy. When radiation is absorbed the energy it carries is also absorbed. The blackbody will absorb it all, regardless of where it came from – BY DEFINITION.

Reply to  hockeyschtick
August 9, 2015 9:36 am

What you apparently don’t understand is that the Pauli exclusion principle of quantum mechanics prevents any quantum energy transfer/thermalization/increased frequency/temperature/energy from a lower-quantum-energy /frequency photon to a higher frequency/temperature/energy blackbody because ALL of those lower E microstates (eg vibration of bonds) and molecular and atomic orbitals are already completely saturated with a maximum of two electrons each. This is basic quantum theory extensively explained in this freshman chemistry textbook:
http://hockeyschtick.blogspot.com/2015/08/why-pauli-exclusion-principle-of.html

Reply to  hockeyschtick
August 9, 2015 9:41 am

HS – at TOA, there is a big chunk missing from the outgoing spectrum due to IR absorbing gases. Where does that energy go?
It’s not getting out in the IR. But, it must be accounted for. The GHE hypothesis says it heats the surface, raising the temperature, and thereby allowing the energy to escape at different frequencies when the distribution shifts.
I believe this is true, up to a point. But, I believe the function describing the heating is not monotonic. I.e., just because a given amount of GHG causes surface heating, that does not mean that an incremental additional amount will necessarily produce an incremental increase in surface temperatures. It is the difference between the secant line and the tangent line in this plot:
http://i1136.photobucket.com/albums/n488/Bartemis/co2surftemp_zpsd400ef15.jpg
The bulk sensitivity (secant line) always has a positive slope, but the incremental sensitivity (slope of tangent line) can go negative.
I believe that two ways this can happen are through convective overturning, and increased cloud albedo.

MikeB
Reply to  hockeyschtick
August 9, 2015 10:15 am

“…the Pauli exclusion principle of quantum mechanics prevents any quantum energy transfer/thermalization/increased frequency/temperature/energy from a lower-quantum-energy /frequency photon to a higher frequency/temperature/energy blackbody”
This is gibberish that would make Doug Cotton proud. What are all those forward slashes? Are you just hoping that one of the words you don’t understand may have some relevance to this discussion?
“The most reliable sign of truth is simplicity and clarity. Lie is invariably complicated, gaudy and verbose.” – Leo Tolstoy
It is really quite simple. So let me repeat for a 3rd time
A blackbody absorbs ALL radiation falling on it!!!!!! You could say that is part of the definition of a blackbody.
Electromagnetic radiation transports energy. When radiation is absorbed the energy it carries is also absorbed. The blackbody will absorb it all, regardless of where it came from – BY DEFINITION.
No’ ifs’ no ‘buts’ no Pauli exclusion principle.

Reply to  hockeyschtick
August 9, 2015 10:24 am

“HS – at TOA, there is a big chunk missing from the outgoing spectrum due to IR absorbing gases. Where does that energy go?
It’s not getting out in the IR. But, it must be accounted for. The GHE hypothesis says it heats the surface, raising the temperature, and thereby allowing the energy to escape at different frequencies when the distribution shifts.”
Hi Bart, the CO2 + H2O “hole” in the OLR spectra is not “trapping” IR, at most it delays IR photons from the surface by a few milliseconds on their way to space, and this few millisecond delay is reversed at night for no net effect other than a small thermal transfer delay.
What we see in the OLR spectra is the overlapping line spectra of mostly vibrational transitions of CO2 & H2O centered around 15um for the Earth’s outgoing LWIR. By Wien’s law, ~15um “corresponds” to a blackbody at a peak radiating temperature of ~217K, which is limited to that by their molecular bonding structure (also called molecular orbitals which share electrons between atoms). Since this emitting temperature of the ~15um IR from CO2+H2O is a constant and not dependent upon concentration, the “blackbody” emitting temps of GHGs is not unlimited as falsely assumed by Manabe et al, Hansen, and even today’s models. The ~217K BB Planck curve where it overlaps with the “chunk” in the OLR is shown in this marked up diagram
http://3.bp.blogspot.com/-70jSecyxJIE/VIYNXE3AqFI/AAAAAAAAG74/jQvLk0q68-8/s1600/spectra%2BGHGs%2BAnnotated%2BFinal%2BLapse%2BRate%2B4.jpg
from this post on why GHE theory confuses the cause with the effect
http://hockeyschtick.blogspot.com/2014/12/why-man-made-global-warming-theory.html
The CO2 + H2O pseudo-blackbodies have an emitting temp seen by that spectra of ~217K, which is colder than any other part of the atmosphere, and thus cannot warm the warmer parts of the atmosphere with low-quantum-E ~15um photons, which just serve as passive IR radiators to low-temperature space.
A pure N2 atmosphere without GHGs would be warmer than our current atmosphere:
http://hockeyschtick.blogspot.com/2014/11/why-greenhouse-gases-dont-affect.html

JT
Reply to  hockeyschtick
August 9, 2015 10:26 am

“all of those lower frequency/energy microstates & orbitals are already completely filled/saturated in the hotter body”
But does that actually matter? As I understand it: The amount of energy in an emitted photon depends on the DIFFERENCE between the energy of the emitting electron before the emission and the energy of the emitting electron after the emission, not the absolute value of the energy state of the electron from which the photon was emitted. Correspondingly, the possibility of a photon being absorbed depends on the absorbing electron having available to it an energy TRANSITION from a lower energy state to a higher energy state such that the transition is equal in energy to the energy of the photon to be absorbed. Those energy differences between the bound electron states diminish in magnitude as the absolute energy of the states increases. In metals, the differences between adjacent energy states of the mobile electrons are so small that the emission spectrum becomes virtually continuous. That would appear to imply that a high energy bound electron in a metal (at least) could very easily absorb a low energy incident photon by simply shifting to a slightly higher energy state, of which it has available to it a very large number. Now, how long it would remain in that state before re-emitting said low energy photon is another question.

gammacrux
Reply to  hockeyschtick
August 9, 2015 10:27 am

No, hockeyschtick you’re definitively and utterly wrong. It’s not a matter of Pauli exclusion principle but of statistical mechanics that tells us that whatever the temperature near equilibrium there are always much less occupied excited states than occupied ground states and so absorption readily takes place.
Only in a completely out of equilibrium media with ideally completely inverted populations (such as in a laser) would any absorption at the relevant frequency be blocked.

Reply to  hockeyschtick
August 9, 2015 10:40 am

hockeyschtick @ August 9, 2015 at 10:24 am
“…at most it delays IR photons from the surface by a few milliseconds on their way to space”
If that were true, there would be no gap in the spectrum at TOA.

Reply to  hockeyschtick
August 9, 2015 10:49 am

“all of those lower frequency/energy microstates & orbitals are already completely filled/saturated in the hotter body”
They’re not. When the body gets hotter, more of the states get filled. The shape of the curve changes, and those portions decrease in size relative to the other portions, but in absolute terms, they increase.
E.g., in the plot below, the red line is above the green line, is above the dark blue line, is above the light blue line, is above the black line everywhere.
http://www.optotherm.com/infrared-planck.jpg
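A quick numerical check of the Planck-curve point made in the comment above (the choice of 15 µm and of the listed temperatures is just illustrative):

```python
# Planck spectral radiance B(lambda, T): at a fixed wavelength it increases
# monotonically with temperature, as the plotted curves above show.
import math

h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance in W / (m^2 sr m)."""
    x = h * c / (wavelength_m * k * temp_k)
    return (2.0 * h * c**2 / wavelength_m**5) / math.expm1(x)

for T in (193, 217, 255, 288):
    print(f"{T} K: {planck_radiance(15e-6, T):.3e}")
# The 288 K radiance at 15 microns exceeds the 217 K radiance, i.e. the warmer
# body's low-energy emission is larger, not "saturated".
```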

Reply to  hockeyschtick
August 9, 2015 10:53 am

MikeB, if a low frequency/quantum-E photon is “absorbed” by a higher frequency/quantum-E blackbody, an identical photon of the same frequency/energy/wavelength MUST be simultaneously emitted by the higher-quantum-E blackbody, therefore there is NO net energy transfer/heat transfer.
This is explained by quantum physics on a molecular/atomic basis and by the 2nd law of thermodynamics on a macro basis. The 2nd law and why photons below the cutoff frequency cannot warm a warmer blackbody is also explained by quantum entropy considerations on a micro basis in Chapter 14 of this freshman textbook:
http://hockeyschtick.blogspot.com/2015/08/why-pauli-exclusion-principle-of.html
At this point if you’re just going to keep parroting the same nonsense disproven by ANY textbook of physics or chemistry, give me a published reference for your overturning of quantum and statistical mechanics theory stating specifically what you are claiming. I’ve already supplied you with links to the relevant chapters in the text that prove my arguments hundreds of times over.

Reply to  hockeyschtick
August 9, 2015 11:02 am

Bart, the LWIR of interest ~15um is solely due to the linear vibrations of covalent double bonds in CO2. This 15um line-emission quantum energy is obviously fixed at E=hv. A photon at frequency v can never have any more energy than hv, and if it’s frequency is below the cutoff frequency for thermalization of a hotter blackbody, the hotter blackbody will simultaneously emit an identical photon of the exact same v,wavelength,E and thus there is no net energy/heat transfer from a cold to hot blackbody. Please read Chapter 14 which explains all of this on both a quantum basis and macro-thermodynamics basis:
http://hockeyschtick.blogspot.com/2015/08/why-pauli-exclusion-principle-of.html

Reply to  hockeyschtick
August 9, 2015 11:05 am

“If that were true, there would be no gap in the spectrum at TOA.”
On this, I want to go back to my original post.
The gap in the spectrum can be considered proportional to the function. According to the secant line, the gap always results in an increase of surface temperatures above what they would be with no gap. However, that does not mean that the gap necessarily increases in breadth and depth for increasing GHG, i.e., the tangent line can have a negative slope.
The equivalent radiating surface at TOA increases with increasing GHG concentration, and convection provides a pathway to that radiating surface which bypasses the optical filter of the GHGs below. Thus, one can encounter a point of diminishing returns, and even net incremental cooling, as GHG concentration increases.
Nature has a tendency to evolve toward maximal limits. I suspect the present climate state of the Earth is at just such an inflection point, and that is why the data indicate that there is essentially zero net warming for increasing CO2.

Reply to  hockeyschtick
August 9, 2015 11:10 am

gammacrux,
No you are absolutely wrong. The Pauli exclusion principle limits a maximum of 2 entangled electrons of opposite spin to occupy any given molecular or atomic orbital at any given time. Molecular bonds are the same thing as molecular orbitals and also limited to the Pauli exclusion principle. The vibrational microstates corresponding to CO2 15um LWIR emission/absorption is at a fixed quantum energy level E=hv, “corresponding” to a true blackbody peak at ~193K, which is colder than any other part of the atmosphere. Thus, 15um CO2 emission has insufficient quantum energy above the cutoff frequency of any blackbody warmer than 193K. This is all explained on a quantum and macro/entropy basis in Chapter 14:
http://hockeyschtick.blogspot.com/2015/08/why-pauli-exclusion-principle-of.html

Reply to  hockeyschtick
August 9, 2015 11:13 am

hockeyschtick @ August 9, 2015 at 11:02 am
“A photon at frequency v can never have any more energy than hv, and if it’s frequency is below the cutoff frequency for thermalization of a hotter blackbody, the hotter blackbody will simultaneously emit an identical photon of the exact same v,wavelength,E and thus there is no net energy/heat transfer from a cold to hot blackbody. “
If you have two passively emitting bodies, yes.At nighttime, the atmospheric IR blanket is most definitely not heating the surface, though it is reducing its rate of cooling.
With the Sun up in the morning, it is still reducing the rate of cooling, but because energy is being continually pumped in, that has the effect of increasing the temperature, ceteris paribus.
But, the ceteris are not paribus. Heating of the surface also increases convection, and evaporates water to form clouds.
Again, FWIW, it is my opinion that the GHE is, indeed, responsible for increased heating of the surface. But, that does not mean that an incremental increase in GHGs must necessarily produce an incremental increase in surface temperatures. And, there I will leave it.

Reply to  hockeyschtick
August 9, 2015 11:58 am

Here’s 2 pages from the textbook I linked to which explain why a lower frequency/energy photon absolutely cannot be thermalized by a higher frequency/energy/temperature blackbody. I strongly urge anyone who believes otherwise to please read chapter 4 of the same text I linked.

Reply to  hockeyschtick
August 9, 2015 12:21 pm

Bart says, August 9, 2015 at 9:41 am:
“(…) at TOA, there is a big chunk missing from the outgoing spectrum due to IR absorbing gases. Where does that energy go?
It’s not getting out in the IR. But, it must be accounted for. The GHE hypothesis says it heats the surface, raising the temperature, and thereby allowing the energy to escape at different frequencies when the distribution shifts.”

Still on this, Bart?
“Where does that energy go?” What energy? Your assumed energy? “It must be accounted for.” What energy? You assume some energy is missing and tell us that we must account for it?
What you see from space is not IR absorbed. It is IR emitted. Those are not absorption spectras. Those are emission spectras. This particular absorption only takes place inside your head. You start out with the assumption that there’s a full 390 W/m^2 288K Planck curve being emitted from the surface and then out through the ToA go only 240 W/m^2, and so the difference of [390-240=] 150 W/m^2 was somehow absorbed by the atmosphere on its way out, never reaching space, but rather kept (yes, “trapped”) within the Earth system, somehow maintaining its temperature by just looping around, up and down between sfc and atm, heating in both directions.
In reality, 85-90% of the IR in those ToA spectras are emitted to space from the atmosphere itself (and not from one particular layer, but cumulatively from all layers, from top to bottom), only the remaining 10-15% from the actual surface. The Earth system emits an average flux of 240 W/m^2 to space, not because the atmosphere happens to absorb 150 W/m^2 of a much larger original flux, but because … that’s the flux it absorbs from the Sun on average. 240 IN, 240 OUT.
Also, Bart, how exactly do you translate those “missing IR chunks” of yours into surface heating? How do you know, for instance, that if we increased the CO2 chunk by 3.7 W/m^2, then the global surface would necessarily end up (feedbacks disregarded) 1 Kelvin warmer? How do you know that by doubling the atmospheric content of CO2, then a total OLR flux of only 236.3 W/m^2 would escape to space? Unless surface warming made up for it.

Reply to  hockeyschtick
August 9, 2015 12:56 pm

Agreed with Kristian. The spectral emissions are caused by the lapse rate as I marked up on the OLR spectra above, which in turn is solely dependent upon -g/Cp, and has nothing to do with “radiative forcing”, “backradiation” nor “heat trapping.”

Reply to  hockeyschtick
August 9, 2015 3:25 pm

Kristian and hockeyschtick,
Thanks for the illuminating comments. Well done.

Reply to  hockeyschtick
August 9, 2015 4:10 pm

Kristian @ August 9, 2015 at 12:21 pm
“You start out with the assumption that there’s a full 390 W/m^2 288K Planck curve being emitted from the surface…”
I think the HITRAN replication of TOA radiation is fairly convincing.
“How do you know, for instance, that if we increased the CO2 chunk by 3.7 W/m^2, then the global surface would necessarily end up (feedbacks disregarded) 1 Kelvin warmer?”
I don’t. I don’t think you can disregard the state of the system. I don’t think you can disregard heat convection to the radiating layers. This is not a feedback, per se, just part of the state of the system, though it might be enhanced further by a feedback connection.
I have made the following analogy before: Positing that, because GHGs remove part of the outgoing spectrum, any increase in GHG concentration must produce surface heating is like positing that, because gravity is an attractive force, all gravitating bodies must collide.
If you started with no angular momentum, then the bodies would eventually collide. But, once you’ve got sufficient angular momentum in the system, they aren’t. If you have sufficient orbit energy and angular momentum, two bodies pulling at each other via an inverse square central force will orbit one another, and never collide, for as long as the universe exists. If you were able to suddenly increase that force a bit, you would not start moving the bodies relentlessly toward each other, you’d just perturb the orbit a little.
Just so, if you start out with zero convection, then adding GHGs will relentlessly increase surface temperatures, up to the maximum allowed. But, if you’ve got enough non-radiative exchange going on in the atmosphere, I believe increasing GHG will not have a significant effect. And, that is consistent with what we are seeing, with essentially no significant warming in the past two decades while CO2 concentration has gone up by a third from “pre-industrial levels”.

Ian Macdonald
Reply to  hockeyschtick
August 9, 2015 10:28 pm

Black body radiation – Thermal jostling of molecules/atoms excites EM radiation.
CO2 absorption – Molecular dipole resonance, angle or length of bonds vibrating.
Pauli exclusion principle – Applies to shorter wavelength photon causing electron to jump orbit (energy bandgap) around atom.
All are essentially different mechanisms and cannot be lumped together. Electron orbit jumps require a precise amount of energy and hence respond only to a narrow bandwidth of EM, molecular resonance less critical so medium bandwidth, black bodies don’t care, so are wideband.

Reply to  hockeyschtick
August 9, 2015 11:17 pm

GHGs don’t “remove” a part of the outgoing spectrum. The ~15um “chunk” in the middle is due to the fact that the CO2 + H2O molecular line-emitters are emitting as much as they can as pseudo-blackbodies “corresponding” to a BB emission temperature of ~217K as shown by the 220K Planck curve in the OLR spectra I posted above. The 15um OLR is merely delayed by a few milliseconds by GHGs on its transit to space. In addition, GHGs accelerate convection in the troposphere by preferentially transferring heat energy to N2/O2 rather than emitting a photon.
Ian, CO2 is a linear triatomic non-polar molecule that does not have any bending transitions. The ~15um emission/absorption is from linear vibrations of the two double bonds. Bonds are molecular orbitals which are also subject to quantum energy levels & the Pauli exclusion principle. Blackbodies, molecular & atomic orbitals, EM radiation, and chemical bonds are all subject to quantum energy levels (as is all matter and energy).

gammacrux
Reply to  hockeyschtick
August 10, 2015 12:13 am

hockeystick
The Pauli exclusion principle limits a maximum of 2 entangled electronics of opposite spin to occupy any given molecular or atomic orbital at any given time.
Sure and so what?
This does by no means imply what you seem to “infer” from it. Please study a bit of simple equilibrium statistical mechanics.
Briefly, in contrast of what you seem to believe, non ground state orbitals, involved in an excitation by a given photon energy are of course not at all filled with 2 electrons as soon as temperature is high enough. Utter nonsense.
Moreover, at any rate, in the case of the 15 micrometer photons of CO2 we are not even dealing with electronic transitions but vibrational ones. In other words we are dealing with bosons and Bose-Einstein statistics, since the quantized states are those of an oscillator.
So Fermi statistics with its Pauli exclusion principle is not even relevant in this case.

Reply to  hockeyschtick
August 10, 2015 9:23 am

gammacrux August 10, 2015 at 12:13 says “Sure and so what?…Briefly, in contrast of what you seem to believe, non ground state orbitals, involved in an excitation by a given photon energy are of course not at all filled with 2 electrons as soon as temperature is high enough. Utter nonsense.”
You are misstating me to create a strawman. Of course, higher excited orbitals are NOT filled and a photon with sufficient frequency/quantum energy can raise electrons to a higher orbital. That is of course how a hot body can increase the quantum-heat-energy/frequency/temperature of a colder body.
However, an incoming photon of insufficient frequency/energy (=hv), less than that of the blackbody target, cannot raise the frequency/quantum energy/temperature of a higher frequency/energy/temperature blackbody. This is explained on pages 147-148 I posted above, and comments above.
gammacrux says “Moreover, at any rate, in the case of the 15 micrometer photons of CO2 we have not even to do with electronic transitions but vibrational ones. In other words we have to do with bosons or Bose-Einstein statistics since the quantized states of those of an oscillator. So Fermi statistics with its Pauli exclusion principle is not even relevant in this case”
I’ve already stated several times above that specifically with respect to the 15um CO2 line emission/absorption, the transitions are vibrational, from the 2 double bonds of the linear, symmetrical, non-polar CO2 molecule. Bonds are molecular orbitals, which contain shared electrons from the individual atomic orbitals, and both molecular and atomic orbitals are governed by the Pauli excl principle. Molecular spectroscopy is used to determine the stiffness of these bonds and thus how much or how little they can vibrate and at what frequency. For CO2 these vibrations are limited in the LWIR centered at 15um, which cannot change and is fixed due to the molecular structure.
Bose-Einstein statistics of photons/bosons are unnecessary to determine that the vibrational transitions of CO2 in the LWIR are limited to ~15um absorption and emission.
Now, gammacrux, your turn to explain on a quantum mechanical basis why you think 15um photons from CO2 can be thermalized and increase the frequency/energy/temperature of the 10um/288K earth surface.

Reply to  hockeyschtick
August 10, 2015 11:07 am

“However, an incoming photon of insufficient frequency/energy (=hv), less than that of the blackbody target, cannot raise the frequency/quantum energy/temperature of of a higher frequency/energy/temperature blackbody. This is explained on pages 147-148 I posted above, and comments above.”
This is just not so. The “blackbody target” does not have those energy levels “saturated”. If it did, then the curves I posted above would all start out at the same level at the right, and diverge only at specific wavelengths.
But, they don’t. Increasing the temperature of the object fills up more of the available low energy states, and the red curve is above the green curve, and the green above the blue, and so on, at every point on the absissa.
If what you were saying were true, that plot would look like this:
http://i1136.photobucket.com/albums/n488/Bartemis/planck_zpsn44tqszy.jpg
or with some other, possibly smoother, type of interpolation between where the lower energy curves leave off and the higher energy curves begin.
You are conflating a result of passive radiators to those with an active source of heat. You are misconstruing the heating of the surface by the Sun as heating by the GHG layer. The GHG layer does not heat the ground. It merely keeps it from losing heat as rapidly as it otherwise would. Because there is a steady input of radiative flux from the Sun, then in the absence of any other influence, that would have the effect of raising the surface temperature.
Look, it is very simple. IR radiation is notched out of the frequency spectrum at TOA. That IR is coming up from the Earth, but it is being blocked. It’s not getting out “milliseconds later”. It’s not getting out at all, hence the notch. You’ve got to account for that notch.

Reply to  hockeyschtick
August 10, 2015 11:16 am

abscissa… Should’ve run spell check.

Reply to  hockeyschtick
August 10, 2015 11:44 am

Bart says, August 9, 2015 at 4:10 pm:
“I think the HITRAN replication of TOA radiation is fairly convincing.”
It is. If you consider the atmosphere completely static, stable and stratified so that all internal energy transports and transfers happen via radiation alone. This is NOT a model of the real atmosphere, Bart. It is a model of an ideal, hypothetical atmosphere being fully radiative.
“I don’t think you can disregard the state of the system. I don’t think you can disregard heat convection to the radiating layers.”
Good. And still you think the HITRAN replication of ToA radiation is “fairly convincing”? BTW, you also cannot expect the IR activity of the rest of the atmosphere to remain unchanging whenever you tweak one variable. Like you can pretend when using HITRAN. A very good case in point is what the “CERES Surface EBAF Ed2.8” dataset shows us from 2000 till today. Even if CO2 has gone up a lot over the period, total tropospheric WV has gone up considerably and cloud cover/total cloud water also went up substantially, global DWLWIR to the surface has still gone down! And the radiative cooling ability of the global surface of the Earth has (partly as a direct result of this) strengthened significantly rather than weakened (by about 1.5 W/m^2 in 15 years). You can read more about this here:
https://okulaer.wordpress.com/2015/03/05/the-enhanced-greenhouse-effect-that-wasnt/
“I have made the following analogy before: Positing that, because GHGs remove part of the outgoing spectrum, any increase in GHG concentration must produce surface heating is like positing that, because gravity is an attractive force, all gravitating bodies must collide.”
Yeah, and it’s still a really bad analogy. GHGs don’t “remove part of the outgoing spectrum”, Bart. That’s what you think. Because that’s what you’ve been told. The outgoing spectrum is what it is. Again, it’s NOT an absorption spectrum. It’s an emission spectrum. To 85-90% emitted by the atmosphere. By GHGs (mostly WV) and by clouds. Without GHGs, heat wouldn’t have any effective way of escaping the atmosphere to space. Most of it would’ve been … “trapped”.
What you see from space, Bart, the spectrum we observe, the OLR, is Earth’s HEAT LOSS to its surroundings/heat sink (space). Its Q_out. On average, this matches Earth’s HEAT GAIN from its heat source (the Sun), it’s Q_in.
240 W/m^2 IN, 240 W/m^2 OUT.
The global surface of the Earth on average absorbs a heat flux of only 165 W/m^2. That’s the heat gain – the Q_in – of the gl sfc. Which means that the atmosphere absorbed 75 W/m^2 of the incoming solar flux before it reached the surface.
The global surface of the Earth on average also rids itself of a (total) heat flux of 165 W/m^2. Its heat loss, the Q_out, again matching the gain, the Q_in. While the heat gain is all radiative, the total heat loss from the surface is only partly radiative, mostly non-radiative: 53 W/m^2 of radiant heat + 24 W/m^2 of conductive heat + 88 W/m^2 of latent (evaporative) heat = 165 W/m^2 of total heat. Of the 53 W/m^2 of radiant heat, about 20 go straight out to space, while ~33 is transferred to and absorbed by the atmosphere.
That’s the energy actually transferred from the surface to the atmosphere, Bart: [33+24+88=] 145 W/m^2. And when you add this to the 75 W/m^2 of solar flux absorbed by the atmosphere on its way from the ToA to the surface, plus the 20 W/m^2 of surface emission going straight to space, then you end up with: [145+75+20=] 240 W/m^2.
Isn’t that amazing, Bart? All the heat that enters also exits. AT DYNAMIC EQUILIBRIUM. Nothing is “trapped” anywhere. Thanks to the GHGs.
What you want to do, is compare directly the magnitude of a mathematically constructed, conceptual HALF of a radiant heat flux (the calculated UWLWIR from the surface) with an actual radiant heat flux (the OLR through the ToA), as if the two were equivalent entities. They’re not, Bart. They really are not.
“Just so, if you start out with zero convection, then adding GHGs will relentlessly increase surface temperatures, up to the maximum allowed.”
Say what!!? Bart, try to think. If there’s zero convection and you still keep heating the surface, then THAT’S when surface temperatures will rise. GHGs are necessary for the atmosphere to be able to cool to space at all, and thus for the Earth system as a whole to be able to reach a dynamic equilibrium in the first place and then remain in a steady state from then on.

Reply to  hockeyschtick
August 10, 2015 12:21 pm

Bart, simply look at the blackbody Planck curves you posted earlier. Note the curve at 2um for the 2000K BB and 5000K BB. Note the curve is higher at all points including the longer-wavelengths/low frequency/low quantum E irradiation. This simply proves heat transfer is one way only hot to cold. Radiation is of course bidirectional, but heat transfer is one-way-only hot to cold, due to the fact that any heat transfer cold to hot requires an impossible decrease of both macro-entropy (2nd LoT) and micro-entropy (quantum theory – also explained in the text I linked to)
Now look at the textbook excerpt fig 4.8. If one assumes from classical physics (Rayleigh-James-Jeans “law”) at photons from a colder BB can heat a hotter lower-wavelength and higher-frequency-energy BB the two dashed lines to infinity result and we have an explosive perpetual motion machine that simply does not occur in nature. This is the whole reason why Planck had to devise quantum theory to explain why every transfer of energy and all matter has to be quantized and why nature instead follows a Planck curve as shown in fig 4.8, which is *less than* the classical physics predicts shown by the dashed lines at every wavelength.
Bart says “You are conflating a result of passive radiators to those with an active source of heat. You are misconstruing the heating of the surface by the Sun as heating by the GHG layer. The GHG layer does not heat the ground. It merely keeps it from losing heat as rapidly as it otherwise would. Because there is a steady input of radiative flux from the Sun, then in the absence of any other influence, that would have the effect of raising the surface temperature. Look, it is very simple. IR radiation is notched out of the frequency spectrum at TOA. That IR is coming up from the Earth, but it is being blocked. It’s not getting out “milliseconds later”. It’s not getting out at all, hence the notch. You’ve got to account for that notch.”
It is you Bart who is conflating passive radiators/oscillators at a fixed frequency (ie GHG molecular-bond vibrational oscillators at fixed frequencies dependent upon molecular structure and which are NOT true blackbodies and do not have a Planck curve) with an active source of heat (the Sun – only). The reason for the “notch” in OLR is because in the LWIR CO2+H2O fixed line absorptions and emissions are centered at ~15um, which if they were true blackbodies (they definitely are NOT) corresponds to a PEAK emitting temperature of a blackbody at ~217. The Planck curve corresponding to 220K is shown in the OLR spectra I posted above and is exactly where the GHG “notch” appears. Radiation is not being blocked. It is being absorbed and reemitted ASAP by GHG oscillators/passive radiators and absorbers at a frequency fixed by their molecular structures.

Reply to  hockeyschtick
August 10, 2015 12:42 pm

hockeyschtick August 9, 2015 at 11:17 pm
GHGs don’t “remove” a part of the outgoing spectrum. The ~15um “chunk” in the middle is due to the fact that the CO2 + H2O molecular line-emitters are emitting as much as they can as pseudo-blackbodies “corresponding” to a BB emission temperature of ~217K as shown by the 220K Planck curve in the OLR spectra I posted above. The 15um OLR is merely delayed by a few milliseconds by GHGs on its transit to space. In addition, GHGs accelerate convection in the troposphere by preferentially transferring heat energy to N2/O2 rather than emitting a photon.

Do we have to suffer the pseudoscientific garbage again?
In the lower atmosphere the 15 micron band emitted by a bb is completely absorbed by CO2 in a matter of meters. Most of the energy is transmitted to the atmosphere via collisional deactivation with a small amount of IR emission, higher up in the atmosphere the band becomes optically thin and from there the emission can proceed freely to space. Thus a satellite observes emissions from the altitude corresponding to the altitude at which the atmosphere becomes sufficiently thin for this to occur, this is different for each wavelength. It’s highest for the Q-branch where the absorption is strongest (seen as a sharp spike in the center) which is actually in the lower stratosphere.
An opposite effect is observed in the intense sidebands (~610 &710) which show up as cold lines because they emit lower down (still in the troposphere).
If the GHGs such as CO2 were not absorbing the IR from the surface there would be no ‘chunk’ missing.
Ian, CO2 is a linear triatomic non-polar molecule that does not have any bending transitions. The ~15um emission/absorption is from linear vibrations of the two double bonds. Bonds are molecular orbitals which are also subject to quantum energy levels & the Pauli exclusion principle. Blackbodies, molecular & atomic orbitals, EM radiation, and chemical bonds are all subject to quantum energy levels (as is all matter and energy).
As a triatomic CO2 has 3N-5 vibrational modes, i.e. 4, but the bending modes are doubly degenerate so that there are only 3 fundamental frequencies corresponding to the symmetric stretch 𝜈1(𝜎g) at 1388 cm-1, the bending mode 𝜈2(𝜋u) at 667 cm-1 and the antisymmetric stretch 𝜈3(𝜎u) at 2349 cm-1. The infrared spectrum of CO2, however, contains strong 𝜈2 and 𝜈3 bands because 𝜈1 has no oscillating dipole moment and is thus forbidden.
The 15 μm band of CO2 is the 𝜈2, or bending mode so as usual ‘the schtick’ is wrong again!
Here’s the energy diagram for the first few energy levels:
http://www.aml.engineering.columbia.edu/ntm/level1/ch05/html/Image432.gif
Note that in the bending mode that there is a Q-branch, therefore a CO2 molecule in the ground vibrational state (000) will absorb a ~15μm photon and be promoted to the first excited state (010), if it intercepts another ~15μm photon before it has time to lose its energy it will be promoted to the second excited state (020). There is no application of the Pauli Exclusion principle because the absorption of a photon changes the motion of the molecule, i.e. the motion of the nuclei, not add new electrons to an orbital.
http://www.intechopen.com/source/html/40159/media/image3.jpeg
http://www.barrettbellamyclimate.com/userimages/CO2LEVS.jpg

Reply to  Phil.
August 10, 2015 1:10 pm

The 15 μm band of CO2 is the 𝜈2, or bending mode so as usual ‘the schtick’ is wrong again!

Phil, thanks! I learned something new. But actually I think HS is sort of right: there are 2 orbitals at 15u, and once those energy levels are full without emitting a 15u photon, it’s not absorbing any more 15u photons. That’s why there are 2 identical energy levels associated with CO2, one for each C-O bond.

Reply to  hockeyschtick
August 10, 2015 1:03 pm

Kristian @ August 10, 2015 at 11:44 am
“This is NOT a model of the real atmosphere, Bart. It is a model of an ideal, hypothetical atmosphere being fully radiative.”
Would you concede that, with such an atmosphere, the notch in the OLR spectrum would indicate surface heating above the nominal from the Stefan-Boltzmann relationship?
“Yeah, and it’s still a really bad analogy.”
No, it’s a very good analogy, you just don’t appreciate its purpose. Its purpose is to say that what seems obvious isn’t necessarily so. It seems obvious that an attractive force should pull two objects closer together. Just as it seems obvious that more CO2 should increase surface temperature, by deepening and/or broadening the notch in OLR.
The notch in the OLR does represent optical filtering of IR from the surface. All things being equal, it should produce warming of the surface relative to what it would be without it. No matter what hand-waving arguments about thermalization or 2nd Law violations one may make, one has to account for the notch in any calculation of heat balance.
The AGW hypothesis depends on the assumption that additional CO2 will deepen and/or broaden the notch. That is its weak link. Not this other hand-waving stuff.

Reply to  hockeyschtick
August 10, 2015 1:19 pm

hockeyschtick @ August 10, 2015 at 12:21 pm
“This simply proves heat transfer is one way only hot to cold.”
It proves the low energy states are not “saturated”.
The GHGs are not transferring heat. They are not making the surface warmer than it would be if there were no external heat source.
The GHGs are like a (leaky) dam on a stream. The dam does not produce water. The dam does not make the water level higher. It simply prevents water from escaping so that the source that does make the water higher can pile more of it in. Take away the source of the stream, and the water just sits there, gradually decreasing because this is a leaky dam.
It is the same with GHGs. Take away the source, and indeed, they will not produce any heating of the surface. But, with the source piling in the heat energy, they do make the surface warmer than it otherwise would be.
But, that does not mean that increasing GHG will necessarily make the surface hotter. Other avenues of heat transport to the radiating levels of the atmosphere act like a spillway that prevents water behind a dam from increasing, even if the height of the dam is increased.

Reply to  hockeyschtick
August 10, 2015 1:27 pm

micro6500 @ August 10, 2015 at 1:10 pm
“But actually I think HS is sort of right: there are 2 orbitals at 15u, and once those energy levels are full without emitting a 15u photon, it’s not absorbing any more 15u photons.”
Yes but, there are always other molecules available where the energy levels are not full.
I have great respect for HS. Unfortunately, I believe that he has latched onto an erroneous conclusion here. You have no idea how it pains me to be agreeing with Phil, whom I more usually disagree with because he has latched onto an erroneous conclusion without adequate consideration of alternatives to his POV.

Reply to  Bart
August 10, 2015 3:04 pm

Yes but, there are always other molecules available where the energy levels are not full.

Hmm, yes, maybe, but I would think it would depend on what altitude you’re looking at.
On a hot day at the surface I would expect almost all CO2 molecules are in a constant state of emitting and absorbing; isn’t that the mean free path? I would expect it to be very short at the surface, much longer higher up.
This paper says ~33 ft: http://www.biocab.org/Reviewed_Total_Emissivity_of_the_Carbon_Dioxide_and_Mean_Free_Path.pdf
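That absorption length can be roughed out from the CO2 number density and a band-averaged absorption cross-section. The cross-section below is an assumed, purely illustrative value (the real one varies by orders of magnitude across the 15 μm band), so treat this as an order-of-magnitude sketch rather than the linked paper’s calculation:

```python
# Order-of-magnitude sketch of the 15 um absorption length near the surface.
# SIGMA_15UM is an assumed, illustrative cross-section; only the scaling is meaningful.
K_B = 1.381e-23        # Boltzmann constant, J/K
P_SURFACE = 101325.0   # surface pressure, Pa
T_SURFACE = 288.0      # surface temperature, K
CO2_FRACTION = 400e-6  # assumed CO2 mole fraction (~400 ppm)
SIGMA_15UM = 1e-22     # m^2, assumed band-averaged absorption cross-section

n_air = P_SURFACE / (K_B * T_SURFACE)   # total number density, m^-3
n_co2 = CO2_FRACTION * n_air            # CO2 number density, m^-3
absorption_length = 1.0 / (n_co2 * SIGMA_15UM)
print(f"n_CO2 ~ {n_co2:.2e} m^-3, absorption length ~ {absorption_length:.1f} m")
# With these assumed numbers the length comes out of order a metre, within an
# order of magnitude of the ~10 m (33 ft) figure in the linked paper.
```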

Reply to  hockeyschtick
August 10, 2015 1:51 pm

Phil. says, August 10, 2015 at 12:42 pm:
“If the GHGs such as CO2 were not absorbing the IR from the surface there would be no ‘chunk’ missing.”
CO2 does absorb IR. No one claims it doesn’t.
Thing is, how do you go from there to extra heating of the global surface of a planet?
The Martian ToA spectrum as seen from space looks like this:
Obvious CO2 “bite” in the Planck curve, no question about it.
There are ~26 times as many CO2 molecules in any given cubic metre of Martian atmosphere near the surface than there are in a similar volume on Earth. Even more higher up.
Still, does this raise the global mean surface temperature of Mars above its apparent (‘effective’) planetary temperature in space?
Nope. Not one bit.
According to NASA (from multiyear readings by the TES instrument on the “Mars Global Surveyor” and corroborated by the MCS instrument aboard the “Mars Reconnaissance Orbiter”), the average global surface temp of Mars is ~210 K, exactly the same as its calculated planetary blackbody emission temperature in space.
So there is no rGHE as defined (by a “raised ERL”) at all on Mars, and still its atmosphere contains much, much more IR-absorbing CO2 than Earth’s.
Why do you think this is, Phil.? Any clues?
You see the Martian ToA spectrum above. There’s a very clear and very deep CO2 indentation in the Planck curve outline. So the CO2 molecules in the Martian atmosphere (~26 times as dense in the air as on Earth) definitely and obviously absorb a not insignificant portion of the outgoing surface IR.
But this circumstance doesn’t lift the Martian ERL off the ground at all. It doesn’t raise the physical global mean surface temp of Mars above its planetary blackbody emission temp in space. Not one single bit.
The ERL on Mars is not raised by atmospheric absorption. How come?
The absorption of sfc IR by atmospheric CO2 on Mars can do nothing to make the surface warmer. Even though the partial density of CO2 is way higher than on Earth. It doesn’t help. It can’t do anything, temperaturewise. Why? Because the air as a whole is way too tenuous. What you would find is that it’s the TOTAL mass of air molecules per cubic metre (total air density), NOT the partial density of its IR active constituents, that matters. The total air pressure and density at the Martian surface level is already much lower than at the tropopause levels of Earth (and Venus).
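The ~210 K figure cited above for Mars can be reproduced from the standard effective-temperature relation T_eff = [S(1−A)/(4σ)]^(1/4). The solar constant and Bond albedo used below are approximate values, so this is a sketch of the calculation rather than the NASA measurement itself:

```python
# Sketch: effective (blackbody) temperature of Mars, T = [S(1-A)/(4*sigma)]^(1/4).
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S_MARS = 589.0       # solar constant at Mars, W/m^2 (approximate)
ALBEDO_MARS = 0.25   # Bond albedo of Mars (approximate)

absorbed = S_MARS * (1.0 - ALBEDO_MARS) / 4.0   # globally averaged absorbed flux
t_eff = (absorbed / SIGMA) ** 0.25
print(f"Absorbed flux ~ {absorbed:.0f} W/m^2, T_eff ~ {t_eff:.0f} K")  # ~210 K
```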

Reply to  hockeyschtick
August 10, 2015 2:44 pm

Phil once again tries to fool people by using a transition diagram from a CO2/N2 LASER with the STIMULATED AMPLIFIED EMISSION of much higher-frequency, higher-energy photons than shown in the Earth’s OLR spectra:
Look carefully Phil:
http://2.bp.blogspot.com/_nOY5jaKJXHM/TJe4euh-oBI/AAAAAAAABTM/xZxImdPcrlo/s400/GlobalWarmingArticle_page2_image1.jpg
The “chunk” absorbed by CO2 of LWIR is circled and centered at 15um, NOT either the 9.6 or 10.6 um LASER transitions which don’t even show up on this chart whatsoever!
Furthermore, most of the CO2 emission/absorption in LWIR is heavily overlapped by H2O
http://1.bp.blogspot.com/_nOY5jaKJXHM/TJe4kAjDH4I/AAAAAAAABTU/DmoVOdVaVl8/s400/GlobalWarmingArticle_page3_image1.jpg
http://1.bp.blogspot.com/_nOY5jaKJXHM/TJe36s1JB0I/AAAAAAAABTE/kgD4VUKlyu0/s400/Fullscreen+capture+9202010+123527+PM.jpg
Your LASER analogies are a red herring meant to fool others and have absolutely nothing to do with non-amplified, non-stimulated, non-coherent, non-concentrated CO2 15um LWIR absorption and emission!
As I’ve already explained 3 times or more on this thread alone, CO2 is a triatomic LINEAR non-polar molecule with TWO DOUBLE bonds which prohibits bending transitions:
http://www.answers.com/Q/Why_does_CO2_have_a_linear_shape_instead_of_a_bent_shape
The ONLY time CO2 will undergo any bending transitions is at higher energy levels from eg Raman spectroscopy in the UV, visible, and NEAR-IR, NOT the 15um FAR-LWIR.
http://www.chem.purdue.edu/gchelp/vibs/co2.html
The ONLY transitions of relevance to AGW are the linear vibrations of CO2 at the FIXED frequency/wavelength/energy centered around ~15um, NOT 9.6 and 10.6 laser transitions!
The OLR chart you posted shows yet again that the 15um “chunk” in the OLR “corresponds” to just below the 220K Planck curve, indicating that the ~15um-centered *band* is radiating as an actual BB in that range of LWIR, which is a BB emitting temperature of 217K, colder than any other part of the atmosphere.
Kristian: I agree with everything you’ve said to Bart & Phil, so no need to repeat similar replies here again.

gammacrux
Reply to  hockeyschtick
August 10, 2015 8:00 pm

Hockeyschtick
You are misstating me to create a straw man.
I am not.
You claimed precisely that:
A low frequency/energy photon (eg 15um CO2 photons) cannot transfer any quantum energy to a higher frequency/energy/temperature blackbody because all of those lower frequency/energy microstates & orbitals are already completely filled/saturated in the hotter body.
which is utterly wrong and just pseudoscientific garbage for at least the two reasons I already mentioned.
No further comment.

Reply to  gammacrux
August 10, 2015 8:28 pm

” No further comment.”
I’m trying to determine what the temp of a CO2 molecule is when it starts radiating at 15u, and conversely how much a captured 15u photon raises the temp of a CO2 molecule. Maybe you can answer this?

gammacrux
Reply to  hockeyschtick
August 10, 2015 11:36 pm

micro6500
The concept of temperature T is a statistical one that can be defined only for a large number of molecules in (at least) local thermodynamic equilibrium. It’s just a measure of the mean energy in the various molecule degrees of freedom, for instance in a specific vibrational mode of CO2 (in the “high temperature” limit, when the quantum involved is small with respect to kT, where k is Boltzmann’s constant) .
It has no meaning for a single specific CO2 molecule that is either in one of the many excited states of the 15u vibrational mode or in its ground state, for instance.
What increases with temperature is just the proportion of molecules in excited relative to ground states.
By the way there is an infinity of equally spaced excited 15u vibrational states for CO2 as we are told by the quantum mechanics of an oscillator and several photons could readily be absorbed by the same molecule successively. (So as I mentioned “HS’s Pauli exclusion principle limitation claim” is completely irrelevant and really nothing but bullshit)
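The “proportion of molecules in excited relative to ground states” can be put in rough numbers with the Boltzmann factor exp(−hcν̃/kT) for the 667 cm⁻¹ bending quantum. A short sketch (degeneracy and anharmonicity ignored):

```python
# Sketch: Boltzmann factor for the 667 cm^-1 bending quantum at a few temperatures,
# i.e. the relative population of the first excited state vs the ground state.
import math

H = 6.626e-34     # J s
C = 2.998e8       # m/s
K_B = 1.381e-23   # J/K
NU_BEND = 667.0 * 100.0   # bending-mode wavenumber converted to m^-1

e_quantum = H * C * NU_BEND   # energy of one bending quantum, J
for t in (220.0, 255.0, 288.0):
    factor = math.exp(-e_quantum / (K_B * t))
    print(f"T = {t:.0f} K: exp(-E/kT) ~ {factor:.3f}")
# At ~288 K a few percent of CO2 molecules are in the first excited bending
# state at any instant (roughly double that if the two-fold degeneracy is counted).
```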

Reply to  gammacrux
August 11, 2015 3:24 am

Then add more molecules: you can add more CO2 or more inert atoms to get the statistics.
And Phil’s energy diagram shows two 15u modes, and a few slightly above and below, but from emission spectra those show up at higher pressure.
It’s quantum electrodynamics; it’s possible to answer my questions.

Reply to  hockeyschtick
August 11, 2015 9:09 am

Kristian August 10, 2015 at 1:51 pm
Phil. says, August 10, 2015 at 12:42 pm:
“If the GHGs such as CO2 were not absorbing the IR from the surface there would be no ‘chunk’ missing.”
CO2 does absorb IR. No one claims it doesn’t.

Actually at various times ‘the schtick’ says exactly that, and in a reply above here so have you!
In reality, 85-90% of the IR in those ToA spectra is emitted to space from the atmosphere itself (and not from one particular layer, but cumulatively from all layers, from top to bottom); only the remaining 10-15% comes from the actual surface. The Earth system emits an average flux of 240 W/m^2 to space, not because the atmosphere happens to absorb 150 W/m^2 of a much larger original flux, but because … that’s the flux it absorbs from the Sun on average. 240 IN, 240 OUT.
This is incorrect; most of the emission is from the surface, particularly in the 8-14 micron window. The emission from the surface in the 15 micron CO2 band is absorbed low in the atmosphere, removing the ‘chunk’; what emission is seen in that band is from cold CO2 high in the atmosphere.
Thing is, how do you go from there to extra heating of the global surface of a planet?
The Martian ToA spectrum as seen from space looks like this:
Obvious CO2 “bite” in the Planck curve, no question about it.
There are ~26 times as many CO2 molecules in any given cubic metre of Martian atmosphere near the surface than there are in a similar volume on Earth. Even more higher up.
Still, does this raise the global mean surface temperature of Mars above its apparent (‘effective’) planetary temperature in space?
Nope. Not one bit.

Incorrect, the GHE on Mars is about 5K.
http://www.astro.washington.edu/users/eschwiet/essays/greenhouse_ASTR555.pdf
http://nova.stanford.edu/projects/mod-x/id-green.html
http://www.esrl.noaa.gov/gmd/outreach/lesson_plans/Teacher%20Backgournd%20Information-%20Earth,%20Venus,%20and%20Mars.pdf
http://clivebest.com/blog/?p=4374
So there is no rGHE as defined (by a “raised ERL”) at all on Mars, and still its atmosphere contains much, much more IR-absorbing CO2 than Earth’s.
Why do you think this is, Phil.? Any clues?

As shown above your basic premise is incorrect. The GHE would be expected to be smaller on Mars because of the lower surface temperature and the much lower pressure, and hence less absorption due to less pressure broadening of the spectrum. I’ve shown this here before; here is the comparison again:
http://s302.photobucket.com/user/Sprintstar400/media/Mars-Earth.gif.html
You see the Martian ToA spectrum above. There’s a very clear and very deep CO2 indentation in the Planck curve outline. So the CO2 molecules in the Martian atmosphere (~26 times as dense in the air as on Earth) definitely and obviously absorb a not insignificant portion of the outgoing surface IR.
But this circumstance doesn’t lift the Martian ERL off the ground at all. It doesn’t raise the physical global mean surface temp of Mars above its planetary blackbody emission temp in space. Not one single bit.

Again it does, about 5K.

Reply to  hockeyschtick
August 11, 2015 10:17 am

micro6500 August 10, 2015 at 8:28 pm
” No further comment.”
I’m trying to determine what the temp of a CO2 molecule is when it starts radiating at 15u, and conversely how much a captured 15u photon raises the temp of a CO2 molecule. Maybe you can answer this?

This has been correctly answered above, temperature of a gas is due to the average kinetic energy of the translating molecules, the distribution follows a Boltzmann distribution. The absorption of a photon which promotes the CO2 vibrational level has no effect on the temperature of the molecule. The 15 micron photon represents a fairly high temperature but it would have to be shared with the rest of the gas via collisions to raise the temperature. If you have some CO2 molecules in a container at fairly low pressure you will get 15 micron emissions from the gas because those molecules will absorb photons emitted by the container walls and then re-emit regardless of the gas temperature. There isn’t a temperature of the gas at which it will start radiating under those circumstances.
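One way to read the statement that the 15 micron photon “represents a fairly high temperature” is to express its energy as E/k and compare it with kT in the troposphere; a minimal sketch:

```python
# Sketch: the "temperature" equivalent of one 15 um quantum, E/k = hc/(lambda*k),
# compared with kT at a typical surface temperature.
H = 6.626e-34     # J s
C = 2.998e8       # m/s
K_B = 1.381e-23   # J/K

lam = 15e-6                # 15 um
e_photon = H * C / lam     # energy of one 15 um photon, J
print(f"E(15 um) = {e_photon:.2e} J, E/k ~ {e_photon / K_B:.0f} K")   # ~960 K
print(f"kT at 288 K = {K_B * 288.0:.2e} J (the photon carries ~3.3x this)")
```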

Reply to  Phil.
August 11, 2015 11:54 am

This has been correctly answered above, temperature of a gas is due to the average kinetic energy of the translating molecules, the distribution follows a Boltzmann distribution.

My understanding is that warm inert gases don’t have Boltzmann distributions, and the optical window corroborates that.

The absorption of a photon which promotes the CO2 vibrational level has no effect on the temperature of the molecule.

If this is true then a vibrating CO2 molecule would not warm inert gases surrounding it.

The 15 micron photon represents a fairly high temperature but it would have to be shared with the rest of the gas via collisions to raise the temperature.

So if you lit up a mix of CO2 and an equal mass of inert gases with 15u photons until the temperature stabilized, what would the temp be? In fact isn’t this exactly how LWIR being intercepted by CO2 is supposed to be warming that air?

If you have some CO2 molecules in a container at fairly low pressure you will get 15 micron emissions from the gas because those molecules will absorb photons emitted by the container walls and then re-emit regardless of the gas temperature.

Place them in a cold trap.

There isn’t a temperature of the gas at which it will start radiating under those circumstances

Then create the circumstances, especially when it’s due to vibration.
On the energy level charts, I can’t figure out how they get the 10.6 and 9.6 emissions. The reciprocal of the wavenumber (times 10,000) gives the wavelength for all of the other modes, which are a good match to the spectrum, but the 544, 597, 741 and 941 don’t work out to 10.6 and 9.6; what am I doing wrong for those?

Reply to  hockeyschtick
August 11, 2015 10:48 am

hockeyschtick August 10, 2015 at 2:44 pm
Phil once again tries to fool people by using a transition diagram from a CO2/N2 LASER with the STIMULATED AMPLIFIED EMISSION of much higher-frequency, higher-energy photons than shown in the Earth’s OLR spectra:
Look carefully Phil:
The “chunk” absorbed by CO2 of LWIR is circled and centered at 15um, NOT either the 9.6 or 10.6 um LASER transitions which don’t even show up on this chart whatsoever!

Absolute rubbish again, the energy level diagram I posted shows the lower vibrational energy levels of CO2 molecules; they have nothing to do with whether the CO2 is undergoing stimulated emission or not. Under certain conditions it is possible to create a population inversion between some of the excited state levels and thereby create laser emission (notably at 10.6 and 9.6 micron), but that has no effect on the spacing of the energy levels themselves. The 15 micron band is the result of absorptions by the bending mode of CO2, which are shown on that diagram between levels 000 and 010, plus the associated rotational fine structure.
Furthermore, most of the CO2 emission/absorption in LWIR is heavily overlapped by H2O
A myth propagated by showing low resolution cartoons like the one you used.
As I’ve already explained 3 times or more on this thread alone, CO2 is a triatomic LINEAR non-polar molecule with TWO DOUBLE bonds which prohibits bending transitions:
Thereby demonstrating your complete lack of knowledge on the subject: the principal IR band of CO2 is the bending mode!
The ONLY time CO2 will undergo any bending transitions is at higher energy levels from eg Raman spectroscopy in the UV, visible, and NEAR-IR, NOT the 15um FAR-LWIR.
Absolutely flat out wrong, the bending mode is centered at 15 micron; by the mutual exclusion principle it cannot also be Raman active.
The asymmetric stretch mode is also IR active and absorbs at ~4.3 microns, the symmetric stretch mode is not IR active but is Raman active.
My advice to you is to learn some physical chemistry before pontificating on the subject because so far you’d fail a freshman level phys chem course!

Reply to  hockeyschtick
August 11, 2015 12:59 pm

Phil lies again re “CO2 does absorb IR. No one claims it doesn’t.” Phil says “Actually at various times ‘the schtick’ says exactly that, in a reply above here so have you!”
No I have not said that, ever, that is an absolute lie Phil, and I challenge you to post a link to any of my comments to back up your lie. Of course, you will once again be unable to do so.
Re Mars: Just like all the planets with thick atmospheres, the “GHE” is entirely due to the Maxwell/Clausius/Carnot/Feynman/US Std Atmosphere gravito-thermal “GHE” and is completely independent of the concentrations of IR active gases
http://wattsupwiththat.files.wordpress.com/2011/12/image_thumb25.png
http://hockeyschtick.blogspot.com/2014/05/maxwell-established-that-gravity.html
Phil says ” temperature of a gas is due to the average kinetic energy of the translating molecules, the distribution follows a Boltzmann distribution.”
Agreed, and if our atmosphere were a pure N2 Boltzmann distribution, the average kinetic energy of the atmosphere would be slightly higher than in our current atmosphere due to the steeper lapse rate:
http://hockeyschtick.blogspot.com/2014/11/why-greenhouse-gases-dont-affect.html
And as Feynman shows, the gravito-thermal Boltzmann distribution of a pure N2/O2 atmosphere perfectly explains the atmospheric temperature gradient without any radiative calculations or trace CO2
http://hockeyschtick.blogspot.com/2015/07/feynman-explains-how-gravitational.html
The 1976 US Std atmosphere mathematical/physical model proves the exact same from the surface to 100km without using one single radiative transfer equation. How is that possible Phil?
Phil once again continues to deny that the line absorption/emission by CO2 of any relevance to the AGW debate are centered around 15um in the LWIR as shown in the spectra posted above, NOT the much higher energy transitions at 9.6 and 10.6um that Phil posted above relevant for a N2/CO2 LASER!
Climate science uses the absolutely false assumption that the molecular line-emitter CO2 is a blackbody to which the Planck and Stefan-Boltzmann laws apply. Manabe et al shows increased CO2 emitting as a blackbody at up to 300K! Even if CO2 was a blackbody, and it clearly is not, the 15um peak emission would correspond to a peak emitting temperature of a true blackbody at 193K, NOT 300K!
Phil: Yes or No: Can a blackbody at 193K warm a warmer blackbody at 255K by 33K to 288K?

Reply to  hockeyschtick
August 11, 2015 1:01 pm

micro6500 August 11, 2015 at 11:54 am
“This has been correctly answered above, temperature of a gas is due to the average kinetic energy of the translating molecules, the distribution follows a Boltzmann distribution.”
My understanding is that warm inert gases don’t have Boltzmann distributions, and the optical window corroborates that.

Certainly does, see:
http://hyperphysics.phy-astr.gsu.edu/hbase/kinetic/statcom.html#c1
“The absorption of a photon which promotes the CO2 vibrational level has no effect on the temperature of the molecule.”
If this is true then a vibrating CO2 molecule would not warm inert gases surrounding it.

Not unless it collides with those molecules thus transferring some kinetic energy to the translational modes.
“The 15 micron photon represents a fairly high temperature but it would have to be shared with the rest of the gas via collisions to raise the temperature.
So if you lit up a mix of CO2 and an equal mass of inert gases with 15u photons until the temperature stabilized, what would the temp be? In fact isn’t this exactly how LWIR being intercepted by CO2 is supposed to be warming that air?”

That would depend on the conditions and the surroundings, the temp would go up.
“If you have some CO2 molecules in a container at fairly low pressure you will get 15 micron emissions from the gas because those molecules will absorb photons emitted by the container walls and then re-emit regardless of the gas temperature.”
Place them in a cold trap.
“There isn’t a temperature of the gas at which it will start radiating under those circumstances”
Then create the circumstances, especially when it’s due to vibration.

On the energy level charts, I can’t figure out how they get the 10.6 and 9.6 emissions. The reciprocal of the wavenumber (times 10,000) gives the wavelength for all of the other modes, which are a good match to the spectrum, but the 544, 597, 741 and 941 don’t work out to 10.6 and 9.6; what am I doing wrong for those?
The CO2 in the laser is excited to the first excited asymmetric mode (001) at ~2349 cm-1; photon emission down to 020 at ~1335 cm-1 gives a ~9.6 micron photon: the difference is ~1014 cm-1, or ~9.8 micron (the exact value depends on the rotational levels involved).
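The point for the wavenumber question above is that the laser photon corresponds to the difference between two level wavenumbers, not to any single level. A sketch using the level values quoted in this thread (so the results are approximate):

```python
# Sketch: CO2 laser wavelengths come from DIFFERENCES between vibrational levels.
# Level wavenumbers (cm^-1) as quoted in the thread.
LEVELS_CM = {"001": 2349.0, "100": 1388.0, "020": 1335.0}

def transition(upper, lower):
    delta_cm = LEVELS_CM[upper] - LEVELS_CM[lower]
    return delta_cm, 1.0e4 / delta_cm   # wavenumber difference and wavelength in um

for upper, lower in (("001", "100"), ("001", "020")):
    delta, lam_um = transition(upper, lower)
    print(f"{upper} -> {lower}: {delta:.0f} cm^-1 -> ~{lam_um:.1f} um")
# 001 -> 100: ~961 cm^-1 -> ~10.4 um; 001 -> 020: ~1014 cm^-1 -> ~9.9 um,
# close to the nominal 10.6 and 9.6 um laser lines (exact values depend on
# the rotational sublevels, as noted above).
```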

gammacrux
Reply to  hockeyschtick
August 11, 2015 1:12 pm

No, nobody can really answer your questions, and temperature is a matter of thermodynamics and statistical mechanics, not quantum electrodynamics.
A specific CO2 molecule in a gas at temperature T may emit a 15u photon whatever T might be, except T = 0 K. So the gas at any T > 0 K emits at 15u, and this might even be taken advantage of to measure its temperature. The higher the temperature the more it emits, and Planck’s function describes this temperature dependence.
Similarly a specific CO2 molecule in a gas at T may absorb a 15u photon, thermalize that energy, i.e. redistribute it among the other degrees of freedom of the gas, translation for example, and so increase the temperature of the gas whatever T might be, even at T = 0 K.

Reply to  hockeyschtick
August 11, 2015 1:31 pm

hockeyschtick August 11, 2015 at 12:59 pm
Phil lies again re “CO2 does absorb IR. No one claims it doesn’t.” Phil says “Actually at various times ‘the schtick’ says exactly that, in a reply above here so have you!”
No I have not said that, ever, that is an absolute lie Phil, and I challenge you to post a link to any of my comments to back up your lie. Of course, you will once again be unable to do so.

Here it is:
hockeyschtick August 9, 2015 at 11:17 pm
GHGs don’t “remove” a part of the outgoing spectrum. The ~15um “chunk” in the middle is due to the fact that the CO2 + H2O molecular line-emitters are emitting as much as they can as pseudo-blackbodies “corresponding” to a BB emission temperature of ~217K as shown by the 220K Planck curve in the OLR spectra I posted above.

So where does the BB emission in the 15 micron band that was emitted by the Earth’s surface go? Conventional science says that it is absorbed by the CO2 in the lower troposphere, i.e. it is ‘removed’; you say that it is not ‘removed’.
Re Mars: Just like all the planets with thick atmospheres, the “GHE” is entirely due to the Maxwell/Clausius/Carnot/Feynman/US Std Atmosphere gravito-thermal “GHE” and is completely independent of the concentrations of IR active gases
Your self reference to your misinterpretation of atmospheric physics doesn’t establish anything.
Phil once again continues to deny that the line absorption/emission by CO2 of any relevance to the AGW debate are centered around 15um in the LWIR as shown in the spectra posted above, NOT the much higher energy transitions at 9.6 and 10.6um that Phil posted above relevant for a N2/CO2 LASER!
Evidently you can’t read, I continually referred to the bending mode at 15 micron, you know the one that you said doesn’t exist!
Here for instance:
“As a triatomic CO2 has 3N-5 vibrational modes, i.e. 4, but the bending modes are doubly degenerate so that there are only 3 fundamental frequencies corresponding to the symmetric stretch 𝜈1(𝜎g) at 1388 cm-1, the bending mode 𝜈2(𝜋u) at 667 cm-1 and the antisymmetric stretch 𝜈3(𝜎u) at 2349 cm-1. The infrared spectrum of CO2, however, contains strong 𝜈2 and 𝜈3 bands because 𝜈1 has no oscillating dipole moment and is thus forbidden.
The 15 μm band of CO2 is the 𝜈2, or bending mode so as usual ‘the schtick’ is wrong again!”
Climate science uses the absolutely false assumption that the molecular line-emitter CO2 is a blackbody to which the Planck and Stefan-Boltzmann laws apply. Manabe et al shows increased CO2 emitting as a blackbody at up to 300K! Even if CO2 was a blackbody, and it clearly is not, the 15um peak emission would correspond to a peak emitting temperature of a true blackbody at 193K, NOT 300K!
Not at all, that’s based on your false interpretation of Wien’s law; CO2 emits 15 micron radiation at temperatures in excess of 300K, and there is no such thing as your ‘peak emitting temperature’.
Phil: Yes or No: Can a blackbody at 193K warm a warmer blackbody at 255K by 33K to 288K?
Since the ‘blackbody at 193K’ is a figment of your imagination there’s no reason to answer it. I would point out that in your pure nitrogen atmosphere the blackbody at 193K would be replacing a background at 4K; that would make quite a difference.

Reply to  hockeyschtick
August 11, 2015 1:34 pm

Sorry Micro I appear to have posted this in the wrong place.
Phil. August 11, 2015 at 9:58 am
micro6500 August 10, 2015 at 1:10 pm
“The 15 μm band of CO2 is the 𝜈2, or bending mode so as usual ‘the schtick’ is wrong again!”
Phil, thanks! I learned something new.

Good, glad to be of help.
But actually I think HS is sort of right: there are 2 orbitals at 15u, and once those energy levels are full without emitting a 15u photon, it’s not absorbing any more 15u photons. That’s why there are 2 identical energy levels associated with CO2, one for each C-O bond.
No he’s not, also you appear to have picked up some of his errors.
As said above a triatomic like CO2 has 4 vibrational modes. Assume the CO2 molecule is in the plane of your screen.
In the first case the O atoms move away from the central C atom simultaneously in the plane of the screen, this is the symmetrical vibrational mode. Because it maintains symmetry it has no dipole and therefore is not IR active.
In the second case the O atoms move in the same direction, so as the C-O bond on one side shortens the other one lengthens, this is the asymmetrical mode. Because symmetry is not preserved there is a dipole and the mode is IR active at around 2349 cm-1 (4.26 micron) with its associated rotational fine structure.
The remaining two modes are the bending modes (which ‘the schtick’ mistakenly asserts don’t exist!)
One mode has the two O atoms moving vertically wrt the central C atom, the other mode has them moving in and out of the screen. Energetically both modes are identical which is why they are termed ‘degenerate’. Both have a dipole and therefore are IR active at around 667 cm-1 (14.98 micron) with their associated fine structure.
Once the ground state absorbs a 15 micron photon the excited state is quite long lived and can absorb another photon and be promoted to yet another energy level. So, for example, if the gs absorbs a 667.4 cm-1 photon it can then absorb a 667.8 cm-1 photon (Q-branch), or a 618.0 cm-1 or a 720.8 cm-1 photon (you’ll see those two strong lines on either side of the 15 micron band). Those are only the pure vibrational lines; there are many associated rotational lines which comprise the whole band.
Hope that helps.

Reply to  hockeyschtick
August 11, 2015 1:50 pm

Planck’s Law and the theory of blackbody radiation does in fact prove my statement “A low frequency/energy photon (eg 15um CO2 photons) cannot transfer any quantum energy to a higher frequency/energy/temperature blackbody because all of those lower frequency/energy microstates & orbitals are already completely filled/saturated in the hotter body. This fact alone from quantum mechanics falsifies CAGW.”
As shown on these calculated Planck curves, CO2 (+H2O overlap) absorbs and emits in the LWIR the same as a true blackbody would at an emitting temperature of ~217K over the LWIR band from ~12 or 13um to ~17um
http://www.geo.cornell.edu/geology/classes/Geo101/graphics/Nimbus_energy_out.jpg
Even though CO2 has emissivity less than a true black body and line emissions centered around 15um, and observations also show CO2 emissivity decreases with temperature unlike a true BB, for purposes of this simple question, we’ll assume (like climate scientists incorrectly do) that CO2 emits and absorbs as a true BB.
gammacrux: Can a BB at 217K cause a BB at 255K to warm by 33K to 288K as the Arrhenius theory claims?
The lower energy/temperature/frequency microstates of a BB at a given temperature are by definition “saturated” in a perfect blackbody absorber/emitter, and that explains why the classical physics shown by the dashed lines in fig 4.8 does not happen in nature and instead a Planck curve of emission and absorption is found in nature. If those lower energy/temperature/frequency microstates of a BB were not saturated, then photons of any energy level could be thermalized by a blackbody and the frequency vs BB energy intensity curve would go to infinity, as shown by the (false) dashed lines in fig 4.8:
http://hockeyschtick.blogspot.com/2015/08/plancks-quantum-theory-explains-why-low.html

Reply to  hockeyschtick
August 11, 2015 2:10 pm

Phil: Yes or No: Can a blackbody at 193K warm a warmer blackbody at 255K by 33K to 288K?
Phil says: “Since the ‘blackbody at 193K’ is a figment of your imagination there’s no reason to answer it, I would point out that in your pure nitrogen atmosphere the blackbody at 193K would be replacing a background at 4K, that would make quite a difference.”
NO not even wrong.
Phil claims the blackbody at 193K is a figment of the OLR spectra that both he and I have posted!
Phil look very closely: do you see the blackbody Planck curves calculated for blackbodies with emitting temperatures of 220K-320K?
http://www.geo.cornell.edu/geology/classes/Geo101/graphics/Nimbus_energy_out.jpg
Do you see that the CO2+H2O overlap corresponding Planck curve is ~217K (it is higher than 193K for pure CO2 due to presence of water vapor overlap) in the LWIR spectra of any relevance to the AGW debate 12-17um?
THAT is the 217K “partial blackbody” I’m asking about. YOU are effectively claiming that radiation from a “partial blackbody” at a peak emitting temperature of ~217K in the 12-17um band can make a true blackbody (e.g. the Earth) warm from the 255K equilibrium temp with the Sun by 33K to 288K!
Secondly, I’ve already shown you (and so does Feynman’s chapter 40, vol 1, and the US Std Atmosphere, Maxwell, etc) that a pure N2 or pure N2/O2 Boltzmann distribution atmosphere would have almost the same temperature gradient as our current atmosphere:
http://hockeyschtick.blogspot.com/search?q=boltzmann+distribution
The “ERL” on a planet with pure N2 atmosphere is located at the surface h=0 and is exactly equivalent to the equilibrium temperature with the Sun for that planet, NOT “a blackbody at 193K” as you falsely claim above!

Reply to  hockeyschtick
August 11, 2015 2:50 pm

Phil says “So where does the BB emission in the 15 micron band that was emitted by the earth’s surface go to? Conventional science says that it is absorbed by the CO2 in the lower troposphere, i.e. it is ‘removed’, you say that it is not ‘removed’.”
Since when does absorption and re-emission in the 15 micron band equal “removed” energy? Only on Phil’s planet, which is apparently oblivious to the 1st and 2nd laws, Kirchhoff’s, Planck’s, & Stefan-Boltzmann Laws, quantum and statistical mechanics, etc. Where oh where Phil did your “removed” “missing heat” go?
GHGs are passive IR radiators at all levels of the atmosphere, with the possible oscillator frequencies fixed by their molecular & atomic structure, and they absorb/emit in the ~12-15um band. There is no energy “removed” or added by GHGs, other than the energy GHGs lose to space, which is exactly equal to the energy input from the Sun, Ein = Eout. GHGs delay the transit of IR photons by the time it takes to absorb and then either re-emit a 15um photon (which amounts to only a few milliseconds for the whole troposphere) or, more likely, transfer the energy via collisions with non-GHGs to accelerate convection.

Reply to  hockeyschtick
August 13, 2015 7:42 am

hockeyschtick August 11, 2015 at 2:10 pm
Phil look very closely: do you see the blackbody Planck curves calculated for blackbodies with emitting temperatures of 220K-320K?
Do you see that the CO2+H2O overlap corresponding Planck curve is ~217K (it is higher than 193K for pure CO2 due to presence of water vapor overlap) in the LWIR spectra of any relevance to the AGW debate 12-17um?

No I don’t.
At 667 cm-1 (15 micron) I see the CO2 emission at about 235K (no H2O).
At ~660 and 680 cm-1 I see CO2 emission at about 217K (no H2O).
At ~620 and 718 cm-1 I see CO2 emission at about 255K.
THAT is the 217K “partial blackbody” I’m asking about.
Which doesn’t exist.
Secondly, I’ve already shown you (and so does Feynman’s chapter 40, vol 1, and the US Std Atmosphere, Maxwell, etc) that a pure N2 or pure N2/O2 Boltzmann distribution atmosphere would have almost the same temperature gradient as our current atmosphere:
Yes the same lapse rate, it has nothing to do with actual temperature, just the way it falls off with altitude.
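For reference on the lapse-rate point, the dry adiabatic value is just g/c_p; a sketch with standard values (the mean observed tropospheric lapse rate, and the US Standard Atmosphere value, is closer to ~6.5 K/km, mainly because of moist convection):

```python
# Sketch: dry adiabatic lapse rate, Gamma = g / c_p.
G = 9.81              # gravitational acceleration, m/s^2
CP_DRY_AIR = 1004.0   # specific heat of dry air at constant pressure, J/(kg K)

gamma_dry = G / CP_DRY_AIR * 1000.0   # convert K/m to K/km
print(f"Dry adiabatic lapse rate ~ {gamma_dry:.1f} K/km")   # ~9.8 K/km
# The environmental (observed mean) lapse rate is ~6.5 K/km, the value used in
# the US Standard Atmosphere, largely because of latent heat release in moist air.
```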

Reply to  hockeyschtick
August 13, 2015 7:47 am

hockeyschtick August 11, 2015 at 2:50 pm
Phil says “So where does the BB emission in the 15 micron band that was emitted by the earth’s surface go to? Conventional science says that it is absorbed by the CO2 in the lower troposphere, i.e. it is ‘removed’, you say that it is not ‘removed’.”
Since when does absorption and re-emission in the 15 micron band equal “removed” energy?

Energy transmitted to atmosphere via collision = absorption – re-emission

Reply to  hockeyschtick
August 13, 2015 12:17 pm

Phil. says, August 11, 2015 at 9:09 am:
“… in a reply above here so have you!
In reality, 85-90% of the IR in those ToA spectra is emitted to space from the atmosphere itself (and not from one particular layer, but cumulatively from all layers, from top to bottom); only the remaining 10-15% comes from the actual surface. The Earth system emits an average flux of 240 W/m^2 to space, not because the atmosphere happens to absorb 150 W/m^2 of a much larger original flux, but because … that’s the flux it absorbs from the Sun on average. 240 IN, 240 OUT.”

Surely you’re joking, Phil.
Did you notice how I wrote “not because the atmosphere happens to absorb 150 W/m^2 of a much larger original flux” and NOT “CO2 does not absorb IR from the surface”? You are not this dumb, Phil. So why do you feel the need to pretend you are …?
This pattern of playing stupid, “misunderstanding” what I’m actually saying, so as to avoid having to relate directly to it, runs through your entire reply. It’s a common misdirection tactic employed by internet trolls. Are you a troll, Phil.?
“This is incorrect, most of the emission is from the surface (…)”
Phil., what is this nonsense?
“The emission from the surface in the 15 micron CO2 band is absorbed low in the atmosphere removing the ‘chunk’, what emission is seen in that band is from cold CO2 high in the atmosphere.”
There is no energy “removed”, Phil. That’s all in your head. Earth’s spectrum as seen from space is an EMISSION spectrum, not an absorption spectrum. It is not the surface spectrum eaten into by the intervening atmosphere. I know you’d prefer to think of it that way, but no. The emissions making up the full spectrum comes to 85-90% from the atmosphere, a mere 10-15% directly from the surface. You assume a LARGE surface flux being absorbed by the atmosphere and a resultant much SMALLER atmospheric flux to space. There is nothing in this spectrum suggesting there is.
I explained this all to Bart above. You can’t compare the potential (mathematically calculated) 390 W/m^2 from a 288K BB radiating straight into space with the real OLR radiant heat through the ToA. Then you only end up confusing yourself. They are in no way equivalent entities. This is when you end up thinking that energy is somehow “missing” on the way out. It’s not, Phil. Those 150 W/m^2 of “trapped” energy are imaginary. Made up. The global surface on average releases a total of 145 W/m^2 worth of heat into the atmosphere. For it to absorb and thus warm from. That’s it. That’s the transfer of energy. Anything beyond this is merely part of a mental construct. Moreover, only ~33 of those 145 W/m^2 represent radiant heat. The rest is non-radiative. This is compared to the 220 W/m^2 emitted through the ToA to space (final heat flux from the Earth system), which include also the 75 W/m^2 absorbed by the atmosphere of the incoming solar flux. There’s no energy missing anywhere at dynamic equilibrium/steady state, Phil. That’s sort of the point. All that enters exits. Balance.
“the GHE effect on Mars is about 5K.”
No. This is nothing but a perpetuated myth. The myth of a Martian “greenhouse effect”. People just keep stating it as a defined Truth, not referring to anything of substance, any real-world evidence of any kind. And you’re naturally all too eager to regurgitate it here once more. Thing is, it’s just assumed and thus claimed to be. But assuming and claiming something to be true doesn’t make it true. The ~5K Martian rGHE is simply an old guesstimate from before any global data was available. Or as your second link describes it: “It is estimated that the Greenhouse effect on Mars warms the atmosphere at the surface by less than 10 degrees Fahrenheit.” Of course it’s “estimated” to be present and working. Because that’s what the rGHE hypothesis itself DEMANDS. The “estimate” in question is based solely on the fundamental premise behind the rGHE hypothesis. There has never been any actual observational data to support it. It’s the common method of circular reasoning of “modern climate science”: There just HAS TO be an rGHE on Mars, simply because there’s so much CO2 in its atmosphere, absorbing outgoing surface IR. It doesn’t have to be big, but there MUST be one. Because there SHOULD be. According to the hypothesis. “Radiative forcing,” after all.
So, by simply starting out by stating there is one of, say, 5K, the hypothesis effectively verifies itself. The conclusion is already decided upon. Based on what the original premise requires. And the original premise is thus also neatly confirmed by the conclusion. No need to go out and actually observe anything at all. And we’ve come full circle.
This is the reason why I specifically referred you to the “Mars Global Surveyor” (TES) and “Mars Reconnaissance Orbiter” (MCS) projects of NASA. To avoid this circularity. It’s hard, though, when you just choose to ignore it, pretend that you didn’t see it. Playing stupid …
http://faculty.washington.edu/joshband/publications/bandfield_mcs_tes.pdf
Back in the days before consistent, comprehensive, global, multiyear measurements of the Red Planet were in place, all one had was scattered atmospheric profiles from descending probes, plus in situ measurements from various landers and rovers. Most of these operated preferably during the summer and landed preferably in the tropics and subtropics. Also, the strong inversions at the Martian surface during nighttime and high-latitude winters tend to be ‘forgotten’ when extrapolating the mean tropospheric temperature gradient all the way down to the ground. The diurnal average ((day+night)/2) is actually quite close to isothermal (zero gradient) in the lowermost 2-3 km of the Martian atmosphere:
NASA now seems to have it pretty well established that the mean global surface temperature of Mars is in fact around 210K (-63C), taking in also the winters and high latitudes.
http://mars.jpl.nasa.gov/allaboutmars/facts/
http://nssdc.gsfc.nasa.gov/planetary/factsheet/marsfact.html
210K is exactly the number you get when you calculate Mars’s effective planetary blackbody temperature in space. So, the rGHE is zero, no raised ERL. You might wish for there to be some rGHE warming. But there isn’t.
I understand that it’s more convenient for you, Phil., to refer to outdated (and unsupported) assumptions about what SHOULD be the case, so that you can lean back and just dismiss offhand this quite intriguing circumstance rather than actually dealing with it. I agree with Bart above in his description of you:
“(…) he has latched onto an erroneous conclusion without adequate consideration of alternatives to his POV.”

Reply to  hockeyschtick
August 13, 2015 4:14 pm

The ~217K “partial blackbody” and corresponding 220K emitting temperature Planck curve is circled right here Phil, please get some reading glasses:
http://1.bp.blogspot.com/-vvN1VZjxhu4/Vc0gRW-aXeI/AAAAAAAAHUI/dlx0Wlsaeco/s640/OLR%2BNimbus_energy_out%2B2.jpg
Secondly, I’ve already shown you (and so does Feynman’s chapter 40, vol 1, and the US Std Atmosphere, Maxwell, etc) that a pure N2 or pure N2/O2 Boltzmann distribution atmosphere would have almost the same temperature gradient as our current atmosphere:
Phil says “Yes the same lapse rate, it has nothing to do with actual temperature, just the way it falls off with altitude.”
Baloney, it has everything to do with temperature from the surface to the edge of space, and the HS greenhouse eqn calculates a perfect replica of the US Standard Atmosphere, Triton, and the overlapping portions of the T/P curve on Venus without any knowledge of the surface temperature in advance. The only radiative forcing in that equation is that from the Sun; there is no radiative forcing from GHGs.
http://3.bp.blogspot.com/-xXJOurldG_E/VHjjbD6XinI/AAAAAAAAGx8/8yXlYh8Lcr4/s1600/The%2BGreenhouse%2BEquation%2B-%2BSymbolic%2Bsolution%2BP.png
Since when does absorption and re-emission in the 15 micron band equal “removed” energy?
Phil says “Energy transmitted to atmosphere via collision = absorption – re-emission”
This is how GHGs preferentially transfer heat to N2/O2 to accelerate convective cooling. Thus, the “removed energy” is sent to the tropopause and then “removed” to space. You continue to fantasize that the CO2+H2O ~217K “partial blackbody” emitting temperature can warm the 255K Earth by 33K to 288K.
Lalaland fizzikxs

Reply to  hockeyschtick
August 14, 2015 8:55 am

Kristian August 13, 2015 at 12:17 pm
Phil. says, August 11, 2015 at 9:09 am:
“… in a reply above here so have you!
In reality, 85-90% of the IR in those ToA spectra is emitted to space from the atmosphere itself (and not from one particular layer, but cumulatively from all layers, from top to bottom); only the remaining 10-15% comes from the actual surface. The Earth system emits an average flux of 240 W/m^2 to space, not because the atmosphere happens to absorb 150 W/m^2 of a much larger original flux, but because … that’s the flux it absorbs from the Sun on average. 240 IN, 240 OUT.”
Surely you’re joking, Phil.
Did you notice how I wrote “not because the atmosphere happens to absorb 150 W/m^2 of a much larger original flux” and NOT “CO2 does not absorb IR from the surface”? You are not this dumb, Phil. So why do you feel the need to pretend you are …?
This pattern of playing stupid, “misunderstanding” what I’m actually saying, so as to avoid having to relate directly to it, runs through your entire reply. It’s a common misdirection tactic employed by internet trolls. Are you a troll, Phil.?
“This is incorrect, most of the emission is from the surface (…)”
Phil., what is this nonsense?
“The emission from the surface in the 15 micron CO2 band is absorbed low in the atmosphere removing the ‘chunk’, what emission is seen in that band is from cold CO2 high in the atmosphere.”
There is no energy “removed”, Phil. That’s all in your head. Earth’s spectrum as seen from space is an EMISSION spectrum, not an absorption spectrum. It is not the surface spectrum eaten into by the intervening atmosphere. I know you’d prefer to think of it that way, but no. The emissions making up the full spectrum comes to 85-90% from the atmosphere, a mere 10-15% directly from the surface. You assume a LARGE surface flux being absorbed by the atmosphere and a resultant much SMALLER atmospheric flux to space. There is nothing in this spectrum suggesting there is.

No it’s reality. Here’re the spectra from Petty’s book:
http://www.skepticalscience.com/images/infrared_spectrum.jpg
Note that looking up in the 800 – 1000 cm-1 range and 1100 – 1200 cm-1 range there is no IR emitted downwards from the atmosphere whereas there is upward emission corresponding to the surface BB radiation. That is consistent with my explanation, not yours.

Reply to  hockeyschtick
August 14, 2015 11:06 am

Phil. says, August 14, 2015 at 8:55 am:
“No it’s reality.”
What’s reality? What exactly are you referring to, Phil.? I said: “You assume a LARGE surface flux being absorbed by the atmosphere and a resultant much SMALLER atmospheric flux to space. There is nothing in this spectrum suggesting there is.” Your reply: “No it’s reality. Here’re the spectra from Petty’s book:” Er, again: Both of these spectra are EMISSION spectra. It is the radiation actually reaching the detector of the measuring instrument. This is ALWAYS the radiant HEAT (what you would call the ‘net flux’) moving between source and target. If you cool an IR detector sufficiently, it will detect IR even from a cold sky. That doesn’t mean that this cold sky transfers any radiant energy to Earth’s warm surface. It transfers radiant energy to the much colder detector. That’s how these spectra are produced, Phil. Perhaps you didn’t know.
You seemingly want to claim that what we see from space is the emission spectrum of Earth’s actual, solid/liquid surface with bits of it somehow removed by the intervening atmosphere. There is, however, no way that you could’ve interpreted the situation the way you do unless you perceived that incidentally drawn outline of a perfect Planck curve on top of the actual spectrum as somehow the original surface emission flux and the real, measured Earth spectrum only as this original after atmospheric absorption.
“Note that looking up in the 800 – 1000 cm-1 range and 1100 – 1200 cm-1 range there is no IR emitted downwards from the atmosphere whereas there is upward emission corresponding to the surface BB radiation. That is consistent with my explanation, not yours.”
No, it’s completely consistent with my ‘explanation’.

Reply to  hockeyschtick
August 15, 2015 11:41 pm

Bart writes “The GHE hypothesis says it heats the surface, raising the temperature, and thereby allowing the energy to escape at different frequencies when the distribution shifts.”
Well, that possibility is an assumption of the AGW theory. The “settled science” part says that the altitude at which the atmosphere radiates, the “effective radiation level” (ERL), rises due to the increase in CO2 and, because of the lapse rate, is then radiating from a cooler place and hence radiates less energy. The settled science part says that the temperature of the atmosphere at the ERL must increase to regain equilibrium.
There is no “settled science” requirement at ground level for any change in temperature. AGW theory simply extrapolates that heating at the ERL back through the changed lapse rate all the way to the ground, where your secondary effect helps to compensate.
And it supposedly does so with feedbacks that IMO appear to decrease entropy.

Reply to  hockeyschtick
August 17, 2015 7:18 am

Kristian August 14, 2015 at 11:06 am
Phil. says, August 14, 2015 at 8:55 am:
“No it’s reality.”
What’s reality? What exactly are you referring to, Phil.? I said: “You assume a LARGE surface flux being absorbed by the atmosphere and a resultant much SMALLER atmospheric flux to space. There is nothing in this spectrum suggesting there is.” Your reply: “No it’s reality. Here’re the spectra from Petty’s book:” Er, again: Both of these spectra are EMISSION spectra. It is the radiation actually reaching the detector of the measuring instrument. This is ALWAYS the radiant HEAT (what you would call the ‘net flux’) moving between source and target. If you cool an IR detector sufficiently, it will detect IR even from a cold sky. That doesn’t mean that this cold sky transfers any radiant energy to Earth’s warm surface. It transfers radiant energy to the much colder detector. That’s how these spectra are produced, Phil. Perhaps you didn’t know.
You seemingly want to claim that what we see from space is the emission spectrum of Earth’s actual, solid/liquid surface with bits of it somehow removed by the intervening atmosphere. There is, however, no way that you could’ve interpreted the situation the way you do unless you perceived that incidentally drawn outline of a perfect Planck curve on top of the actual spectrum as somehow the original surface emission flux and the real, measured Earth spectrum only as this original after atmospheric absorption.
“Note that looking up in the 800 – 1000 cm-1 range and 1100 – 1200 cm-1 range there is no IR emitted downwards from the atmosphere whereas there is upward emission corresponding to the surface BB radiation. That is consistent with my explanation, not yours.”
No, it’s completely consistent with my ‘explanation’.

Your hypothesis is that almost all of the outgoing spectrum at the ToA is emission from the atmosphere. In which case, looking up in the 800 – 1000 cm-1 range and 1100 – 1200 cm-1 range should show emission just as when looking down in those same ranges; however, as shown, it does not (whereas in the ranges where GHGs absorb/emit, downward emission is seen). The reason is that the atmosphere is transparent to those wavelengths and the emissions from the surface go directly to space there. In the regions where components of the atmosphere absorb, the surface emissions are attenuated and only the emissions from the GHGs high in the atmosphere where it is optically thin (lower than surface values) are able to escape.
MODTRAN calculates this effect and produces a very good match to the observed spectrum. Here’s a sample MODTRAN calculation showing all the GHG ‘notches’, and the same calculation with each GHG progressively removed. You’ll notice that the resultant spectrum approaches the blackbody spectrum of the surface.
http://i302.photobucket.com/albums/nn107/Sprintstar400/Atmos.gif

Reply to  Phil.
August 17, 2015 7:45 am

MODTRAN calculates this effect and produces a very good match to the observed spectrum.

Where and in what direction is MODTRAN “looking”: from the surface looking up, or from the TOA looking down?
I have been pointing an 8u-14u (atm window) IR thermometer both down and up (from my front yard), measuring surface temps as well as what the surface is seeing (without any DWIR, which you can add back in), and I frequently see 100F differences between my concrete sidewalk and zenith temps.
Low level clouds, BTW, can be within 10-30F of the ground’s temp. The temp of the zenith is frequently in the 160 W/m^2 range, and even if you add what I think is the full 22 W/m^2 from CO2, it’s still very cold compared to the surface, and that window is allowing a lot of flux through much of the atm with no influence.
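Those fluxes can be turned into equivalent blackbody temperatures with T = (F/σ)^(1/4); since an 8u-14u instrument only sees part of the spectrum, the conversion below is indicative only:

```python
# Sketch: equivalent blackbody temperature for the quoted sky fluxes,
# T = (F / sigma)^(1/4). Indicative only for a band-limited (8-14 um) instrument.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def equivalent_bb_temp(flux_w_m2):
    return (flux_w_m2 / SIGMA) ** 0.25

for flux in (160.0, 160.0 + 22.0):
    t_k = equivalent_bb_temp(flux)
    print(f"{flux:.0f} W/m^2 -> ~{t_k:.0f} K ({t_k - 273.15:.0f} C)")
# 160 W/m^2 -> ~231 K; 182 W/m^2 -> ~238 K, both far colder than a sun-warmed sidewalk.
```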

Reply to  hockeyschtick
August 19, 2015 11:32 am

Phil. says, August 17, 2015 at 7:18 am:
“Your hypothesis is that almost all of the outgoing spectrum at the ToA is emission from the atmosphere.”
It is not my ‘hypothesis’, Phil. All Earth energy budgets that I’ve seen (starting with the Kiehl-Trenberth ’97 one) say basically the same thing. The only part of our planet’s average total emission to space (240 W/m^2) that originates straight from the surface is that portion escaping through the atmospheric window (~8-13 microns). This makes up ~10-15% of the total spectrum. Note that far from all of Earth’s emission to space even in this 8-13 micron interval comes directly from the surface. Much of it is emitted rather by clouds (covering ~60% of Earth) somewhere aloft in the tropospheric column. The atmospheric window is only a ‘window’ through the gases making up our atmosphere (ozone being one important exception), not through its clouds. This is often forgotten about.
“In which case looking up in the 800 – 1000 cm-1 range and 1100 – 1200 cm-1 range should show emission just as when looking down in those same ranges, however as shown it does not (however in the ranges where GHGs absorb/emit downward emission is seen). The reason is that the atmosphere is transparent to those wavelengths and the emissions from the surface go directly to space there.”
Er, yes. And I have never claimed otherwise? That’s the atmospheric window that I was speaking of.
“In the regions where components of the atmosphere absorb the surface emissions are attenuated and only the emissions from the GHGs high in the atmosphere where it is optically thin (lower than surface values) are able to escape.”
Indeed. But the atmosphere absorbs most of the energy (radiative and non-radiative) transferred from Earth’s surface, about 80-85% of it. ALL atmospheres absorb some of the outgoing heat from the solar-heated planetary surfaces beneath. Thick atmospheres absorb most of it.
“MODTRAN calculates this effect and produces a very good match to the observed spectrum. Here’s a sample MODTRAN calculation showing all the GHG ‘notches’, and the same calculation with each GHG progressively removed. You’ll notice that the resultant spectrum approaches the blackbody spectrum of the surface.”
Staring at these spectra has apparently made you go blind to reality, Phil. They seem to muddle your thinking. You think you ‘see’ in them something you quite frankly do not: an effect you are in fact only imagining in your mind, by first mentally placing a perfect outline of a surface (288K) Planck curve as an overlay on top of the actual ToA spectrum, then automatically interpreting notches observed in the real spectrum as notches in your hypothetical surface (288K) Planck curve outline instead, as if the ToA spectrum were just the surface spectrum ‘eaten into’ by the GHGs on its way through the atmosphere.
Once again, Phil., the ToA spectrum is an EMISSION spectrum, NOT an absorption spectrum.
Once again, you believe there is “energy missing” on its way from the surface to space, and so in your mind, this energy must rather make the surface warmer; I guess, by being re-radiated back down.
You need to wake up and see the whole picture, Phil. It’s about more than just outgoing IR. The atmosphere is NOT like in these contrived MODTRAN spectra of yours. You cannot just clinically remove separate IR active constituents at your own will from the atmosphere and then expect the atmosphere in all other respects to remain unchanged, only with more radiation going out than before. In reality, you change the WHOLE atmosphere.
Earth as a thermodynamic system is in a relative steady state, in a sort of dynamic equilibrium with the Sun and space (its ‘surroundings’). In such a state, ALL energy that enters the system also exits. On average. Across some defined cycle. 240 W/m^2 comes in from the Sun. Then, in such a steady state, 240 W/m^2 also needs to go out from the Earth to space. And it does. None of the energy coming in is “held back” or “trapped” within the system. The notches do not represent “missing energy”, Phil. And the transfer fluxes sure don’t amplify internally by looping up and down between atmosphere and surface. That’s a mental model that’s completely absurd. There is no energy transfer from the cool atmosphere to the warm surface. The energy transfer only goes from the surface UP. The only energy transfer TO the surface comes from the Sun.
You seem to forget (or overlook) two crucial aspects:
1) Of the surface heat transferred to the atmosphere, less than one fourth is radiant, the remaining being non-radiant. However, ALL the heat (100%) transferred from the atmosphere to space is radiant. So the atmosphere absorbs 25-35 W/m^2 of radiant heat (IR) from the surface, but emits 200-220 W/m^2 into space. (This includes the ~75 W/m^2 of SOLAR radiant heat absorbed by the atmosphere coming in through the ToA. See next point.)
2) The atmosphere (the IR active gases + aerosols + liquid and icy clouds) not only absorbs outgoing IR from the surface. It also reflects AND absorbs a large portion of the incoming solar radiant heat, on average globally/annually about 77 and 75 W/m^2 respectively of the original 340 W/m^2, leaving only 165 W/m^2 of solar heat to be absorbed by the actual surface (23 W/m^2 reflected by the surface itself).
So when you try to argue that as a MODTRAN spectrum fills up and starts moving towards a full 288K Planck curve, then the surface will automatically have to warm, you have completely missed the point.
First of all, there is no full 288K Planck curve originally being emitted from the surface. That’s just something that’s being arbitrarily drawn onto the actual recorded spectrum. As sort of an artificial reference or benchmark spectrum: This is how it should’ve looked! It’s fully hypothetical. And it obviously and totally confuses your eyes and your mind into thinking you see something you don’t.
The mean planetary emission temperature of Earth is 255K, not 288K, if you go by the 240 W/m^2 global average flux. But since this flux is emitted NOT from a single solid blackbody surface at 255K, but rather from a full 3D dynamic volume of gases, clouds and aerosols, tens of kilometres thick + a solid/liquid surface at the bottom, the outline will necessarily be a far cry from a smooth 255K Planck curve. You cannot and should not try to fit a 2D blackbody Planck curve to the emission spectrum of a gaseous VOLUME. If you still choose to do so, you will have to find the total flux intensity first. In our case it’s 240 W/m^2. Well, then the hypothetical 2D blackbody would be one at 255K, and the resulting Planck curve would be one peaking at ~11.4 microns, not ~10 microns (288K). Some of the 3D spectrum will be above this ideal 255K Planck curve, and other parts will be below it. But all in all, the ragged real spectrum will match the total flux intensity of the smooth hypothetical one: 240 W/m^2.
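A quick Python check of those numbers, using nothing beyond the standard Stefan-Boltzmann and Wien displacement constants:

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
WIEN_B = 2897.8    # Wien displacement constant, micron*K

def equiv_temp(flux):
    # temperature of a blackbody emitting the given flux
    return (flux / SIGMA) ** 0.25

def peak_wavelength(temp):
    # wavelength of peak blackbody emission, in microns
    return WIEN_B / temp

T255 = equiv_temp(240.0)   # ~255 K for the 240 W/m^2 planetary mean
T288 = 288.0               # the often-quoted mean surface temperature
print(round(T255, 1), round(peak_wavelength(T255), 1), round(peak_wavelength(T288), 1))
# -> roughly 255.1 K, with peaks near 11.4 and 10.1 microns respectively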
Secondly, let’s say, Phil., that you were to compare two Earth spectra to space and that the one is recorded during a dry, cloud-free day, and the other during a cloudy, humid day. This could be at the same spot at different times, or at different spots, only in the same general latitudinal zone.
Using your logic, the second spectrum will be a lot more ‘closed up’ than the first one, allowing a lot less energy from the surface out to space, and thus the “removed” energy will necessarily make the surface underneath warmer than in the first case, where substantially more of the energy is allowed through to space. MODTRAN also would agree with you. This would be the naturally inferred conclusion.
And it would be wrong!
Because, is this what we actually observe in the real world? Is the surface of the Earth generally warmer when there are lots of clouds and lots of water vapour in the tropospheric column above than when it's dry and cloudless? It sure isn't! Clouds and humid air definitely reduce nighttime cooling of the surface, but they also reduce daytime heating of that same surface. And the latter effect turns out to be invariably stronger. And quite significantly so. This is pretty easy to verify empirically through consistent observations from the real Earth system.

Reply to  Kristian
August 19, 2015 12:19 pm

Good post Kristian,
I've been measuring the 8u-14u IR from the surface (N41, W81), and pointing out that in that window the zenith temp is fracking cold: in dry air >100F warmer than my concrete sidewalk. My asphalt driveway can be >20F warmer than my sidewalk, and the grass in my yard is close to the sidewalk while in the Sun, but cools off by >10F compared to the sidewalk once it's out of the Sun. I believe this is because grass is mostly an air-based insulator, so its exposed surface cools rapidly.
You can see all of this here [comment image].
Along with air temps.
Now, high humidity slows cooling and makes the sky warmer, dropping the difference to 70F-80F compared to my sidewalk (a proxy for the earth's surface temp). Clouds are warm; the thicker they are, the warmer they are. Thick flat-bottom cumulus clouds can be less than 30F from the surface temp.
Late into the night, relative humidity approaches 80-100% and the rate of air temp cooling slows; you can see how it changes during the night, while at the same time the zenith temp drops pretty closely with surface temp.
You can see how nightly cooling regulates surface humidity here (sticking a fork into a lot of the predicted water vapor amplification) [comment image].

Reply to  Kristian
August 19, 2015 12:21 pm

Doh!

>100F warmer

should be

>100F colder

PiperPaul
Reply to  Bill Illis
August 9, 2015 9:41 am

What’s that in metric?

Scott Scarborough
Reply to  Bill Illis
August 9, 2015 10:07 am

Let's get on the criticize-Bill-Illis bandwagon! He cites his numbers out to 15 places of accuracy. They don't even know the mass of the platinum-iridium cylinder in Paris, which is the world standard for the kilogram, to that accuracy. The point is that all of these criticisms enhance his point, not contradict it.

Reply to  Bill Illis
August 9, 2015 10:13 am

What are those numbers in English?

Mike M. (period)
Reply to  Bill Illis
August 9, 2015 10:20 am

Bill Illis,
You are claiming that there cannot possibly be such things as physics, chemistry, or engineering. Just apply the same logic to a chemical reaction, or the flow of a fluid, or the cooling of a hot object, or even the cars crossing a bridge. Clearly, there is nothing that we can possibly understand!

ossqss
August 9, 2015 7:09 am

Thanks Rud. Nice job.
A few more adjustments to the terrestrial data sets and the observations will be brought in line with the models. The once-laughable graphic about the observations being wrong will be no more.

Latitude
August 9, 2015 7:09 am

CMIP5 parameterizations were tuned to hindcast temperature as best possible from 2005 back to about 1975
====
Something I have never understood…
They are constantly adjusting past temperatures…making the past cooler, the present warmer, and the rate of warming faster
….doesn't that really piss off these guys who are trying to computer model this garbage?

urederra
Reply to  Latitude
August 9, 2015 10:36 am

Exactly.
That is one of the reasons climate models cannot be validated but FEA models or molecular dynamics models can. In order to validate a simulation that runs for 100 years, you have to run your model using actual temperatures, wait for 100 years and compare the results of your simulation with the real data.
Now suppose your simulation started in the year 2000, and the dataset you used in your simulation was HADcrut3. So, in order to validate your model, you have to wait until the year 2100 and compare the final results of your simulation with the HADcrut3 temperatures of that year. But how could you do that if, at the pace the datasets are changing, the HADcrut version used in the year 2100 could be 9 or 11?
That is why I find these models-versus-real-temperature graphs useless. Simulations that were performed around the year 2000 cannot be compared to HADcrut4 temperatures, because the HADcrut4 temperature dataset is only 5 years old and those simulations couldn't have been performed using HADcrut4 temperatures.
There is a “no model left behind” policy anyway, so nobody in the field is kicked off the gravy train. It is the only way to explain why models with a climate sensitivity of 1.5 ºC are said to be compatible with models with a climate sensitivity of 4.0 ºC. And that is another reason why climate models don't work but FEA models do. In FEA, bad models were discarded and the parameters were readjusted to fit the experimental data. This is called parameterization, as Rud says. In climate models no model is discarded, and it is the experimental data that gets readjusted.
Any bets on what HADcrut version will be used in the year 2100?

Latitude
Reply to  urederra
August 9, 2015 10:48 am

well yeah….I know I'd be royally pissed if I worked with a set of numbers for five years or more….published my results…and was then told it's garbage because they changed the numbers

John Peter
Reply to  urederra
August 9, 2015 11:47 am

This sounds exactly right to me. Let me guess. It will be HADcrut 20 near enough.

michael hart
Reply to  urederra
August 10, 2015 8:13 am

They will do away with the numbers before that.
It will probably be called “HadCRUT Reloaded”.

August 9, 2015 7:12 am

Thank you for these explanations.
When no physical explanation can be given for a phenomenon, or no solvable equation to represent it is available, one has to use so-called practical formulas.
Climate and weather are made of mass and heat transfer processes, non-linear phenomena of that kind. As well explained above, no experimental verification is possible (one Earth as laboratory, one single experiment ongoing for 3-5 billion years), so the choice of tuning parameters is left to the preference of the modeler.
As she or he has no clue about natural (non-anthropic) climate drivers (for example, what caused the exit from the Little Ice Age), and has gotten the mandate to solve everything under the assumption of an overwhelming anthropic cause, it is not surprising that the models cannot reproduce “the pause”. But, tuned as they are, they loyally confirm the anthropic hypothesis. Circular reasoning with gigaflops of computer runs.
This would be a simple case for “more research is needed”, with the computing limitations not allowing much more to be done.
But what is scandalously disturbing is the fact that these models are used to simulate hypothetical scenarios for the future, from which orders are given to “policy makers” that they have to decarbonize asap or a catastrophe will be inevitable. And a nice but unfounded limit of 2 °C is provided to them to give an impression of controllability.
So unscientific, against common sense, and nevertheless now a kind of new “pensée unique”: the anthropowarmist dogma.

mwh
August 9, 2015 7:16 am

Climate models are absolutely indispensable for the study of climate changes in the past and the understanding of the reasons why the changes occurred. As of today this still is not fully understood or quantified, so forecasting using GCMs is still a matter of ‘GIGO’ syndrome.
As a regular viewer of the BBC's weather maps which predict the N Atlantic pressure movements (from the Met Office), I see discrepancies start to occur almost immediately. So in the short term the predictability is good, but after 2 or 3 days the divergence becomes so great that the forecast is unreliable.
This is true of all models down to the simplest of trend lines expanded into the future – good for the short term but rapidly more and more inaccurate.
So GCMs are a great tool – I am sure everyone can appreciate that, but used as a predictor of future climate and the basis for policy making – extremely dangerous and potentially ruinously expensive – to the extent that prevention is far more detrimental to future generations than the cure (the cure being money spent reacting to climate change rather than attempting to alter it)

Mike M. (period)
Reply to  mwh
August 9, 2015 10:25 am

mwh,
“So GCMs are a great tool”
Agreed.
“I am sure everyone can appreciate that”
You are too optimistic. Look at some of the other comments here.
“but used as a predictor of future climate and the basis for policy making – extremely dangerous and potentially ruinously expensive”
Agreed.
All models are wrong, but some models are useful. The trick is understanding when they are wrong and when they are useful.

Latitude
Reply to  Mike M. (period)
August 9, 2015 10:49 am

Mike, how can they be useful when they are constantly changing the temp data they've used?

Reply to  Mike M. (period)
August 9, 2015 8:29 pm

Mike M. (period) writes “All models are wrong, but some models are useful. The trick is understanding when they are wrong and when they are useful.”

The same can be said for relatively sparse measurements, or short-term measurements. For example, it's entirely possible that our measurements of polar ice simply aren't over a sufficient time span to be useful in determining trends.
It's entirely possible our temperature measurements using proxies are too sparse to be useful.
Reported uncertainty never seems to include this, and always seems to be based on readily quantifiable criteria such as the accuracy of the measurements themselves. I guess that's easier. It ticks the “uncertainty box”.

Stuart Jones
Reply to  mwh
August 9, 2015 10:05 pm

Has anyone done a model to predict what the past temperature data will look like in 2100?
For instance, today the temp in Adelaide is 14 degrees C, but in 2100 today's temp will probably be….8 degrees C. I notice that the old newspapers and even the government gazette from 50 years ago are not correct anymore. Typical public servants and journalists, they can never get the facts right.

carbon bigfoot
August 9, 2015 7:31 am

Excellent dissertation. In my field of Chemical Engineering we deal with an infinite number of variables that we can rarely quantify mathematically. Experience in the lab-to-pilot-plant-to-production process normally dictates predictability. Dimensionless Numbers and experience with Unit Operations define Mass & Energy Balances and create Trends. Not always scientifically superior, but in the real world it works.
As I predicted in 1992 with the Kyoto Protocol, it was patently obvious this Planet’s Climate is too complex to articulate mathematically. Since then we have much additional evidence to support that position.
Back when I matriculated we didn’t have access to desktops and most of our calculations were manual and complex iterative convergences. No fun. Thanks for the mental stimulation.

August 9, 2015 7:40 am

That was a nice essay, thanks much.
As I look at the computer models, I note that they are near useless for predicting future states of the climate of our planet. I note that they have so many “fudge factors” (parameterizations) that their accuracy depends entirely on fiddling with the thing until it “sorta looks like” the recent past. There is no way to predict the future with these things.
The climate “scientists” use the models to claim that their theory is correct when the models do no such thing. The models are built on top of a totally wrong theory of how our climate works, and then fudged to make it look like they are close; which is then said to prove the theory. Did they stop teaching basic logic in schools?
Some day in the far future when the subject is no longer a political one, we will get back to the real investigation of the climate and logical theories will once again be looked at seriously. Till then, we will try to drive up energy costs to punish the poor for being poor. Disgusting.

PiperPaul
Reply to  markstoval
August 9, 2015 9:58 am

Did they stop teaching basic logic in schools?
I figure that once computers became ubiquitous, many people just stopped thinking very deeply and experienced a diminished attention span. Some expert somewhere sat down with a programmer and figured out ‘The Best Way To Do It’, and now no one challenges the software output (it's safer that way for job preservation), nor are they even likely able to complete task “x” without a computer. There's a lot of centralization of knowledge and ability going on and people are slowly losing the ability to think. [Well, OK, maybe I'm exaggerating a bit]

MarkW
Reply to  PiperPaul
August 9, 2015 10:23 am

To err is human.
To really f’ things up requires a computer.

michael hart
Reply to  PiperPaul
August 10, 2015 8:23 am

I often think the same.
But taking the optimistic view, this could just be a one-off event, until all humans are familiar with computers and understand that an answer isn't necessarily right just because it came out of a computer.
So CliSci may actually be doing us all a favor by showing just how easy it is to f’ up big time with a computer.

tom s
August 9, 2015 7:44 am

Did you not mean “latent heat from condensation” at the midpoint of this article? You used “evaporation”.

Mike
Reply to  tom s
August 9, 2015 8:01 am

It’s the latent heat of evaporation which is being released.

billw1984
Reply to  tom s
August 9, 2015 8:05 am

Same magnitude, just opposite sign.

Reply to  tom s
August 9, 2015 8:12 am

You are correct.

Curious George
Reply to  ristvan
August 9, 2015 4:14 pm

Latent heat of evaporation/condensation depends on temperature. At least one of the CMIP5 models (CAM 5.1) neglects the temperature dependence, thus overestimating the energy transfer by evaporation from tropical seas by 2.5%.
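For scale, a minimal Python sketch of that temperature dependence, using the common linear approximation L(T) ≈ 2501 − 2.36·T kJ/kg (T in °C) and an assumed tropical sea-surface temperature of 26 °C; whether any particular model actually uses the 0 °C value everywhere is the claim made above, not something shown here:

def latent_heat_kj_per_kg(t_celsius):
    # common linear approximation for the latent heat of vaporization of water
    return 2501.0 - 2.36 * t_celsius

L_fixed = latent_heat_kj_per_kg(0.0)      # value if the temperature dependence is ignored
L_tropical = latent_heat_kj_per_kg(26.0)  # assumed tropical sea-surface temperature
overestimate = (L_fixed / L_tropical - 1.0) * 100.0
print(round(L_fixed), round(L_tropical), round(overestimate, 1), "%")
# -> 2501, 2440 and roughly a 2.5 % overestimate, consistent with the figure quoted above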

MarkW
August 9, 2015 7:58 am

“And the tuning still requires assuming some attribution linkage between the process (model), its target phenomenon output (e.g. cloud cover, Arctic ice) and observation.”
Sounds like you are saying that the models assume the existence of that which they were designed to find.
That is, they find a linkage between climate and CO2, because an assumed linkage between CO2 and climate was built into the model.

Reply to  MarkW
August 9, 2015 9:48 am

Yes. It is a circular exercise, justified by the faith that there just must be such an effect, which is begging the question. My take on that above.

Mike
August 9, 2015 7:58 am

IPCC AR5:

§D.1 Climate models have improved since the AR4. Models reproduce observed continental-scale surface temperature patterns and trends over many decades, including the more rapid warming since the mid-20th century and the cooling immediately following large volcanic eruptions (very high confidence).

Note the deliberate misdirection by omission?
…. but NOT including the equally rapid warming between 1915 and 1940.

Reply to  Mike
August 9, 2015 10:36 am

Also note the throwout statement… “and the cooling immediately following large volcanic eruptions (very high confidence)” as an excuse for the “pause” for those who are not studied up on the climate. Meaning: Pinatubo in ’98 is responsible for the pause since ’98.

Reply to  Dahlquist
August 9, 2015 10:37 am

oops, ’91. Guess I need to re-study up.

jpattitude
August 9, 2015 8:04 am

Am I wrong to point out that GCMs involve a chaotic system whereas FEAs do not? So the comparison will never be apt? (And climate models will never work?)

Reply to  jpattitude
August 9, 2015 8:14 am

Navier Stokes is chaotic. Computational fluid dynamics is still used to design airplanes. But the parameterizations are verified in wind tunnels.

Luke
August 9, 2015 8:07 am

Rud Istvan states “Akasofu’s simple idea also explains why Arctic ice is recovering, to the alarm of alarmists. DMI ice maps and Larsen’s 1944 Northwest Passage transit suggest a natural cycle in Arctic ice, with a trough in the 1940s and a peak in the 1970s. Yet Arctic ice extent was not well observed until satellite coverage began in 1979, around a probable natural peak. The entire observational record until 2013 may be just the decline phase of some natural ice variation. The recovery in extent, volume, and multiyear ice since 2012 may be the beginning of a natural 35-year or so ice buildup.”
Recovery since 2012- really? From NSIDC “July 2015 average ice extent was 8.77 million square kilometers (3.38 million square miles), the 8th lowest July extent in the satellite record. This is 920,000 square kilometers (355,000 square miles) below the 1981 to 2010 average for the month.” That does not sound like a recovery to me. You also must be very cautious about making predictions in a system with such high variability based on 2 years of information. The last several years of Arctic ice volume and extent are consistent with a multidecadal decline.

MikeB
Reply to  Luke
August 9, 2015 8:35 am

It is quite possible for both statements to be true. The arctic sea ice has staged a remarkable recovery from the low of 2012 (40% up on the 2012 minimum).
http://ocean.dmi.dk/arctic/plots/icecover/icecover_current_new.png
Of course, this recovery is from a low level and I agree that you must be very cautious about making predictions based on just 2 years. But, for the moment, the statement that arctic ice has recovered from 2012 is true.

Luke
Reply to  MikeB
August 9, 2015 9:42 am

“But, for the moment, the statement that arctic ice has recovered from 2012 is true.”
So you call a 27% decline followed by a 5% increase a recovery? Really?

Sturgis Hooper
Reply to  MikeB
August 9, 2015 9:55 am

In the year a dedicated satellite began operating, 1979, Arctic sea ice happened to be at or near its high for the 20th century. Early in the century it might have been higher in some years. But satellite observations from the 1960s and ’70s show it was lower in those years. In 1975 it was at or below 2015 levels.
The two record low years, 2007 and 2012, suffered from cyclones, which piled up the floes and moved them around so as to melt more easily. Now Arctic sea ice is back in the normal range, as defined for the period since 1980 (two SDs), and the trend is growing, as is to be expected with the switch in PDO and AMO.
Meanwhile Antarctic sea ice, which has five times the effect on albedo of Arctic ice, keeps setting records year after year. This is not what the GIGO model predicted.

Sturgis Hooper
Reply to  MikeB
August 9, 2015 9:55 am

Models.

Bill 2
Reply to  MikeB
August 9, 2015 10:09 am

When you look at sea ice area instead of extent, the recovery is not so obvious. http://arctic.atmos.uiuc.edu/cryosphere/arctic.sea.ice.interactive.html

MarkW
Reply to  MikeB
August 9, 2015 10:24 am

Yes, a 5% increase after a 27% decline is a recovery.
Not a complete recovery, but one nonetheless.

Luke
Reply to  MikeB
August 9, 2015 10:32 am

Only a politician would call a 5% increase following a 27% decline a “recovery”. The fact is we are still 22% below the long-term mean.
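For what it is worth, the arithmetic behind those percentages, treating the quoted figures as exact (which they are not):

baseline = 100.0
after_decline = baseline * (1.0 - 0.27)   # a 27 % decline leaves 73
after_recovery = after_decline * 1.05     # a 5 % rise off the low gives ~76.7
print(round(after_recovery, 1), round(100.0 - after_recovery, 1))
# -> ~76.7, i.e. roughly 23 % below the starting level; the quoted 22 % presumably reflects
#    rounding in the input figures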

MarkW
Reply to  MikeB
August 9, 2015 3:30 pm

Luke, anyone interested in being accurate would call it a recovery. As to the rest of the 22%, be patient, it's coming back. Just like it did the last three times ice levels fell in the Arctic.

Robert Austin
Reply to  MikeB
August 9, 2015 5:49 pm

Luke says:

The fact is we are still 22% below the long-term mean.

It is a bit of a stretch, climatologically speaking, to consider the 36-year period from 1979 to the present as long term. The first IPCC report showed 1973 as having substantially less Arctic ice extent than 1979, the year “conveniently” selected to start the official Arctic ice extent record.

Reply to  MikeB
August 9, 2015 7:03 pm

The graph is somewhat misleading because, “The accuracy of Arctic sea ice concentration at a grid cell in the source data is usually cited as within +/- 5 percent of the actual sea ice concentration in winter, and +/- 15 percent during the summer when melt ponds are present on the sea ice (GSFC Confidence Level), but some comparisons with operational charts report much larger differences (Agnew 2003, Partington et al 2003).”
This means the inaccuracy band is bigger than the spread of the annual data curves.
See: http://nsidc.org/data/docs/noaa/g02135_seaicce_index

mwh
Reply to  Luke
August 9, 2015 8:36 am

So tell me: if the least extents cover the last 8 years and the recovery has been over 4, how is that not a recovery? You seem to have the same ability to mangle data as so many other alarmists. Plus, even though the extent was not so great as last year, the Antarctic continues to produce unprecedented (satellite record) extents. I suppose you would be quick to point out that the extent has gone down this year, ‘proving’ continued warming. The absurdity of your ‘8th lowest extent’ fact, when the only relevant one would be the lowest on record at this point, shows that debate is pointless. But then you go on to talk about volume, which is also down on last year. If 3 years does not mean a recovery in your head, how does 1 year equal ‘consistent with a multidecadal decline’? The lack of joined-up thinking borders on idiotic.

MarkW
Reply to  Luke
August 9, 2015 8:37 am

Wow, one month, and that disproves the entire claim?
Are you really as pathetic as your posts make you seem?

Eugene WR Gallun
Reply to  Luke
August 9, 2015 9:48 am

Luke
Well, I know nothing about the science of Arctic ice formation, but reports of many types going back a couple hundred years do seem to indicate that Arctic ice extent varies like a sine wave, going up and down in a regular pattern.
Now we can almost certainly say this pattern did not exist during the great ice ages, everything being almost completely frozen solid all of the time (it would be amazing if there was much open Arctic water during those ice ages).
And we can say that this pattern probably did not exist during the Little Ice Age.
Did it exist during the Medieval Warm Period? I have never seen that speculated on.
Well, it seems that we can say that whatever causes this oscillation in Arctic ice extent can apparently be overwhelmed by other larger forces.
Nevertheless, looking at the past couple of hundred years, this type of sine-wave oscillation seems to have been occurring. I can see no reason why it should stop. So I would say we are going to see more Arctic sea ice.
That would seem the safest bet for a layman to make.
Eugene WR Gallun

Luke
Reply to  Eugene WR Gallun
August 9, 2015 5:24 pm

Eugene,
That is an interesting theory but the data do not support it. There has been variation in Arctic ice extent, but nothing like what we have seen over the past several decades. Take a look at Kinnard's paper in Nature (url below). In the abstract he states “Here we use a network of high-resolution terrestrial proxies from the circum-Arctic region to reconstruct past extents of summer sea ice, and show that—although extensive uncertainties remain, especially before the sixteenth century—both the duration and magnitude of the current decline in sea ice seem to be unprecedented for the past 1,450 years.”
I don’t see evidence of the sine wave you are suggesting.
http://www.nature.com/nature/journal/v479/n7374/full/nature10581.html

Mike
August 9, 2015 8:09 am

The whole problem with this process of “tuning” is that it is a case of overfitting the data. Given enough variables (time series of possible climate drivers) it is always possible to produce a relatively close fit to a chosen period by multivariate regression.
However, there is no reason to expect that the regression fit that produces the smallest residual (the usual selection criterion) is the one closest to the physical reality, especially when there is no guarantee that all relevant drivers have been included and their mechanisms correctly understood.
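A minimal, self-contained Python illustration of that overfitting point, using entirely synthetic numbers rather than any real climate series: regress a short noisy 'temperature' record on a batch of random 'driver' series and the fit looks fine over the tuning window, then falls apart outside it.

import numpy as np

rng = np.random.default_rng(0)

n_train, n_test, n_drivers = 30, 30, 20
t = np.arange(n_train + n_test, dtype=float)

# synthetic 'observed temperature': a gentle trend plus noise (no real data involved)
temp = 0.01 * t + 0.1 * rng.standard_normal(t.size)

# a constant term plus 20 random, physically meaningless 'driver' time series
X = np.column_stack([np.ones(t.size), rng.standard_normal((t.size, n_drivers))])

# least-squares multivariate regression, fitted on the tuning window only
coef, *_ = np.linalg.lstsq(X[:n_train], temp[:n_train], rcond=None)

resid_train = X[:n_train] @ coef - temp[:n_train]
resid_test = X[n_train:] @ coef - temp[n_train:]
print(round(float(np.sqrt(np.mean(resid_train ** 2))), 3),
      round(float(np.sqrt(np.mean(resid_test ** 2))), 3))
# the residual over the tuning window is small (the spurious 'drivers' soak up the noise),
# while the residual over the following period is typically several times larger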

Reply to  Mike
August 9, 2015 9:52 am

Yes. That is my point about observability below before I saw your comment.

August 9, 2015 8:11 am

So we need to build a wind tunnel large enough to put the Earth in? Is that right?

MarkW
Reply to  RoHa
August 9, 2015 8:39 am

No, we just need to build three or four more earths so that we can run experiments using them.

August 9, 2015 8:19 am

The penultimate graph gives the impression that the models were accurate in predicting temperature up to the late 1990s. A distinction should be made between hindcasting and forecasting.
http://www.vukcevic.talktalk.net/GR1.htm
Re the last graph: I think that Dr. Akasofu is mistaken; HERE it can be seen why his extrapolation may be wrong.

Science or Fiction
Reply to  vukcevic
August 9, 2015 2:19 pm

There are some glimpses of realism in the IPCC report; it is just so sad that they are not honest about the implications:
“When initialized with states close to the observations, models ‘drift’ towards their imperfect climatology (an estimate of the mean climate), leading to biases in the simulations that depend on the forecast time. The time scale of the drift in the atmosphere and upper ocean is, in most cases, a few years … Biases can be largely removed using empirical techniques a posteriori … The bias correction or adjustment linearly corrects for model drift … The approach assumes that the model bias is stable over the prediction period (from 1960 onward in the CMIP5 experiment). This might not be the case if, for instance, the predicted temperature trend differs from the observed trend … The bias adjustment itself is another important source of uncertainty in climate predictions … There may be nonlinear relationships between the mean state and the anomalies, that are neglected in linear bias adjustment techniques..”
(Ref: Contribution from working group I; on the scientific basis; to the fifth assessment report by IPCC; 11.2.3 Prediction Quality)
Some highlights:
Imperfect climatology! – What is that supposed to mean?
The time scale of the drift is, in most cases, a few years!
Biases can be largely removed using empirical techniques a posteriori!
The bias adjustment itself is another important source of uncertainty in climate predictions! –
(Strange to say that the uncertainty in a posteriori empirical bias adjustment is a source of uncertainty in the climate prediction!)
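For readers wondering what that a-posteriori “bias adjustment” amounts to in practice, here is a rough Python sketch of a lead-time-dependent drift correction applied to synthetic numbers. The real CMIP5 procedure is more involved, so treat this only as an illustration of the idea being quoted:

import numpy as np

rng = np.random.default_rng(1)

n_starts, n_leads = 20, 10   # 20 hindcast start dates, 10 forecast lead years each

# synthetic 'observations' and 'hindcasts'; the hindcasts drift warm with lead time
obs = rng.standard_normal((n_starts, n_leads)) * 0.1
drift = 0.05 * np.arange(n_leads)            # model drifting toward its own climatology
hindcasts = obs + drift + rng.standard_normal((n_starts, n_leads)) * 0.1

# empirical bias as a function of lead time, averaged over start dates,
# then removed from every forecast a posteriori
bias_per_lead = (hindcasts - obs).mean(axis=0)
corrected = hindcasts - bias_per_lead

print(np.round(bias_per_lead, 2))
print(round(float(np.abs(corrected - obs).mean()), 3), "mean abs error after correction")
# note the assumption buried in this: the drift estimated from past hindcasts is taken
# to hold unchanged over the prediction period, which is exactly the caveat quoted above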

August 9, 2015 8:22 am

So they are basically weather models, working from the bottom up, from local to global scale — which immediately brings to mind Frankenstein, who was also patched together roughly and to bad overall effect. No wonder the logic of climate scientists is so fragmented, and the models so wrong. The models, and their creators/believers, have no clue to the real-world global constraints, which make all the essentially local and transient weather irrelevant. I repeat, in capital letters: IRRELEVANT (and for the umpteenth time, INCOMPETENT).
“For Climate, All the World’s a Stage”
The global constraints–and the only things that matter to the global mean surface temperature–are 1) the solar input power, 2) the depth of the atmosphere, which provides for a troposphere (above about 200mb) on all of the planets with sufficient atmosphere, and for the troposphere’s characteristic hydrostatic temperature lapse rate structure, which predominates, globally, over all other atmospheric conditions, including night and day, and 3) the fact that the atmosphere is warmed to, or maintained at, that stable lapse rate structure, by direct absorption of incident solar IR radiation (and not at all from the surface, on anything approaching the global scale)
There is no radiative “greenhouse effect” due to “greenhouse gases”, and the radiative transfer theory itself is wrong, and reverses not only the direction of radiational warming of the atmosphere in the troposphere, but the physical cause and effect; measured “longwave (IR) radiation” in the atmosphere (both “downwelling” and “upwelling”) is due to the stable temperature structure, not vice-versa as the radiation transfer theorists mistakenly believe. They are really just measuring the temperature, as is obvious when they claim the Earth’s surface radiates as a blackbody (in a vacuum) at the same temperature; that is an obscenely wrong statement for a supposedly fundamental theory.
Bottom line, don’t say the GCMs are “based on established physics”. They are based on a general incompetence among climate and atmospheric scientists, which has produced lazy intellectual delusion in place of sound logic, and gross perversions of truly established physics.

Reply to  harrydhuffman (@harrydhuffman)
August 9, 2015 3:28 pm

Harry, that last paragraph is priceless.
+ 10

Ian Macdonald
Reply to  harrydhuffman (@harrydhuffman)
August 9, 2015 11:03 pm

“..the atmosphere is warmed to, or maintained at, that stable lapse rate structure, by direct absorption of incident solar IR radiation ”
Don't see how that can be the case; nitrogen and oxygen are very transparent to thermal or visible EM radiation. All the evidence is that the troposphere is warm primarily because of surface convection, plus a little extra from GHG effects. That is why it is not so warm above 10,000 ft or so, because the convection rarely goes above that level. It is possible that what warmth there is in the stratosphere is partly due to GHGs absorbing both inbound and outbound longwave IR.

Joe Crawford
August 9, 2015 8:26 am

It's amazing how many “scientific” papers are published each year based strictly on the results of climate models. In this publish-or-perish universe I have no problem with that. What I do have a problem with is presenting those results as anything more than a characterization of the models. Trying to claim, or even just to imply, that those results have any physical interpretation in the real world is merely advertising your own incompetence to the rest of science. After the university P.R. department spins the paper, it is advertising your incompetence to the rest of the world.

August 9, 2015 8:34 am

great essay

August 9, 2015 8:41 am

Will not the first part of the problem — computational speed capabilities — be solved by Moore’s Law before too long, even in the face of the couple orders of magnitude needed?

EC Burgener
Reply to  TBraunlich
August 9, 2015 8:47 am

Why bother when a simple hand held calculator was found to do a better job?

Joe Crawford
Reply to  TBraunlich
August 9, 2015 8:52 am

According to several people involved in the research, Moore’s Law is soon to reach its physical limits. It only has a few, maybe only a couple, of generations left to go. Most of the advancements in processing power today already come from attaching/bundling more and more processors together. Eventually, inter-processor communications puts a practical limit on that.

MarkW
Reply to  Joe Crawford
August 9, 2015 10:35 am

There are still a few tricks yet to play in terms of getting processors to be faster. Bigger caches and pipes, for one. The algorithm for predictive pipelining is getting better. (Basically, when you reach a branch in your code, which branch do you pipeline and which do you ignore? Predictive pipelining takes a best guess. Larger pipelines also allow you to pipeline both branches.)
There is also look-ahead computation. That is, when the CPU is idled waiting on data, it can examine the data in the pipeline, determine which calculations are ready to be performed (all the data needed for them is available in the pipe or cache) and go ahead and do those calculations now.
All this stuff is already being tried in the labs, but hasn't made it out to the field yet. Then, as more transistors become available, you can do all of this, but more of it.

MarkW
Reply to  TBraunlich
August 9, 2015 10:29 am

There are limits to Moore’s law, and it appears that the doubling will only continue for a few more years.
Think about it for a few minutes, do you really believe that transistors will ever get smaller than a single molecule?

Reply to  TBraunlich
August 9, 2015 10:39 am

Actually, not so much. Moore's law is reaching practical limits on things like capital cost and yield. The former goes up (a lot) and the latter goes down as line width is reduced.
Supercomputers are already massively parallel, with multiple cores per microprocessor. For example, UK Met is starting a two-year process to replace its 2012 IBM Power7 doing 1.2 petaflops (the example in the post) with a Cray XC40 doing 2.8 petaflops. The XC40 comprises 3944 dual-microprocessor nodes, each having 128GB of DRAM, for a total active memory of 505 terabytes. The (3944*2) microprocessors are Intel Haswell E5-2680s, each with 12 cores running at 2.5 GHz. That is a total of 94,656 cores (3944 x 2 x 12).
The computational limit is not cores. It is the rate at which data moves from memory to CPU and back, and at which it is further exchanged between nodes. The post's 7 orders of magnitude is off a nominal 1-2 petaflop base. The Chinese are building the superest duperest supercomputer. It is supposed to be 23 petaflops.

Pamela Gray
August 9, 2015 8:48 am

Fantastic summary of issues. The models also do not reflect ENSO processes as well as they could. That said, this is the next area of further discovery, as it at least is on a larger-longer scale than clouds. ENSO is finally getting the attention it deserves as a multi-decadal and likely multi-century scale phenomenon that is ripe for longer-scale modeling than the months-scale currently used in predicting ENSO directions. Furthermore, the clouds in the equatorial band have been shown to coincide with different trade-wind/SST processes in a fairly standard fashion, again becoming something that can be modeled. The only monkey wrench in the works is determining why El Niños occur and whether that phenomenon is a random walk or is itself triggered by somewhat predictable normal to super-normal La Niña longer-term oscillations.

Reply to  Pamela Gray
August 9, 2015 10:10 am

ENSO finally getting attention? I covered that in my book ‘What Warming?’ in 2010, and nobody noticed. Goes to show how illiterate these so-called “climate” scientists really are. Read pages 21 to 27 to understand what ENSO is. It is a harmonic oscillation of ocean water from side to side in the equatorial Pacific, powered by the trade winds. If you blow across the end of a glass tube, the tone you get is its resonant tone, determined by the dimensions of the tube. The trade winds are the equivalent of blowing across the end of a tube, and the ocean answers with its own resonant tone: one El Nino wave every four to five years or so.
An El Nino wave carries warm water from the Indo-Pacific Warm Pool across the ocean along the equatorial counter-current. When it reaches South America it runs ashore, spreads north and south along the coast, and warms the air above it. Warm air rises, joins the westerlies, and the world knows that an El Nino has started. But any wave that runs ashore must also retreat. When the El Nino wave retreats, the water level behind it drops as much as half a meter, cool water from below wells up, and a La Nina has started. As much as the El Nino warmed the air, La Nina will now cool it, and the global mean temperature does not change. You can see how accurately this happens from the wave train that is part of my figure 15. That figure itself shows you a hiatus that is not found on present-day temperature curves, which are falsified to hide it. (You can still get it from satellites, however.)
This is the normal course of ENSO, but things can get messy and block the way across the ocean. When this happens and an El Nino wave is on the way along the equatorial counter-current, it is forced to stop in its tracks. Its warm water will simply spread out in mid-ocean and create a rising El Nino warm cloud on the spot. This is called an El Nino Modoki or CP (Central Pacific) El Nino. The return flow is going to be different and I am not sure how the warming is distributed after this. There is not much information and nobody even knows what percentage of El Ninos end up this way. Now there is some unfinished work for someone to take up. As to ENSO history, it started when the Panamanian Isthmus rose from the sea and thereby established the present-day equatorial current system in the Pacific.

Pamela Gray
Reply to  Arno Arrak (@ArnoArrak)
August 9, 2015 11:32 am

ENSO in short-term scenarios has been well studied (and you would do well to refer to the peer-reviewed research). What is yet to be considered by modeling is its long-term oscillations, additional differences between La Nina and El Nino, ENSO teleconnections in terms of currents sending water away from the equatorial band, and widespread atmospheric pressure systems interacting with these equatorial systems.

Sturgis Hooper
Reply to  Arno Arrak (@ArnoArrak)
August 9, 2015 11:50 am

Pamela,
For the past few days Chile has been battered by powerful rain storms and northerly wind. I wonder if the building El Niño might be involved. Regardless of what the fiction writers at Hadley CRU, NOAA and GISS publish about GASTA, South America and Antarctica are getting colder.

Pamela Gray
Reply to  Arno Arrak (@ArnoArrak)
August 9, 2015 12:37 pm

Here is an attempt to model a wickedly complex problem. No clear result beyond 1 year was accomplished. But at least the attempt was made.
http://journals.ametsoc.org/doi/pdf/10.1175/2008JAS2286.1

Gary Pearse
August 9, 2015 9:03 am

When we are looking at global temperature, I wonder if we couldn't make an average cell of, say, 1M sq km, summing the extent of global tropical thunderstorm ‘area’ from satellite images and dividing by 500, measuring temperature change with the passage of 1000 storms to get an average, agglomerating all the other parameters to 1M sq km, and run the model to see how close we get to the average temperature.
At least we could do the ‘integration’ for thunderstorm area and apportion its effect to the GCM models. We might be able to include a factor for the variability of thunderstorm area over the course of a year – heck, let's start collecting this stuff henceforth. We have satellite temperatures, we have Argo floats, etc. etc. Getting resolution down to small areas that aren't small enough for what is apparently a sizable factor that gets ignored is a fool's game, and that should have been recognized $100B ago.
I think there is symmetry enough in the problem to even divide the globe into strips – the entire ITCZ is pretty much the same around the globe and most of the sun's energy impinges on this zone. A second pair of strips would be the temperate zones out to +/-60 latitude, plus the two polar areas. Possibly separate out the land masses to account for their differential heating/cooling vs the oceans. Has anybody tried this? I think if it were engineers doing this, it would have been tried. I look at the multi-million-year proxies with their variations of plus or minus 3-4C and I immediately conclude that the solution is clearly a top-down affair. I wish I had the skills and tools to do such a thing.

Dr. Daniel Sweger
August 9, 2015 9:03 am

There are several problems with GCM parameterized models that are typically not encountered in engineering applications. The reason parameterized models are necessary is that the process involved is nonlinear and does not lend itself to closed-form solutions. The models are trained using existing data over a well-defined range of input values. The model is then applied to a particular problem.
However, the range for each variable needs to cover the entire range of the anticipated applied values. If input value of one or more of the variables exceeds the range used during model training then the model output is considered to be unreliable. The reason for that is relatively simple. The effect of that variable on the output is not known with any degree of certainty, and the further that variable gets from the training values the more unreliable the output of the model is. Model variables that behave “properly” over the training range can diverge quickly, even exponentially, outside of that range.
With engineering applications the ranges are set by experimental conditions. In wind tunnel experiments, for example, the experimental wind speed must exceed the anticipated real-life case. If and when the tested vehicle encounters wind speeds greater than the range of the model inputs the results can be disastrous, as has been encountered with ultra-supersonic test flights or attempts to set land speed records.
It is also possible for singularities to occur with combinations of input variable values that were not tested for. Such was the case with the 1940 Tacoma Narrows Bridge collapse in Washington State. A resonance condition was established by a combination of wind speeds and direction that was never tested.
In the case of GCMs the most important of the input values is assumed to be the CO2 concentration. But these models are only trained over a relatively narrow range of values. They are trained on hindcast temperature values for the thirty years from 1975. During that thirty-year time span, the value of atmospheric CO2 concentration ranged from 330 ppm to 385 ppm. The models are then run for values of CO2 that exceed the training values by as much as 100%.
Another potential problem with GCMs is that the model should never be evaluated based on the training data. There must be a data set that is independent of the training data that is used for that purpose. This is typically done by randomly dividing the total data set into halves, one of which is used for training and the other for evaluating. This then requires a sizeable data set, which does not exist with global climate data.
These are just some practical problems with the modeling. Other theoretical problems include how to define a “global” temperature. Temperature is an intensive property, but only extensive properties are additive, and thus subject to simple averages. For example, if you mix a mass of dry air at a temperature Tdry and another equal mass of air at 100% relative humidity, i.e. saturated air, at a different temperature Twet the resultant temperature is not (Tdry + Twet)/2.
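A rough Python sketch of that mixing point, conserving moist enthalpy and total water rather than averaging temperatures. The 30 °C / 10 °C and 30 °C / 0 °C cases, the Tetens saturation formula, sea-level pressure and constant heat capacities are all assumptions made purely for illustration; the deviation is small when no condensation occurs and grows to several degrees when it does.

import math

CP_DRY, CP_VAP, LV, P0 = 1005.0, 1860.0, 2.5e6, 101325.0   # SI units

def q_sat(t_c):
    # saturation specific humidity over water at pressure P0 (Tetens formula), kg/kg
    e_s = 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))
    return 0.622 * e_s / (P0 - e_s)

def mix_equal_masses(t1_c, q1, t2_c, q2):
    # Mix two equal masses of moist air, conserving enthalpy and total water.
    # If the mixture is supersaturated, find the temperature at which the latent
    # heat of the condensed excess balances the warming (simple bisection).
    q_tot = 0.5 * (q1 + q2)
    cp_mix = CP_DRY + q_tot * CP_VAP
    t0 = 0.5 * ((CP_DRY + q1 * CP_VAP) * t1_c + (CP_DRY + q2 * CP_VAP) * t2_c) / cp_mix
    if q_tot <= q_sat(t0):
        return t0                              # no condensation needed
    def balance(t):
        return cp_mix * (t - t0) - LV * (q_tot - q_sat(t))
    lo, hi = t0, t0 + LV * q_tot / cp_mix      # bracket: no warming vs all vapor condensed
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if balance(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example 1 (the case described above): dry air at 30 C mixed with saturated air at 10 C
print(round(mix_equal_masses(30.0, 0.0, 10.0, q_sat(10.0)), 2), "vs naive 20.0")
# Example 2: two saturated parcels at 30 C and 0 C; condensation pushes the result
# several degrees above the naive 15.0 C average
print(round(mix_equal_masses(30.0, q_sat(30.0), 0.0, q_sat(0.0)), 2), "vs naive 15.0")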
It is about time that the true scientific community stands up and disputes this raw attempt at power grabbing and wasteful spending of hard-earned tax dollars. There is a great deal of value in exploring new and novel methods of generating electricity, but not at the expense of destroying emerging economies that are dependent on currently inexpensive technologies. The current climate “solutions” are nothing more than modernized versions of eugenics.

Reply to  Dr. Daniel Sweger
August 9, 2015 10:39 am

Indeed, current-day climate change Lysenkoism is a commonly discussed topic here at Anthoy's WUWT.

Reply to  Joel O’Bryan
August 9, 2015 10:40 am

Anthony.

Reply to  Dr. Daniel Sweger
August 9, 2015 3:45 pm

Other theoretical problems include how to define a “global” temperature. Temperature is an intensive property, but only extensive properties are additive, and thus subject to simple averages. For example, if you mix a mass of dry air at a temperature Tdry and another equal mass of air at 100% relative humidity, i.e. saturated air, at a different temperature Twet the resultant temperature is not (Tdry + Twet)/2.

That paragraph was worth saying again, so I did. It would be nice to find a way to get the “man on the street” to see and understand this.

Dan Tauke
Reply to  Dr. Daniel Sweger
August 9, 2015 6:39 pm

+1000 Dr. Sweger – several very good points in that post.

eyesonu
Reply to  Dr. Daniel Sweger
August 10, 2015 7:35 am

Dr. Daniel Sweger,
You make very good points in your comment above. I know others have said the same but I also take note.
========
by Rud Istvan,
Good essay and good coverage.

EternalOptimist
August 9, 2015 9:25 am

One of the most frequent points made is that we have to use models because there is ‘only one Earth’.
With the amount of money being wasted on CAGW, surely a cheaper option would be to build another one. We might even be able to provide clean drinking water to the world's poor with the change.

PA
August 9, 2015 9:32 am

http://www.nature.com/nature/journal/v519/n7543/images_article/nature14240-f4.jpg
The CO2 forcing was measured at 0.2 W/m2 for 22 PPM over an 11 year period.
Further – the data show the relationship between CO2 at the surface and IR forcing.
The models clearly aren't matching the real-world data, which seem to indicate weak forcing and negative feedback.
How are they doing these long and short term parameter tuning runs, reviewing the results, and still not correcting the models?
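For comparison, the widely used simplified expression for CO2 forcing, ΔF = 5.35·ln(C/C0) W/m^2 (Myhre et al.), gives the following for a 22 ppm rise. The ~370 ppm starting concentration is an assumption, and the measurement quoted above is a surface forcing at specific sites, so the two numbers are not strictly comparable; this is only a rough yardstick.

import math

def co2_forcing(c_new_ppm, c_old_ppm):
    # simplified CO2 radiative forcing expression (Myhre et al. 1998), W/m^2
    return 5.35 * math.log(c_new_ppm / c_old_ppm)

c0 = 370.0                      # assumed starting concentration for the 11-year period, ppm
print(round(co2_forcing(c0 + 22.0, c0), 2), "W/m^2")
# -> about 0.31 W/m^2 from the simplified expression, versus the ~0.2 W/m^2 quoted above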

August 9, 2015 9:42 am

“The second way is to compare longer-term observational data at various time scales to parameterization results, and ‘tune’ the parameters to reproduce the observations over longer time periods.”
The problem with this is one of observability. Basically, there is no unique parameterization – many different ones can produce the same observed behavior over the selected interval. One can find a parameterization that appears to fit, but the likelihood is that observations will diverge from the model in the future.
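A minimal Python sketch of that observability problem, with synthetic numbers: two structurally different one-parameter 'parameterizations' are tuned to the same short record, match it about equally well, and then diverge badly once extrapolated.

import numpy as np

rng = np.random.default_rng(2)

t_tune = np.arange(31.0)          # tuning interval
t_end = 100.0                     # end of the projection

# synthetic 'observations' over the tuning interval only
obs = 0.01 * t_tune + 0.1 * rng.standard_normal(t_tune.size)

def linear_response(t):
    return t                        # parameterization A: response keeps growing

def saturating_response(t):
    return 1.0 - np.exp(-t / 40.0)  # parameterization B: response levels off

def best_scale(shape):
    # least-squares scale factor for a single response shape
    f = shape(t_tune)
    return float(np.dot(f, obs) / np.dot(f, f))

a, b = best_scale(linear_response), best_scale(saturating_response)
rms_a = float(np.sqrt(np.mean((a * linear_response(t_tune) - obs) ** 2)))
rms_b = float(np.sqrt(np.mean((b * saturating_response(t_tune) - obs) ** 2)))
print(round(rms_a, 3), round(rms_b, 3))   # both match the tuning interval about equally well
print(round(a * linear_response(t_end), 2), round(b * saturating_response(t_end), 2))
# ...yet by t = 100 the two 'equally good' parameterizations differ by roughly a factor of two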

Gary Pearse
Reply to  Bart
August 9, 2015 10:38 am

Indeed, they have “corrected” their fits by over-weighting aerosols so that they can hang on to the high climate sensitivity they desperately cling to. They've even added puffs of smoke from unremarkable volcanoes that don't emit into the upper atmosphere to try to support the aerosol solution to their woes. They already know that sensitivity IS less than 1.5, but to admit that is to admit there is no crisis in the making.
This science would be entirely different were the norms of morality those of a few generations ago. They survived Climategate with obfuscation, whitewash investigations, misdirection and clamoring ever louder about Climate Armageddon, and found they could essentially get away with murder as far as their supporters were concerned.
They brazened out the ‘pause’ with silly claims of new records being set (one should expect a bump or two on a plateau) and 50 ridiculous reasons for it, with the heat going to hidden places. Gleick the Sneick got an award; Turney of the Ship of Fools also got an award after the comedy he was in. Emboldened by the fact that there seemed to be no reckoning to deal with for any crime, they held their noses and eliminated the pause, knowing criticism would soon be over and their faithful would happily adopt this new adjustment as scientific. Those with more scruples came down with clinical depressions as they fell into classic psychological D'Nile, although they will probably recover with this evidence that it doesn't matter what lengths they go to to support their fantasies.

Reply to  Gary Pearse
August 9, 2015 10:51 am

A sensitivity of less than 1.5 deg C has been argued many times here (with sound reasoning and paleo data) to be net positive, rather than net negative, with regard to Earth's biosphere and the human condition.
That is, humanity, through its fossil fuel CO2 injections, is producing a Modern Climate Optimum.
The real risk, though, is a Malthusian one of non-renewable resource depletion, such as mineral ores necessary to an advanced technical society (copper comes immediately to mind) for which no substitute can be found. But man's ingenuity has always come through against these Malthusian warnings. Using robotic space tugs and robotic mining to park an iridium-platinum-rich asteroid in lunar orbit and mine the ore for Earth delivery is one futuristic possibility. That scenario is about as fanciful as getting plentiful oil and natural gas out of dense shale rock would have been to the petroleum industry 50 years ago.

MarkW
Reply to  Gary Pearse
August 9, 2015 3:38 pm

The only copper that is lost is the stuff that gets sunk with ships. The stuff tossed in landfills is still there, waiting for the day when it becomes economically advantageous to go back in and get it. It wouldn't surprise me if the amount of copper per ton of landfill material is already comparable to many currently operating mines. Plus, less smelting to get it ready for market.

rogerknights
Reply to  Gary Pearse
August 9, 2015 8:32 pm

“Gleick the Sneick got an award, Turney of the Ship of Fools also got an award after the comedy he was in.”
So did Loony Lew.

MarkW
Reply to  Bart
August 9, 2015 10:44 am

I believe the author brought this point up, but I would like to reiterate it here.
There's parameterization, and then there's just making it up.
It’s one thing to parameterize a known process because it’s too hard to do computationally, but then they include things like aerosols.
For most of the period being analyzed, they have no idea what level or types of aerosols were being produced or even where most of them were being produced. They just add in the amount needed to get the model to fit the temperature curve they are training for, and then declare themselves satisfied.

Reply to  MarkW
August 9, 2015 11:19 am

Again, this is the problem of observability. They do not have any measurements which would uniquely differentiate the effect of aerosols from the host of other influences. So, they can monkey around in that infinitely unobservable subspace, and come up with any answer they please.

August 9, 2015 10:17 am

Rud, Thanks for the tutorial. I had read much of that several years ago from various sources (when I started my self-education process of what GCMs, Climate change stuff, and the claims were all about) but to read, think about those climate computational problems and claims again, and refresh ideas and claims is very useful (medicine calls it CME).
What we see now though is full-on politicized science that has corrupted a message that should have been communicated to the public in the SPM with lots of uncertainty. Unfortunately the CC politicians and renewable-energy crony capitalists have taken over the science message and turned it into voodoo magic potions (carbon trading, carbon taxes, and renewable subsidy schemes) to charge the public with those costs while taking away democratic freedoms in order to impose even more taxes down the road. And all on the basis of deeply flawed, circular-logic-tuned GCMs.

Mike M. (period)
August 9, 2015 10:40 am

Rud Istvan,
Thank you for a nice summary of what is right and wrong with climate models. A nice contrast to the silly claims one often hears about what climate models assume.
Is there some reason that critical sub-grid-scale processes can't be better dealt with by using adaptive grid sizes? In numerically solving ODEs, adaptive step sizes are old hat. The other possibility for dealing with such phenomena is to develop properly validated parameterizations. You are right that they can't be validated by comparing to trends. But perhaps comparisons to sufficiently detailed time- and space-resolved data could do the job. So far as I can tell, validation gets far too little attention from the modellers.
Beyond sub-grid-scale phenomena, I suspect another big problem with the models: inadequate modelling of multidecadal, possibly chaotic, processes in the oceans. My guess is that such processes are the origin of multidecadal cycles in the climate, such as in Arctic ice. In principle, GCM’s should be able to deal with such processes, but there is probably nowhere near enough data to guide model development.

Reply to  Mike M. (period)
August 9, 2015 11:29 am

” In principle, GCM’s should be able to deal with such processes, but there is probably nowhere near enough data to guide model development.”

FYI, they don't try to hide that the GCMs don't model internal dynamics. They pretend the open-loop system is representative of the real climate responses.
https://twitter.com/ClimateOfGavin/status/630071181450866688?s=02
And if you cannot model dynamics, you cannot possibly get the feedbacks correct even IF one knows their sign and magnitude. And with their forced positive feedbacks of H2O vapor (net strong positive), they knowingly and willfully force the models to run hot.

MarkW
Reply to  Joel O’Bryan
August 9, 2015 3:39 pm

If you don’t attempt to model dynamics, then by definition any training that you do on past data is already invalid.

Reply to  Mike M. (period)
August 9, 2015 12:24 pm

“Is there some reason that critical sub-grid scale processes can’t be better dealt with by using adoptive grid sizes?”
Yes. As the article says, GCMs work up against a CFL constraint. They can’t decrease the spatial grid size without decreasing the time step. But there has to be one time step everywhere, for various practical reasons, so you can’t stably reduce the horizontal grid size locally where you would like to.
Modelling of multi-decadal processes is the opposite problem. Grid size is not a problem, and it brings no new difficulty of sub-grid processes. It obviously takes a lot of computer time, but that will accumulate. GCM’s are the way to make progress here.
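To make the CFL point above concrete, here is a minimal sketch (toy numbers only: the ~50 m/s wind speed and a Courant number of 1 are illustrative assumptions, not values from any actual GCM):

```python
# Illustrative CFL (Courant-Friedrichs-Lewy) arithmetic: toy numbers, not a real GCM.
def max_stable_timestep_s(dx_km, wind_ms, courant=1.0):
    """Largest stable explicit advection time step (seconds) for a cell of width dx_km
    and a characteristic wind speed wind_ms, at the given Courant number."""
    return courant * (dx_km * 1000.0) / wind_ms

for dx in (280, 110, 55):                          # typical CMIP5-era, finest CMIP5-era, one further halving
    dt = max_stable_timestep_s(dx, wind_ms=50.0)   # assume jet-stream-like winds of ~50 m/s
    print(f"dx = {dx:3d} km  ->  max dt ~ {dt / 60:.0f} minutes")

# Halving dx halves the allowed dt, so each horizontal doubling of resolution costs
# roughly 4x (more cells) times 2x (more steps) before any vertical refinement.
```

Whether a particular model is advection-limited at exactly these numbers depends on its numerics; the point is only the scaling.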

MarkW
Reply to  Nick Stokes
August 9, 2015 3:41 pm

Actually, grid size is just as big a problem for the GCMs. Since you can’t model anything that occurs at scales smaller than the grid size, those things have to be parameterized. However, before you can parameterize them, you must first understand them. And we are still years away from being able to do that.

Reply to  Nick Stokes
August 9, 2015 4:00 pm

Nick, with all due respect, modeling processes like tropical convection cells is essential, and I showed how it was directly related to grid-size resolution. Pictures, even.
Yes, GCMs will progress. Yes, supercomputers will progress–slowly, for reasons given upthread in the technical supercomputer petaflops comment. Way beyond the intent of a simple guest post.
But not enough, and not fast enough, to solve the several-orders-of-magnitude problem highlighted by the guest post. And my upthread comments on resolving attribution parameterization suggest 30-50 years more ‘good’ data before that knot can be untied. What say you to that?

Reply to  Nick Stokes
August 9, 2015 5:33 pm

It would certainly be much simpler to use fixed and uniform time steps for the entire model, but I very much doubt that this is strictly necessary. Variable time-stepping is done in FEA, and while it would certainly add complexity, it should be possible to use heterogeneous time steps in the same model. For example, at the interface of a small cell and a large one, one could perform calculations on two time scales, and where the two results are within an acceptable distance of each other, one can switch to the longer time interval for further outward propagation from the large cell to other large cells.
I would be surprised if this idea had not yet been explored.
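For what it is worth, here is a minimal one-dimensional sketch of the sub-cycling idea described above, using a toy explicit diffusion problem. It is entirely hypothetical: no production GCM is claimed to work this way, and many use spectral dynamical cores where such local stepping is harder.

```python
import numpy as np

# Toy 1-D heat equation u_t = D*u_xx, explicit Euler, on two patches sharing an
# interface: a coarse patch (dx = 0.1) takes one step of dt, while a fine patch
# (dx = 0.05) "subcycles" with 4 substeps of dt/4, because dt alone would be
# unstable on the fine grid.  Purely illustrative.
D = 1.0
dx_c, dx_f = 0.1, 0.05
dt = 0.4 * dx_c**2 / D     # stable on the coarse patch, unstable on the fine one
nsub = 4                   # so the fine patch takes nsub substeps of dt/nsub

coarse = np.zeros(20)
fine = np.zeros(20)
fine[5] = 1.0              # an initial hot spot inside the fine patch

def step(u, left, right, dx, dt_):
    """One explicit diffusion step with fixed boundary values left/right."""
    padded = np.concatenate(([left], u, [right]))
    return u + D * dt_ / dx**2 * (padded[2:] - 2 * padded[1:-1] + padded[:-2])

for _ in range(500):
    coarse = step(coarse, 0.0, fine[0], dx_c, dt)          # one coarse step
    for _ in range(nsub):                                  # fine patch subcycles
        fine = step(fine, coarse[-1], 0.0, dx_f, dt / nsub)

print("values at the shared interface:", coarse[-1], fine[0])
```

The bookkeeping at the interface is where the real complexity (and the conservation headaches) would live in any serious implementation.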

PA
Reply to  Nick Stokes
August 9, 2015 6:09 pm

Michael Palmer August 9, 2015 at 5:33 pm
It certainly will be much simpler to use fixed and uniform time steps for the entire model, but I very much doubt that this is strictly necessary.

I’m not that much of an analog guru… but event-driven stepping may make more sense than variable time steps.

Reply to  Nick Stokes
August 9, 2015 6:14 pm

Rud,
“Nick, with all due respect, modeling processes like tropical convection cells is essential”
Yes, of course (well, updrafts). I was referring to the “opposite problem” of multi-decadal processes, which are slow relative to the timestep.
MP,
“Variable time-stepping is done in FEA”
Many GCM’s use spectral methods for the dynamical core, for speed and accuracy. I think variable time (or space) intervals would be a big difficulty there.

Reply to  Nick Stokes
August 10, 2015 6:55 am

Well, variable space intervals at least are unavoidable on a spherical surface.
If continuously variable time intervals are too difficult, it may still be feasible to use time steps that are integral multiples of a basic time step. But whatever; fundamentally, I think all this stuff is useless. I wouldn’t be surprised if, subconsciously, the people who program these things feel the same and don’t really try all that hard to improve them. The HARRY_READ_ME file comes to mind.

August 9, 2015 10:46 am

Off topic, I apologize; but can someone address rumors I’ve been hearing about a net loss in global land ice, which has not been offset by a net gain in sea ice… I’m having difficulty breaking this down, and now I’m forced to turn to you lot for a hand up.

Sturgis Hooper
Reply to  owenvsthegenius
August 9, 2015 11:44 am

Owen,
The Antarctic is gaining ice mass, not losing it. Since this is most of the ice and freshwater on earth, loss would have to be extreme elsewhere for there to be a net loss.
http://wattsupwiththat.com/2012/09/10/icesat-data-shows-mass-gains-of-the-antarctic-ice-sheet-exceed-losses/
The Greenland ice sheet may or may not be losing mass, but in any case, not much either way. The mass of montane glaciers is negligible compared to the ice sheets, and likewise hard to say whether now a net gainer or loser.
Obviously there is a bit less ice on the planet now than during the depths of the Little Ice Age 300 years ago. The massive East Antarctic Ice Sheet, with 61% of earth’s freshwater, quit receding over 3000 years ago, during the Minoan Warm Period. The longer term trend for earth’s climate is cooling, which is a bad thing.

Reply to  owenvsthegenius
August 9, 2015 11:47 am

Yes, you heard correctly. The North American Laurentide ice sheet melted, retreated, and disappeared starting around 20,000 years ago and was mostly gone by 12,500 years ago. Since then, some smaller high-alpine mountain glaciers that survived that wicked Climate Change have been slowly retreating as well, with occasional melt hiatuses such as during the Little Ice Age from 1450-1850 AD.

AndyG55
Reply to  Joel O’Bryan
August 9, 2015 3:15 pm

Arctic sea ice is actually at anomalously high levels compared to all but the last few hundred years of the Holocene.
Biomarkers in sediment clearly show that an open Arctic was the norm for most of the first 2/3-3/4 of the last 10,000 years.
All this politically scary melt is because we have just climbed out of the coldest period of the current interglacial. Why would things not melt ! Arctic sea ice is a pita for anyone living up there.
Unfortunately, it looks like we might have topped out ! 🙁

AndyG55
Reply to  Joel O’Bryan
August 9, 2015 3:17 pm

whoops missed an important couple of words.
Biomarkers in sediment clearly show that an open Arctic was the norm during summer

Reply to  owenvsthegenius
August 9, 2015 12:08 pm

Owen, you will find a wealth of reference resources in essays PseudoPrecision (sea level rise, indicative of land ice loss), Tipping Points (detailed analysis of the Greenland and Antarctic ice sheets), and Northwest Passage (Arctic sea ice cyclicality and measurement issues) in the ebook Blowing Smoke, available on iBooks, Kindle,…
Short summary: Greenland was near stable in the 1990s, lost about 200 GT/year through 2012, and has again apparently stabilized; see the NOAA YE 2014 report card and the DMI surface mass balance for 2015 compared to the peak loss year 2012. EAIS is stable/gaining. WAIS is losing, mainly from PIG in the Amundsen embayment; estimates vary by nearly a factor of 4 from 2011 to 2014, so the data are sketchy. Arctic sea ice is recovering significantly (especially multiyear ice) from the 2012 low (see the extensive comment subthread above). Antarctic sea ice is setting record highs for now the fourth straight year.

PA
Reply to  owenvsthegenius
August 9, 2015 7:33 pm

1. Archimedes’ principle says sea ice, ice shelves, and a significant fraction of West Antarctica are irrelevant except for water-cooler gossip. Only land ice above sea level counts (minus about 9% of the amount below sea level). Since there will be isostatic rebound, even less ice counts toward sea level, depending on how long it takes to melt.
2. The net ice sheet loss in Antarctica is 0-100 GT on average depending on what guesser you believe.
The average thickness of the Antarctic ice sheet is 7000 feet. There are 30 million GT of ice. The average annual change, assuming a 100 GT loss, is 1/300,000th (which also means it would take 300,000 years to melt). 1/300,000th of 7000 feet is 0.28 inch or less. So they are trying to measure an average change of about 1/4 inch from 6700 km (GRACE) or about 590 km (ICESat; ICESat died in February 2010, and ICESat-2 doesn’t go up until 2017).
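For anyone who wants to check that arithmetic, a quick back-of-the-envelope script; it simply takes the comment’s figures (30 million GT total, 100 GT/yr loss, 7000 ft mean thickness) at face value rather than re-sourcing them:

```python
# Back-of-the-envelope check of the figures quoted above (taken at face value).
ice_total_gt = 30e6          # quoted Antarctic ice inventory, gigatonnes
loss_gt_per_yr = 100         # upper end of the quoted 0-100 GT/yr net loss
mean_thickness_ft = 7000     # quoted mean ice-sheet thickness

frac_per_yr = loss_gt_per_yr / ice_total_gt                   # ~1/300,000 per year
years_to_melt = 1 / frac_per_yr                               # ~300,000 years at that rate
thinning_in_per_yr = mean_thickness_ft * 12 * frac_per_yr     # feet -> inches

print(f"fractional loss per year: 1/{years_to_melt:,.0f}")
print(f"implied mean thinning:    {thinning_in_per_yr:.2f} inch per year")
```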

Schrodinger's Cat
August 9, 2015 10:50 am

A good post. I suspect your final sentence: “Climate models unfit for purpose would be very off message for those who believe climate science is settled.” is the most important sentence in the whole post. It is also where climate modellers diverge from the rest of the scientific community.
Failed models have no scientific use and should be binned. They should never be used for policy making.
The amazing resilience of failed climate models suggests that they are not about science. They fulfil their political and financial purposes and that is why they are used for policymaking.

Berényi Péter
August 9, 2015 11:12 am

One root cause is so fundamentally intractable that one can reasonably ask how the $multibillion climate model ‘industry’ ever sprang up unchallenged

Istvan, the mystery is even deeper than that. There may be unknown physics related to entropy processes in chaotic nonequilibrium thermodynamic systems, the class to which the terrestrial climate system belongs; see my comment on Validation Of A Climate Model Is Mandatory.
The Jaynes entropy of these systems can’t even be defined, so one obviously needs some as-yet-lacking generalization of the concept. Yet measuring the entropy production of the entire system is still conceptually simple: one only has to count incoming vs. outgoing photons (plus some contribution from their different momentum distributions), because the entropy carried by a single photon is independent of frequency in a photon gas. And the only coupling between the system and its (cosmic) environment is a radiative one.
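As a numerical aside on the photon-counting point: the textbook photon-gas result is that the mean entropy per photon of blackbody radiation is a universal constant of roughly 3.6 k_B, independent of temperature (and hence of the typical photon frequency). A one-line check of that constant:

```python
from math import pi

# Textbook photon-gas result (nothing climate-specific): for blackbody radiation,
# S/N = 2*pi**4 / (45*zeta(3)) * k_B, a pure number independent of temperature.
zeta3 = 1.2020569031595943                        # Riemann zeta(3), Apery's constant
entropy_per_photon_kB = 2 * pi**4 / (45 * zeta3)
print(f"mean entropy per photon ~ {entropy_per_photon_kB:.2f} k_B")   # ~3.60 k_B
```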
The climate system is nothing but a (huge) heat engine and trying to construct a computational model of such an engine with no understanding of the underlying entropy process is futile.
The funny thing is, while the climate system is obviously too big to fit into the lab, we could still build experimental setups of nonequilibrium thermodynamic systems with chaotic dynamics under lab conditions and study them experimentally. In that case we could have as many experimental runs as necessary while having full control over all system parameters. Unfortunately, no one seems to do such experimental work, despite the fact that this is how basic physics has always been developed; besides, it could be done at a fraction of the cost of these pointless, oversized computer games.
We would have no electric industry without Maxwell’s breakthrough and Faraday’s monumental experimental work serving as its foundation.

Joe Crawford
Reply to  Berényi Péter
August 9, 2015 2:18 pm

I’m not sure the current crop of those who call themselves “Climate Scientists” have the knowledge to design such experiments, or the math skills to analyze the results if someone else did the designs for them.

August 9, 2015 11:14 am

To contribute to this enlightening article by Rud Istvan, I would like to recommend this excellent article:
Tuning the climate of a global model. Mauritsen et al. JOURNAL OF ADVANCES IN MODELING EARTH SYSTEMS, VOL. 4, M00A01. doi:10.1029/2012MS000154, 2012.

The practice of climate model tuning has seen an increasing level of attention because key model properties, such as climate sensitivity, have been shown to depend on frequently used tuning parameters.

http://onlinelibrary.wiley.com/doi/10.1029/2012MS000154/pdf

Reply to  Javier
August 9, 2015 2:43 pm

Javier, thank you for extending my post this way. Outstanding reference contribution. I used the paper’s figure 1 in essay Models All the Way Down, and Dr. Curry used it in her most recent congressional testimony. It is an ‘insider’s view’ of the NOAA MAPP second type of parameterization tuning; in this case, the Max Planck Institute’s ECHAM6 GCM for CMIP5.

JohnWho
August 9, 2015 12:31 pm

Question:
Isn’t the biggest “Trouble with Global Climate Models” the acknowledged fact that the models do not (can not?) include all of the known factors that have an effect on the climate?
In the simplest of terms, isn’t a model, any model, in order to reflect the reality being modeled, required to include all known factors?

August 9, 2015 12:38 pm

Rud Istvan says:
” ….SPM said about CMIP5: §D.1 Climate models have improved since the AR4. Models reproduce observed continental-scale surface temperature patterns and trends over many decades, including the more rapid warming since the mid-20th century and the cooling immediately following large volcanic eruptions (very high confidence).
§D.2 Observational and model studies of temperature change, climate feedbacks and changes in the Earth’s energy budget together provide confidence in the magnitude of global warming in response to past and future forcing.
Neither statement is true, as the now infamous CMIP5/pause divergence proves (illustrated below). CO2 continued to increase; temperature didn’t. The interesting question is why…. ”
Interesting question indeed! One answer could be that the Summary for Policy Makers (SPM) is deliberately falsified for political purposes and is not expected to be accurate. Another is that these model makers have no idea what physical observables to use in their models. If you know, for example, that carbon dioxide does not warm up the world, you should not use it as an observable that controls your output. It is not a secret that during the present hiatus carbon dioxide is increasing but temperature is not. That should be enough for you to dump the CO2 surface forcing and other related aspects from your models.
One other problem with those models is trying to represent the entire climate story as a single, mathematically calculated curve. That is plain stupid. There are breakpoints in the real temperature curve where the drivers change and the curve makes unexpected turns. One of them was in early 1940, which none of them even try to get. Another was at the beginning of the twenty-first century. And still another was the beginning of the hiatus of the eighties and nineties, which they so hated that they covered it up with fake warming.
The two hiatuses, the present one and the previous one I referred to, are totally outside their experience because they stubbornly resist using the correct greenhouse theory to analyze them. That theory is called MGT, the Miskolczi greenhouse theory, and has been available since 2007. They blacklisted it because they did not like its predictions, and grad students never even knew it existed. Its prediction is very simple: addition of carbon dioxide to the atmosphere does not cause greenhouse warming. That is what we have actually observed for 18 years. During every one of these years the Arrhenius greenhouse theory predicted warming, but nothing happened. A scientific theory that makes wrong predictions belongs in the waste basket of history, and that is where Arrhenius belongs.
MGT differs from Arrhenius in being able to handle several greenhouse gases at the same time, such as the mix in our atmosphere, while Arrhenius is limited to one: CO2. According to MGT, carbon dioxide and water vapor in the atmosphere form a joint optimum absorption window in the infrared whose optical thickness is 1.87. This value was obtained by analyzing radiosonde data. If you now add carbon dioxide to air, it will start to absorb just as Arrhenius predicted. But this will increase the optical thickness, and as soon as that happens, water vapor will start to diminish, rain out, and the original optical thickness is restored. The added carbon dioxide will of course continue absorbing, but the reduction of water vapor will have reduced the total absorption enough to block any warming that Arrhenius predicts. The MGT prediction is then that addition of carbon dioxide to air does not cause warming, precisely as we have observed for the last 18 years.
The hiatus of the eighties and nineties also lasted 18 years, and jointly the two hiatuses block out greenhouse warming from 80 percent of the time since 1979, the beginning of the satellite era. The remaining 20 percent consists of the 1998 super El Nino and a short warming that followed it. Neither one has any greenhouse connection. Hence, we can declare the entire satellite era since 1979 as greenhouse-free. You figure out what happened before 1979.

Scott
August 9, 2015 12:57 pm

And of course, this assumes the data HAS NOT BEEN MANIPULATED.
Let’s see what revelations come upon us over the next few years regarding this potential factoid.

Chris Hanley
August 9, 2015 1:55 pm

“Akasofu’s simple idea also explains why Ar[c]tic ice is recovering, to the alarm of alarmists. DMI ice maps and Larsen’s 1944 Northwest Passage transit suggest a natural cycle in Arctic ice, with a trough in the 1940s and a peak in the 1970s. Yet Arctic ice extent was not well observed until satellite coverage began in 1979, around a probable natural peak …”.
========================
That can (I think) be inferred from the temperature record:
http://www.climate4you.com/images/70-90N%20MonthlyAnomaly%20Since1920.gif

AndyG55
Reply to  Chris Hanley
August 9, 2015 3:20 pm

Biomarkers clearly show that reduced summer Arctic sea ice is nothing new.
The first 2/3+ of the Holocene probably had an open, ice-free Arctic during a reasonable part of the year.

August 9, 2015 3:46 pm

An unnecessarily complicated speculation about why climate models fail. They fail because CO2 does not affect temperature. End of story.
There are no “well established physics principles” in any calculation of CO2’s imaginary ability to “trap heat”.
The temperatures on Venus, at pressures equal to those in Earth’s troposphere, are EXACTLY what they should be and can be calculated using nothing more than the planets’ relative distances from the sun. The FACT that this can be done completely and utterly falsifies the Greenhouse Effect. Anyone who claims otherwise after checking this FACT can no longer claim to be a scientist. You are forever afterwards one of the following three things: ideologically blinded, stupid, or corrupt!

Reply to  wickedwenchfan
August 9, 2015 4:06 pm

With all due respect, I think you are wrong. Read my book. Listen to AW and JC. Such an extremist denier stance (and one scientifically proven wrong) weakens the skeptical argument. Same as Inhofe. Please stop doing that. Please.

Robert Austin
Reply to  wickedwenchfan
August 9, 2015 6:32 pm

wicked,
I think you will find that it is usually the warmists who claim CO2 and H2O “trap heat”. The actual function of so-called greenhouse gases in the lower troposphere is to enhance convection by intercepting outgoing long-wave radiation from the earth’s surface. This enhancement mechanism increases the energy flow to the upper troposphere where GHGs, primarily CO2, can radiate directly to space. Increased CO2 concentration raises the altitude of the characteristic emission layer. The elevation of that layer is more a function of total pressure than of CO2 partial pressure. The slight raising of that elevation results in a slightly higher surface temperature due to the lapse-rate structure of the troposphere. So increased CO2 concentration does result in an increase in the earth’s surface temperature, but one that is minuscule to unmeasurable. Yes, it is telling that the temperatures and lapse-rate structure of the Venusian atmosphere from one bar up are so earth-like.
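As a rough illustration of the lapse-rate arithmetic in this argument (both the lapse rate and the hypothetical emission-height rises below are assumptions chosen only to show the scaling, not outputs of any radiative calculation):

```python
# Illustrative only: surface warming implied by raising the effective emission level
# under a fixed tropospheric lapse rate.  All inputs are assumed, not computed.
lapse_rate_K_per_km = 6.5             # typical mean tropospheric lapse rate (assumed)

for rise_km in (0.01, 0.05, 0.15):    # purely hypothetical emission-height rises
    delta_T = lapse_rate_K_per_km * rise_km
    print(f"rise {rise_km * 1000:4.0f} m  ->  implied surface warming ~ {delta_T:.2f} K")
```

The result scales linearly with whatever emission-height rise one assumes, which is where the argument over “minuscule” versus “significant” actually lives.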

August 9, 2015 5:50 pm

” incorrectly parameterized”
I think the problem is more fundamental than poor parameterization: it’s conservation of water vapor at the surface, an innocent-sounding phrase with big ramifications. If I remember it correctly, it’s how they turned GCMs from running cold to explaining the warming of the 80’s and 90’s.
Basically, they don’t limit water vapor to 100% humidity at the surface; GCMs are allowed to exceed the natural limiting factor for water vapor. This is how they get the water vapor positive feedback.
They allow this because otherwise they couldn’t explain the surface temperature when the PDO (?) led to natural warming.
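As a minimal sketch of what “limiting water vapor to 100% humidity” would mean numerically (the saturation formula is the standard Bolton/Magnus approximation; whether and how any particular GCM applies such a cap is the commenter’s claim, not something verified here):

```python
import math

def saturation_vapor_pressure_hpa(t_celsius):
    """Bolton (1980) approximation for saturation vapor pressure over water, in hPa."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

def cap_at_saturation(vapor_pressure_hpa, t_celsius):
    """Clamp a diagnosed vapor pressure to 100% relative humidity."""
    return min(vapor_pressure_hpa, saturation_vapor_pressure_hpa(t_celsius))

print(round(saturation_vapor_pressure_hpa(25.0), 1))   # ~31.7 hPa at 25 C
print(round(cap_at_saturation(35.0, 25.0), 1))         # 35 hPa gets clamped to ~31.7
```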

August 9, 2015 7:09 pm

MikeB:

A blackbody absorbs ALL radiation falling on it!!!!!! You could say that is part of the definition of a blackbody. Electromagnetic radiation transports energy. When radiation is absorbed the energy it carries is also absorbed. The blackbody will absorb it all, regardless of where it came from – BY DEFINITION.

Physics is about REALITY, not DEFINITION. The definition of a black body may well be that it absorbs ALL radiation falling on it!!!!!! But reality is made of real bodies, not idealised objects acting according to some DEFINITION. They don’t do what the DEFINITION of a black body says, they do what real materials do according to the laws of physics. And that’s the real laws of physics, which might not be quite the same as we think they are – but that’s another story.

August 9, 2015 7:22 pm

Rud, it must be possible to run the programmes with different parameters.
In view of skepticism it must be the case that such runs have been done.
The fact that none have been released or talked about is proof that the models do work with different parameters, i.e. inputs that result in a lowered climate sensitivity as an output.
Any chance that someone could leak one of those trials?
Mosher perhaps, he would know.
Or Zeke.
Of course, if the climate models did work better, it would not change the substance of your post: that the potential for such changes belies long-term prediction.
It seems the idea of programming in changes as they occur to modify the parameters, plus the use of paleo data limits [we have had relative isothermality for 2 billion years], could put the brakes on excessive prediction yet allow [a] more meaningful climate model[s] to develop.

Reply to  Chris4321
August 9, 2015 8:30 pm

It has been done. See Javier’s excellent reference. Problems with that paper’s ECS conclusion include comparing a single-slab shallow-ocean coupled model (used to save computation) with the full CMIP5 ECHAM6.
Apples to oranges is not valid science.

Reply to  Chris4321
August 10, 2015 6:19 am

“Rud, it must be possible to run the programmes with different parameters.
In view of skepticism it must be the case that such runs have been done.”
start here
https://www.newton.ac.uk/event/clp/seminars

MfK
August 9, 2015 8:19 pm

Great post; it brings new light to why GCMs are not predictive tools. In my humble opinion, they never can be. The system is too complex to model with any kind of computer, now or in the future. And it is provably too complex to make any kind of predictions on which to base civilization-killing policies.

August 9, 2015 8:52 pm

Attributing influence on climate to CO2 is proven to be wrong.
There has always been plenty of CO2 in the atmosphere. Without it, life as we know it could never have evolved. If CO2 were a forcing, it would cause temperature change according to the time-integral of the CO2 level (or the time-integral of a function of the CO2 level). The only way that this time-integral could consistently participate in the ‘measured’ (proxy-estimated) average global temperature for at least the last 500 million years is if the EFFECT OF CO2 ON AVERAGE GLOBAL TEMPERATURE IS ZERO and the temperature change resulted from other factors.
Variations of this proof and identification of what does cause climate change are at http://agwunveiled.blogspot.com Only one input is needed or used and it is publicly available. The match is better than 97% since before 1900.

AntonyIndia
August 9, 2015 9:42 pm

GCM’s cover the world in stacked grid cells (engineering’s finite elements). Right. When these huge cells were reduced in size over the Karakoram mountains they could suddenly explain why those glaciers were gaining ice, not melting. How can we believe all other low resolution cell results now? http://www.princeton.edu/main/news/archive/S41/39/84Q12/index.xml?section=topstories

August 9, 2015 10:56 pm

I noticed on the chart comparing CMIP5 with observations that the wide, light-colored CMIP5 band is 5% to 95% confidence. As I recall from school (ancient history), that’s a 90% confidence interval, not 95%.
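A quick numerical reminder of why a 5th-to-95th-percentile band covers 90% (toy random data here, nothing to do with the CMIP5 ensemble itself):

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(size=100_000)               # toy data, not model output
lo, hi = np.percentile(sample, [5, 95])         # the kind of band plotted on such charts
coverage = np.mean((sample >= lo) & (sample <= hi))
print(f"fraction inside the 5th-95th percentile band: {coverage:.2f}")   # ~0.90
```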

richardscourtney
August 10, 2015 12:53 am

Rud Istvan:
Thank you for a nice article. It is sad that much of the ensuing thread has been trolled away from your article and onto SK’s nonsensical ‘physics’. Your article deserves better.
I write to draw attention to a pettifogging nit-pick that you may want to clarify because warmunists exaggerate the importance of such trivia as a method to dismiss articles they cannot dispute.
You say

GCMs are the climate equivalent of engineering’s familiar finite element analysis (FEA) models, used these days to help design nearly everything– from bridges to airplanes to engine components (solving for stress, strain, flexure, heat, fatigue, …)

I know what you mean by that (and have said similar myself) but GCMs use finite difference analysis (FDA) and not FEA.
Richard

Alx
Reply to  richardscourtney
August 10, 2015 8:58 am

Yes, quote-mining comment sections is the last refuge of scoundrels who lack both a cohesive counter-argument and intellectual integrity.

richardscourtney
Reply to  Alx
August 10, 2015 10:22 am

Alx
You say

Yes, quote-mining comment sections is the last refuge of scoundrels who lack both a cohesive counter-argument and intellectual integrity.

So, Alx, don’t do it unless you want to demonstrate that you are a scoundrel who lacks both a cohesive counter-argument and intellectual integrity.
Richard

August 10, 2015 3:45 am

“Even non-linear ‘unsolvables’ like Navier Stokes fluid dynamics (aircraft air flow and drag modeled using the CFD subset of FEA) are ‘parameter’ verified in wind tunnels (as car/airplane designers actually do with full/scale models).”
The Caterham Formula One team decided that finite element analysis was all they needed and eschewed the cost of a wind tunnel.
They were the worst team on the grid, and have now gone.

Reply to  Leo Smith
August 10, 2015 7:39 am

The Caterham Formula One team decided that finite element analysis was all they needed and eschewed the cost of a wind tunnel.
They were the worst team on the grid, and have now gone.

Conversely, car racing simulators with a driver in the loop can be within a fraction of a second of a real race lap.
I think this is a good example of the problem with simulators: you can get lost, not really understand the question you asked, and not really understand what the simulator is telling you.

Editor
August 10, 2015 4:33 am

Thanks for a good discussion of the nuts and bolts of computer modeling. I enjoyed reading it.

August 10, 2015 7:13 am

The polar ice caps change independently of the earth’s “climate” because they are partially controlled by a solar wind interaction…
“High-latitude plasma convection from Cluster EDI: variances and solar wind correlations”
“The magnitude of convection standard deviations is of the same order as, or even larger than, the convection magnitude itself. Positive correlations of polar cap activity are found with |ByzIMF| and with Er,sw, in particular. The strict linear increase for small magnitudes of Er,sw starts to deviate toward a flattened increase above about 2 mV/m. There is also a weak positive correlation with Pdyn. At very small values of Pdyn, a secondary maximum appears, which is even more pronounced for the correlation with solar wind proton density. Evidence for enhanced nightside convection during high nightside activity is presented.”
“Low to Moderate values in the solar wind electric field are positively correlated to convection velocity.”
“A positive correlation between Ring current and convection velocity.”
http://web.ift.uib.no/Romfysikk/RESEARCH/PAPERS/forster07.pdf
Low Energy ion escape from terrestrial Polar Regions.
http://www.dissertations.se/dissertation/3278324ef7/

Gary Pearse
August 10, 2015 10:04 am

joelobryan
August 9, 2015 at 10:51 am
Malthusian ladies and gents: we currently mine 20 Mt per year of copper; the total amount used (and, as Joel points out, still on the surface for reuse) is the 500 M tonnes mined from antiquity to the present, and a recent US Geological Survey paper estimates 3,500 M tonnes yet to be developed. We have ~7.2B people on earth heading for a peak of 8.8 to 10 (an even lower number if we accelerated the growth of prosperity for Africa and other poor regions). We have lots. We have substitutes. We have miniaturized (a computer in the 1960s took up a sizable room yet had a tiny fraction of the computing power of one today that weighs less than a kilogram).
I promulgated a law recently, after a review of all the human-made global disasters that have been predicted without one success, that states: there is no possibility or even capability of humankind causing a disaster of global proportions. All disasters tend to be quick, local, and painful, and then everything heals up and evidence of the disaster itself is almost totally erased. Even a year after the Hiroshima bombing, certainly a stark, horrible disaster in terms of human life, radiation levels had declined to background. The Chernobyl disaster resulted in a no-go zone that is now forested and full of wild animals:
“..Within a decade or so, it was noticed that roe deer, fox, moose, bears, feral pigs, lynx, and hundreds of species of birds were in the area, many seeming to thrive. Soon there were reports of an animal feared in Russian folklore, the wolf……(some mutations were reported)….Perhaps one reason why mutations are not obvious in the larger animals is because the wolves weed out the deformed as well as the weak.”
http://www.thewildlifenews.com/2012/12/31/chernobyl-wildlife/
http://zidbits.com/2013/11/is-nagasaki-and-hiroshima-still-radioactive/
My law: it is AXIOMATIC THAT PREDICTIONS FROM DOOMSTERS HAVE NOT, AND I WOULD SAY CANNOT, COME TRUE, because they miss the overpowering dynamic of human ingenuity in their thinking. Unconstrained by this first-order principal component, their thoughts (and heartfelt concerns) soar through the roof of reality.

co2islife
August 10, 2015 3:13 pm

My understanding is that the climate models are multi-variable linear regression models similar to financial models. Am I wrong? How does the process discussed in this article relate to time series models?

gammacrux
August 11, 2015 12:04 am

The opinion of a real physicist about the trouble with “climate models”:

Mervyn
August 11, 2015 6:09 am

What is undeniable is that the world was told, before the December 2009 Copenhagen Climate Conference, that the IPCC AR4 was “the gold standard in climate science”, and it was often referred to as “the settled science”.
That report was rendered obsolete by Mother Nature when all those rising temperature trends, based on various rising CO2 emission scenarios, were shown to be wrong. That being the case, everything else in AR4 relating to those trends automatically became irrelevant. AR4 failed to consider a flat temperature trend despite rising CO2 emissions. It was therefore not worth the paper it was written on.
Nothing coming out of the IPCC should now be trusted. The trust ceased with AR4. It is a political body and not a scientific one. And there lies the problem.

Walter Sobchak
August 12, 2015 5:57 pm

Rud Istvan: Thank you for a very informative post. I think your post makes much the same point about models as this talk by Christopher Essex:

I think readers who have read this post would enjoy the Essex talk.