Climate Models, Clouds, OLR, and ECS

By Andy May

The IPCC and the climate “consensus” believe that essentially all warming since 1750 is due to man’s emissions of CO2 and other greenhouse gases, as shown in figure 1 here or in (IPCC, 2021, p. 961). This has led to a 45-year search for the value of the Equilibrium Climate Sensitivity to a doubling of CO2 (“ECS” in °C per 2xCO2). Yet, after 45 years of trying to calculate the sensitivity of climate to man-made greenhouse gases, the “consensus” has been unable to narrow the uncertainty in its estimates and, if anything, the climate model uncertainty is now larger than in earlier reports (IPCC, 2021, p. 927). It is now clear, at least to me, that modern climate models make many critical assumptions that are poorly supported and sometimes conflict with observations. This post is an attempt to explain some of these problems and how they developed over time. It is long past time for the “consensus” to stop ignoring the obvious weaknesses in their 60-year-old conceptual model of climate.

The Early Models

Syukuro Manabe and several colleagues built the first general circulation climate model in the 1960s (Manabe & Bryan, 1969; Manabe & Wetherald, 1967). He started with a one-dimensional radiative equilibrium model of horizontally averaged temperature but realized that the troposphere is not in radiative equilibrium because of convection. The lower atmosphere is nearly opaque to most surface-emitted infrared radiation, or Outgoing Longwave Radiation (OLR), because of greenhouse gases. As a result, Earth’s surface is not cooled much by emitting radiation; instead it is cooled mostly by the evaporation of surface water, which carries surface heat into the atmosphere as the latent heat of water vapor. Water vapor is less dense than dry air, so it rises. Once the water vapor is high enough, it cools as the surrounding air pressure drops and the air parcel expands, causing the water vapor to condense and release its latent heat. If this happens at a high enough altitude, some of the released heat can make it to space as radiation or be absorbed by greenhouse gas molecules higher in the atmosphere. The rest of the released heat simply warms the neighborhood. This process is called the “moist adiabat.”

This process works because temperature declines with altitude in the troposphere according to a “lapse rate.” The lapse rate varies a lot around the Earth, but the average gradient stays fairly constant at about 5.08 °C/km (Benestad, 2017, pp. 24-26, supp. mat.). The lapse rate is usually given as a positive number, but it is the decrease in temperature with height. The value we most often hear is 6.5 °C/km, even though the true average rate is lower. The actual lapse rate at any given location varies a lot with the time of day, the humidity, and the season. The lapse rate can even be negative, a state called a temperature inversion.
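
For readers who want to see where numbers like these come from, below is a minimal sketch (in Python, using standard textbook constants and the Bolton saturation vapor pressure approximation, none of which come from this post) of the dry and moist adiabatic lapse rates. For a warm, saturated tropical parcel the moist rate comes out near 4 °C/km, and it rises toward the dry rate (~9.8 °C/km) as air gets colder and drier; observed averages like the 5.08 and 6.5 °C/km figures above fall in between.

```python
import math

# Standard textbook constants
g = 9.81        # gravity, m/s^2
c_pd = 1004.0   # specific heat of dry air, J/(kg K)
R_d = 287.0     # gas constant for dry air, J/(kg K)
L_v = 2.5e6     # latent heat of vaporization, J/kg
eps = 0.622     # molar mass ratio, water vapor / dry air

def saturation_vapor_pressure(t_c):
    """Saturation vapor pressure in Pa (Bolton 1980 approximation), t_c in Celsius."""
    return 611.2 * math.exp(17.67 * t_c / (t_c + 243.5))

def moist_adiabatic_lapse_rate(t_k, p_pa):
    """Saturated (moist) adiabatic lapse rate in K/m at temperature t_k (K), pressure p_pa (Pa)."""
    e_s = saturation_vapor_pressure(t_k - 273.15)
    r_s = eps * e_s / (p_pa - e_s)  # saturation mixing ratio, kg/kg
    num = 1.0 + (L_v * r_s) / (R_d * t_k)
    den = c_pd + (L_v**2 * r_s * eps) / (R_d * t_k**2)
    return g * num / den

dry_rate = g / c_pd                                       # ~0.0098 K/m
moist_rate = moist_adiabatic_lapse_rate(300.0, 100000.0)  # warm, saturated surface parcel

print(f"Dry adiabat:   {dry_rate * 1000:.2f} C/km")    # ~9.77 C/km
print(f"Moist adiabat: {moist_rate * 1000:.2f} C/km")  # ~3.7 C/km at 27 C; larger when colder
```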

In his early models, Manabe constrained the lapse rate so that it could not rise above a fixed, assumed value; with this constraint the model correctly predicted the position of the tropopause. The tropopause is the altitude where the temperature gradient stops being controlled by convection. Above the tropopause, the atmosphere is closer to thermal equilibrium and temperature rises with altitude. In these early models the tropospheric lapse rate was assumed to be linear and fixed (Held & Soden, 2000).

Manabe also assumed fixed relative humidity and fixed cloud cover. As Held writes, it is an oversimplification to assume the lapse rate does not change with temperature, but it was a convenient assumption (Held & Soden, 2000). The assumption of fixed relative humidity results in an implausibly high sensitivity to CO2. In the tropics the temperature profile is similar to a moist adiabat (described above). The condensation of water vapor at altitude is the process that forms clouds (Held & Soden, 2000). Climate models have a very tough time reproducing the tropospheric temperature profile in the critical tropical regions because they predict too much warming due to greenhouse gases (IPCC, 2021, p. 443).

In the higher latitudes the moist adiabat does not work, since sensible and latent heat horizontal transport by mid- and high-latitude storms play a significant role in the overall climate (Held & Soden, 2000).

The Fixed Lapse Rate Assumption

A fixed lapse rate means that the global average emission temperature never changes, and thus the outgoing longwave radiation doesn’t change, except briefly as the surface temperature adjusts. A common description of the greenhouse effect (GHE) includes a diagram of the lapse rate, shown in figure 1 from (Held & Soden, 2000).

Figure 1. Held and Soden’s greenhouse effect model. Notice the effective emission temperature does not change, thus the amount of OLR does not change.

Figure 1 illustrates what doubling CO2 does with a fixed and linear lapse rate in Held’s simplified model. The additional CO2 warms the lower atmosphere with additional back-radiation; then the assumption of a fixed lapse rate kicks in, forcing the emission level up to a higher altitude, where the temperature is lower, causing it to emit less to space. Since it is emitting less radiation (aka “heat”), after a short period the new emission level warms to the temperature of the old emission level, and the effective emission temperature remains unchanged, as does the OLR (Held & Soden, 2000).
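
To put rough numbers on this bookkeeping, here is a minimal sketch of the fixed-lapse-rate argument. It is not Held and Soden’s code; the ~240 W/m2 OLR, the 6.5 °C/km lapse rate, and the 150 m rise in emission height are illustrative round numbers.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def emission_temperature(olr):
    """Effective emission temperature (K) for a given OLR (W/m^2)."""
    return (olr / SIGMA) ** 0.25

olr = 240.0  # roughly the global average OLR, W/m^2
t_emit = emission_temperature(olr)
print(f"Emission temperature: {t_emit:.1f} K")  # ~255 K

# In the fixed-lapse-rate picture, added CO2 raises the emission level.
# The new, higher level is colder by (lapse rate x height change), so it
# emits less, and the column warms until the emission level is back at t_emit.
lapse = 6.5e-3   # K/m, the commonly quoted average lapse rate
dz = 150.0       # assumed rise in emission height, m (illustrative)
t_new = t_emit - lapse * dz        # temperature at the new level, before adjustment
olr_new = SIGMA * t_new ** 4
print(f"OLR right after the shift: {olr_new:.1f} W/m^2")  # ~236 W/m^2, a deficit
print(f"Surface warming needed to restore OLR: {lapse * dz:.2f} K")
```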

This simple model of the GHE has many problems. It is really only appropriate in perfect conditions in the tropics, and even there the lapse rate at night is different from the lapse rate during the day. In addition, the model predicts far too much warming in the tropics. In the middle latitudes, with persistent horizontal circulation and many storms, it makes no sense at all. In the polar regions, especially in the long dark winters, the atmosphere is frequently warmer than the surface, completely invalidating the model. Further, the global average OLR is currently increasing as the globe warms; it is not remaining constant as the model predicts, even though total incoming solar radiation has changed very little.

OLR is Increasing

Figure 2. Plot of HIRS (purple line, the High-resolution Infrared Radiation Sounder on NOAA and EUMETSAT satellites), CERES (green line, the Clouds and the Earth’s Radiant Energy System instruments), and ERBE (light blue line, the Earth Radiation Budget Experiment) OLR at the top of the atmosphere. GISS global average surface temperature is shown in yellow. The solar cycles (SC) during this period are noted, as well as the SILSO sunspot count in dark green. After (Dewitte & Clerbaux, 2018).

Figure 2 suggests that Held and Soden’s model of how the greenhouse effect works is incorrect or CO2 is not the reason for the recent warming, as explained by Javier Vinós here. Notice the qualitative pattern in OLR matches each solar cycle in shape. This suggests that the solar cycles contribute to the OLR pattern. So, while the increase in CO2 might influence recent warming, the variations throughout a solar cycle also likely contribute.

ENSO and the AMO

From 1950-1975 Dewitte’s cumulative MEI (the Multivariate ENSO Index) was decreasing, suggesting a period of strong La Niñas. From 1975 to 1998 it was increasing, suggesting a period of strong El Niños. From 1998-2014 it was flat, and El Niños and La Niñas were more or less in balance. This is illustrated in figure 3.

Figure 3. Cumulative MEI Index, MEI, and AMO. After: (Dewitte & Clerbaux, 2018).

The deep blue line in figure 3, with no marks, is the AMO, or Atlantic Multidecadal Oscillation. Notice that it roughly corresponds with the cumulative MEI index. This suggests that North Atlantic sea surface temperatures are related to the frequency of El Niños and La Niñas in some fashion (An, Wu, Zhou, & Liu, 2021), or that both follow some other influence, like solar variability. Again, it looks less likely that CO2 is the “climate control knob” (IPCC, 2021, p. 179).

Clouds and ECS

From (Loeb, et al., 2021):

“Climate is determined by how much of the sun’s energy the Earth absorbs and how much energy Earth sheds through emission of thermal infrared radiation. Their sum determines whether Earth heats up or cools down.”

This is a vast oversimplification, as it ignores the impact of energy residence time, which varies continuously. Energy residence time varies as a function of atmospheric and oceanic circulation trends, which are influenced by solar variability (see figures 5.3, 5.4, and 5.5 here).

Loeb et al., 2021 note that an increase in the energy absorbed by the Earth from 2005 to 2019 was accompanied by a decrease in cloud cover (see figure 4). The decrease in cloud cover reduces the reflection of incoming solar energy, but it also decreases the absorption of outgoing surface IR by clouds. Dewitte (see above) reports an increase in outgoing longwave radiation over the same period.

Kauppinen & Malmi, 2019 report that low cloud cover has decreased, as shown in their figure 2.

Figure 4. Total global cloud cover as a percent of the sky. Data from EUMETSAT CM-SAF.

As explained in Kauppinen & Malmi, 2019, low cloud cover (as well as total cloud cover) decreases as the “pause” in global warming starts around the year 2000. Overall, as low cloud cover increases 1%, the global average surface temperature decreases about 0.11°C. As a function of EUMETSAT global cloud fraction, the HadCRUT4 global surface temperature decreases 0.15°C per one percent increase in total cloud cover, as illustrated in figure 5.

Figure 5. EUMETSAT global cloud cover versus HadCRUT4 global average surface temperature. Data from the UK Met Office and EUMETSAT CM-SAF.
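
The slopes quoted above are simple linear fits of surface temperature against cloud fraction. Since the EUMETSAT and HadCRUT4 series are not reproduced here, the sketch below uses synthetic stand-in data with a built-in slope of -0.15 °C per 1% cloud cover, simply to show the kind of fit involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the real series (illustration only):
# total cloud fraction (%) and a temperature anomaly built with a -0.15 C/% slope.
cloud = rng.uniform(62.0, 68.0, size=200)               # cloud fraction, percent of sky
temp = 9.9 - 0.15 * cloud + rng.normal(0.0, 0.05, 200)  # anomaly, deg C, plus noise

# Ordinary least-squares fit: temp = slope * cloud + intercept
slope, intercept = np.polyfit(cloud, temp, 1)
print(f"Fitted slope: {slope:.3f} C per 1% cloud cover")  # ~ -0.15

# Correlation indicates how much of the temperature variance cloud cover "explains"
r = np.corrcoef(cloud, temp)[0, 1]
print(f"r = {r:.2f}, r^2 = {r * r:.2f}")
```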

The data displayed in figures 2 through 5 cannot be explained as a function of the monotonically increasing atmospheric CO2 and other greenhouse gases and their so-called “heat trapping” or “OLR delaying” capabilities. While additional greenhouse gases may have some influence on increasing surface temperatures, we cannot see any sign of this influence in these data.

Ceppi & Nowack, 2021 try to show that cloud cover responds to surface temperature, and figure 5 appears to support that idea. However, they and the IPCC believe that changes in cloud cover due to a warming surface increase warming on net; that is, cloud cover changes due to surface warming are a net positive feedback (IPCC, 2021, p. 95). They are vague in their language and do not claim that increasing cloud cover increases warming; rather, they say the net change in cloud cover due to surface warming is a positive feedback. Thus, they break clouds up into types, some net warming and some net cooling, with the overall change being positive.

Figures 4 and 5, and the work of Kauppinen & Malmi, suggest that an increase in clouds, as the world warms, decreases warming. Figure 2 suggests that OLR increases as the world warms and cloud cover decreases; this is not what one would expect if cloud cover changes are a net positive feedback to surface warming. However, the two ideas are not necessarily incompatible, since neither the IPCC nor Ceppi & Nowack say increasing cloud cover increases warming; they say “cloud cover changes” increase warming on net. Time will tell if their complex hypothesis is correct.

Ceppi & Nowack explicitly assume that greenhouse gases are causing global warming according to the RCP4.5 scenario. They do not consider the possibility that variations in solar activity (figure 2) or ocean oscillations (like the AMO plotted in figure 3) have an impact. Loeb, 2021 does discuss changes in the PDO, but only as a type of internal natural variability, not as a driver of climate change.

As Ceppi & Nowack report, ECS correlates very well with cloud cover, see figure 2 here. It is also true that cloud cover is the most uncertain feedback in greenhouse gas climate models, and climate modelers have reported fiddling with their cloud parameters to achieve a pre-determined ECS (Koonin, 2021, p. 93). It lessens our confidence in climate models when their most uncertain and poorly understood component (clouds) is used to create a desired result. Further, as AR6 WGI reports on page 927, the ECS calculated by climate models is becoming more uncertain, and the reason given is that the modelers are fiddling more with their cloud parameters to try to match cloud observations. Something is clearly amiss in modern climate models, and the problem is getting worse with time.

EEI (Earth Energy Imbalance)

The net radiation in or out is the Earth Energy Imbalance, or EEI. When it is positive the Earth is warming and collecting heat, and when it is negative the Earth is cooling. Because most of the solar energy absorbed at Earth’s surface (~90%) is stored in the oceans, the ocean heat content is a sensitive indicator of long-term EEI and can be used as a check and calibration point for satellite radiation measurements, which are not accurate enough on their own to measure the EEI directly (Loeb, et al., 2022).
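
The arithmetic connecting an ocean heat content trend to an EEI in W/m2 is simple: divide the heating rate by Earth’s surface area and by the assumed ocean share of the stored heat. A minimal sketch with illustrative round numbers (the 1 x 10^22 J/yr trend below is a placeholder, not a value from Loeb et al., 2022):

```python
SECONDS_PER_YEAR = 3.156e7
EARTH_AREA = 5.10e14        # m^2, total surface area of Earth
OCEAN_HEAT_FRACTION = 0.90  # ~90% of the imbalance ends up in the oceans

def eei_from_ohc_trend(ohc_joules_per_year):
    """Estimate the Earth Energy Imbalance (W/m^2, averaged over the whole Earth
    surface) from an ocean heat content trend, assuming the oceans take up ~90%."""
    ocean_watts = ohc_joules_per_year / SECONDS_PER_YEAR
    total_watts = ocean_watts / OCEAN_HEAT_FRACTION
    return total_watts / EARTH_AREA

# Illustrative OHC trend of ~1.0e22 J/yr (a round number for demonstration)
print(f"EEI ~ {eei_from_ohc_trend(1.0e22):.2f} W/m^2")  # ~0.69 W/m^2
```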

Table 1 in (Loeb, et al., 2022) lists the absorbed solar radiation (ASR), the outgoing longwave radiation (OLR), and the net radiation in (positive) or out (negative) for various satellite instrument analyses. The trends vary from 0.026 to 0.42 W/m2/decade, ±~0.24. This is a fairly large spread with considerable uncertainty. The numbers are positive since the world is warming.

The actual EEI and the trend in the EEI are unknown, but ocean heat content estimates from 2005 to 2019 suggest it is small and between 0.24 and 0.98 (±~0.7) W/m2/decade according to Table 4 in (Loeb, et al., 2022). ARGO has greatly increased our knowledge of ocean temperature to a depth of 2,000 meters, but trends in temperature below 2,000 meters and under sea ice are still largely unknown. It seems safe to conclude that the trend, since 2000, is somewhere between ~-0.7 and ~1.5 W/m2/decade and more likely positive than negative, because the Earth is warming, but beyond that it is hard to say. The total incoming solar radiation and the total outgoing longwave radiation are large numbers, with large uncertainty, and the difference between the two is very small and below the accuracy of current measurements.

The change in ocean heat content is useful as a check, but we don’t have a good handle on ocean temperatures, either areally or at depth. ARGO improves things, but the coverage is still poor. As for measuring incoming and outgoing radiation, there are problems in the eastern Pacific, the Arctic, and many other areas. For a complete discussion of the problems in estimating the EEI trend, the interested reader is referred to (Loeb, et al., 2022).

Conclusions

As I’ve written before, the current group of IPCC/CMIP climate models do not compare well with observations, and unfortunately, in the tropics they do better if the man-made portion of the enhanced greenhouse effect is removed. It seems that the conceptual model they have pursued since 1990 (the first IPCC report), that CO2 is the “control knob” for global warming, is flawed. There is abundant evidence that climate has many drivers; man-made CO2 is only one, and one that may not be that significant. It is long past time for the IPCC to quit pushing the man-made climate change idea, go back to the drawing board, and develop a new conceptual model that makes sense of the data presented in this post.

Download the bibliography here.



429 Comments
Tom Halla
December 17, 2024 10:16 am

The IPCC models seem to be an exercise in question begging, assuming global warming due to GHGs, then purporting to prove global warming by the models.

Reply to  Andy May
December 17, 2024 2:01 pm

The IPCC would never give up the trace gas CO2, which has been dubbed an evil control knob, etc., for scare-mongering purposes, unless better banners can be waved around.
For the IPCC, it is not about science, but command/control.

It would be like asking the Church to give up the Holy Cross, the Virgin Mary, and Rituals.

The hapless people are trapped in a web of verbal nonsense!

Reply to  wilpost
December 18, 2024 6:03 am

In the tropics and subtropics, CO2, a weak photon absorber, plays no measurable role, because, near the surface, it is outnumbered at least seventy to one by water vapor, a strong photon absorber,

plus water vapor (molecular weight 18) is lighter than CO2 (44) and air (29), so it condenses into clouds at about 2000 meters elevation, which, with prevailing winds, are transported to northern latitudes, to areas underserved by the sun, especially during winter.

That means the water vapor and clouds we see up north come from faraway places, because up north there is not enough energy to evaporate the water, which often lies on the ground as snow and ice.

But, even up north, near the surface, CO2 plays no measurable role, because water vapor outnumbers it at least 20 to one.

CO2 begins to play a measurable role when the presence of water vapor is lower, say 3 to 1, which is above the clouds.

However, above the clouds it is colder, and any photons emitted there would have longer wavelengths beyond the CO2 15 micrometer absorption window, but water vapor would absorb these photons, because it has a much wider window starting at about 15 micrometers.

No liberal arts journalist would ever write this, because it is way above their heads. That is the reason they stick to talking points provided by self-serving $stakeholders and their associates.

Sparta Nova 4
Reply to  wilpost
December 18, 2024 9:03 am

One nit. Black body calculations do not work with widely dispersed molecules.
One nit. It is colder, not because each molecule has less kinetic energy, but rather due to a much lower density.

Reply to  Sparta Nova 4
December 18, 2024 2:59 pm

Good comments
At about what density would BB calcs no longer apply?

Reply to  wilpost
December 18, 2024 4:23 pm

At lower temperatures at higher elevations, the speed (kinetic energy) of molecules decreases, per gas laws.
At higher elevations, density is less. There are fewer collisions, because molecules are further apart.
No matter what the temperature, photons (packages of energy with a wavelength) move at the speed of light, which is many orders of magnitude greater than the speed of molecules.

Reply to  Andy May
December 18, 2024 9:55 am

They owe their existence to their favorite gas; that is why they will NEVER let go of the bullcrap they have built around it.

It is WHY I stopped reading the IPCC reports, as they are an exercise in regurgitating the same bullcrap narrative from day one about something that was essentially solved 30 years ago.

Reply to  Tom Halla
December 17, 2024 10:51 am

They are adhering to their charter which narrows their focus to AGW.

The role of the IPCC is to assess on a comprehensive, objective, open and transparent basis the scientific, technical and socio-economic information relevant to understanding the scientific basis of risk of human-induced climate change, its potential impacts and options for adaptation and mitigation.

Reply to  Andy May
December 17, 2024 8:16 pm

The bloke in the white coat is one of Richard Green’s 99.999% who he says we should believe.

Reply to  Mike
December 18, 2024 6:11 am

Obama is getting the faux result he wants, with our money, for scare-mongering purposes. Who is more evil?

Did you vote for Obama, who speaks okie dokey woke honey with forked tongue?

Reply to  Ollie
December 18, 2024 7:46 pm

The problem with that mandate is that if one focuses on just one aspect of the problem, it is easy to overlook that things don’t balance. Whereas if they deal with the ‘Big Picture,’ they are more apt to discover that things don’t work as assumed.

Reply to  Tom Halla
December 17, 2024 2:29 pm

It’s worse than that. The IPCC’s “proof” that CO2 is the cause of warming is the fact that their models don’t show warming if they remove human CO2 emissions. This “proof” assumes the models are correct, but they have never been validated and probably can’t be.

The IPCC’s claim is not different from saying, “We know CO2 causes warming because we assume it does.”

Reply to  Thomas
December 17, 2024 8:19 pm

That is exactly what they are saying but they may very well be too stupid to even realize it.

Someone
Reply to  Mike
December 18, 2024 6:40 am

They are not stupid. They are paid to NOT understand it.

Sparta Nova 4
Reply to  Thomas
December 18, 2024 9:04 am

change from “we assume” to “we say”

AlanJ
December 17, 2024 10:24 am

Manabe also assumed fixed relative humidity and fixed cloud cover. As Held writes, it is an oversimplification to assume the lapse rate does not change with temperature, but it was a convenient assumption (Held & Soden, 2000). The assumption of fixed relative humidity results in an implausibly high sensitivity to CO2. In the tropics the temperature profile is similar to a moist adiabat (described above). The condensation of water vapor at altitude is the process that forms clouds (Held & Soden, 2000). Climate models have a very tough time reproducing the tropospheric temperature profile in the critical tropical regions because they predict too much warming due to greenhouse gases (IPCC, 2021, p. 443).

This argument is muddled – are you suggesting that modern GCMs assume a fixed relative humidity? Because that is decidedly untrue.

Figure 2 suggests that Held and Soden’s model of how the greenhouse effect works is incorrect or CO2 is not the reason for the recent warming, as explained by Javier Vinós here.

Models forced with CO2 yield increasing OLR:

[image]

AlanJ
Reply to  Andy May
December 17, 2024 10:50 am

I think it goes without saying that conceptual models used to illustrate a single idea or concept are going to fall short of describing everything about the phenomena under question. This shouldn’t be used to say that large scale GCMs are wrong, or that the physics underpinning GHE theory are wrong. The simple models are just… simple.

Both errors are reduced if they reduce (or eliminate) the human-enhanced CO2 greenhouse effect.

Well, yes, if you have a warm bias and you remove something producing warming, the warm bias is reduced. That doesn’t mean you’ve removed the thing causing the bias. The bias in the tropics is mainly due to how models represent clouds and ocean mixing, not that they consider the role of GHGs in driving long term warming.

AlanJ
Reply to  Andy May
December 17, 2024 11:27 am

The models are getting SAT right:

[image]

[image]

And the primary driver of warming in the models is anthropogenic forcing. So the issue is almost certainly some poorly resolved cloud/ocean dynamics in the tropics as opposed to getting the basic physics of the GHE wrong.

It’s also important to recognize that it isn’t a binary – the models are neither wholly right nor wholly wrong. They do some things well and do other things poorly, and those things differ across different models.

AlanJ
Reply to  Andy May
December 17, 2024 1:33 pm

This is not a “better” assessment; it is an assessment of tropical mid-tropospheric warming, while the plots I provided show global SAT. Importantly, the tropospheric warm bias is present in the models whether forced with CO2 or forced with solar (I’ll leave you to guess which is which):

[image]

So the issue cannot possibly be that the CO2 hypothesis is yielding the bias.

AlanJ
Reply to  Andy May
December 17, 2024 2:49 pm

The point is that the pattern of warming yielded by the models is the same whether forced by CO2 or the sun. This invalidates the claim that the tropical mid-tropospheric warm spot can be the result of an incorrect GHE hypothesis and points to the fact that the bias is almost certainly the result of some other dynamics like ocean mixing or clouds.

Reply to  AlanJ
December 17, 2024 3:05 pm

It shows absolutely NOTHING, nor does it invalidate anything.

It is just GARBAGE output from irrelevant models..

Reply to  AlanJ
December 17, 2024 5:51 pm

This invalidates the claim that the tropical mid-tropospheric warm spot can be the result of an incorrect GHE hypothesis 

In what sense does it do that? If the hypothesis is that GHE produces x amount of tropical mid-tropospheric warm spot warming and reality has it as y then was the hypothesis incorrect?

Maybe if you use different scales and colours to hide the difference then “incorrectness” can be swept under the mat.

AlanJ
Reply to  TimTheToolMan
December 18, 2024 6:07 am

The models indicate that any surface warming, from any cause, produces the tropical mid-tropospheric warm spot. It is not specific to CO2. Thus the presence or absence of the tropical mid-tropospheric warm spot is not a test of CO2-driven warming in the models.

There is something missing – either owing to some deficiency in the way the models are handling local dynamics in this region or to some structural uncertainty in the observations (likely some mix of both), but there is zero evidence that the issue is the physics of an enhanced GHE.

Reply to  AlanJ
December 18, 2024 11:08 am

There is something missing – either owing to some deficiency in the way the models are handling local dynamics in this region or to some structural uncertainty in the observations

The thing that’s missing is physics. What is there is a fit. You’ll not see it, of course, but that’s what a tuned, non-physical solution is.

Reply to  AlanJ
December 18, 2024 3:03 am

But you are going to have to prove that the GHE, and specifically CO2, causes x, not simply show a model that is based on a hypothetical assumption and correlation, invoke Occam’s razor, and make attribution statements. A neutral observer might as well shrug his shoulders and say: whatever.
Proper science needs to make harder claims. It all goes to show how slippery the climate system is. I have for some time come to the conclusion that you cannot equate your way out of it, simply because of the uncertainties in the correlation between the multiple factors. It is enough for me (and others) to witness that more intricate models cannot make it better. Throw in politics and money et voila: lots of opinions and assumptions.

Reply to  Andy May
December 17, 2024 3:03 pm

Yep, a PETTY and insipid attempt at propaganda.

No more than ever expected from AJ.

Reply to  AlanJ
December 17, 2024 2:05 pm

“models forced with”

So more total computer game BS either way!

Reply to  AlanJ
December 17, 2024 2:35 pm

Yeah, from those scales it looks like the models are out by a factor of 2. The fact you present this as evidence “the issue cannot possibly be that the CO2 hypothesis is yielding the bias” is telling.

Reply to  AlanJ
December 17, 2024 1:14 pm

So the issue is almost certainly some poorly resolved cloud/ocean dynamics in the tropics as opposed to getting the basic physics of the GHE wrong.

The issue is that the GCMs are fitted to expected warming at the global level but get none of the actual climate components correct. The issue is that they’re not modelling climate at all and so have no true capability to project.

Reply to  AlanJ
December 17, 2024 1:21 pm

ROFLMAO.

Using FAKED URBAN SURFACE DATA again. Seriously DUMB.

There is no evidence of human “forcing” in the UAH atmospheric data.. none, nada, zip !!

Reply to  AlanJ
December 17, 2024 1:24 pm

The models are little more than pre-assigned computer games.

They have very little scientific validation.

Putting them up against concocted URBAN surface data and pretending that is validation…

HILARIOUS!

Robert Cutler
Reply to  Andy May
December 17, 2024 11:50 am

The GCM models will always be wrong until solar forcing is properly accounted for. The earth has a minimal response to the 11-year cycle which is why I filter it out in my models. Also, the sunspot signal as shown in Figure 2 is a proxy for solar activity. It’s not solar activity.

Alan, even Neptune plays a much larger role in climate change than anthropogenic emissions. The largest contribution comes from what’s commonly known as the Eddy cycle, which is often given as 980 years, but which I’ve found has a variable period closer to 940 years.

[image]

The rise in temperatures since the 1900s is mostly due to being on the upswing side of the Eddy cycle which will peak in about 100 years.

[image]

Reply to  AlanJ
December 17, 2024 8:22 pm

fall short of describing everything about the phenomena under question. This shouldn’t be used to say that large scale GCMs are wrong

Yes it should. You can’t be a little bit right when it comes to modelling the Earth’s climate.

AlanJ
Reply to  Mike
December 18, 2024 5:52 am

I am pointing out the difference between simplified explanations meant to elucidate a concept for a broad audience and the actual, fully realized and complex scientific theory behind the simplified explanation. A shortcoming in the simplified explanation does not indicate a shortcoming in the full theory.

Let me know if I need to simplify this concept further for you.

Reply to  AlanJ
December 18, 2024 11:28 am

fully realized and complex scientific theory behind the simplified explanation.

You think the models represent this? The models are simplified explanations.

AlanJ
Reply to  TimTheToolMan
December 18, 2024 1:06 pm

The models represent complex scientific theories, though of course they are simpler than the system being modeled, it is impossible for it to be otherwise.

Reply to  AlanJ
December 18, 2024 6:06 pm

The models represent complex scientific theories

A simple explanation “represents” a complex scientific theory but isn’t actually a complex scientific theory. Models are no different.

AlanJ
Reply to  TimTheToolMan
December 19, 2024 5:30 am

This is trivially true but quite unimportant. Models are not theories, they are mathematical representations of the system based on theory and physical laws. They are indeed simplifications, since a system representing the true complexity of the climate would require building an earth-sized earth in a solar system-sized solar system, etc. That has zero bearing on their usefulness. In fact the simplicity is a feature, not a bug, since it allows us to run experiments in sub-geologic timeframes to analyze potential outcomes.

Reply to  AlanJ
December 19, 2024 5:50 am

They are indeed simplifications, since a system representing the true complexity of the climate would require building an earth-sized earth in a solar system-sized solar system, etc.

Why would you need a computer model the size of the earth?

Are you somehow anthropomorphizing computer models into something physically real?

You have the only real physical earth right at hand to use in verifying what computer models use for inputs and for verifying the output of the models. You have 150+ years of temperature data. Why the need for another earth?

AlanJ
Reply to  Jim Gorman
December 19, 2024 7:05 am

If you wanted to create a non-simplified model of the earth that perfectly captured every element of the dynamics to run experiments on you would indeed need a full scale exact replica earth. I say this not to suggest that we should build such a thing but to point out the absurdity of knocking climate models for being simplifications.

We are currently haphazardly running a single large-scale experiment on our one earth, but unfortunately we all get to live with the consequences.

Reply to  AlanJ
December 19, 2024 7:33 am

We are currently haphazardly running a single large-scale experiment on our one earth, but unfortunately we all get to live with the consequences.

Panic!

Reply to  AlanJ
December 19, 2024 7:32 am

That has zero bearing on their usefulness.

Bullshit. Why do you try to hide the fact the emperor is naked?

Reply to  AlanJ
December 19, 2024 12:48 pm

Models are not theories, they are mathematical representations of the system based on theory and physical laws.

GCMs are heavily parameterised. They are not “based on theory and physical laws”

Reply to  AlanJ
December 18, 2024 9:02 pm

Hypotheses, not “theories.”

Reply to  AlanJ
December 18, 2024 12:52 pm

Wrong is wrong. Doesn’t matter if it is a simple explanation or an actual, fully realized and complex scientific theory.

Reply to  AlanJ
December 17, 2024 11:33 am

ROFLMAO.

Your love of low-level computer games is hilarious. 🙂

But they are not fit for scientific purposes.

Reply to  AlanJ
December 17, 2024 11:36 am

“Ensemble mean” = averaging meaningless computer numbers to generate more computer numbers with even less meaning.

Reply to  karlomonte
December 17, 2024 12:56 pm

You’ve noted the hypocrisy and inconsistency. Alarmists include all of the ‘blunderbuss’ (highly variable) ‘projections’ of their beloved GCMs to calculate an ‘ensemble mean’, which implicitly assumes that individual model results can be treated as independent draws from the same normal distribution. Yet, when one logically points out that this assumption also allows for a simple calculation that yields a high probability that none of their models are correct, they disavow the same assumption they previously relied upon to justify their calculation of an ensemble mean!

https://wattsupwiththat.com/2024/12/13/climate-science-settled-until-its-not/#comment-4007004

Reply to  Frank from NoVA
December 17, 2024 2:56 pm

They invoke statistics while ignoring the IFF basis for same!

AlanJ
Reply to  Frank from NoVA
December 18, 2024 6:46 am

The issue in your calculation is primarily that the truthiness of a model result is not binary – a modeled ECS of 3.501 degrees is functionally indistinguishable from a “true” ECS of 3.500 degrees, even if the modeled ECS is technically wrong. Your calculation merely provides the odds that a value will be exactly the “true” ECS value, with no allowance for dispersion around that true value. Many model results close to the true value is a precise estimate of ECS even if the “true” value is unknown.

Reply to  AlanJ
December 18, 2024 7:59 am

Well this is complete nonsense.

AlanJ
Reply to  karlomonte
December 18, 2024 8:02 am

Give it some more thought. If it is nonsense, you should be able to clearly articulate why. Otherwise the issue is not my comment but that you’ve failed to understand it.

Reply to  AlanJ
December 18, 2024 8:38 am

ECS is not a real parameter, it cannot be measured, the only values are guesstimates or model-generated fantasies.

AlanJ
Reply to  karlomonte
December 18, 2024 1:10 pm

ECS is real by definition – it is the warming that occurs after CO2 levels are doubled and the climate reaches a new equilibrium. This value exists and can be determined. Using words like “guesstimate” just ignorantly minimizes the level of sophistication of the efforts to assess this value.

Reply to  AlanJ
December 18, 2024 4:20 pm

  1. ECS values that models generate are not measurements.
  2. The acronym “equilibrium climate sensitivity” itself is a gross oversimplification: is the atmosphere ever in equilibrium? How can the quantity ever be measured if the atmosphere is never in equilibrium?
  3. From the units of ECS, it also assumes that global average delta-T (GATs) tells everything needed to know about “the climate” (singular).
  4. ECS very likely isn’t even a constant, but is treated as one.

AlanJ
Reply to  karlomonte
December 19, 2024 5:38 am
  1. Unequivocally true, no one has ever said otherwise.
  2. By this logic, the ideal gas law and Newtonian mechanics are also “gross oversimplifications” because they assume conditions that are rarely met exactly. This has no bearing on their usefulness for understanding the systems in question.
  3. This is nonsense and no one has ever said this. The ECS tells us something about the climate system’s response to a particular forcing.
  4. The ECS being context-dependent does not diminish its usefulness for understanding climate system response to forcing, in the same way that knowing the speed of light in a vacuum is useful for understanding physics, even though light doesn’t always travel in a vacuum. 

These objections just smack of pointless complaining. ECS isn’t the be-all end-all climate metric, it’s one of many different aspects of the system that scientists try to understand, but it happens to be one that’s useful for assessing probable outcomes on human-relevant timescales.

Reply to  AlanJ
December 19, 2024 7:36 am

What exactly does ECS tell you?

More propaganda full of inappropriate analogies to prop up the human-caused PANIC!

Reply to  AlanJ
December 18, 2024 9:10 pm

The value ONLY exists if real world forcings remain stable long enough for equilibrium to be reached, which isn’t guaranteed. Therefore, in that sense it is a hypothetical value that may not have any practical value.

Reply to  AlanJ
December 18, 2024 8:20 am

Many model results close to the true value is a precise estimate of ECS even if the “true” value is unknown.

How do you get “results close to the true value” if you don’t know the true value? Your logic is flawed.

The chances are that the precise estimates from a model is wrong. This is true especially when the “true value” is unknown.

The dispersion around the mean is meaningless if all the values are wrong regardless of how precise the values are.

One of the reasons the dispersion around the mean is important in measurements is that the measuring devices used to make physical measurements have at some point, been referenced to an accepted “true value”. More expensive and precise instruments will have smaller and smaller dispersions around a stated value.

If I have a micrometer whose frame is bent, I may get readings that are very, very precise but the chances of the mean of multiple readings being a true value is nil.

AlanJ
Reply to  Jim Gorman
December 18, 2024 9:18 am

How do you get “results close to the true value” if you don’t know the true value? Your logic is flawed.

You don’t have to know that the result is close to the true value for it to be close to the true value. You are missing both the thrust of Frank’s argument and of my rebuttal.

The point is that knowing the odds of obtaining the exact precise true value are rather irrelevant if, “functionally indistinguishable from the true value” is on the table. To make the argument that Frank is making, he needs to determine the probability of obtaining results that are near enough to the true value that we don’t care if the exact true value is obtained or not, which is functionally impossible using his prescribed framework.

If the true value is 3.5, and there is a 0.4% likelihood that one model has yielded this value, that does not give you the odds of a model yielding 3.5000 or 3.4999 or any other combination of “close enough” you can conceive of.

Reply to  AlanJ
December 18, 2024 9:48 am

You don’t have to know that the result is close to the true value for it to be close to the true value.

More flawed logic! If you DON’T KNOW the true value, you will NEVER KNOW when a result is close to the true value. Therefore, YOU CAN’T CLAIM that the result is the true value because you don’t know the true value.

You can only claim that the result MIGHT BE the true value. That is the epitome of pseudoscience, not experimentally based physical science.

The point is that knowing the odds of obtaining the exact precise true value are rather irrelevant if, “functionally indistinguishable from the true value” is on the table.

You can not verify “functionally indistinguishable” unless you KNOW the true value. You are missing the most important part of determining indistinguishable. More pseudoscience using circular logic.

AlanJ
Reply to  Jim Gorman
December 18, 2024 12:35 pm

You’re arguing with ghosts again. I’ve not said that we will know if our result is close to the true value, I’ve said that determining the exact odds that one of N models yielded the exact correct true value is irrelevant if “almost exactly the true value” will also do. When determining the ECS, finding that the likely value is about 3.5 degrees +/- a degree is quite desirable, we don’t need to know that the value is exactly 3.5 degrees. Obviously we would determine this in an ideal world, but we will do just fine with an estimate.

Reply to  AlanJ
December 18, 2024 9:21 am

Nice try with the straw man argument! We’re not talking about the minuscule errors from repeated measurements of, say, a single 2×4 that is exactly 3.5 ft long, we’re talking about the wildly disparate outputs of multiple models, all of which are demonstrably meaningless because the magnitude of their input (CO2) ‘forcing’ is swamped by the radiative impact of the errors in their output cloud cover.

AlanJ
Reply to  Frank from NoVA
December 18, 2024 9:53 am

This is no straw man argument, it’s a straightforward and direct rebuttal to the argument you made. Your inability to counter it speaks volumes.

The output of the models is anything but meaningless, first because, as noted earlier in the thread, they reproduce historic and forecasted observed SAT trends quite well, and second because the models produce real features of the climate system as emergent patterns, like the jet stream, ocean currents, cyclones, etc. They are quite skillful.

Reply to  AlanJ
December 18, 2024 10:57 am

they reproduce historic and forecasted observed SAT trends quite well, and second because the models

Read this and then tell us how good the models are. Even Hausfather recognizes the problem!

https://www.science.org/content/article/use-too-hot-climate-models-exaggerates-impacts-global-warming

Models are TUNED to meet historical data. That makes them not physics based.

AlanJ
Reply to  Jim Gorman
December 18, 2024 11:29 am

The article is correct, and using TCR-screened ensembles, as the IPCC does, produces much better results. I showed the effect of this earlier in the thread:

[image]

But the takeaway is not that models are not skillful or not useful, it’s simply that researchers need to use them thoughtfully.

That makes them not physics based.

They are unequivocally physics based, whether they include parameterizations or not. And they produce real features of the climate system emergently, and are thus unquestionably skillful.

Reply to  AlanJ
December 18, 2024 1:10 pm

“They are unequivocally physics based”

If the physics are wrong then the model will be wrong as well.

You are basically arguing that if you try to pin the tail on the donkey enough times you just *might* succeed. If you are facing the wrong way (i.e. wrong physics) you’ll *never* succeed. And you won’t know as long as your blindfold is on whether you succeeded or not!

AlanJ
Reply to  Tim Gorman
December 18, 2024 2:01 pm

If you lift the blindfold and see that the tails are all closely clustered around the donkey’s behind, it will instill confidence that you are facing the right way. Thankfully we have robust observations of the climate to compare against model output which allow us to evaluate and improve the models.

Reply to  AlanJ
December 18, 2024 4:23 pm

Thankfully we have robust observations of the climate to compare against model output which allow us to evaluate and improve the models.

More propaganda, and who are “we”?

AlanJ
Reply to  karlomonte
December 19, 2024 5:39 am

who are “we”

Humans. At least, I think everyone here is a human, though you’re giving me reason to wonder…

Reply to  AlanJ
December 19, 2024 7:37 am

Because I don’t buy your marxist propaganda, I am no longer to be considered human.

Panic-man drifts off into clown world.

Reply to  AlanJ
December 19, 2024 2:44 am

Lifting the blindfold = measuring the value

Guessing at a value that can’t be verified is *NOT* lifting the blindfold. It’s seeing an answer in a cloudy crystal ball – just like a carnival fortune teller!

You have *nothing* to verify the output of the models. The GAT is a phantom – and that includes both absolute temps as well as anomalies. You simply can’t average intensive properties and get a meaningful value. Doing so requires assuming that you can add intensive properties and/or assuming a single homogenous measurand where you are finding a gradient between measurement spots. You can’t add intensive properties and the global biosphere is *not* a homogenous single measurand. It’s the same idiocy as assuming you can average the temperature on top of Pikes Peak with the temperature in Colorado Springs and get a meaningful value. Or that you can average the temperature in San Diego with the temperature in Ramona and get a meaningful value. Even worse is the climate science assumption that you could substitute the temperature on Pikes Peak for the temperature in Colorado Springs or the temperature in San Diego for the temperature in Ramona because both sets of locations are only a few miles apart (i.e. homogenization and/or infilling).

Reply to  Tim Gorman
December 19, 2024 5:17 am

And yet climatology treats these numbers as everything you need to know about “climate”.

AlanJ
Reply to  Tim Gorman
December 19, 2024 5:43 am

You have *nothing* to verify the output of the models.

We have abundant observations to validate against the models, the SAT is simply one metric of many, including sea level, vegetation cover, OHC, land and sea ice, etc. And your whining about the average SAT is quite a waste of time and effort, since modelers compare observed temperature fields with model outputs.

Reply to  AlanJ
December 19, 2024 7:39 am

Not one of which supports your PANIC dogma.

Reply to  AlanJ
December 19, 2024 4:21 am

Thankfully we have robust observations of the climate 

Robust observations based on two temperatures a day for most of the 20th century? Come on dude, you are fooling no one.

What do the models show for the 20th century warming in Kansas and other U.S. states? Show us a graph of what your favorite model shows as compared to this.

[image]

Reply to  AlanJ
December 18, 2024 4:22 pm

You really, really luv posting this inappropriate abuse of statistics by the IPCC.

Its bunk.

Reply to  AlanJ
December 18, 2024 9:26 pm

and are thus unquestionably skillful.

Then why are the new (CMIP6) models running even warmer?

AlanJ
Reply to  Clyde Spencer
December 19, 2024 5:46 am
  1. The SAT is not the singular measure of model skill.
  2. Not all of the CMIP6 models run “warm,” only a subset do.
  3. Most of the disparity seems to arise from the aforementioned warm-bias in the tropics in this subset of models, resulting from SST and ocean mixing dynamics.
Reply to  AlanJ
December 19, 2024 7:40 am

More propaganda.

Keep telling yourself this tripe, maybe you’ll start to believe it.

Reply to  AlanJ
December 19, 2024 5:38 pm

It is my understanding that the regional precipitation predictions are even worse than the SAT because they frequently have opposite predictions for flooding and drought.

Again, it is my understanding that even among the CMIP5 ensemble runs, only the Russian model is close to reality and all the others run warmer. CMIP6 runs even warmer for the unskillful models.

Under no circumstances should water temperatures be conflated with Stevenson Screen air temperatures from 2 meters above the ground.

Reply to  Clyde Spencer
December 20, 2024 5:19 am

Enthalpy of air is basically the energy of dry air plus the energy of moist air. The energy of moist air depends on at least three factors: humidity, pressure, and temperature. Climate science ignores two of these three factors. Precipitation also depends heavily on humidity, pressure, and temperature. When climate science ignores humidity and pressure it has no chance of getting precipitation right. And since climate has precipitation as a significant factor, the climate models, driven by temperature, have no chance of getting climate correct.
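
The point about moist air enthalpy can be put in numbers with the standard psychrometric approximation h ≈ cpd·T + w·(Lv + cpv·T), where w is the mixing ratio. A minimal sketch, with illustrative round-number mixing ratios (not measured values), showing that at the same 100°F humid Miami air carries roughly twice the energy per kilogram of dry Las Vegas air:

```python
C_PD = 1.006   # specific heat of dry air, kJ/(kg K)
C_PV = 1.86    # specific heat of water vapor, kJ/(kg K)
L_V = 2501.0   # latent heat of vaporization at 0 C, kJ/kg

def moist_air_enthalpy(t_c, w):
    """Specific enthalpy of moist air in kJ per kg of dry air (standard
    psychrometric approximation); t_c in Celsius, w = mixing ratio in kg/kg."""
    return C_PD * t_c + w * (L_V + C_PV * t_c)

t = 37.8  # 100 F in Celsius
# Illustrative mixing ratios: very dry desert air vs humid coastal air
h_vegas = moist_air_enthalpy(t, 0.004)  # roughly 10% relative humidity
h_miami = moist_air_enthalpy(t, 0.022)  # roughly 50% relative humidity
print(f"Las Vegas: {h_vegas:.0f} kJ/kg dry air")  # ~48 kJ/kg
print(f"Miami:     {h_miami:.0f} kJ/kg dry air")  # ~95 kJ/kg
```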

It’s like Freeman Dyson pointed out so long ago, the biggest problem with the climate models is that they are far, FAR, from being holistic, they ignore most of the factors that actually determine climate. To climate science 100F in Las Vegas is the exact same climate as 100F in Miami.

Reply to  AlanJ
December 18, 2024 10:39 pm

They are unequivocally physics based, whether they include parameterizations or not.

F = ma is a physics-based formula. However, if you don’t know what “m” to use and instead come up with a lookup table of m vs some arbitrary criteria, let’s say distance from the origin…then F = ma is no longer physics based and will give incorrect results, especially when used outside of prior usage.

Why exactly do you believe a parameterised equation is “physics based”?

AlanJ
Reply to  TimTheToolMan
December 19, 2024 5:52 am

Parameterization isn’t arbitrary—it’s grounded in physical laws and constrained by empirical data. Your example misunderstands the concept. If we, say, estimated mass using volume and density, constrained it within observed ranges, ran thousands of F = ma simulations, and validated against real-world results, we’d still be applying fundamental physics. Parameterization doesn’t discard physics—it supplements it to handle complexity where direct measurement isn’t feasible. For climate simulations, parameterization allows representation of micro-scale processes where direct computation isn’t practical.
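
For concreteness, the kind of sub-grid cloud parameterization being argued about here often looks like the sketch below: a generic Sundqvist-type scheme (after Sundqvist, 1989) that diagnoses cloud fraction from grid-mean relative humidity. This is not code from any particular GCM, and RH_crit is exactly the sort of tunable parameter under discussion; the same humidity gives different cloudiness for different tunings.

```python
import math

def cloud_fraction(rh, rh_crit=0.8):
    """Sundqvist-style diagnostic cloud fraction from grid-mean relative humidity.
    rh_crit is a *tunable* parameter: the physics does not dictate its value;
    it is chosen so that model cloud fields resemble observations."""
    if rh >= 1.0:
        return 1.0
    if rh <= rh_crit:
        return 0.0
    return 1.0 - math.sqrt((1.0 - rh) / (1.0 - rh_crit))

# The same grid-cell humidity yields different cloudiness for different tunings:
for rh_crit in (0.7, 0.8, 0.9):
    print(f"RH=0.85, RH_crit={rh_crit}: cloud fraction = {cloud_fraction(0.85, rh_crit):.2f}")
```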

Reply to  AlanJ
December 19, 2024 7:41 am

Parameterization isn’t arbitrary—it’s grounded in physical laws and constrained by empirical data.

So you claim (over and over) sans real evidence.

Reply to  AlanJ
December 19, 2024 8:22 am

Parameterization isn’t arbitrary—it’s grounded in physical laws and constrained by empirical data.

The very use of parameters is because the physical functional relationships ARE NOT KNOWN. If those aren’t known, then choosing them to obtain a desired outcome does not make them physically correct.

Did you ever take some symbolic logic in college? It doesn’t appear so, because your protestations have terrible logic. “The answer is correct because I made it correct”!

AlanJ
Reply to  Jim Gorman
December 19, 2024 9:03 am

The very use of parameters is because the physical functional relationships ARE NOT KNOWN. 

First, that is often not the case; it’s that the physical process operates at a sub-grid scale, and running the models at higher resolutions has exorbitant computational and time costs, so it is better to approximate these processes via parameterization. Improvements in computing technology over time will reduce this need, but the computational cost rapidly outpaces increases in computing power as you increase model resolution.

When it is the case that the functional relationship is not well understood, the parameterization is not arbitrary, but is based on empirical observation and theoretical principles.

Reply to  AlanJ
December 19, 2024 9:36 am

When it is the case that the functional relationship is not well understood, the parameterization is not arbitrary, but is based on empirical observation and theoretical principles.

Talk in circles you.

Reply to  AlanJ
December 19, 2024 1:24 pm

When it is the case that the functional relationship is not well understood, the parameterization is not arbitrary, but is based on empirical observation and theoretical principles.

But you understand this is a fit, right?

So with clouds, at the end of the day, a cloud is either formed or not based on a probability given the broad conditions in the grid cell.

How well do you think those clouds are going to be represented in a warmer world?

Reply to  AlanJ
December 19, 2024 1:14 pm

If we, say, estimated mass using volume and density

That’s not an estimation, it’s a definition. That is physics. In the case of F=ma the problem comes in when m varies with velocity, but that’s not what I’m talking about, even though F=ma fails when m has velocity approaching c.

To better see it, clouds are not set by physics. They’re set by a bunch of physicsy sounding things like humidity, altitude, temperature, potentially CCNs etc…but that’s NOT physics, its a rule of thumb and then tuned to reproduce observed behaviour.

They’re quite different examples.

Parameterization doesn’t discard physics—it supplements it to handle complexity where direct measurement isn’t feasible.

This is your misunderstanding. Parameterisation in models is not based in physics and as soon as you add a non-physics value into a calculation, that calculation is no longer based on physics.

If it was a single calculation then the result might still be useful, but when it’s a stepwise calculation the uncertainty accumulates, and there are millions of steps in a GCM, so the projection is worthless.

Reply to  TimTheToolMan
December 19, 2024 2:04 pm

Alan doesn’t understand that:

  1. A parameterization used to “tune” a model to give a desired output is *not* physics based.
  2. A parameterization that is supposed to be an “average” value in a dynamic process simply can *NOT* give a real world output.
  3. A parameterization that cannot itself be modeled must, by definition, contain uncertainty. That uncertainty multiplies as it is used in an iterative calculation.
AlanJ
Reply to  TimTheToolMan
December 19, 2024 2:13 pm

To better see it, clouds are not set by physics. They’re set by a bunch of physicsy sounding things like humidity, altitude, temperature, potentially CCNs etc…but that’s NOT physics, its a rule of thumb and then tuned to reproduce observed behaviour.

I don’t know what you mean by “physicsy sounding things,” but these are parameters based on our understanding of cloud physics and informed by empirical observation.

By your logic, estimating friction coefficients or turbulence in fluid dynamics would disqualify the field from being physics-based.

Reply to  AlanJ
December 19, 2024 2:46 pm

“By your logic, estimating friction coefficients or turbulence in fluid dynamics would disqualify the field from being physics-based.”

Clyde tried to explain his experience with carburetors to you. Friction and turbulence are major contributors to how they work. Guess what? You’ll never get exact answers from any model, only estimates. Primarily because unless you know the temperature, humidity, and pressure (those “physicsy” sounding things) you are only modeling a fake world, not the real world. Climate models give us “fake world” estimates, not real world estimates.

AlanJ
Reply to  Tim Gorman
December 19, 2024 3:22 pm

I’ve explicitly said exactly the same thing in this very thread. Climate models are simulations of the real world, not exact reproductions of it. An exact reproduction is impossible.

Estimates can be incredibly useful, particularly when they are all that you have to work with.

Reply to  AlanJ
December 20, 2024 4:58 am

“An exact reproduction is impossible.”
“Estimates can be incredibly useful”

No one is asking for an exact reproduction. This is just one more use of this red herring argumentative fallacy. The estimate, however, needs to be reasonably accurate in order to be useful, and that means that uncertainty intervals for future projections need to be provided along with the model output. The estimate needs to be based on actual physics and not based on a data fitting algorithm, especially when it is fit to a factor that may be correlated but is not proven to be *the* causal factor or even “a” partial causation factor. You may as well base your climate model on postal rates or butter prices; you would get a similar correlation and a similar extrapolation into the future.

AlanJ
Reply to  Tim Gorman
December 20, 2024 5:47 am

Guess what? You’ll never get exact answers from any model, only estimates.

I was agreeing with your comment above, where you said, “Guess what? You’ll never get exact answers from any model, only estimates.”

It’s unclear what you mean by “the estimate needs to be based on actual physics and not a data-fitting algorithm.” Climate models are physics-based, not data-fitting exercises. They aren’t programmed to say “x CO2 produces y warming.” Instead, they are built on the principles of radiative transfer, fluid dynamics, and thermodynamics. The CO2-driven greenhouse effect isn’t an input, it is an emergent property of the underlying physics.

Reply to  AlanJ
December 20, 2024 6:32 am

“Climate models are physics-based, not data-fitting exercises.”

Then why are they evaluated based on hindcasting results? This means they are *trained* by data fitting and not physics.

The greening of the earth *should* be an emergent property of the underlying physics but apparently isn’t an output of the models. The growth in grain harvests should be an emergent property of the underlying physics but isn’t an output of the models. The stagnation of maximum temps in much of the globe should be an emergent property of the physics but isn’t an output of the models.

If CO2 isn’t an input to the models then why is it the major forcing factor in the SSP’s?

from the UCAR: “Climate models predict that Earth’s global average temperature will rise an additional 4° C (7.2° F) during the 21st Century if greenhouse gas levels continue to rise at present levels.”

Really? Exactly what temperatures are driving the average higher? Can the models tell us? Is it maximum temps? If so, how much do they contribute to the average? Is it minimum temps? If so, how much do they contribute to the average?

UCAR predicts an 18% loss of insect species for a +2C change of temperature. Since most species do better when minimum temps go up, if it is minimum temps driving the average up we *should* see a higher survival rate for most species, not a loss.

So it is obvious that most of climate science ASSUMES that the average going up is mainly driven by increased maximum temps. What are the modelers doing to rectify this so very wrong assumption? What are the climate models themselves doing to rectify this misunderstanding of what climate change actually means?

AlanJ
Reply to  Tim Gorman
December 20, 2024 8:22 am

Then why are they evaluated based on hindcasting results? This means they are *trained* by data fitting and not physics.

Hindcasting is validation, not model training.

What you’re describing with “greening of the earth,” changes in harvests etc. is in the domain of Earth Systems modeling, not so much modern GCMs. But these are absolutely things that scientists look at and study.

If CO2 isn’t an input to the models then why is it the major forcing factor in the SSP’s?

CO2 concentration is an input to the models, the climate sensitivity to this CO2 is an output.

Really? Exactly what temperatures are driving the average higher? Can the models tell us? Is it maximum temps?

Of course they can; see studies like Lobell, 2007 for such analysis.

Reply to  AlanJ
December 20, 2024 8:48 am

Hindcasting is validation, not model training.

ROTFLMAO! No parameters or other inputs are modified so hindcasts are accurate? Tell us another story!

Reply to  AlanJ
December 20, 2024 9:44 am

“Hindcasting is validation, not model training.”

And if the validation fails then what happens? Answer: the model is tweaked (i.e. TRAINED) until the output matches the validation (i.e. training) data.

Grain harvests, etc ARE IMPACTED BY CLIMATE! If the climate models can’t be used to predict climate accurately then of what use are they?

The climate sensitivity is a parameterized (i.e. guessed) value based on uncertain data. Therefore it is uncertain as well.

AlanJ
Reply to  Tim Gorman
December 20, 2024 12:53 pm

Model tuning is not an arbitrary exercise, and typically the tuning target is radiation balance at TOA in long term steady state, not a specific historic target. Obviously you can validate parameter tuning by running hindcasts, and this is an effective way of testing the assumptions of your tuning process.

The climate sensitivity is a parameterized (i.e. guessed) value based on uncertain data. Therefore it is uncertain as well.

Obviously there is uncertainty in the estimate of sensitivity, else it would not be provided as a range of possible values.

Reply to  AlanJ
December 21, 2024 5:36 am

Obviously you can validate parameter tuning by running hindcasts, and this is an effective way of testing the assumptions of your tuning process.

LOL, curve fitting at its best. If you don’t know why a parameter is what it is, then you have no idea what it should be. It is a GUESS.

Validating a parameter(s) by making sure you get the right answer is curve fitting. It is basically circular logic at its finest.

Reply to  AlanJ
December 19, 2024 4:45 pm

I don’t know what you mean by “physicsy sounding things,” but these are parameters based on our understanding of cloud physics and informed by empirical observation.

When the model decides to create a cloud based on the conditions in the grid cell, which include humidity, temperature, etc., it does so because, statistically, we see that probability of clouds being created under those conditions.

What is the probability of the clouds being created in that grid cell when its average temperature is higher than when the empirical measurements were originally taken?
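For concreteness, the kind of grid-cell cloud rule being argued about can be sketched in a few lines. This is a Sundqvist-type relative-humidity closure, a classic textbook example rather than any particular GCM’s scheme, and the critical humidity `rh_crit` is exactly the sort of tunable threshold in question.

```python
import numpy as np

def cloud_fraction(rh, rh_crit=0.8):
    """Sundqvist-type diagnostic cloud fraction for a grid cell.

    rh      : grid-cell mean relative humidity (0..1)
    rh_crit : tunable critical humidity below which no cloud is diagnosed
    """
    rh = np.clip(rh, 0.0, 1.0)
    frac = 1.0 - np.sqrt((1.0 - rh) / (1.0 - rh_crit))
    return np.where(rh > rh_crit, np.clip(frac, 0.0, 1.0), 0.0)

print(cloud_fraction(np.array([0.5, 0.85, 0.95, 1.0])))  # 0, ~0.13, 0.5, 1
```

The question posed above is whether a threshold like `rh_crit`, tuned against today’s observations, remains valid in a warmer grid cell.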

Reply to  TimTheToolMan
December 20, 2024 5:00 am

Climate science won’t even recognize the fact that humidity and pressure play a part in the process, which it could do by using enthalpy instead of temperature as its base measurement. If the models provided enthalpy figures instead of just temperature (and anomalies at that) they would be much more believable.

Reply to  AlanJ
December 18, 2024 11:18 am

Your inability to counter it speaks volumes.

It’s one of those “not even wrong” statements. You can’t recognise the implications of the calculation not being based on physics and not being a projection.

Reply to  AlanJ
December 18, 2024 9:22 pm

… they reproduce historic … SAT trends quite well, …

That is because they are tuned to give the best possible fit. Those are not predictions!

As to predictions, they have the same flaw as any extrapolation. Namely, the sections that fit well do not guarantee that the extrapolation will be correct. The unstated assumption is that a good fit to history will give a good prediction. However, I don’t believe that is logically supported. One of the issues is that future Black Swans may swoop in and change reality, even if the models are 100% perfect based on past history. However, they can’t be perfect, because the individual runs in an ensemble produce different results.

AlanJ
Reply to  Clyde Spencer
December 19, 2024 6:02 am

The models are not tuned to produce forecasted trends, since that would require time-travel. Nor are model projections simple numerical extrapolations of historical trends, this is a common misconception around these parts.

The unstated assumption is that a good fit to history will give a good prediction

Modelers spend a lot of effort ensuring that any historical tuning does not “overfit” the model arbitrarily. And, yes, models are never expected to produce a 100% accurate projection of the real future outcome because by their very nature they are producing simulations of a chaotic system. You could run the same “real earth” multiple times given the same initial conditions and get different outcomes because the system is nonlinear, and models should be no different.

One of the major misconceptions from the WUWT contrarian set is that if models were good, they would produce the exact same deterministic output every time and this output would exactly match observational data. That just isn’t the nature of modeling systems like the climate.

Reply to  AlanJ
December 19, 2024 7:42 am

Nor are model projections simple numerical extrapolations of historical trends, this is a common misconception around these parts.

Panic-man is lying (again).

Reply to  AlanJ
December 19, 2024 1:29 pm

One of the major misconceptions from the WUWT contrarian set is that if models were good, they would produce the exact same deterministic output every time and this output would exactly match observational data. That just isn’t the nature of modeling systems like the climate.

Is it really a major misconception?

Given the models can’t model climate without a forcing, I would think they precisely can’t model the past.

But of course, the modellers manipulate aerosol history to try to do exactly that.

AlanJ
Reply to  TimTheToolMan
December 19, 2024 2:07 pm

Is it really a major misconception?

It’s a misconception expressed in this very thread. The climate is a chaotic system, so even if you had a perfect model with perfectly known initial conditions, there should not be an expectation of the model exactly reproducing observations.
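The sensitivity-to-initial-conditions point is easy to demonstrate with the standard Lorenz-63 toy system. A crude Euler integration is enough: two runs differing by one part in a billion end up on completely different trajectories.

```python
import numpy as np

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude Euler step of the Lorenz-63 system."""
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z), x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])  # tiny initial-condition difference
for step in range(3001):
    if step % 1000 == 0:
        print(f"t = {step * 0.01:5.2f}, separation = {np.linalg.norm(a - b):.3e}")
    a, b = lorenz_step(a), lorenz_step(b)
```

This is a toy, not a climate model, but it is the standard illustration of why a “perfect” simulation of a chaotic system still cannot be expected to track one specific observed trajectory.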

Reply to  AlanJ
December 19, 2024 2:39 pm

“exactly”

Why do you keep using this weasel word? Do you *really* think no one sees what you are doing? The issue is not “exact” results, the issue is the uncertainty interval associated with the answer!

AlanJ
Reply to  Tim Gorman
December 19, 2024 3:27 pm

I’m saying that exact reproduction is not possible even in the perfect case, much less the “good” case. Models are skillful and provide valuable insight into the climate system despite being flawed.

Reply to  AlanJ
December 19, 2024 6:27 pm

Their value is in providing insight on when and how the model predictions are wrong so that they can be corrected.

Reply to  AlanJ
December 20, 2024 5:05 am

“Models are skillful and provide valuable insight into the climate system despite being flawed.”

The models are not skillful and they do *NOT* provide valuable insight. This can be measured by how many predictions based on the model outputs have turned out to be wrong. The models don’t even allow determining what temperatures might actually be going up, minimums or maximums. How then can they be useful at all?

AlanJ
Reply to  Tim Gorman
December 20, 2024 5:54 am

This is what I mean by model skill:

https://www.youtube.com/watch?v=NDTWVDuq2aQ

This visualization presents model simulation output. The model was not directed to produce a typhoon off the coast of China, this weather phenomenon arose emergently from the model’s physics. All of those atmospheric currents and interactions are simulated by the physics, and they accurately reproduce the dynamics of real climatic phenomena.

When you run these simulations for many years, other long-term phenomena also arise emergently, like modes of internal variability and forced responses.

The models don’t even allow determining what temperatures might actually be going up, minimums or maximums. How then can they be useful at all?

The models produce temperature fields, and you can do whatever you want with those, including analyzing diurnal temperature shifts. I think you’re conflating the common annual global mean temperature visualizations you’ve seen with the actual model outputs. These visualizations are just simplified aggregations.

Reply to  AlanJ
December 20, 2024 6:36 am

“When you run these simulations for many years, other long-term phenomena also arise emergently, like modes of internal variability and forced responses.”

What emergent phenomena?

Increased food production?
Greening of the earth?
Fewer tornadoes and hurricanes?
Longer growing seasons?
Less desertification?

Do the models output these emergent phenomena?

“The models produce temperature fields, and you can do whatever you want with those, including analyzing diurnal temperature shifts.”

What temperature fields? Can you provide a link showing model outputting daily minimum and maximum temps for 2050 for locations around the globe?

AlanJ
Reply to  Tim Gorman
December 20, 2024 8:40 am

What temperature fields? Can you provide a link showing model outputting daily minimum and maximum temps for 2050 for locations around the globe?

You can download the model outputs here, I believe through 2100:

https://cds.climate.copernicus.eu/datasets/projections-cmip6?tab=overview

It’s a bit disappointing to discover that this is not something you’re aware of, given your forceful opinions on the subject.

Reply to  AlanJ
December 20, 2024 10:49 am

You made the assertion; it is up to you to provide the info in graphical and tabular form. That isn’t up to me.

I have shown you my support; you provide your own support. Maybe you should write an article for WUWT showing your support with references.

If you are too lazy to perform the work, good luck with making a professional argument!

AlanJ
Reply to  Jim Gorman
December 20, 2024 12:45 pm

It is not up to me at all, I’ve provided a link to the CMIP6 data repository, quite clearly showing that the daily max and min temps are freely available to download and study. You do not need to shy away from displaying humility and acknowledging your mistake. I will not think less of you, but will respect you more for it.

Reply to  AlanJ
December 21, 2024 5:41 am

It is not up to me at all

It is up to you to support your assertions.

I (and others) show our support for our assertions through graphs, tables, and quotes.

If you can’t (or won’t) provide the support for all to see, then you have no basis for making an argument. You lose.

AlanJ
Reply to  Jim Gorman
December 21, 2024 9:24 am

I’ve supported the assertion that the daily Tmax and Tmin model outputs are available by providing a direct link to them. You’re now just sticking your fingers in your ears and yelling “la la la can’t hear you!” It’s childish and you’re better than that.

Reply to  AlanJ
December 21, 2024 11:00 am

I’ve supported the assertion that the daily Tmax and Tmin model outputs are available by providing a direct link to them.

Telling other people to do your work is an admission that you have nothing to support your assertions. You haven’t done your work prior to making an assertion.

You’ve obviously never taken debate in high school or college. If you make an assertion and get called on it, it isn’t up to the person calling you out to prove the assertion you made. The proof is up to you, or

YOU LOSE THE ARGUMENT!

https://homepage.ntu.edu.tw/~karchung/debate1.htm

Rules of Debate

(condensed from Competitive Debate: Rules and Techniques,

by George McCoy Musgrave. New York: H.W. Wilson, 1957)

5. He who asserts must prove. In order to establish an assertion, the team must support it with enough evidence and logic to convince an intelligent but previously uninformed person that it is more reasonable to believe the assertion than to disbelieve it. Facts must be accurate. Visual materials are permissible, and once introduced, they become available for the opponents’ use if desired.

AlanJ
Reply to  Jim Gorman
December 21, 2024 5:44 pm

I’m not telling you to do my work – my work is already done. I linked the data in a standard format, widely used across the entire field. It is the easiest format to work with the data in. You can download the data freely and do with it as you please. Your continued attempts to ignore the fact that you were flagrantly wrong about the existence of this data is frankly embarrassing and what could have been a simple admission of ignorance is now a pathetic attempt by you to save face.

Reply to  AlanJ
December 21, 2024 5:56 pm

Clown.

Reply to  AlanJ
December 20, 2024 8:14 am

The models produce temperature fields, and you can do whatever you want with those, including analyzing diurnal temperature shifts.

Really? Why don’t you show us the temperature fields for the next 30 years for the central U.S., specifically, the state of Kansas. Show the Tmax and Tmin values that the diurnal range is calculated from.

While you are at it, since you are declaring these to be measurements, give the uncertainty values for those measurements.

Here is the baseline to compare against. It has ~125 years of NOAA data.

[image: ~125 years of NOAA temperature data for Kansas]

It is time to provide some sources/evidence for your assertions. Without backup, you are just a politician blowing smoke up people’s arses.

Reply to  Tim Gorman
December 19, 2024 6:23 pm

Because he is a weasel.

Reply to  AlanJ
December 19, 2024 4:49 pm

It’s a misconception expressed in this very thread.

Well I haven’t read all the posts but you must have seen it so maybe you could point out someone who thinks the models ought to:

“produce the exact same deterministic output every time and this output would exactly match observational data.”

Reply to  AlanJ
December 19, 2024 6:23 pm

… there should not be an expectation of the model exactly reproducing observations.

I’m not asking for “exactly.” I’d be happy with the range of the ensemble predictions being symmetrically distributed around the mean of the observations and being no greater than +/-2 standard deviations of the de-trended observations, and agreement on the sign of the regional precipitation predictions. As it is, the ensemble envelope is skewed, with high predictions being more abundant than low predictions, and it has only gotten worse with CMIP6.

Reply to  TimTheToolMan
December 19, 2024 2:24 pm

Any model that can’t match observational data is just one more cloudy crystal ball. It isn’t a matter of being EXACTLY right. It’s a matter of the uncertainty interval associated with what the crystal ball is telling you. But we never get an uncertainty interval from climate science concerning their predictions. It’s always the canard of “no model is ever EXACTLY right”.

Whether you trust the output of that crystal ball is solely a matter of religious faith if you can’t specify an uncertainty interval.

AlanJ
Reply to  Tim Gorman
December 19, 2024 3:18 pm

The issue is that you fundamentally cannot employ deterministic uncertainty estimates to climate model outputs, for the reasons I’ve explained ad nauseam in this thread. You instead need to employ stochastic methods to obtain probabilistic uncertainty estimates.

Your position seems to be that if you can’t employ straightforward traditional techniques of error propagation to estimate uncertainty then you can’t estimate uncertainty at all, which is quite untrue.
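A minimal sketch of what a stochastic (Monte Carlo) uncertainty estimate looks like, using an invented one-line stand-in for a model and an assumed parameter distribution; nothing here is a real climate number.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "model": warming = sensitivity * forcing. The sensitivity
# parameter is uncertain, so sample it many times and propagate the
# spread through the model, rather than attaching one deterministic
# error bar to a single run.
forcing = 3.7                                # W/m^2, assumed doubling forcing
sensitivity = rng.normal(0.8, 0.2, 100_000)  # assumed K/(W/m^2) distribution

warming = sensitivity * forcing
lo, med, hi = np.percentile(warming, [5, 50, 95])
print(f"5-95% range: {lo:.2f} .. {hi:.2f} K (median {med:.2f} K)")
```

Whether such sampled spreads capture all the sources of uncertainty the commenters are arguing about (systematic effects, structural error) is exactly the point in dispute.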

Reply to  AlanJ
December 19, 2024 6:25 pm

The issue is that you fundamentally cannot employ deterministic uncertainty estimates to climate model outputs, for the reasons I’ve explained ad nauseam in this thread.

Another lie, from a troll who doesn’t understand metrology.

Reply to  AlanJ
December 19, 2024 6:34 pm

You instead need to employ stochastic methods to obtain probabilistic uncertainty estimates.

I don’t disagree with that. However, the point you are missing is that when the uncertainty envelope is skewed it says, to me, that the modelers have done something wrong. Most probability statistics are based on an assumption of normality and frequently the best that one can do with skewed distributions is come up with inequalities that bound the probabilities.

Reply to  Clyde Spencer
December 20, 2024 4:49 am

“Most probability statistics are based on an assumption of normality”

This goes right along with their common meme that all measurement uncertainty is random, Gaussian (i.e. normal), and cancels.

Reply to  AlanJ
December 20, 2024 4:44 am

“You instead need to employ stochastic methods to obtain probabilistic uncertainty estimates.”

What in Pete’s name do you think a standard deviation is?

What is the measurement uncertainty of a set of experimental data from multiple measurements of the same thing using the same instrument under the same conditions?

AlanJ
Reply to  Tim Gorman
December 20, 2024 5:56 am

Model outputs aren’t measurements.

Reply to  AlanJ
December 20, 2024 12:00 pm

Model outputs aren’t measurements.

If they aren’t measurements, then you can’t compare them to measured temperatures as if they are measured.

This is just more illogical assertions from you. Good luck in making a cogent argument.

AlanJ
Reply to  Jim Gorman
December 20, 2024 12:41 pm

Of course you can. If I model the speed of a falling object after t seconds as v = gt, then drop a ball 1000 times and measure its velocity after t seconds, I can very much validate my model against these observations.
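In code, that falling-ball validation is a three-liner; the 0.5 m/s measurement noise is an assumed illustrative value.

```python
import numpy as np

rng = np.random.default_rng(1)
g, t = 9.81, 2.0                             # m/s^2, seconds of fall

predicted = g * t                            # the model: v = g*t
measured = rng.normal(predicted, 0.5, 1000)  # 1000 drops, assumed 0.5 m/s noise

print(f"model: {predicted:.2f} m/s, measured mean: {measured.mean():.2f} m/s")
print(f"spread of measurements: {measured.std(ddof=1):.2f} m/s")
```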

Reply to  AlanJ
December 20, 2024 2:17 pm

But you can’t MEASURE ECS! So how do you classify it as an empirical value you can use to calculate other values such as temperature and classify those as empirical as well?

Reply to  AlanJ
December 21, 2024 5:54 am

Of course you can. 

Quit using an AI bot to get some word salad.

Look at what you said.

Model outputs aren’t measurements.

Here is what I responded.

If they aren’t measurements, then you can’t compare them to measured temperatures as if they are measured.

You are just spouting gibberish. If model outputs ARE NOT MEASUREMENTS, then you can not turn around and compare them to actual physically measured values, i.e., measurements. Model outputs are either values that should be seen when actual measurements are taken, or they DO NOT represent values that can be measured.

The measurement model you are using (v = gt) PREDICTS values that you can measure in the real world, that is, measurements. If your model doesn’t predict measurements, then don’t try and tell people that the values you obtain from the model are what will be measured in the real world. The models are nothing more than a virtual game at that point.

AlanJ
Reply to  Jim Gorman
December 21, 2024 9:26 am

The basis of the scientific method is testing theory against observation. The models are theory, the measurements are observation. It is not only appropriate to compare model output to measurements, it is necessary.

Reply to  AlanJ
December 21, 2024 10:32 am

It is not only appropriate to compare model output to measurements, it is necessary.

That makes the output an expectation of a physical measurement.

Please don’t try to say that means it isn’t a measurement. That’s metaphysical mumbo jumbo. IT IS the expectation of a measurement value.

Reply to  Jim Gorman
December 21, 2024 12:09 pm

Please don’t try to say that means it isn’t a measurement. That’s metaphysical mumbo jumbo.

Exactly.

AlanJ
Reply to  Jim Gorman
December 21, 2024 5:47 pm

It’s an expectation of a physical measurement in exactly the same way that v=gt is an expectation of a physical measurement. In both cases I can model something that I think will never be measured – I am applying theory. Theory that can be validated by observation. I’m through with the pointless quibbling over semantics here.

Reply to  AlanJ
December 21, 2024 5:56 pm

Nonsense.

Reply to  AlanJ
December 19, 2024 6:01 pm

The models are not tuned to produce forecasted trends, …

I never said that they were. I was responding to the quote from you about “historic … SAT trends” or hindcasting, and some of their potential faults.

Nor are model projections simple numerical extrapolations of historical trends, this is a common misconception around these parts.

I am not one who believes that forecasts are simply linear extrapolations of past weather. This is an exchange between the two of us and I’d appreciate it if you didn’t attempt to denigrate me by associating me with people saying things I haven’t said. That is the behavior of a lawyer — assassination by association.

Having said that, Pat Frank has convincingly demonstrated that the output of the models can be simulated effectively with a linear trend. Similarly, I have demonstrated that a linear extrapolation of James Hansen’s raw temperatures does a better job of predicting the future than the model he used in 1988, despite resorting to tricks like inserting a couple of hypothetical volcanic eruptions for the scenarios of reduced anthro’ CO2, while not treating the business-as-usual scenarios similarly. Chicanery of the type you support.

Reply to  Clyde Spencer
December 20, 2024 5:30 am

“Having said that, Pat Frank has convincingly demonstrated that the output of the models can be simulated effectively with a linear trend.”

It doesn’t matter how convoluted they make the calculations in the models, how many differential equations they use, or how many parameterized factors they add, if the outputs are linear and have historical temperatures as a training baseline then they *are* simple linear extrapolations of past historical trends.

The *real* bottom line is that climate science, including AlanJ as a defender, is unable to admit that they simply don’t know what the future holds. They don’t know the physics well enough. They don’t have enough computing power. They don’t even recognize the measurement uncertainties in the temperature data sets they use, let alone the iterative additions of the uncertainties in their calculations.

Climate scientists today more closely resemble the carnival hucksters saying they can tell your future than they do actual physical scientists.

AlanJ
Reply to  Clyde Spencer
December 20, 2024 6:04 am

Fair enough, I apologize for making the assumption.

Pat Frank has convincingly demonstrated that the output of the models can be simulated effectively with a linear trend.

That is simply because over the period of observation, projected forcing does not differ very much between the different scenarios or pathways, and the observed forcing has been mostly linear. This is expected to be the case until at least 2050:

[image: projected forcing for the different scenarios through ~2050]

That does not imply that such a linear model has predictive power.

Reply to  AlanJ
December 20, 2024 8:28 am

That is simply because over the period of observation, projected forcing does not differ very much between the different scenarios or pathways

What you mean is that the “forcing” from CO2 is baked in the programming to make CO2 the control knob.

Do you know what circular logic or tautologies are? It is making the assertion and conclusion the same. If CO2 is the control knob, then CO2 is the control knob!

AlanJ
Reply to  Jim Gorman
December 20, 2024 8:49 am

That is not what I mean and that is not what I said. The amount of warming indicated by the projections is a result of the net forcing, this is not baked in but arises from the model physics.

Reply to  AlanJ
December 20, 2024 9:15 am

The amount of warming indicated by the projections is a result of the net forcing, this is not baked in but arises from the model physics.

No one believes that. You are convinced that the models are not programmed such that CO2 is the primary force causing a rise in temperature. You are dedicated to “proving” to all here that is the case. Give it up.

The pauses in warming while CO2 continues to rise are a counterexample to that assumption. No one knows the TH volcano effect, not even modelers. That means the models are not complete and can not be trusted.

Everyone admits that climate models do not and can not properly account for cloud effects. That means the models are not complete and can not be trusted.

Do I need to go on?

AlanJ
Reply to  Jim Gorman
December 20, 2024 12:34 pm

You are conflating “CO2 is the primary forcing agent driving the modern warming trend” with “the models have been deliberately programmed to falsely indicate CO2 as the primary driving force behind the modern warming trend.” I can ultimately not persuade you of the former against the latter because it would require you to learn about how climate models work and actually dig into some of the code running the models, which no one here will ever do, so it produces an impasse of sustained ignorance.

Climate models will never be “complete” and will never offer us 100% certainty, that does not mean they cannot be useful and provide us with good information that increases our understanding of the climate system.

Reply to  AlanJ
December 20, 2024 1:54 pm

Again, two models that reproduce the same linear outputs from the same linear inputs are equivalent in predictive power.

It simply doesn’t matter how complex you make the models. They exist inside a black box with inputs and outputs. ANY algorithm can exist inside the black box, a simple one or a very complex one. If you can’t distinguish the outputs apart then it does not matter what is inside the black box.

Reply to  Tim Gorman
December 20, 2024 2:21 pm

His response is a direct indication that he has had no training in engineering. It is no wonder that models have a problem.

I suspect the term “transfer function” has no meaning in climate science. It is what a climate model should be made of. An impulse function is input, the output is measured, and the transfer function is then developed. No doubt a climate transfer function is complicated. However, I have never seen anyone supporting the GCMs mention the process, which makes you think it isn’t used at all.
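For readers unfamiliar with the term: for a linear time-invariant system, the recorded response to an impulse characterizes the system completely, and the output for any other input is the convolution of that input with the impulse response. The numbers below are invented, and a real climate system is of course neither linear nor time-invariant; this only illustrates the engineering concept being referenced.

```python
import numpy as np

# Invented impulse response (the system's "memory" of a unit kick).
impulse_response = np.array([0.5, 0.3, 0.15, 0.05])

def system_output(signal):
    """Output of the LTI system for an arbitrary input signal."""
    return np.convolve(signal, impulse_response)[: len(signal)]

step_input = np.ones(8)            # e.g. a sustained forcing step
print(system_output(step_input))   # ramps toward the steady-state gain of 1.0
```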

Reply to  Jim Gorman
December 20, 2024 2:38 pm

I doubt he’ll understand.

Reply to  AlanJ
December 21, 2024 6:06 am

You are conflating “CO2 is the primary forcing agent driving the modern warming trend”

I and others here are only following the lead of the IPCC which uses your precious models to validate that CO2 is an evil molecule and humans must stop emitting it. That means the IPCC believes the models are using CO2 as the control knob for temperature. If you have a beef with that, take it up with the IPCC.

As to convincing folks the models are correct in their outputs, that ain’t gonna happen until the models start to output predictions that match what occurs in the future. That hasn’t happened for several decades worth of model outputs so don’t expect people to believe your protestations otherwise.

Likewise, your lack of evidence is remarkable. Look at all the graphs and quoted articles in the essay Andy has provided. I would urge you to do the same to support your assertions if you want people to recognize you know what you are talking about.

Reply to  AlanJ
December 20, 2024 1:50 pm

“That is simply because over the period of observation, projected forcing does not differ very much between the different scenarios or pathways, and the observed forcing has been mostly linear.”
“That does not imply that such a linear model has predictive power.”

ROFL! The models produce a linear output because of linear inputs – meaning the models are nothing more than a simple linear factor applied to the linear input!

If the models take a linear input and produce a linear output AND have predictive power then *any* linear algorithm that reproduces the very same linear output as the models from the same linear inputs has the same predictive power as the models.

You are caught between a rock and a hard place. Do the models have predictive power or don’t they?
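The emulation claim itself is easy to illustrate with toy numbers: if a model’s output over some period is effectively a linear response to a roughly linear forcing, a one-parameter straight-line fit will reproduce it over that period. The sketch below invents both series and takes no side on whether such a fit keeps its skill once the forcing or the response turns nonlinear, which is the point in dispute.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1980, 2051)
forcing = 0.04 * (years - 1980)    # assumed roughly linear forcing, W/m^2
gcm_like = 0.8 * forcing + rng.normal(0, 0.1, years.size)  # invented "GCM" output, K

slope, intercept = np.polyfit(forcing, gcm_like, 1)        # the linear emulator
rmse = np.sqrt(np.mean((slope * forcing + intercept - gcm_like) ** 2))
print(f"emulator slope = {slope:.2f} K per W/m^2, RMSE vs 'GCM-like' series = {rmse:.3f} K")
```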

Reply to  Tim Gorman
December 20, 2024 2:38 pm

And I doubt he’ll understand this as well.

AlanJ
Reply to  Tim Gorman
December 20, 2024 3:00 pm

Yes, it goes without saying that a naive linear model will produce a good estimate if the climate response is linear. It will cease producing a good estimate as soon as the response is nonlinear. See how well a simple statistical linear model performs against the last century and a half of observations. The models capture the patterns of change, but I suspect the linear curve fit is going to struggle:

[image: model simulations versus observations over roughly the last 150 years]

I can see, for instance, Pinatubo expressed in the model runs. Will your line capture it? How will your line allow us to explain the dynamics driving the observed changes?

Reply to  AlanJ
December 20, 2024 4:57 pm

Where’s the marinara sauce?

Reply to  karlomonte
December 21, 2024 7:44 am

It’s just becoming more and more obvious that AlanJ is either a stupid AI robot or is using one to develop his posts.

All he keeps posting is word salads that aren’t even consistent internally!

Reply to  Tim Gorman
December 21, 2024 12:11 pm

Yeah, and it explains why/how he keeps going and going and going — just like a chat bot.

Reply to  AlanJ
December 21, 2024 7:42 am

“Yes, it goes without saying that a naive linear model will produce a good estimate if the climate response is linear. It will cease producing a good estimate as soon as the response is nonlinear.”

This means that, since the climate models produce a linear output from a linear input, the response in that black box is a linear algorithm. All that is needed is to reproduce that linear algorithm using a simple linear functional relationship – as PF did.

If the response becomes non-linear then the model inside the black box has to be changed as well in order to match. That means that all that is required is to develop a simple non-linear functional relationship that reproduces the new non-linear black box output.

You are either an AI robot untrained in physical science and math, or you are using one to develop your responses. Whichever it is, all you are doing is posting word salads that are not internally consistent.

AlanJ
Reply to  Tim Gorman
December 21, 2024 9:28 am

This means that since the climate models produce a linear output from a linear input that the response in that black box is a linear algorithm.

That is unequivocally incorrect and is logically incoherent. The models produce nonlinear output when the forcing is nonlinear. Your linear model will always produce linear output. This is one you really need to sit and mull over a bit more carefully, being this far off the mark is a bit indefensible.

Reply to  AlanJ
December 21, 2024 10:02 am

That is unequivocally incorrect and is logically incoherent.

One of your previous posts showed a time series of time vs temperature as an example of a nonlinear input. What a joke. Temperature is supposedly driven by CO2, not time.

Does the transfer function of a GCM include time as one of the variables that determine temperature? I don’t recall anyone ever declaring time as an initial condition or an input variable.

Reply to  AlanJ
December 21, 2024 12:57 pm

You are doing nothing here but denying that Pat Frank reproduced the model outputs using a linear functional relationship between the input and the output.

The models do *NOT* produce a non-linear output. Anyone that doesn’t have a vision problem can see that from the graph of the outputs. You see some variation close in to the present, but over the long term the output becomes perfectly linear. If it didn’t, Pat’s linear algorithm would not have reproduced the model output!

Reply to  AlanJ
December 18, 2024 9:07 pm

Part of the problem is that you are reinforcing a bad habit of climatologists and modelers of ignoring the uncertainty envelope and not even stating whether a 68% or 95% probability is in play.

AlanJ
Reply to  Clyde Spencer
December 19, 2024 6:06 am

The model ensembles are usually presented either as spaghetti graphs showing individual model runs or as a 95% envelope. But caution should be exercised in treating this envelope as a true probabilistic uncertainty range.

Reply to  AlanJ
December 19, 2024 7:43 am

“… showing individual model runs or as a 95% envelope. But caution should be exercised in treating this envelope as a true probabilistic uncertainty range.”

It’s bullshit and you know it is.

No technical basis whatsoever for doing this.

Reply to  AlanJ
December 19, 2024 8:07 am

But caution should be exercised in treating this envelope as a true probabilistic uncertainty range.

Caution? Caution? Model outputs are NOT MEASUREMENTS. Multiple outputs CAN NOT be evaluated as measurements of the same thing under repeatable or reproducibility conditions. Can you identify systematic uncertainty in a computer model? Tell us how!

Uncertainty in the output is accumulated throughout the computations in a run not by comparing individual runs. For one thing, the runs are very correlated and there is no good way to measure the correlation effects of program variations on the output.

Reply to  Jim Gorman
December 19, 2024 8:11 am

He either doesn’t understand, or refuses to understand because it conflicts with the gaslighting he pushes.

AlanJ
Reply to  Jim Gorman
December 19, 2024 9:33 am

Yes, caution. Models are sometimes treated as ensembles of opportunity, and modelers have cautioned against relying on “model democracy” as a best estimate, even though in practice it is often done with little difference (in CMIP5, for instance, the spread of model results fell within the empirical estimates of sensitivity). But the CMIP6 results show quite clearly why this can be a fraught enterprise. The Hausfather et al. paper cited elsewhere in this thread goes into excellent detail on this.

There isn’t a way to express the uncertainty of a single model run as far as I’m aware, because uncertainty arises from forcings, structural uncertainty in the model itself, and the nonlinear nature of the thing being modeled.

Reply to  AlanJ
December 19, 2024 9:38 am

Insanely expensive exercises in invalid extrapolation.

fraught enterprise

There should be a clue for you here.

Reply to  AlanJ
December 19, 2024 7:00 pm

Let me see if I can re-phrase that for the readers:
You don’t weight the model runs for accuracy, but assume (without justification) that all are equally valid; there is no way to determine the uncertainty of individual model runs (other than making a subjective decision to throw away some runs that ‘appear’ anomalous); therefore, you really don’t have a handle on the validity or precision of the addends used to determine an average.

AlanJ
Reply to  Clyde Spencer
December 20, 2024 5:22 am

What is being recommended is precisely to screen the models for accuracy, e.g. against empirical constraints on ECS as the IPCC does. This is not a subjective or arbitrary process but is grounded in our understanding of climate dynamics. The “anomalous” runs are also not discarded – first because scientists can learn from these models and make improvements, and second because a high sensitivity doesn’t make the model unfit for many purposes – particularly when the subject of interest isn’t the timing of changes, but simply the atemporal climate response to a given forcing.

The broadest takeaway is simply “don’t rely on the multi-model ensemble mean unless you have strong justification for doing so.”

Reply to  AlanJ
December 20, 2024 6:13 am

“What is being recommended is precisely to screen the models for accuracy, e.g. against empirical constraints on ECS as the IPCC does.”

If you can’t directly measure ECS then how do you get empirical constraints?

“The broadest takeaway is simply “don’t rely on the multi-model ensemble mean unless you have strong justification for doing so.””

Then why does climate science rely on the multi-model ensemble instead of focusing on the Russian models which appear to be more in line with reality?

Reply to  Tim Gorman
December 20, 2024 7:20 am

Climate jockeys will never understand the futility of true values.

AlanJ
Reply to  Tim Gorman
December 20, 2024 8:29 am

We can directly observe ECS by studying paleoclimate archives and by monitoring the climate system’s ongoing forced response.

Then why does climate science rely on the multi-model ensemble instead of focusing on the Russian models which appear to be more in line with reality?

This is a rather complicated question to answer straightforwardly. Model supremacy is not better than model democracy, so scientists absolutely should not throw away every model except a single one, that would be stupid. Different models are built for different purposes and have different strengths and weaknesses, and only by studying numerous models together can we improve our ability to model the climate. That’d be like saying that the Koenigsegg Jesko Absolut is the fastest car on earth, so why bother ever building different cars?

To the broad question of why ensembles are used, there are many reasons. Often because there is some desire to understand the structural differences in the models reflected in their outputs, or to obtain multiple projections of potential outcomes without assuming we know which model is “best” for the intended purpose; often it is a simple matter of convenience (and sometimes even an ill-informed one). Climate modelers are always urging researchers to be careful in treating ensembles as PDFs with consistent statistical properties, but this was not so blatantly obvious before CMIP6, and very often didn’t matter much (but now it matters a lot).

Reply to  AlanJ
December 20, 2024 9:29 am

If you can’t measure it and have to calculate it using theoretical relationships then it is NOT an empirical value but, instead, a theoretical one. If your theory is wrong then so is your calculated value.

The operative word in “climate model” is climate. If the models are built for a different purpose then they shouldn’t be averaged together.

An ensemble average tells you nothing about the structural differences in the model. The average should be accompanied by a standard deviation to flag the variance in outputs. The larger the variance the less the average can be trusted.

Your analogy is fatally flawed. Different engines are built for different purposes. Climate models are built for the same purpose – projecting the future climate. You don’t average the horsepower output of an IH-1206 tractor with the horsepower of an LS-6 powered car to get an “average” horsepower; they are built for different purposes.

If you are comparing Top Fuel dragsters then you *do* want to find the best and emulate it, not the worst or even the median.

AlanJ
Reply to  Tim Gorman
December 20, 2024 12:25 pm

If it is based on empirical observation and validated against empirical observation, it is an empirical value; it’s not clear to me what else you would call it. Theoretical values are derived from pure theory, as you say, not drawn from observation. If you want to quibble semantics, be my guest. We can constrain ECS using theory, to be sure, but we can also constrain it purely observationally.

An ensemble average tells you nothing about the structural differences in the model. 

I’ve not said anything contradictory to this, and typically the multi-model mean is provided with the model spread as an indication of variance.

You perhaps did not mean to confirm the usefulness of my analogy, but have done so quite handily. Just as different engines are optimized for different purposes, so are climate models. “Projecting future climate” is the ultimate goal in the same way that “getting from A to B” is the goal of every automobile.

Reply to  AlanJ
December 20, 2024 2:14 pm

No, the factors you can observe are empirical. The factors that you calculate based on a theoretical functional relationship using those empirical factors are *NOT*, in and of themselves, empirical. They are theoretical calculations. They are not observed empirical factors.

“Theoretical values are derived from pure theory, as you say, not drawn from observation.”

No, theoretical values can be calculated using a theoretical functional relationship of empirical factors. Pure theory does not have to be involved at all in giving the factors their values. It is the functional relationship that is the theory.

I can measure the air velocity in the throat of a carburetor and use that along with a guess at a theoretical functional relationship between that value and the air-fuel mixture at the inlet valve of the cylinder. That does *NOT* make that air-fuel mixture value an empirical value – it is a theoretical value calculated from the theoretical functional relationship I guessed at. If that theoretical functional relationship is wrong then my air-fuel mixture value will be wrong. Only when I can measure the air-fuel mixture value does it become empirical – either that or I must somehow prove my theoretical functional relationship is accurate. That’s hard to do if you can’t actually measure the air-fuel mixture.

“I’ve not said anything contradictory to this.”

Of course you did. AlanJ: “To the broad question of why ensembles are used, there are many reasons. Often because there is some desire to understand the structural differences in the models reflected in their outputs,”

So which is it? Can the ensembles tell you about the structural differences in the models or can they not?

“typically the multi-model mean is provided with the model spread as an indication of variance.”

If the models produce different results from run to run, then they have an in-built variance; each is a random variable with an average and a variance. When you add those random variables to calculate an average you must also add their variances. Do you understand why that is? (A short sketch follows below.) The variance is *not* the spread of the single output value from each model, which is presumably an average for each model, but the variance resulting from adding all the individual variances.

It’s becoming more and more plain that AlanJ is an AI robot that can do nothing but generate fallacious arguments and incoherent physics assertions!
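The variance-addition point sketched above can be checked numerically. For independent random variables, the variance of their mean is the sum of the individual variances divided by n squared; the per-model variances below are invented, and real model runs are, as noted elsewhere in the thread, correlated, which this sketch deliberately ignores.

```python
import numpy as np

rng = np.random.default_rng(3)

# Treat each "model" as an independent random variable with its own
# variance. Then Var(mean) = sum(Var_i) / n^2: the variances add before
# the 1/n^2 scaling; they do not simply vanish in the average.
variances = np.array([0.04, 0.09, 0.25, 0.16])  # invented per-model variances
n = variances.size

analytic = variances.sum() / n**2
draws = np.stack([rng.normal(0.0, np.sqrt(v), 200_000) for v in variances])
empirical = draws.mean(axis=0).var()
print(f"analytic Var(mean) = {analytic:.4f}, Monte Carlo = {empirical:.4f}")
```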

Reply to  Tim Gorman
December 20, 2024 3:17 pm

I’m sure many of these responses are from an AI. They are word salads.

AlanJ
Reply to  Tim Gorman
December 20, 2024 3:18 pm

This is, again, quibbling over semantics. If we measure the amount of CO2 trapped in air bubbles in ice sheets, and we observe the amount of temperature change recorded in paleoclimate archives, we can positively say that “temperature change did not likely exceed x degrees when CO2 concentration doubled from a to b.” Thereby empirically constraining ECS.

So which is it? Can the ensembles tell you about the structural differences in the models or can they not?

You are conflating two distinct concepts: the ensemble, which comprises the entire suite of individual model runs, and the multi-model mean, which is the average of those runs. My original comment referred to the ensemble as a whole, which can provide insights into structural differences between models based on the differences in the model outputs. The multi-model mean, on the other hand, is a statistical summary that smooths over those differences to highlight common features.
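The paleo-constraint arithmetic described a few paragraphs up (ice-core CO2 plus a recorded temperature change) can be written out explicitly. The sketch uses the Myhre et al. (1998) logarithmic forcing approximation; the glacial-to-interglacial numbers are round invented values, and a real estimate would also have to apportion the temperature change among CO2 and the other forcings (ice sheets, dust, etc.).

```python
import numpy as np

def ecs_from_paleo(dT, c1, c2):
    """Implied ECS (K per CO2 doubling) from a temperature change dT
    attributed to a CO2 change from c1 to c2 ppm. Illustration only."""
    f = 5.35 * np.log(c2 / c1)   # forcing implied by the CO2 change, W/m^2
    f_2x = 5.35 * np.log(2.0)    # forcing for one doubling, ~3.7 W/m^2
    return dT * f_2x / f

# Invented round numbers: 2.5 K attributed to CO2 rising 190 -> 280 ppm.
print(f"implied ECS ~ {ecs_from_paleo(2.5, 190.0, 280.0):.1f} K per doubling")
```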

Reply to  AlanJ
December 20, 2024 4:59 pm

A distinction without a difference.

Averaging garbage you get ___________.

Reply to  AlanJ
December 20, 2024 5:37 pm

The multi-model mean, on the other hand, is a statistical summary that smooths over those differences to highlight common features.

🤡

“smooth over differences” – That’s a good example of throwing variance in the trash can because you don’t want to tell folks what it is. So find a way to hide it.

Carroll D. Wright was a prominent statistician employed by the U.S. government, and he did use the expression in 1889

and said:

“The old saying is that ‘figures will not lie,’ but a new saying is ‘liars will figure.’ It is our duty, as practical statisticians, to prevent the liar from figuring; in other words, to prevent him from perverting the truth, in the interest of some theory he wishes to establish.”

From: https://quoteinvestigator.com/2010/11/15/liars-figure/?amp=1

Reply to  AlanJ
December 21, 2024 8:10 am

“Thereby empirically constraining ECS.”

Constraining ECS is *NOT* measuring ECS. It is not even a hint as to what ECS is. All you know is that it is less than some value. What the true value is remains hidden.

“My original comment referred to the ensemble as a whole, which can provide insights into structural differences between models based on the differences in the model outputs.”

You just keep on confirming that your knowledge of statistical analysis is sadly lacking. The variance of the averages of the different models, your multi-model ensemble, tells you nothing except how close you are to the average of the multi-model ensemble. In essence it is a metric for sampling error. That can’t tell you *anything* about the *structural* differences between models.

It’s much like trying to take the drag strip times of ten Ford Mustangs with different builds and setups, average them, and then say that you can tell the structural differences between the Mustangs from the variance in their times. It’s physically impossible. Same with your climate models.

Go away AI robot! You keep on making less and less sense each time you post!

AlanJ
Reply to  Tim Gorman
December 21, 2024 9:21 am

It’s much like trying to take the drag strip times of ten Ford Mustangs with different builds and setups, average them, and then say that you can tell the structural differences between the Mustangs from the variance in their times. It’s physically impossible. Same with your climate models.

You need to read my comments more carefully before replying. The ensemble is a suite of individual model runs. To adapt your analogy, studying the ensemble would be like taking the ten Mustang runs and comparing them all together to see how they differ. You can of course take the average of the ensemble, but this will only help you understand their commonalities – by design the average minimizes disparate features.

I think that my comments are quite clear and consistent, but if you are finding them hard to understand, asking for clarification would be more productive than the combative tone you’ve adopted. Instead of assuming ipso facto that you disagree with every single word I write, look for common ground and build from there to explore the differences. Half the time you’re saying the same thing I am.

Reply to  AlanJ
December 21, 2024 12:15 pm

You can of course take the average of the ensemble, but this will only help you understand their commonalities – by design the average minimizes disparate features.

More of your usual bullshit. Averaging and anomalies throw away variances (“disparate features”) and forget they ever existed.

Reply to  AlanJ
December 21, 2024 12:31 pm

“The ensemble is a suite of individual model runs.”

It doesn’t matter. The outputs of all the runs made by each model, when compared, can *NOT* tell you anything about the structural difference between the models.

Neither can the average of the means of the model runs.

I could run each of those ten Ford Mustangs down the track 10 times and record their times. Comparing the runs of Mustang_1 with the runs of Mustang_2 will not tell me anything about the STRUCTURAL differences between the two cars. I can average each of the ten runs for each Mustang and then calculate an average from those averages. It *still* won’t tell me anything about the structural differences between the cars.

The exact same thing applies to the climate models.

The proof is that you don’t believe Pat Frank can duplicate your black box with his black box, which gives exactly the same output as yours. The truth is that he can. And unless he tells you what is inside his black box you won’t know a single thing about the structural differences with what is inside your black box.

Instead of whining about me trying to correct your misunderstandings of physical science, why don’t you try to learn when someone corrects you? It’s obvious that you have no idea what a black box and a transfer equation are. The transfer equation defines the relationship between the input to a black box and the output of the black box. There are *always* multiple ways to implement that transfer equation – which you apparently don’t believe and which you will not accept as truth.

When you post something accurate and true I’ll be the first to agree with you. I haven’t seen anything from you that meets those two requirements.

AlanJ
Reply to  Tim Gorman
December 21, 2024 5:41 pm

Pat Frank’s black box will not produce anything except a linear response to ghg forcing. It cannot account for volcanic activity, changes in solar output, internal variability, feedbacks, etc., unless the net of those drivers happens to yield a linear temperature change. His black box can also do nothing more than extrapolate linear temperature trends, while GCMs model numerous aspects of the climate system, such as wind patterns and ocean currents. It’s frankly silly that we are arguing this.

Reply to  AlanJ
December 21, 2024 5:57 pm

And neither do the GC computer games, you silly bot.

Reply to  AlanJ
December 21, 2024 6:52 pm

Pat Frank’s black box will not produce anything except a linear response to ghg forcing.

Just like GCMs, isn’t that funny? You just keep throwing out red herrings. You are winning nothing.

AlanJ
Reply to  Jim Gorman
December 22, 2024 6:22 am

GCMs produce feedbacks, response to non-GHG forcing, response to internal variability, etc. while simulating numerous aspects of the climate system.

There is no winning because there was no contest. There is a basic and simple fact, which I am relaying, and your ad nauseam denial of it.

Reply to  AlanJ
December 20, 2024 5:33 am

“There isn’t a way to express the uncertainty of a single model run as far as I’m aware”

Of course there is a way. But you have to abandon the meme that all measurement uncertainty is random, Gaussian, and cancels. Pat Frank has shown the way!

Reply to  Jim Gorman
December 19, 2024 6:51 pm

A point I should have made above in my December 19, 2024 6:44 pm comment is that while the spaghetti graphs show a time-series with predictions for the various models, each and every one of them has a different uncertainty that probably isn’t even known. So, applying the Empirical Rule or Tschebysheff’s Theorem will provide an estimate of the ensemble uncertainty envelope that is low.

Reply to  AlanJ
December 19, 2024 6:44 pm

The Empirical Rule suggests that the standard deviation will be approximately range/4 or range/6, depending on how conservative one wants to be with the estimate.

Tschebysheff’s Theorem allows one to work with skewed distributions, but one is restricted to a range of probabilities instead of a discrete value.
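Both estimates are one-liners. With an invented ensemble spread, the Empirical Rule gives a quick standard-deviation estimate (assuming rough normality), while Chebyshev’s inequality gives a bound that holds for any distribution, skewed or not.

```python
import numpy as np

ensemble = np.array([2.1, 2.6, 2.9, 3.2, 3.7, 4.1, 4.8, 5.6])  # invented values

width = ensemble.max() - ensemble.min()
sigma_est = width / 4   # Empirical Rule: range ~ 4*sigma (or ~6*sigma, 99.7% convention)

k = 2.0
chebyshev = 1.0 - 1.0 / k**2   # at least 75% of any distribution lies within 2 sigma
print(f"range = {width:.2f}, sigma estimate = {sigma_est:.2f}")
print(f"Chebyshev: at least {chebyshev:.0%} of values within {k:.0f} sigma of the mean")
```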

MarkW
Reply to  karlomonte
December 17, 2024 5:48 pm

Only in climate science can you get the right answer by averaging together a bunch of wrong answers.

Reply to  MarkW
December 17, 2024 10:14 pm

I.e. it is a liberal art.

Reply to  karlomonte
December 17, 2024 11:28 pm

Great comment ! 🙂

Reply to  AlanJ
December 18, 2024 8:42 pm

One of the first non-insignificant System Dynamics models I wrote, using BASIC, was during the Arab Oil Embargo. I was trying to use my Atari 800 to determine the shape of the gas-mileage to speed curve for my ’65 Corvette. Things went quite smoothly with the factory data I had on torque, horsepower, and road speed versus RPM, and the coefficients of friction for air and rolling resistance versus speed. However, I hit a wall when I got to modeling the carburetion. I thought I understood how a carburetor worked. I spent time talking with a good friend who had previously raced a ’57 Corvette and had a double-major BS in mathematics and physics. I eventually worked it out, and using a waterfall-display, and a game controller to mimic the gas pedal, had a model that very closely simulated the actual 0-60 MPH acceleration, top-end speed, coast-down time, and actual road gas-mileage. I didn’t have to use any tricks to correct for anomalous behavior under special conditions. I felt comfortable with the shape of the curve, even if the absolute gas mileage was not right on.

The point of what I learned first hand is that there are a lot of things that can be tweaked, even in a ‘simple’ deterministic computer model, to make it behave as one thinks it should behave, which is not necessarily reality. By looking at several different parts of a complete model — acceleration, top speed, coasting time, and mileage — the model is more likely to show problems with the design than if it is blithely written to have only certain parts of it calculated as outputs and to have the design based primarily on assumptions about how everything interacts.

The fact that cloud energy relations are parameterized, and the GCMs frequently give non-physical outputs that have to be clipped before the next iteration, suggests to me that there are ‘carburetors’ in the GCMs that aren’t modeled properly, and instead of fixing the design, the modelers use a brute force method of keeping the outputs physically realistic. They should have asked themselves, “Why did that calculation go out of bounds?” In summary, I don’t think that the behavior of GCMs is what is expected of a well-designed computer model, and the solutions used to compensate for design flaws don’t give me confidence in them. The fact that ensembles with and without CO2 give the ‘expected’ results could well be the result of the initial assumption that CO2 causes warming and it got ‘baked’ into the code.
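For the curious, the deterministic core of a road-load model like the one described above is compact: at steady speed, engine power balances aerodynamic drag plus rolling resistance, and fuel use follows from an assumed engine efficiency. All coefficients below are invented round numbers, not the ’65 Corvette’s.

```python
import numpy as np

rho, cd, area = 1.2, 0.45, 1.8       # air density kg/m^3, drag coeff, frontal area m^2
crr, mass, g = 0.015, 1460.0, 9.81   # rolling-resistance coeff, mass kg, gravity m/s^2
energy_per_l = 9.7e6                 # usable J per litre, assuming ~28% engine efficiency

v = np.arange(10.0, 41.0, 10.0)      # steady speeds, m/s (~22 to ~90 mph)
road_load = 0.5 * rho * cd * area * v**2 + crr * mass * g   # N (= J per metre)
litres_per_100km = road_load / energy_per_l * 1e5
for vi, lp in zip(v, litres_per_100km):
    print(f"{vi * 2.237:5.0f} mph -> {lp:5.1f} L/100 km")
```

The carburetion part that gave Clyde trouble is, tellingly, the part this sketch leaves out.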

Reply to  Clyde Spencer
December 19, 2024 2:56 am

Carburetors are much like the atmosphere. The air-fuel mixture is highly dependent on humidity, pressure, and temperature. Those can be different from one end of a 1/4 mile drag strip to the other! You can “parameterize” the variables by assuming fixed values for the factors, but doing so means you’ll never get the right answer for even the average value; it can be hugely different for a run in Las Vegas vs Miami vs Chicago.

AlanJ
Reply to  Clyde Spencer
December 19, 2024 6:09 am

I genuinely enjoyed reading your anecdote, thanks for sharing. It sounds like a fun exercise, if frustrating.

They should have asked themselves, “Why did that calculation go out of bounds?”

Modelers spend thousands of hours examining why these things happen in models and working to improve them. That is why we have new iterations of models and new CMIP experiments.

Reply to  AlanJ
December 19, 2024 7:57 am

So why, after spending billions on them, do the “models” get worse with time?

AlanJ
Reply to  karlomonte
December 19, 2024 8:56 am

The models are not getting worse, they are getting better, and decidedly so. Biases in Southern Ocean SSTs, sea ice extent, rainfall, modes of internal variability all show significantly increased skill compared to the previous generation. The greater spread in sensitivity is just one measure of model performance, and in most cases is actually the result of the models improving an element of negative cloud bias that previously was offsetting a mode of positive cloud bias – so an improvement has revealed an existing deficiency that scientists can work on.

Reply to  AlanJ
December 19, 2024 9:39 am

The models are not getting worse, they are getting better, and decidedly so.

Bullshit, and you know it.

Reply to  AlanJ
December 19, 2024 7:12 pm

What comes to mind is the Heisenberg Uncertainty Principle, where, in trying to improve the determination of a particle’s momentum, one introduces greater uncertainty in its position.

Temperature of the atmosphere and oceans is probably the most important prediction from the GCMs because temperature controls melting of ice and sea level, and impacts the survivability of life. I’m getting the impression that you work in the field, so be careful that you don’t trade off accuracy in the most important parameter for increased accuracy in a host of less important parameters.

AlanJ
Reply to  Clyde Spencer
December 20, 2024 5:27 am

“The most important” is a subjective measure – it depends on what you’re studying and trying to understand. What we want overall is that models skillfully represent the entire system holistically. You should never avoid improving one aspect of a model because you’re worried it might reveal the need for more work on another aspect. That would be like avoiding getting your car tires changed because you’re afraid the mechanic is going to see that your brake pads need to be replaced.

Reply to  AlanJ
December 20, 2024 6:06 am

What we want overall is that models skillfully represent the entire system holistically. “

What a joke! Freeman Dyson pointed out years ago that the climate models are *NOT* holistic in any way. Climate is so much more than temperature. The models did *not* predict the growth of food output we have seen over the past twenty years. The models did *not* predict the greening of the earth we’ve seen over the past twenty years. The models did not predict the paucity of hurricanes and tornadoes we’ve seen over the past twenty years. The models did not predict the stagnation of maximum temperatures we’ve seen over the past twenty years.

Holistic? ROFL!!

Reply to  AlanJ
December 20, 2024 7:22 am

What we want overall is that models skillfully represent the entire system holistically.

This is meaningless word salad.

That would be like avoiding getting your car tires changed because you’re afraid the mechanic is going to see that your brake pads need to be replaced.

Another bizarre analogy.

AlanJ
Reply to  karlomonte
December 20, 2024 8:45 am

Karlomonte, please try to be more substantive in your replies to me. You clutter these threads with a lot of petty insults that detract from the quality of discourse for everyone, and I cannot imagine it is a fulfilling way for you to spend your time. I generally ignore you for this reason, but it does not need to be that way, if you are actually interested in joining in the debate.

Reply to  AlanJ
December 20, 2024 9:00 am

please try to be more substantive in your replies to me.

Take your own advice and start providing evidence and support for your assertions.

Here is one of your assertions with no support.

The models are not getting worse, they are getting better, and decidedly so.

I even gave you an article where Hausfather had agreed the models’ forecasts were running too hot. You didn’t even bother to refute it.

That is troll behavior, not legitimate argumentative intent.

Reply to  AlanJ
December 20, 2024 10:40 am

Stop whining.

Reply to  AlanJ
December 19, 2024 7:03 pm

That is why we have new iterations of models and new CMIP experiments.

Which seem to be getting worse over time instead of better. That should be telling the modelers that they may be doing something wrong.

Robert Cutler
Reply to  Clyde Spencer
December 19, 2024 7:14 am

To leverage your analogy, the GCMs neglected to include accelerator position and used radio volume instead because, well, speeds are higher when the radio volume is louder.

As I posted earlier, solar activity is the major climate driver. If it’s not properly accounted for in the models, then the models won’t perform well and they’ll always be looking for another anthropogenic forcing to explain the failures. If my models are correct, we’re in for at least a decade of slight cooling, a trend which started in 2016.

Previously I showed the GISP2 Greenland ice core temperature spectrum; I didn’t point out the 60-year influence because it’s not very well resolved in that data. However, the 61-year and 20-year cycles (Jupiter-Saturn beats) are quite obvious in global temperatures, as is the lack of sensitivity to the 11-year Schwabe cycle. For periods shorter than 20 years, the sun still plays a role, though greatly diminished.

[image]

As for radio volume controlling speed, the measured data shows that CO2 concentrations lag temperature. This can be seen in the frequency response computed between CO2 concentrations and temperature. The negative phase slope indicates that CO2 lags temperature by six months over most periods. The exceptions are an unknown process at 0.75 yr^-1 and the seasonal process at 1 yr^-1. The seasonal delay is 0.13 year. The second marker, labeled 13.8 dB (4.9), is at 1/10 yr^-1. For 10-year periods the delay is six months and the sensitivity is 4.9 ppm/°C.

[image]
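As a hedged illustration of the phase-slope idea (synthetic data, not the CO2/temperature series plotted above): a constant lag shows up in the cross-spectrum as a phase that falls linearly with frequency, and the slope recovers the lag.

```python
import numpy as np

fs = 12.0                        # samples per year (monthly data)
t = np.arange(0, 100, 1 / fs)    # 100 years of time stamps
lag = 0.5                        # years: make "CO2" lag "temperature" by 6 months
rng = np.random.default_rng(0)
temp = rng.standard_normal(t.size)
co2 = np.roll(temp, int(lag * fs))          # delayed copy of the driver

# cross-spectrum phase between the two series
cross = np.fft.rfft(co2) * np.conj(np.fft.rfft(temp))
freq = np.fft.rfftfreq(t.size, 1 / fs)      # cycles per year
phase = np.unwrap(np.angle(cross))

# for a pure delay, phase(f) = -2*pi*f*lag, so the slope recovers the lag
keep = (freq > 0) & (freq < 1.0)
slope = np.polyfit(freq[keep], phase[keep], 1)[0]
print("recovered lag:", -slope / (2 * np.pi), "years")   # ~0.5
```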

I wrote my first spectral analysis program to run on a TI-99/4A. It had a 16-bit processor and took 10 minutes to do a 256-point FFT. The 8-bit Commodore computers were faster.

Reply to  Robert Cutler
December 19, 2024 7:33 pm

… well, speeds are higher when the radio volume is louder.

That is why when I bought my first Corvette at age 21, I purposely ordered it without a radio. 🙂

Reply to  Robert Cutler
December 19, 2024 8:13 pm

The negative phase slope indicates that CO2 lags temperature by six months over most periods.

I suspect that what you are seeing is seasonal changes of biogenic CO2.

The seasonal delay is 0.13 year.

The seasonal variation changes with latitude, being minimal in Antarctica and maximal in Barrow, and 6 months out of phase. However, there are two different major biogenic processes going on: 1) a continuous production of CO2 from bacteria and fungi all year long, reaching a peak in the northern hemisphere Summer, and 2) a rapid draw-down during the local Summer, resulting in a net decline over the Summer. Because the net ramp-up phase appears to last from about September (in the northern hemisphere) to May, while the stronger draw-down phase lasts from about late April to late September or early October, the durations and flux rates are different. I suspect that the peak in May indicates a shorter lag time than 0.13 year. Although, without a clear definition of “lag”, and without integrating the CO2 flux over the mid-latitudes, these are only approximations.

https://wattsupwiththat.com/2021/06/11/contribution-of-anthropogenic-co2-emissions-to-changes-in-atmospheric-concentrations/

Reply to  Clyde Spencer
December 20, 2024 6:09 am

I believe even the paleo proxies show CO2 lagging temp. Those proxies don’t seem to have the granularity to distinguish seasonal changes from longer term results.

Robert Cutler
Reply to  Clyde Spencer
December 20, 2024 8:17 am

Thanks, Clyde. My analysis of CO2 causality was done in response to a posting on Judith Curry’s site. While I liked the result, I wasn’t convinced by the analysis, mainly due to the combined use of a 1-year moving average and a difference function. This combination results in a filter with a peak amplitude response at a 0.5-year period. I actually spent quite a bit of time trying to prove the result wrong. I compared two different CO2 datasets to different temperature datasets and to ENSO/SOI. For example, here’s Mauna Loa CO2 concentrations compared to Southern Hemisphere temperature. Note that the annual response is phase wrapped, so the delay is much greater than six months here.

[image]

I also looked at the data in the time domain in an attempt to see if there was something interesting going on for periods longer than 10 years. This required both detrending and deseasonalizing the data. Here are two results where the temperature is linearly detrended in both, and the [CO2] is detrended with either a ln() or a 2nd-order polynomial. As can be seen in the plots, the polynomial is a better fit, suggesting that [CO2] is integrally related to temperature rather than temperature being logarithmically related to [CO2].

[image]

[image]
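One way to see why an integral relation would favor the polynomial detrend, sketched with synthetic series rather than the actual datasets: if [CO2] is the running integral of a roughly linear temperature trend, its shape is close to quadratic, so a 2nd-order polynomial fits it far better than a logarithm.

```python
import numpy as np

t = np.arange(1960.0, 2025.0, 1 / 12)                 # monthly time axis
temp = 0.015 * (t - 1960) + 0.1 * np.sin(2 * np.pi * (t - 1960) / 65)
co2 = 315.0 + 2.0 * np.cumsum(temp) / 12              # CO2 as the integral of temp

poly = np.polyval(np.polyfit(t, co2, 2), t)           # 2nd-order polynomial detrend
a, b = np.polyfit(np.log(t - 1900.0), co2, 1)         # ln() detrend: co2 ~ a*ln + b
logfit = a * np.log(t - 1900.0) + b

print("poly RMS residual:", np.sqrt(np.mean((co2 - poly) ** 2)))
print("ln   RMS residual:", np.sqrt(np.mean((co2 - logfit) ** 2)))
```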

Because of these results I never developed an interest in carbon sources, sinks, mass balance, isotope ratios and all of the other noise that distracts from the basic results showing CO2 concentrations lag temperature.

Reply to  Robert Cutler
December 20, 2024 10:55 am

Robert,

Nice work. Too bad some of the CAGW folks posting can’t publish the same kind of analysis supporting their assertions. Most think models supply data.

Congratulations!

December 17, 2024 10:25 am

Nice discussion; if Willis is reading, I would be interested in his take on this analysis.

abolition man
Reply to  Jeff L
December 17, 2024 1:36 pm

And also, Willis, have you done a full post on your 2020 scattershot graph on benthic foram CO2 vs. temps? I would be interested in any info on that!

Reply to  Jeff L
December 18, 2024 5:49 am

But why would you want an opinion about radiation from an ignorant hypocritical lying charlatan who doesn’t know what the word “radiation” means? Maybe you should ask a physicist instead…

Rud Istvan
December 17, 2024 10:33 am

While I agree that current climate models have multiple fatal flaws, I do not think reconceptualizing them will help. This is for a very basic mathematical reason.

Computational constraints imposed by the CFL condition (for numeric solutions to partial differential equations) make models at meaningful grid scales (e.g., 4 km, fine enough to resolve thunderstorm convection cells) computationally intractable by 6-7 orders of magnitude. As a rough UCAR rule of thumb derived from the CFL condition, halving the grid size increases the computational burden by about 10x. Not something a faster supercomputer can solve this century.
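The arithmetic behind “6-7 orders of magnitude”, using the quoted rule of thumb (the 250 km starting grid is an assumption for illustration):

```python
import math

current_km = 250.0   # assumed typical CMIP-class atmospheric grid spacing
target_km  = 4.0     # scale needed to resolve thunderstorm convection cells
halvings = math.log2(current_km / target_km)   # ~6 halvings of the grid
cost_factor = 10.0 ** halvings                 # ~10x cost per halving
print(f"{halvings:.1f} halvings -> ~{cost_factor:.1e}x the compute")
```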

So all models have to be parameterized. The CMIP first-run submission requirement is a 30-year hindcast. The parameters are tuned to best match that hindcast. (And the actual ‘best hindcast’ temperature divergence of about ±3 °C between tuned models is hidden from the public by expressing the hindcasts as anomalies.) BUT we know that some portion of past warming is from natural variation (as Andy shows), which the models assume away. (CO2 control knob.)

No model reconceptualization solves this inherent attribution problem.

As a footnote, the only CMIP6 model that does not produce a spurious tropical troposphere hotspot is INM CM5. The result is so important that the Russians wrote a ~40-page paper showing their result and explaining why. Its ocean rainfall was parameterized using ARGO salinity data, so its tropical water vapor feedback is about half that of the rest of CMIP6. Plus, INM CM5 produces an ECS of about 1.8 °C (the lowest in CMIP6), close to observational EBM estimates of ECS ≈ 1.7 °C.

Sparta Nova 4
Reply to  Andy May
December 17, 2024 12:37 pm

Originally the charter was to understand the climate. That changed to determining the effects of CO2 on the climate.

Reply to  Andy May
December 18, 2024 3:08 am

This is what (I think) the Russians did in their models, as Christy showed.

Sparta Nova 4
Reply to  Rud Istvan
December 17, 2024 12:36 pm

Hindcasting is purely and simply curve fitting.

Reply to  Sparta Nova 4
December 17, 2024 2:07 pm

And if they fit the model to FAKE urban temperature data then, even if the model is correct (LOL)…

… anything that comes out of that model is also totally FAKE.

December 17, 2024 10:45 am

Nicely stated. However, things get much worse. In response to the alarmist projections from the flawed models, we have set out to completely overhaul our principal sources of energy in accordance with an even more flawed energy plan based on low-density and intermittent sources of power.

Overall we have embarked on a course that will result in pretty much the absolute worst case outcomes.

Reply to  honestyrus
December 17, 2024 2:20 pm

We?
No, we are being led by the nose, by a cabal of leaders, into a pit; the cabal says the pit is Eden, while the private-plane folks are laughing all the way to the bank.

Reply to  wilpost
December 18, 2024 6:13 am

That used to be true. However, there are good people in the world too, and they have initiated the Great Awakening. The end of this road is no longer the pit, but a new Golden Age 🙂

Milo
December 17, 2024 10:54 am

GCMs won’t have the skill to be useful for anything beyond politics until computing power increases 10,000-fold, so that, for instance, clouds can be modeled rather than merely parameterized.

Reply to  Milo
December 18, 2024 3:10 am

Computer power is not the solution, though it is often assumed to be.

Someone
Reply to  Milo
December 18, 2024 7:54 am

While quantum computing can purportedly be that much faster, speed is not the only problem. One unresolvable, fundamental problem is the non-deterministic, chaotic nature of climate. Another is the absence of detailed, factual initial conditions and boundary conditions.

Yet something much simpler and semi-quantitative, like the Milankovitch cycle model, works reasonably well without supercomputers.

Rud Istvan
December 17, 2024 11:37 am

A comment on how disproportionate things have gotten.

Per NOAA, in 2024 the US spent about $78 million on improved weather forecasting models, but still lags ECMWF by a lot.

Per GAO, in 2024 the US spent about $2.5 billion just on the ‘science’ part of climate change (e.g. climate models), while knowing they cannot be made to work properly.

Erik Magnuson
Reply to  Rud Istvan
December 17, 2024 12:09 pm

With respect to weather models versus climate models: reducing the error in the 5-day forecast for the position of the eye of a hurricane could be worth billions.

Cliff Mass and other meteorologists have been arguing for years on the NWS needing to improve their weather models.

December 17, 2024 11:49 am

The 1967 M&W paper claimed that a doubling of the CO2 concentration from 300 to 600 parts per million (ppm) would increase the equilibrium temperature of the earth by 2.9 °C for clear sky conditions. This was just a mathematical artifact created by the oversimplified one dimensional radiative convective (1-D RC) model. There were three fundamental errors.
 
First, they copied the 1896 Arrhenius steady state air column approach that created a warming artifact when the atmospheric CO2 concentration was increased. As CO2 increases, there is an initial decrease in the long wave IR (LWIR) flux radiated back to space within the spectral range of the CO2 emission bands. The model was configured to warm up to a new steady state. However, in the real world, the additional heat released in the troposphere is simply radiated back to space by wideband emission, mainly by water vapor. It is decoupled from the surface by molecular line broadening. There is almost no change to the energy balance of the earth. A doubling of the CO2 concentration produces a maximum decrease in tropospheric cooling rate (or a slight warming) of +0.08 °C per day at low and mid latitudes [Iacono et al, 2008; Ackerman, 1979]. This warming does not accumulate over time. It is buried deep in the normal daily and seasonal temperature variations in the troposphere.
 
Second, they imposed a fixed relative humidity distribution on the air layers in their model. This created a ‘water vapor feedback’ that amplified the initial Arrhenius warming artifact. Again, the normal variations in the tropospheric humidity are so large that any changes produced by CO2 are too small to measure, and they do not accumulate over time.
 
Third, they used a time integration algorithm that allowed the small increases in temperature at each model integration step to accumulate over time. In the real world, this does not happen.
 
These errors have never been corrected and provided the foundation for the pseudoscience of radiative forcings, feedbacks and climate sensitivity still used in the climate models today. 
 
At present the annual average increase in the atmospheric CO2 concentration is about 2.5 ppm per year. This produces an increase in the downward LWIR to the surface of 40 milliwatts per square meter per year. The climate modelers should explain how this produces an increase in surface temperature and enhances extreme weather events. 
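For scale, the widely used simplified forcing expression, dF = 5.35 ln(C2/C1) W/m² (Myhre et al., 1998), gives a number of the same order as the 40 mW figure quoted above. Note this is a top-of-atmosphere forcing, not surface downward LWIR, and the 420 ppm starting point is an assumption, so this is only a rough cross-check:

```python
import math

c_now = 420.0   # ppm, assumed current CO2 concentration
dc    = 2.5     # ppm per year, from the comment above
dF = 5.35 * math.log((c_now + dc) / c_now)   # W/m^2 per year of CO2 growth
print(f"~{dF * 1000:.0f} mW/m^2 per year")   # prints ~32 mW/m^2 per year
```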
 
What part of TOO SMALL TO MEASURE don’t the climate modelers understand?
 
For further details, see ‘A Nobel Prize for Climate Modeling Errors’, Clark 2024.

Reply to  Roy Clark
December 17, 2024 1:20 pm

What part of TOO SMALL TO MEASURE don’t the climate modelers understand?

It’s worse than that. The models have to model climate change at the granularity of their time step, which is typically 20 minutes.

Reply to  Roy Clark
December 18, 2024 3:14 am

I mostly agree, though I would say there is very little water vapour at high altitudes to help with radiation back to space. I think that’s where CO2 comes in.

rovingbroker
December 17, 2024 12:13 pm

” … the “consensus” has been unable to narrow the uncertainty in their estimates … “

So … the more we know, the less we know.

Reply to  rovingbroker
December 18, 2024 3:17 am

The idea is to get more money and computer power to solve the problem they have created with flawed models.
I remember somebody once said: ‘throwing money at it makes a problem worse’.

Someone
Reply to  ballynally
December 18, 2024 8:07 am

Throwing money at this makes people’s careers.

Sparta Nova 4
Reply to  rovingbroker
December 18, 2024 9:35 am

Someone once concluded that what we know we don’t know is not the risk. The risk is what we don’t know we don’t know.

Anthony Banton
December 17, 2024 12:13 pm

“Figure 2 suggests that OLR increases as the world warms and cloud cover decreases, this is not what one would expect if cloud cover changes are a net positive feedback to surface warming.”

It does if absorbed solar radiation (ASR) is greater than OLR and the Earth energy imbalance (EEI) persists.
A warmer Earth is the feedback to that discrepancy, which gives us the increasing OLR.
And, as per AlanJ, that is what the models show happens.

Andy:
Did you not know of this paper?
https://arxiv.org/abs/2405.19986

Recent global temperature surge amplified by record-low planetary albedo (2024)
Helge F. Goessling, Thomas Rackow, Thomas Jung

Click “view PDF”

Abstract:
“In 2023, the global mean temperature soared to 1.48K above the pre-industrial level, surpassing the previous record by 0.17K. Previous best-guess estimates of known drivers including anthropogenic warming and the El Nino onset fall short by about 0.2K in explaining the temperature rise. Utilizing satellite and reanalysis data, we identify a record-low planetary albedo as the primary factor bridging this gap. The decline is caused largely by a reduced low-cloud cover in the northern mid-latitudes and tropics, in continuation of a multi-annual trend. Understanding how much of the low-cloud trend is due to internal variability, reduced aerosol concentrations, or a possibly emerging low-cloud feedback will be crucial for assessing the current and expected future warming.”

“The data displayed in figures 2 through 5 cannot be explained as a function of the monotonically increasing atmospheric CO2 and other greenhouse gases and their so-called “heat trapping” or “OLR delaying” capabilities. While additional greenhouse gases may have some influence on increasing surface temperatures, we cannot see any sign of this influence in these data.”

Why would you expect monotonically increasing GHGs to explain the data?
Climate science is well aware of natural variability (NV) overlaying the “monotonically” increasing radiative forcing of anthropogenic GHGs.
Just look at the effect that ENSO has on GMST, for instance.
The question is, is this trend a consequence of a GW feedback?

“This simple model of the GHE has many problems. It is really only appropriate in perfect conditions in the tropics, and even there the lapse rate at night is different than the lapse rate during the day. In addition, the model predicts way too much warming in the tropics. In the middle latitudes, with persistent horizontal circulation and many storms, it makes no sense at all. In the polar regions, especially in the long dark winters, the atmosphere is frequently warmer than the surface,”

You said it in the first three words.
The model is simple.
Taking the global atmosphere and giving it an average LR consistent with -g/Cp, and then warming it yields that conceptual result.
And in polar regions the surface is nearly always colder than the immediate atmosphere above both in winter and in summer (see DMI Arctic temp graphs).
In the Antarctic this is so extreme in the Polar night that a -ve GHE occurs.
None of that invalidates the notion of an average LR, whereby warming raises the effective emission layer height, which is hence colder and emits more weakly.
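The textbook arithmetic behind “raises the effective emission layer height”: a planet that must shed the absorbed sunlight emits at an effective temperature near 255 K, and with a ~6.5 K/km average lapse rate that temperature sits several kilometers up. Illustrative values only:

```python
S0, ALBEDO, SIGMA = 1361.0, 0.30, 5.670e-8            # W/m^2, -, W/(m^2 K^4)
T_eff = ((S0 * (1 - ALBEDO)) / (4 * SIGMA)) ** 0.25   # effective emission T, ~255 K
T_SURF, LAPSE = 288.0, 6.5                            # K, K/km (illustrative)
z_emit = (T_SURF - T_eff) / LAPSE                     # height where T = T_eff
print(f"T_eff = {T_eff:.0f} K, effective emission height ~ {z_emit:.1f} km")
```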

Reply to  Anthony Banton
December 17, 2024 1:19 pm

Maybe you can explain the relatively recent change (special pleading?) in the ‘canonical’ mechanism behind climate doom that has occurred since the satellite data threw a bucket of cold water on the ol’ CO2 reduces OLR chestnut:

https://www2.cgd.ucar.edu/staff/trenbert/trenberth.papers/2009GL037527.pdf

Reply to  Anthony Banton
December 17, 2024 1:25 pm

The paper was recently covered here at WUWT.

A key part of it is that there is “a possibly emerging low-cloud feedback”, which confirms what everyone already knows but apparently doesn’t understand. Clouds are fitted and have no predictive power. To argue that the models show anything using their cloud calculations is comical.

Reply to  TimTheToolMan
December 17, 2024 5:03 pm

And the cloud problem causes the uncertainty of the outputs to increase after each and every iteration step.

Reply to  Anthony Banton
December 17, 2024 1:33 pm

Wow.. looks like AB swallowed a Kamala.. now speaks like one. Totally incoherent gibberish.

It was changes in albedo and absorbed incoming solar, combined with strong, persistent El Nino and Hunga Tonga (HT) eruption effects.

Absolutely ZERO EVIDENCE of any human causation.

1… Please provide empirical scientific evidence of warming by atmospheric CO2.

2… Please show the evidence of CO2 warming in the UAH atmospheric data.

3… Please state the exact amount of CO2 warming in the last 45 years, giving measured scientific evidence for your answer.

Anthony Banton
Reply to  bnice2000
December 18, 2024 12:12 am

Ah, bless, and here comes the demented WUWT attack-dog to drag down his pack, asking his usual questions that, even if answered, would never be accepted by him.

It is amusing though that cognitive dissonance can cause such anger in a person.
As though the world must at all costs fit his ideological mindset and that ranting on here makes any difference, other than to let him vent a bit.

Sparta Nova 4
Reply to  Anthony Banton
December 18, 2024 9:37 am

So now you resort to defamation and insult.

Reply to  Anthony Banton
December 18, 2024 12:46 pm

Meanwhile you ignored his three questions for YOU to reply to.

This means you are 0 for 3; it’s a strikeout!

Bwahahahahahahaha!!!

Reply to  Anthony Banton
December 19, 2024 8:21 pm

Could I please have some freshly ground black pepper on top of that word salad?

Giving_Cat
December 17, 2024 12:34 pm

You say: “…since 1990 (the first IPCC report) that CO2 is the ‘control knob’ for global warming is flawed. There is abundant evidence that climate has many drivers, and man-made CO2 is only one, and one that may not be that significant.”

You also say: “…the current group of IPCC/CMIP climate models do not compare well with observations, and unfortunately, in the tropics they do better if the man-made portion of the enhanced greenhouse effect is removed.”

I say: If CO2 can be ignored for some modelling purposes it can be suspect for all.

There are some 100+ serious models of global climate and at best one is correct and the others wrong. Who here would take a medicine that was 99% wrong and possibly 100% wrong?

Richard M
December 17, 2024 12:37 pm

While I agree with much of what Andy states, he also repeats several misconceptions that are often repeated by skeptics.

Earth’s surface is not cooled much by emitting radiation 

The surface is cooled by convection, latent heat, and radiation. Radiation is an important part of the overall process. While it is true that the lowest few meters of the atmosphere absorb almost all of the surface radiation that is not emitted through the atmospheric window, that energy is reemitted as an upward flux of IR which then feeds energy into all the atmospheric layers above.

This provides stability to the atmosphere. As fronts and convection are constantly moving energy out of a stable configuration, this radiation is constantly reestablishing the lapse rate structure.

If this is done at a high enough altitude, some of the latent heat can make it to space as radiation

It is not required that the energy be “at a high enough altitude”; all layers of the atmosphere emit radiation to space just about equally. This is precisely why the lapse rate exists.

Manabe also assumed fixed relative humidity and fixed cloud cover.

This is the major problem with climate models even today. The atmosphere does not maintain a fixed relative humidity. If it did we would see a tropical hot spot. The higher altitudes see a reduction in humidity as CO2 levels rise. This is precisely why CO2 doesn’t cause warming.

Figure 1 illustrates what doubling CO2 does with a fixed and linear lapse rate in Held’s simplified model. 

The lapse rate is essentially fixed. This is also why it is only dependent on gravity and heat capacity.

The additional CO2 warms the surface with additional back-radiation, then the assumption of a fixed lapse rate kicks in, forcing the emission level up to a higher altitude, which means it has a lower temperature, causing it to emit less to space.

Back radiation cannot warm the surface since it comes from so low in the atmosphere. All that happens is the energy conducts back into the atmosphere or induces enhanced evaporation.

The emissions level does not increase in altitude either. That false claim comes from assuming the upwelling IR is a constant. It’s not; it increases as CO2 increases. This is required by Kirchhoff’s Law. As a result, the lapse rate is also unchanged.

This simple model of the GHE has many problems. 

It has more than just problems. It is physically impossible. Models have the basic science of the atmosphere completely wrong. They will never produce anything even mildly useful.

Richard M
Reply to  Andy May
December 17, 2024 1:43 pm

It is not

Come on. I was referring to the average. In fact, that is almost embedded in the definition. It’s usually abbreviated ALR.

I guess it depends upon the starting point. 

Yes, it does depend on how much CO2 is in the atmosphere. However, the Earth has always had enough CO2 to eliminate any warming from back radiation. That doesn’t mean it isn’t important though. Back radiation drives a reduction in high altitude water vapor which is why CO2 increases do not cause warming.

Greenhouse gases delay cooling, which is why they are important.

I agree they are important. Understanding how this actually works is also important, and you haven’t reached that point yet. I’m trying to help.

Obviously not true. Little radiation makes it to space from the boundary layer, you must get above the cloud tops to emit to space

I’m not surprised you don’t understand this topic. You and just about everyone else. So please explain why the atmosphere gets cooler as you rise in the troposphere if no energy is lost. Where did it go?

Not understanding how gravity (density change) affects energy transfer in the atmosphere is a major hurdle you (and many others) need to overcome. You are not only wrong on this subject. You are wrong on many other claims simply because you don’t understand this topic.

Radiative gases don’t just absorb energy, they also emit energy. When you increase CO2 you do get higher absorption at every layer of the atmosphere. However, you also get higher emission at every layer. Essentially, this causes the total amount of energy flowing upward to increase, and also to slow down. These two opposite effects balance out. That means the flow of energy stays constant.

This is one of those “ah ha” concepts. It explains so many other things.

Reply to  Richard M
December 17, 2024 2:36 pm

There are many who have shown that CO2 does not re-radiate in the lower atmosphere but thermalises to the remaining 99.96% of the atmosphere.

Never gets a chance to “back radiate”.

Tom Shula and Markus Ott : The “Missing Link” in the Greenhouse Effect | Tom Nelson Pod #232

Even if it did radiate, some 68% of that radiation would have more sideways component than downwards. Back radiation from CO2 is a furphy!

As to gravity effects, this, in combination with the gas laws, is what controls the energy transfer in the atmosphere.

It has been shown by analysis of balloon data that, in the troposphere, the energy gradient is absolutely linear with molecular density.

https://youtu.be/XfRBr7PEawY?t=1431 (good luck with the accent! 😉)

Richard M
Reply to  bnice2000
December 17, 2024 2:58 pm

Don’t believe them. While CO2 does thermalize absorbed energy, it also can be excited by the very same collisional energy transfers and emit energy. Kirchhoff’s Law tells us these should occur at about the same rate. I read the paper you mentioned and I know why they are wrong.

Ignore the sideways component and just look at the average upward/downward vector. After all, that is what you end up with after averaging out all the emissions. There’s still plenty of energy radiated back to the surface. It just happens to come from only a few meters above the surface and this is key.

Since the energy is derived from right near the surface, most of that energy is conducted right back due to the 2nd Law. This is because the low atmosphere is cooled and the surface warmed by the radiation event. This creates an energy imbalance which is easily corrected by ongoing conduction between the two areas.

You are absolutely correct about the energy gradient. It is driven by the changing density. That is in fact why the surface is warmer. There is no greenhouse effect, there is simply a gravitationally defined distribution of mass and it is the number of energized molecules that determine temperature.

Reply to  Richard M
December 18, 2024 3:26 am

Where is CONVECTION in all of this? That was the point Ott/Shula were making in relation to the assumed radiation, which they stated was much less than the models assume.

Sparta Nova 4
Reply to  Richard M
December 18, 2024 9:43 am

Ah, thermalize again.

I know the definition of thermalize, and the way it is used here is incorrect.

So, pray tell, what is your definition?

Reply to  Sparta Nova 4
December 18, 2024 10:06 am

So, pray tell, what is your definition?

https://modern-physics.org/thermalization-processes/

Classical vs. Quantum Thermalization

The distinction between classical and quantum thermalization is significant. Classical thermalization can often be modeled using Newtonian mechanics and statistical thermodynamics, focusing on macroscopic properties like temperature and pressure. Quantum thermalization, however, requires a deeper understanding of quantum states and probabilities. This quantum perspective is essential in modern physics, especially in the study of low-temperature systems or high-energy states where quantum effects dominate.

Collisions between molecules result in the classical assumption of kinetic energy being equally distributed in a closed system.

Reply to  bnice2000
December 17, 2024 8:58 pm

What accent?

Reply to  Richard M
December 17, 2024 3:13 pm

How does one obtain ‘IR back-radiation’ when, throughout the troposphere, vibrationally excited CO₂ overwhelmingly decays by collisional deactivation, not by radiative decay?

Richard M
Reply to  Pat Frank
December 17, 2024 4:15 pm

The “collisional deactivation” does not destroy any energy; it simply moves it into the bulk atmosphere. CO2 molecules can also be excited by the very same collisional energy transfers and emit energy. Some of that energy will be directed towards the surface.

Due to Kirchhoff’s Law, the instances of radiation towards the surface are about equal to 1/2 the instances of absorption. While this supports the greenhouse effect narrative, what happens next destroys it.

As I indicated above, the back-radiation cannot warm the surface. However, it can induce evaporation. This actually ends up cooling the surface/low atmosphere combo while sending additional latent heat upward into the higher troposphere.

Reply to  Richard M
December 17, 2024 4:39 pm

“It simply moves it into the bulk atmosphere.”

ie Thermalises

“CO2 molecules also can be excited by the very same collisional energy transfers and emit energy.”

No, it is irreversible. Collision with a molecule does not instill vibrational activity; that can only be done by a photon.

Watch the video.. learn ! https://youtu.be/JtvRVNIEOMM?t=1026

Richard M
Reply to  bnice2000
December 17, 2024 5:03 pm

They are wrong. They are assuming that when a collision occurs, it takes the same amount of time before emission can occur as it takes for a molecule to relax after absorbing a photon.

No reason why that should be true. Relaxation is a different process. I expect the photon emission occurs almost instantaneously (nanoseconds). You still have the relaxation afterwards but the photon is already gone.

They assumed another collision would occur to stop the emission. It all gets down to exactly when a collision excited CO2 molecule emits the photon. I think my view supports Kirchhoff’s Law of Radiation. Theirs doesn’t.

Reply to  Richard M
December 17, 2024 6:17 pm

Collisions occur some 50,000 times faster than emission, according to this video, and to Will Happer.

You “expect” .. how sciency of you. 😉

Their view does support radiative laws. Yours doesn’t.

They discuss this.

Richard M
Reply to  bnice2000
December 17, 2024 8:23 pm

Bnice, my view actually supports radiation laws as well. IMO, even better.

By “discuss” you really meant “they assert this”. That’s what I noticed the first time I watched the video. They provided nothing to support their assertion. I have no problem with the absorption side of the question. That seems pretty clear. Not so with the emission side.

I see no reason for the same long delay after a CO2 molecule is excited and the photon emission occurs. In fact, since the long delay happens AFTER absorption, why wouldn’t the long delay also happen after emission as well? It actually makes logical sense.

Don’t get me wrong. The molecule wouldn’t be able to absorb/emit another photon until after the delay period. That says nothing about exactly when the photon is emitted.

If I am right, the opportunity for an excited molecule to emit a photon and not get the energy re-thermalized is significantly higher.

So what does the scientific data tell us? There are many papers available which claim to MEASURE CO2 energy at the surface. How is that possible if your video’s claim is true? Any CO2 emitted photons directed downward from high in the atmosphere would be absorbed long before reaching the surface. Nothing measurable would ever show up at the surface. In fact, same would be true for water vapor. Where exactly is downwelling IR coming from?

Until you can explain the data, I’ll stick with my current interpretation of the process.

Richard M
Reply to  Andy May
December 18, 2024 2:04 pm

Glad to know Einstein said exactly the same thing I have been saying. That has nothing to do with CO2 molecules being excited by collisions and then emitting energy.

Reply to  Richard M
December 18, 2024 6:13 am

Kirchhoff’s Law: For an arbitrary body emitting and absorbing thermal radiation in thermodynamic equilibrium, the emissivity function is equal to the absorptivity function.

You say “…my view supports Kirchhoff’s Law…”. If the emissivity of CO2 at atmospheric temperatures and pressure is zero (0) then it can’t absorb any energy and pass that energy anywhere.

Richard M
Reply to  mkelly
December 18, 2024 2:01 pm

If the emissivity is zero, so is the absorptivity (according Kirchhoff’s Law). This means CO2 couldn’t absorb any energy either. That’s not what they are claiming.

Reply to  Richard M
December 17, 2024 4:58 pm

CO₂ molecules excited by collision will also decay by collision.

I just don’t see any way ‘IR back-radiation’ can ever be important in the troposphere.

Richard M
Reply to  Pat Frank
December 17, 2024 5:29 pm

Pat, they can decay by either collision or photon emission. However, if they are excited enough for emission, I believe the event occurs rapidly. Keep in mind that collisions are happening extremely fast so even if a few photon emissions get cut off, they will still occur.

Reply to  Richard M
December 17, 2024 6:19 pm

I believe…” how wonderful for you 😉

Collisions occur at 50,000 times the rate of emissions.

Reply to  Richard M
December 17, 2024 6:43 pm

Rich, the rate of decay by photon emission is known. In the troposphere, it’s several thousand times slower than collisional decay.

In the troposphere, decay by photon emission is negligible.

Curtis & Goody (1956): the collisional relaxation lifetime for CO₂ in an atmosphere of nitrogen at 220 K is about 15 microseconds. The radiative decay lifetime is 0.43 sec.

For CO₂, the ratio of (collisional lifetime)/(radiative lifetime) = (φ)/(λ) = 3.5×10⁻⁵ at 220 K and 1 atm pressure.

The height at which λ = φ is about 74 km.
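Treating the two decay channels as competing first-order rates reproduces the quoted ratio; a sketch using the numbers above:

```python
tau_coll = 15e-6   # s, collisional relaxation lifetime (Curtis & Goody, 220 K, 1 atm)
tau_rad  = 0.43    # s, spontaneous radiative decay lifetime
k_coll, k_rad = 1.0 / tau_coll, 1.0 / tau_rad   # first-order rate constants
p_emit = k_rad / (k_rad + k_coll)               # chance emission wins the race
print(f"lifetime ratio = {tau_coll / tau_rad:.1e}, "
      f"P(emit before quench) = {p_emit:.1e}")  # both ~3.5e-5
```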

Richard M
Reply to  Pat Frank
December 17, 2024 8:31 pm

Pat, I have no problem with your numbers. However, those are for absorption delay. It tells us nothing about the emission situation. That is the issue.

As I explained to Bnice above, if emission of a photon by an excited CO2 or H2O molecule was delayed by the same amount of time, we would not see any measurable IR at the surface. How do you explain all the papers which measure it? How do you explain IR thermometers?

Reply to  Richard M
December 17, 2024 9:07 pm

The radiative decay lifetime is 0.43 sec.

That’s not absorption delay. It’s the time it takes for a vibrationally excited CO₂ molecule to spontaneously emit an IR photon and decay back to its energetic ground state.

That time is 29000 times longer than the time it takes for a vibrationally excited CO₂ molecule to collide with a nitrogen or oxygen molecule at 1 atm., offload the vibrational energy as N₂ or O₂ kinetic energy, and decay back to ground state.

I see no opportunity for ‘IR back-radiation’ from radiative decay of vibrationally excited CO₂ in the troposphere, although, as you note, everyone talks about it.

Richard M
Reply to  Pat Frank
December 17, 2024 11:37 pm

Pat, I disagree with your description of radiative decay. In the case of the absorption of a photon, the energy is available immediately to be transferred away during a collision event. If that happens, the photon is gone. Energy cannot exist in two places at the same time.

The delay until another radiation event is different. It relates to the molecule returning to a state where another radiation event is possible. Otherwise, you’ve violated the 1st Law.

The same must also be true for emissions. At the time of the collision the photon is immediately created and emitted. There is also a delay, but the photon is already gone. It cannot be transferred to another molecule in a subsequent collision.

Reply to  Richard M
December 18, 2024 8:06 am

Rich, when a 15μ IR photon encounters CO₂, the interaction involves the electric dipole of the C=O bond — negative at oxygen, positive at carbon.

This dipole oscillates with ground state vibrational energy. This oscillating dipole is in resonance with the frequency of the 15μ photon.

In an absorption event, the photonic energy is transferred to the vibrating bond. The vibrational energy state increases by the amount of the photonic energy.

The new quantum energy state is higher than the ground state by an amount equal to the energy of the photon.

Photons are pure energy. No mass. When the photonic energy is absorbed and resident in a higher CO₂ vibrational energy state, the original photon is gone (Wheeler and others would have trouble with that statement, but for simplification it can be taken as true).

All the energy of the photon has disappeared into the CO₂ molecule, which is now vibrationally excited.

Emission of a new photon takes 0.43 sec.

Collisional transfer of that energy at 1 atm. in the troposphere takes 15 μsec.

The intensity of the original IR radiation is reduced by a factor of ~29,000. Effectively, nothing is left for back-radiation.

I can see you are working hard to figure out what’s going on, and commend you for it. But you need to correct your view of absorption and emission processes.

Rick Wedel
Reply to  Pat Frank
December 18, 2024 8:59 am

Thanks for this explanation.

Reply to  Pat Frank
December 18, 2024 11:43 am

Emission of a new photon takes 0.43 sec.

My understanding is that emission of the photon is a probabilistic event most likely to occur at about 0.43s. But there are a huge number of molecules and so a large amount of radiation happens by chance even though the odds are so low.

Reply to  TimTheToolMan
December 19, 2024 7:48 am

Re-radiation will always be a distribution around (1/28,666)th of the absorbed radiation. 🙂
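A quick Monte Carlo of those competing exponential waiting times (illustrative only, using the lifetimes quoted upthread) lands on the same fraction:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000_000
t_emit   = rng.exponential(0.43, n)    # s, radiative decay waiting times
t_quench = rng.exponential(15e-6, n)   # s, collisional quenching waiting times
frac = np.mean(t_emit < t_quench)      # molecule radiates only if emission is first
print(f"fraction that radiates: {frac:.1e} (~1/{1 / frac:,.0f})")  # ~1/28,666
```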

Richard M
Reply to  Pat Frank
December 20, 2024 1:39 pm

Pat, nothing I said disagrees with your description. That is exactly how I understand the absorption process. No reemission is the norm.

My problem with your claim of little tropospheric radiation concerns collision-induced emission processes. In this case the energy comes from another atmospheric molecule passing energy to a CO2 molecule during a collision.

At the time the CO2 molecule gains the energy it either incorporates that energy internally, emits a photon, or possibly both. When a photon is emitted it happens immediately, and then the CO2 molecule experiences relaxation before any other radiation action is possible. Nothing prevents the photon emission.

This means there is lots of radiation going on in the troposphere; it doesn’t come from immediate reemission, it comes from these collision-induced emissions.

Reply to  Pat Frank
December 18, 2024 5:52 am

In the troposphere, decay by photon emission is negligible.

Pat,

This is why attribution of the atmospheric warmth is so difficult. Conduction from the surface and from CO2 all gets mixed together.

Reply to  Richard M
December 18, 2024 3:32 am

But please consider the context: the relation to its environment, and specifically water vapour, plus the actual number of CO2 molecules and their limited vibrational states.

Reply to  Richard M
December 17, 2024 4:59 pm

‘…the back-radiation cannot warm the surface. However, it can induce evaporation.’

I think you missed the point of Pat’s question.

Richard M
Reply to  Frank from NoVA
December 17, 2024 5:14 pm

I responded to Pat’s concern. One can argue whether the term “back-radiation” is legit, but radiation is emitted by CO2 molecules back towards the surface. It just takes a more circuitous route.

Reply to  Richard M
December 17, 2024 6:20 pm

It just takes a more circuitous route.”

Which dance steps does it follow ??

Reply to  Richard M
December 17, 2024 7:21 pm

but radiation is emitted by CO2 molecules back towards the surface”

Except it doesn’t get a chance to !!

Watch and try to understand the video.

Richard M
Reply to  bnice2000
December 17, 2024 8:41 pm

My description of the process still makes more sense to me.

Here’s my view, two timelines:

1) Axxxxx…xxxx
2) CPxxxx…xxxx

The 1) timeline shows a CO2 molecule absorbing a photon (A) and the long delay (xxxx) afterwards.

The 2) timeline shows a collision (C) followed immediately by photon emission (P) and then the required delay.

Collisions can happen anytime in the delay sequence (xxxx). In both cases another emission/absorption can only happen after the delay.

Reply to  Richard M
December 17, 2024 11:30 pm

50,000 to 1… great odds! For horses.

Richard M
Reply to  bnice2000
December 18, 2024 4:47 am

Your odds only determine the delay time at the molecular level. It doesn’t tell you what causes the delay time. You are assuming it is the time it takes a molecule to absorb/emit the energy. I’m saying the absorption/emission itself occurs more quickly, and the logic for why this must be true is that you can measure IR at the surface.

Here’s another logical argument.

If the time of the entire radiation decay is 0.43 sec, then the delay time for the kinetic transfer to take place would have to be the same. No energy would be available until after the decay.

That’s not what your video claimed. They are saying the energy is available 50,000 times faster. They claim the energy is available immediately after the A in timeline 1). If the energy is available then it must already have been moved into the CO2 molecule. Energy cannot exist in two places at the same time.

That means the cause of the decay time is not the energy movement itself. The cause must be something else.

This also means the 0.43 sec decay time after a kinetic collision does not have to be equal to the time it takes for a photon emission. It could also happen immediately.

Someone
Reply to  Richard M
December 18, 2024 2:34 pm

It could also happen immediately.

Or it could happen later than 0.43 sec. The 0.43 sec is the mean of the distribution.

Anthony Banton
Reply to  Pat Frank
December 18, 2024 12:34 am

Then IR radiometers are wrong when they show that indeed there is downwelling IR from a clear night sky?

Reply to  Anthony Banton
December 18, 2024 5:26 am

Anthony, you may have failed to grasp how radiometers work, and what the results mean, as well as the difference between “energy” and “power”. The only way to measure positive downwelling IR power from the sky is to cool your radiometer far below ambient surface temperature (and therefore below the temperature of the sky as well). But this is now a different physical system than the one we all live in, isn’t it? Where I live, the surface temperature is not 77 K. So you can’t make any conclusions about the real world from the artificial environment of a liquid-nitrogen-temperature radiometer.

Richard M
Reply to  stevekj
December 18, 2024 7:11 am

Now explain the results from Feldman et al 2015 or Gero/Turner 2011. They use a device built specifically to capture all the downwelling IR.

Reply to  Richard M
December 18, 2024 7:17 am

Feldman et al. are using an AERI device. This radiometer is cooled to liquid nitrogen temperatures, as I said.

Reply to  stevekj
December 18, 2024 7:56 am

This only makes sense when you think about it. Heat flows from warm to cold. If the device is warmer than the atmosphere, no heat will be absorbed. Therefore no exchange of heat (rise in temperature) will be measurable. One could maybe measure a change in gradient but removing extraneous sources/sinks would be a nightmare.

AlanJ
Reply to  Jim Gorman
December 18, 2024 8:21 am

I can measure the temperature of my freezer by standing in my warm kitchen with an IR thermometer. How does this work?

Reply to  AlanJ
December 18, 2024 8:37 am

I am not sure if Jim will be able to answer that question, Alan, since he doesn’t know what “work” means, and will insult anyone who tries to teach it to him, but the correct answer is as follows:

Your IR thermometer is perfectly capable of measuring both negative and positive power. This will be represented internally as either a negative or positive voltage (or corresponding change in resistance, depending on what kind of sensor it is using). It also knows its own temperature, which it measures via an internal thermometer. So it calculates the relative temperature of the target, which in the case of your freezer will be a negative number, and adds its own internal temperature, to give you the absolute temperature of the freezer. No violations of the 2nd Law of Thermodynamics required, fortunately. The IR thermometer is actually losing energy to the freezer when it does this.
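A minimal sketch of that arithmetic, assuming ideal blackbody optics (emissivity 1) and Stefan-Boltzmann exchange; the numbers are illustrative, not any particular instrument’s:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def target_temp(q_net, t_sensor):
    """Invert Stefan-Boltzmann: q_net (W/m^2) is the signed net exchange,
    positive when the target is warmer than the sensor."""
    return (q_net / SIGMA + t_sensor**4) ** 0.25

# a ~255 K freezer viewed from a 295 K kitchen: q_net comes out negative
t_s = 295.0
q = SIGMA * (255.0**4 - t_s**4)   # what the sensor would report, ~-190 W/m^2
print(f"q = {q:.0f} W/m^2 -> target = {target_temp(q, t_s):.0f} K")
```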

AlanJ
Reply to  stevekj
December 18, 2024 8:44 am

The sensor in the IR thermometer surely is absorbing IR emitted by the freezer during this process, yes? No violation of the second law?

Reply to  AlanJ
December 18, 2024 11:33 am

The sensor in the IR thermometer surely is absorbing IR emitted by the freezer during this process, yes? No violation of the second law?

Do you understand what optics do on an IR thermometer?

Look at this video.

https://youtu.be/nKHiafxUebk?feature=shared

Do you think the sun’s energy is being concentrated?

Is IR concentrated by optics in the thermometer?

With your rationale, your thermometer should read down to 0 K. Why doesn’t it?

Reply to  stevekj
December 18, 2024 12:25 pm

I am not sure if Jim will be able to answer that question, Alan, since he doesn’t know what “work” means, and will insult anyone who tries to teach it to him, but the correct answer is as follows:

Nice ad hominem. Why don’t you give a source for your description of how an IR thermometer works?

Here is a diagram of how an IR thermometer is made. See that lens, what does it do?

[image: IR thermometer diagram]

What is the temperature range of an IR thermometer without a lens? Why?

Can the sensing end of a thermistor/thermopile be cooled lower than ambient just by allowing it to see a cooler object? That kinda violates heat going from hot to cold, right?

There is a reason to cool the sensing part to a very low temperature. Can you guess what it is?

Reply to  Jim Gorman
December 20, 2024 5:29 am

I am not the one who started with the ad hominems, Jim. Remember when you called me a clown for pointing out that you don’t know what the word “work” means? After you contradicted yourself, and then lied about it?

IR thermometers use lenses to make their target a smaller point. Not to change the direction of energy flow.

Yes, the target end of the thermistor will be cooler than ambient when exposed to a colder object than itself. No, that does not violate the 2nd Law, since the sensing end of the thermistor will never be colder than the target. Energy can flow from the thermistor to the colder target, and the thermistor cools down in the process. This is all standard 2nd Law behaviour.

Reply to  AlanJ
December 18, 2024 9:23 am

You can look up how these work yourself. But be aware optics are used to focus incoming EM, thereby creating more heat at the sensor, kinda like a magnifying glass. This is compensated by algorithms in the firmware. The spot distance must be evaluated to obtain accurate readings. The distance from your kitchen to the freezer adds to the uncertainty in the reading, primarily from extraneous heat collected in the field of view and from hot objects and reflections in the air.

AlanJ
Reply to  Jim Gorman
December 18, 2024 9:44 am

The question is: is there IR emanating from the freezer, and is that IR being picked up by the sensor in my much warmer IR thermometer?

Reply to  AlanJ
December 18, 2024 9:52 am

What does that have to do with cooling the sensor as originally stated?

AlanJ
Reply to  Jim Gorman
December 18, 2024 10:00 am

I don’t need to cool the sensor of the IR thermometer to detect the IR from the much colder freezer.

Reply to  AlanJ
December 18, 2024 9:54 am

Alan, the freezer emits infrared energy, just as the thermometer does. But “work” and “power” are only developed from the warmer object to the colder one. So when you say “IR”, you should be careful to distinguish between infrared energy and infrared power. The freezer does not develop power onto the thermometer, but rather the other way around.

(The entire climate “science” community gets this basic principle wrong.)

AlanJ
Reply to  stevekj
December 18, 2024 10:03 am

I think climate scientists have this exactly right, but you might have drawn a mistaken conclusion from it.

If the above is accurate then when I take a warm IR thermometer outside on a clear, cold night and point it upward, I am detecting “downwelling” IR from the atmosphere as it is absorbed by the IR sensor.

Reply to  AlanJ
December 18, 2024 10:44 am

Well, Alan, the accuracy of your claim depends on what you mean by “IR”. You cannot detect “downwelling IR power” from the atmosphere with your ambient-temperature thermometer. But it can detect upwelling IR power, i.e. energy leaving the thermometer and being transferred to the sky via radiation, and give you a (colder) temperature reading. And the magnitude of this power is affected by the IR energy being emitted by the sky (which you can think of as “resistance”, of a sort). In other words, the colder the sky, the greater the rate of energy loss from the thermometer. Is that what you meant?

By the same token, all of the IR power arrows (in Watts) pointing downwards, that are produced by climate “scientists”, are fictional. For example, see every “atmospheric energy balance” diagram ever published. And every time they write a paper that presumes that electromagnetic radiation has an inherent power characteristic, which is all of the papers I’ve seen so far, they are lying. EM radiation does not have an intrinsic power. That’s not how power works.

AlanJ
Reply to  stevekj
December 18, 2024 11:12 am

Yes, I think you’re saying the same thing in different words. The reason the colder sky results in greater energy loss from the thermometer is because the downwelling radiation from the sky reaching the sensor is less intense. The total energy per unit time leaving the thermometer is E_out – E_in, where E_in depends on the temperature of the atmosphere above and E_out depends on the temperature of the thermometer.

By the same token, all of the IR power arrows (in Watts) pointing downwards, that are produced by climate “scientists”, are fictional.

The arrows in energy budget diagrams are not describing power, but radiant flux density (Watts per unit area). There are indeed real, physical processes driving directional fluxes upward and downward (the atmosphere and solid earth both absorb and emit). You can certainly combine these fluxes into a single net flux, and describe a downward flux as merely a “resistance” or a reduction of the upward flux, but this loses important information about how energy is moving throughout the system.

These diagrams are also just schematic simplifications, they are not meant to illustrate the physics of EM radiation but simply to “tally up” the total energy entering and exiting the system to determine how it balances.

Reply to  AlanJ
December 18, 2024 12:45 pm

No, Alan, I think you have drunk a bit too much of the Klimate Kool-Aid. There is no “downward power” or “flux” from a colder object to a warmer one, that gets “combined” into a “net flux”. None of that is real physics. It is all imaginary “climate fizix”. In real physics, energy only flows from hotter objects to colder ones. Never the other way around. That is what the Second Law tells us. No one has ever demonstrated the opposite, neither with radiometers, nor with infrared cameras, nor with pyrgeometers.

To give you a better idea of what I’m talking about, in real physics, energy is defined as “the ability to do work”, work is defined as equivalent to a force applied through a distance (i.e. the “expenditure of energy”), and power is defined as the rate of energy being expended, i.e. work being done. There is no “net” in any of these definitions, nor does there need to be. The concept of “net flux” or “net heat” or “net power” only appears in fictional “climate fizix”.

If you think energy can “move throughout the system” from a colder object to a warmer one, please demonstrate this with an experiment. I would like to see an isolated system consisting of only two objects, A and B, where A starts off warmer than B, and then some energy moves from B to A – resulting in A getting hotter than it was before, and B getting colder. That is what would be meant by “energy moving”, or “being transferred”, or “work being done”. Have you ever seen that happen?

AlanJ
Reply to  stevekj
December 18, 2024 1:30 pm

There is no “net” in any of these definitions, nor does there need to be. The concept of “net flux” or “net heat” or “net power” only appears in fictional “climate fizix”.

It appears in any field dealing with radiant energy fluxes, including stellar astronomy. The “net flux” is nothing more than the sums of the fluxes through the unit surface. Simplified, it is the number of joules of energy spent per unit time. If a downward flux replaces some of the joules lost by the outward flux, the net flux is necessarily reduced.

If you think energy can “move throughout the system” from a colder object to a warmer one, please demonstrate this with an experiment. I would like to see an isolated system consisting of only two objects, A and B, where A starts off warmer than B, and then some energy moves from B to A – resulting in A getting hotter than it was before, and B getting colder. That is what would be meant by “energy moving”, or “being transferred”, or “work being done”. Have you ever seen that happen?

This is not the case that scientists are describing. Imagine a spherical blackbody suspended in a vacuum with a temperature T. In the absence of any external energy source, the sphere will cool at some rate dT/dt. It will emit energy in all directions following Planck’s law.

Introduce a second spherical blackbody at some temperature lower than T. It too will cool and it too will emit radiation in all directions. Both spheres being blackbodies, each will absorb the radiation incident upon its surface.

The case that scientists describe is simply that the warmer sphere will cool more slowly in the presence of the second sphere than it would cool in its absence. This is the case regardless of the fact that the cooler sphere radiates less intensely than the warmer sphere, and so can never under any circumstance reverse the flux of energy emanating from the warmer sphere (this is what you think scientists are claiming happens).

Understanding this thought experiment should present no great challenge, and you can continue adding layers of complexity and arrive eventually at an earth/atmosphere model as scientists describe, never once violating any law of physics in so doing.
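The two-sphere argument above is easy to check numerically. Below is a minimal sketch, assuming the warmer body sits inside a large enclosure at the cooler temperature so that view factors drop out; the temperatures are illustrative, not measured values.

```python
# Minimal sketch of the two-body argument (assumed geometry: the warm
# body sits inside a large enclosure, so view factors drop out).
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_loss(t_hot, t_cold):
    """Net radiant flux density leaving the hot surface (W/m^2)."""
    return SIGMA * (t_hot**4 - t_cold**4)

t_hot = 300.0  # K, the warmer body (illustrative)
for t_cold in (3.0, 200.0, 280.0):
    print(f"surroundings at {t_cold:5.0f} K -> net loss {net_loss(t_hot, t_cold):6.1f} W/m^2")

# The net loss shrinks as the surroundings warm (about 459, 369, and
# 111 W/m^2 here), yet it never goes negative while t_cold < t_hot:
# the warmer body always loses energy on net, just more slowly.
```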

Reply to  AlanJ
December 19, 2024 4:26 am

In your thought experiment of the black bodies, at no time is any energy moving from a colder body to a warmer one. So we are in agreement about that. Note that I never said that the rate of energy transfer from a warmer object to a colder one did not depend on the temperatures of both.

I’m not buying this though: “You can certainly combine these fluxes into a single net flux, and describe a downward flux as merely a “resistance” or a reduction of the upward flux, but this loses important information about how energy is moving throughout the system.”

There are no “component” fluxes, and no “net” flux. Those are fictional concepts, based on false assumptions about an environment at 0 K – whether in stellar astronomy or earth atmosphere diagrams. No “important information” is lost when energy is correctly considered as flowing only from hotter objects to colder ones. The 2nd Law permits only this kind of flow. The definitions of energy, work, and power are not divided into “gross” and “net” variants.

AlanJ
Reply to  stevekj
December 19, 2024 6:17 am

The flux is not the flow; the flow is the net. The fluxes are simply the directional components of the flow, which exist undeniably, for the simple reason that the earth receives visible light from the sun and unquestionably emits infrared light to space. These are actual, physical emissions of radiant energy, and it is no different for the earth and atmosphere, as I can measure with my IR thermometer.

In your thought experiment of the black bodies, at no time is any energy moving from a colder body to a warmer one. So we are in agreement about that. Note that I never said that the rate of energy transfer from a warmer object to a colder one did not depend on the temperatures of both.

Yes, but you see the inescapable implication of this. At no point is the net flow from colder to hotter, yet the hotter object cools more slowly in the presence of a colder object than it otherwise would.

Add a layer of complexity and introduce a constant source of energy to the hotter object that prevents it from cooling beyond an equilibrium temperature above 0K. Now you see that the presence of the cooler object actually yields a higher equilibrium temperature for the warmer object than it would have in its absence.

We’ve arrived at the basic physics of the GHE, and seen that no violation of any physical law occurs.

 No “important information” is lost when energy is correctly considered as flowing only from hotter objects to colder ones.

There is indeed information lost in this treatment. We could say that the earth at equilibrium has no energy flow to space. This would hardly tell us anything about the various systems or energy balance at play. We need to examine the directional components of the solar and terrestrial fluxes to do this.
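The “constant energy source plus cooler layer” case above reduces to the textbook one-slab model. A quick sketch, with an assumed round number of 240 W/m^2 for the absorbed flux and an assumed fully absorbing layer:

```python
# Toy one-slab model: a surface heated by a constant absorbed flux F,
# with and without a single fully absorbing/emitting layer above it.
SIGMA = 5.670e-8  # W m^-2 K^-4
F = 240.0         # W/m^2, assumed globally averaged absorbed sunlight

t_bare  = (F / SIGMA) ** 0.25      # bare surface balances F directly
t_shell = (2 * F / SIGMA) ** 0.25  # layer returns sigma*T_layer^4 downward
print(f"bare surface: {t_bare:.0f} K, under the layer: {t_shell:.0f} K")

# -> roughly 255 K vs 303 K. The cooler layer never sends net energy
# to the warmer surface; it only reduces the surface's net loss, which
# raises the equilibrium temperature for the same energy input.
```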

Reply to  AlanJ
December 19, 2024 12:03 pm

“The flux is not the flow,”

What do you think the word “flux” means?

“the fluxes are simply the directional components of the flow”

No, that’s not it. Try again…

“which exist undeniably”

What do you think the word “exists” means?

“earth receives visible light from the sun and unquestionably emits infrared light to space.”

I never said that either of these processes isn’t happening. In the shortwave band, the sun is hotter than the earth, so energy flows from the sun to the earth. In the longwave band, in which the sun emits very little energy because of its high temperature, the earth is of course hotter than space, hence in that band, energy flows from the earth to space. Note that no energy flows from space to earth, nor from earth to the sun. Those flows would, of course, violate the 2nd Law.

“yet the hotter object cools more slowly in the presence of a colder object than it otherwise would”

I explicitly never said that it didn’t. That’s why I used the word “resistance” to describe the energy emitted by the colder one. It’s not precisely the correct word, but a pretty good analogy if you haven’t grasped all of the physics properly yet. (More precisely, energy will roll down the potential slope resulting from the interaction of multiple emitted interacting fields, etc.)

The “basic physics of the GHE” is a lot more complex than you have alluded to here, but certainly if the atmosphere warms up, the surface will as well, all else being equal. But there are lots of other variables going on too, and it is very difficult to prove that any of this is actually happening, and if it is, why.

“We could say that the earth at equilibrium has no energy flow to space”

No we couldn’t. The earth is warmer than space, hence there is no equilibrium here. Unless the pathway is blocked, energy will always flow from a hotter object to a colder one, in this case from earth to space. (Unless you are imagining some future earth that has cooled down all the way to 3 K itself, possibly after the sun has gone out.)

“We need to examine the directional components of the solar and terrestrial fluxes to do this.”

Sure, and we did that above. Energy flows from the sun to earth, and from earth to space. No one is arguing about that. My point is that energy does NOT flow from the atmosphere to earth, because the atmosphere is colder than earth. Yet the climate “scientists” claim it does…

Here is another attempt to show why your fictional “flux components” are indeed fictional (besides the glaringly obvious problem that no one can measure them). Let’s imagine a two-body system again, with objects A and B, and A is warmer than B, so, all else being equal, “real” energy (my term) or “net” energy (your term) will flow from A to B, until they reach the same temperature eventually. But initially, there is a measurable transfer of energy from A to B. Let’s call it 1 Watt. (Dividing by area makes no difference to this thought experiment.)

You claim that this measurable flow is actually composed of two OTHER flows, one from B to A and a larger one from A to B, such that the “net” is from A to B. This is what atmospheric energy “balance” diagrams do. And that’s all well and good, until we try to calculate and prove a definitive magnitude of these components.

For instance, you might make some assumptions and some calculations (not measurements!) and determine that the component from B to A is 10 Watts, and then from A to B is 11 Watts, resulting in our agreed objective “net” flow of 1 Watt. However, perhaps I make some different assumptions, and then I claim that the “components” are actually 20 Watts B to A, and 21 Watts A to B. My “net” is the same as yours, but whose components are correct? And how can anyone prove it? And if no one can, then we’re obviously not dealing with objective facts here, are we? Merely opinions, speculations, and fantasies. Is that how science works in your world? Because it’s not how it works in mine.

AlanJ
Reply to  stevekj
December 19, 2024 12:46 pm

it is very difficult to prove that any of this is actually happening, and if it is, why.

It’s not difficult to prove this – physicists did it more than a century ago. Most simply it can be proved by walking outside and seeing that the temperature is not -18 degrees Celsius, which is the equilibrium temperature for a body at the earth-sun distance. There must be a process that elevates the mean surface temperature of the planet, and that process must involve moving emission of radiant energy to space away from the surface and to high altitude. The only way to do that is via atmospheric emission.
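That -18 C figure is itself a one-line calculation, sketched below with assumed round values for the solar constant and albedo:

```python
# Equilibrium temperature of a blackbody at the earth-sun distance
# (S and ALPHA are standard assumed values, not measurements).
SIGMA = 5.670e-8  # W m^-2 K^-4
S     = 1361.0    # W/m^2, total solar irradiance at 1 AU
ALPHA = 0.30      # assumed planetary albedo

t_eq = (S * (1 - ALPHA) / (4 * SIGMA)) ** 0.25  # /4: sphere area vs disk
print(f"{t_eq:.1f} K = {t_eq - 273.15:.1f} C")  # ~255 K, about -18 C
```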

You claim that this measurable flow is actually composed of two OTHER flows

This is not conjectural, it is a fact. Both bodies emit radiant energy as a function of their temperature according to the Stefan-Boltzmann law, and both bodies absorb radiation incident upon their surfaces because they are blackbodies. These fluxes are real and measurable. A satellite orbiting either body with an IR camera would see them glow.

A lot of what you’re saying is not in contradiction to what I’m saying, you just seem to be doubting the obvious conclusions that arise inescapably from our common understandings.

Reply to  AlanJ
December 20, 2024 5:23 am

I said “measurable flow”

You said “emit radiant energy”, then you said “fluxes”

Emitting radiant energy is not the same as a “flux”. You didn’t answer my question about what you think the word “flux” means. While we are at it, do you know what the word “energy” means?

Can you demonstrate an example of (positive) radiant flux from a cooler body measured at a warmer one, since you claim this is a “real and measurable” phenomenon?

The fact that the Earth’s surface is not -18 C is not a consequence of “downwelling IR power”, because there isn’t any. Energy cannot flow that way. Instead, it is a consequence of gravitationally induced atmospheric thermal gradients combined with the effective radiating layer (i.e. altitude).

AlanJ
Reply to  stevekj
December 20, 2024 6:31 am

Flux is the rate of energy transfer per unit area; energy is the capacity to do work. All objects above 0K emit radiant energy, and that energy is emitted as a flux in all directions. Shall we return to the actual argument?

As for your request: a simple example is standing on the warm surface of the Earth with an infrared camera, which can detect radiation from the much colder Moon. This is direct evidence of radiant flux from a cooler object reaching a warmer one. The second law of thermodynamics isn’t violated because net heat transfer still flows from hot to cold, while individual radiant fluxes occur in both directions. Your disagreement with me seems to imply that you believe that radiant energy from cooler objects somehow “chooses” to avoid absorption by warmer objects, which is unphysical.

Reply to  AlanJ
December 23, 2024 1:27 pm

“Flux is the rate of energy transfer per unit area; energy is the capacity to do work. All objects above 0K emit radiant energy, and that energy is emitted as a flux”

You are very confused, Alan.

Your definition of “flux” here is the one that climate scientists use, yes. It is power per unit area, i.e. “energy flow”, and there’s nothing wrong with that definition.

You are also correct that all objects above 0 K emit radiant energy.

And you are correct that energy is the capacity to do work.

But then you go completely off the rails when you say “energy emitted as a flux”. Energy is emitted, yes, but power (flux) isn’t.

Remember, we just defined “energy” as “the capacity to do work”.

Work is therefore the expenditure of energy (for everyone except Jim, who has his own unique branch of bizarre Jim-physics, which he can’t explain, other than to say that it’s different from the physics the rest of us use, and anyone who doesn’t use his version is a clown).

And power is the rate of work.

So yes, objects above 0 K emit “the capacity to do work”.

But they do not emit “work“, hence they do not emit “power” (nor “flux”).

Let’s try this: in any other scenario, how does “energy” get converted into “power”? Does that happen automatically and all the time, or is there some other requirement that has to be met first? What is it? What does the word “equilibrium” mean to you, for instance?

AlanJ
Reply to  stevekj
December 24, 2024 4:34 am

This is quibbling over semantics. Replace “power” with “radiant exitance,” expressed in Watts per square meter.

All objects above 0K emit radiant energy, expressed as Watts per unit surface area. This radiant energy can be absorbed by other objects in view, no matter whether they are warmer or colder. The net energy loss from any object thus depends on the energy in Watts per square meter being received from other objects in its view.

The energy input from a colder object can never exceed the radiant exitance from a warmer object, but it can indeed reduce the net movement of energy away from the warmer object.

If I’m handing you $10 per second and you start handing me $1 per second back, my wealth is still going down, just at a rate of $9 per second now. No laws of physics or finance have been broken.

Reply to  AlanJ
December 24, 2024 6:32 am

“This is quibbling over semantics”

Alan, yes, the difference between “energy” and “power” is indeed “semantics”. That word itself means, simply, “meaning”. Naturally, “energy” and “power” have different meanings, which you can guess because they are different words; and you can’t simply substitute one for the other in the middle of a sentence and hope no-one notices.

You already explained to us, above, the different meanings of “energy” and “power”. But now you are trying to pretend they are the same thing. Why?

You wrote: “All objects above 0K emit radiant energy”

That is true.

“expressed as Watts”

No, Alan, energy is not expressed in Watts. What unit do we normally use to denominate energy? Remember, you defined “energy” for us as “the capacity to do work”, in case you forgot already.

AlanJ
Reply to  stevekj
December 24, 2024 6:40 am

The emission of radiant energy is expressed in Watts per square meter, the number of joules of energy passing through a square meter of surface area each second. You are trying to quibble over semantics when the common meaning is quite well understood by everyone in the discussion. I think you are doing this to avoid confronting the substance of what I’m saying.

All bodies emit energy, which can be absorbed by other nearby bodies, whether they are warmer or colder. If the emitted energy is absorbed by another body, the net energy being lost by the other body is reduced.

Reply to  AlanJ
December 25, 2024 7:18 am

“The emission of radiant energy is expressed in Watts”

No it isn’t. Remember, energy is defined as “the capacity to do work”. What unit do we use to denote this capacity?

“You are trying to quibble over semantics”

Defining the correct meanings of the words you are using is not “quibbling”.

“the common meaning is quite well understood by everyone”

The “common meaning” is simply wrong, and is not the one understood by any of the physicists in the discussion.

“I think you are doing this to avoid confronting the substance”

I am not the one who is trying to denote energy in Watts, am I?

Reply to  AlanJ
December 28, 2024 5:44 am

No response to my physics lesson, Alan? Just out of curiosity, what do you think the word “avoid” means? How about the word “hypocrite”?

Reply to  stevekj
December 20, 2024 2:21 pm

Jumping in here to thank you and AlanJ for your considered debate on the issue of ‘back radiation’.

Until very recently, I would have considered myself a ‘lukewarmer’ on the basis of various van Wijngaarden & Happer (WH) papers, but am beginning to think the whole radiative GHE narrative comes a cropper due to the near complete thermalization of IR photons by WV and CO2 within mere meters of the Earth’s surface.

I’ll have to charge up my old FLIR One and take some readings of the night sky. Speaking of which, I presume it works on the basis of the Stefan-Boltzmann law, but does that actually apply to gases, i.e., ‘non-condensed’ matter?

Reply to  Frank from NoVA
December 20, 2024 2:57 pm

Speaking of which, I presume it works on the basis of the Stefan-Boltzmann law, but does that actually apply to gases, i.e., ‘non-condensed’ matter?

Planck has a discussion of this in his book, THEORY OF HEAT RADIATION. Here is a piece of the text.

CHAPTER II: IDEAL MONATOMIC GASES

The essential parts of this calculation are already contained in the investigations of L. Boltzmann on the mechanical theory of heat; it will, however, be advisable to discuss this simple case in full, firstly to enable us to compare more readily the method of calculation and physical significance of mechanical entropy with that of radiation entropy, and secondly, what is more important, to set forth clearly the differences as compared with Boltzmann’s treatment, that is, to discuss the meaning of the universal constant k and of the finite region elements G. For this purpose the treatment of a special case is sufficient.

You might want to get a copy of his book. It is free at some places to download. I have a Kindle copy.

Reply to  Jim Gorman
December 20, 2024 3:44 pm

Thanks, Jim, but I have no idea what the good doctor is saying here. (My shortcoming, not his or yours). I do have this snip from Howard Hayden:

‘The spectrum of blackbody radiation was well known in the middle-to-late 1800s. The Stefan-Boltzmann radiation law tells us the total amount of radiation emitted, and Planck’s equation (1900) tells us the spectral distribution (how much IR at what photon energy). The atmosphere is not a blackbody, but solid materials and liquids are. That is, the earth’s surface is a blackbody radiator.’

http://www.sepp.org/science_papers/Climate%20Physics%203.pdf

Reply to  Frank from NoVA
December 20, 2024 4:54 pm

Frank, I mainly wanted to confirm your thoughts that gases need to be analyzed differently than solids and liquids.

The earth isn’t exactly a black body. Look at outgoing radiation graphs. There is a reason the distribution doesn’t match any of the Planck curves.

There is also the fact that a black body radiates the same as what is absorbed. The earth doesn’t do that. Hence the concentration on the radiation spectra of individual atoms and molecules.

Complications on top of complications.

Reply to  Frank from NoVA
December 23, 2024 1:06 pm

Hi Frank, glad to see you join in. I will be curious to see what your FLIR One tells you about the night sky. It will probably say “cold” 🙂

I am fairly sure the S-B law applies to all matter, whether solid, liquid, gaseous, or even plasma, but that latter one might be stretching things. Of course it also depends on the emissivity of the material in question, which varies quite a bit from one material to another.

Most low-cost IR cameras/thermometers use sensitive bolometers at room temperature to take pictures. These bolometers will respond to either positive or negative thermal gradients (meaning thermal energy is either entering the sensor or leaving it, depending on whether the target is hotter or colder) by changing their resistance in a positive or negative direction. The camera then adds the local temperature to the deltas calculated from the bolometer array to produce a picture. They’re very fascinating devices.
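A much-simplified sketch of that bolometer bookkeeping, with hypothetical numbers and an idealized blackbody inversion (real cameras are calibrated empirically):

```python
# Idealized sketch: recover a target temperature from the net flux at
# a room-temperature sensor. All values are hypothetical.
SIGMA = 5.670e-8  # W m^-2 K^-4

def target_temp(net_flux, t_sensor):
    """Invert sigma*(T_target^4 - T_sensor^4) = net_flux for T_target."""
    return (net_flux / SIGMA + t_sensor**4) ** 0.25

t_sensor = 295.0                   # K, camera at room temperature
for net in (100.0, 0.0, -150.0):   # W/m^2; negative = sensor losing heat
    print(f"net {net:+7.1f} W/m^2 -> target ~ {target_temp(net, t_sensor):.0f} K")

# A colder target shows up as a negative net flux at the sensor, i.e.
# the bolometer's "negative thermal gradient" described above.
```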

Reply to  stevekj
December 23, 2024 7:48 pm

Steve,

The FLIR One worked as expected on indoor objects, walls, pets, etc. Seemed less accurate outside on a very clear, cold evening (12F) – not sure if I was up against the instrument’s limits or if one needs to wait a sufficient amount of time for it to assume ambient temperature. In any event, was not inclined to hang around outside to do so, but a couple of quick shots of the night sky indicated -76F. Not exactly compelling evidence of a runaway greenhouse effect.

Reply to  Frank from NoVA
December 24, 2024 6:41 am

Thanks for the report, Frank, very interesting since I have not tried that particular experiment myself. -76F would be around -60 C and about 213 K. It’s not intuitively obvious what layer of the atmosphere that corresponds to; it could be any part of the troposphere, or even above it, averaged over various emission layers, and will depend greatly on the humidity and ambient surface temperatures. But it seems to be in the ballpark that I would expect, anyway. And I definitely agree that it is not a scary “greenhouse effect” measurement 🙂 We need more warming, not less! Brrr!

AlanJ
Reply to  stevekj
December 20, 2024 3:26 pm

Instead, this is a consequence of gravitationally induced atmospheric thermal gradients combined with the effective radiating layer (i.e. altitude).

The effective layer of emission must be close to the equilibrium temperature of 255K. Without an absorbing atmosphere, this layer would be the surface. There might still be a convective engine producing a lapse rate, but the surface would be frozen and temperatures at altitude colder still.

Reply to  AlanJ
December 20, 2024 5:23 pm

the surface would be frozen and temperatures at altitude colder still.

Really? 255K –> 0°F –> -18°C

Water would be frozen.

CO2 freezes at -78.5°C (-109.3°F), so no CO2 freezing.

https://flourishingplants.com/can-soil-freeze/

The soil freezes in response to the temperature, which varies with altitude. The freezing point of water is 0°C, but the soil is not pure water, and different soils have different freezing points, so it can’t be as simple as freezing at 0 degrees Celsius.

As a non-scientist with a non-science education, you make assertions that just aren’t true.

You love to make decisions based on averages. Tell us what the temperature of the surface would be in sunlight. What would it be at night?

Let’s assume the global temp is 15°C at 400 ppm CO2, and let’s assume 2xCO2 = 4°C.

200 ppm –> 11°C
100 ppm –> 7°C
50 ppm –> 3° C

When does it reach 0°C? (See the quick sketch below.)

We’ll be dead anyway so who cares.
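Formalizing the halving arithmetic above (all numbers are the comment’s stated assumptions, not measured values):

```python
import math

T0, C0, S = 15.0, 400.0, 4.0   # assumed: 15 C at 400 ppm, 4 C per doubling

def temp(c_ppm):
    """Temperature under the assumed logarithmic CO2 response."""
    return T0 + S * math.log2(c_ppm / C0)

for c in (400, 200, 100, 50):
    print(f"{c:3d} ppm -> {temp(c):4.0f} C")

c_zero = C0 * 2 ** (-T0 / S)   # solve temp(c) = 0
print(f"0 C is reached near {c_zero:.0f} ppm")   # about 30 ppm
```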

AlanJ
Reply to  Jim Gorman
December 20, 2024 7:28 pm

For a non-absorbing atmosphere, the concentration of CO2 is irrelevant. Emission to space will occur from the surface at every concentration, so the surface temperature in each scenario will be the effective emission temperature.

Reply to  AlanJ
December 21, 2024 5:24 am

so the surface temperature in each scenario will be the effective emission temperature.

The problem with your statement is that 255 K is an AVERAGE. It assumes the surface area of a flat earth that does not rotate. Emission actually occurs at many different temperatures based on a function with a variable having an exponent of 4, rotation of a sphere on a tilted axis, and orbital changes.

As usual, you quote an average, that is, a mean, without bothering to also quote the variance that mean has. You don’t bother to determine if the distribution is even normal or some other symmetric distribution. As far as you are concerned, it could be a Poisson distribution with a terribly non-symmetric shape.

If CO2 were the only absorbing gas, i.e., no water, the emission temperature would occur at the temperature of the surface with a small, small dip at the absorbing frequency of CO2. The emission height of CO2 might change, but that would be all. Perhaps you think emission through the atmospheric window occurs at the temperature of the non-absorbing gases.
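The averaging point above is easy to illustrate: two temperature distributions with the same mean emit differently, because emission goes as T^4. The values below are arbitrary examples:

```python
# Same mean temperature, different variance, different mean emission.
SIGMA = 5.670e-8  # W m^-2 K^-4

uniform  = [255.0, 255.0]   # K, zero variance
contrast = [205.0, 305.0]   # K, same 255 K mean, large variance

for label, temps in (("uniform", uniform), ("contrast", contrast)):
    mean_t = sum(temps) / len(temps)
    mean_e = sum(SIGMA * t**4 for t in temps) / len(temps)
    print(f"{label:8s}: mean T = {mean_t:.0f} K, mean emission = {mean_e:.0f} W/m^2")

# The high-variance case emits ~295 W/m^2 vs ~240 W/m^2, so a planet
# emitting 240 W/m^2 on average need not have a 255 K mean temperature.
```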

AlanJ
Reply to  Jim Gorman
December 21, 2024 9:17 am

The problem with your statement is that 255 it is an AVERAGE.

It is not a problem whatsoever. Some parts of the surface would be warmer, others colder, their average 255K. Just as now some parts of the surface are warmer, some colder, their average about 288K.

If CO2 were the only absorbing gas, you are correct that most emission would occur from near the surface, with the bands in which CO2 is a strong absorber emitting from a higher altitude. The surface would be colder than it is now, but not as cold as it would be with no absorbing gases at all.

Sometimes your comments are worded argumentatively even though you aren’t really disagreeing with me.

Reply to  AlanJ
December 21, 2024 11:37 am

It is not a problem whatsoever. Some parts of the surface would be warmer, others colder, their average 255K.

You just keep doing it. Quoting an average with no variance or uncertainty. If you want to deal in measurements, why don’t you learn about them?

You continually display no physical science training at all. If you want to deal in measurements, you MUST deal with distributions. The numbers you average have variance, like it or not.

You admit it when you say:

Just as now some parts of the surface are warmer, some colder, their average about 288K.

Tell us what the variance is.

AlanJ
Reply to  Jim Gorman
December 21, 2024 5:31 pm

The variance is quite irrelevant here since 255K is a value derived from first principles. It is the blackbody temperature of an object at the earth-sun distance in thermodynamic equilibrium with incoming sunlight. We are concerned with the theoretical basis of the GHE, necessitated by the fundamental laws of physics.

Reply to  AlanJ
December 21, 2024 5:58 pm

What bullshit.

Reply to  AlanJ
December 23, 2024 1:00 pm

Well, now you’re talking about a hypothetical world that no one has ever seen. You can certainly try to imagine how that might work, but one way or the other it will still have a lapse rate due to gravito-thermal effects regardless of whether large-scale convective cells develop. Anyway, that’s not Earth, and without water it would bear effectively no resemblance to Earth, so I’m not sure why you’re worried about it.

AlanJ
Reply to  stevekj
December 24, 2024 4:37 am

Yes, this cold planet might still have a lapse rate, but the surface temperature would be much colder than earth’s due to the absence of an absorbing atmosphere. The emission of energy to space would occur directly from the surface, and this would force the surface temperature to 255K.

Reply to  AlanJ
December 24, 2024 6:51 am

I can’t argue against that logic, but something about it still feels wrong. Your conclusion is that the presence of a non-IR-interactive atmosphere is irrelevant to a planet’s surface temperature, regardless of the thickness of the atmosphere, right? Did you take into account the thermal energy in the atmosphere itself, since maintaining gases in a gaseous state (including against gravity) requires energy (from the sun)?

The Moon’s average surface temperature, for example, works out to -6 C, i.e. 267 K or thereabouts. Why isn’t that number 255 K? Did you also take into account the difference between a blackbody and the actual emissivity of the Earth (or the Moon)? How about the rate of rotation and the resulting temporal averaging effect?

AlanJ
Reply to  stevekj
December 24, 2024 9:07 am

In a non-GHE world, you have to work backwards from the constraint of having incoming and outgoing energy balance at the surface. Other processes within the atmosphere simply act to move energy around; they cannot affect the equilibrium temperature.

What is the source for your average temperature estimate for the moon? Simply averaging the maximum and minimum temperatures yields a value of about 250 K, but there are parts of the lunar surface that never see sunlight and parts that are continually bathed in sunlight, so determining a meaningful average temperature is quite tricky.

Reply to  AlanJ
December 25, 2024 7:22 am

For the Moon I am getting a daytime high of about +121 C and a nighttime low of about -133 C, so a simple average gives me -6 C. Of course there are lots of complications, even if averaging temperatures were a good idea in the first place, which it probably isn’t, so you shouldn’t read too much into this number.

Reply to  stevekj
December 25, 2024 1:42 pm

so a simple average gives me -6 C.

You bet! And what is the variance of the temperature and, more importantly, the variance in the radiation intensity?

Reply to  Jim Gorman
December 27, 2024 8:00 am

The variance is quite high, for temperature, but I have very little data for the variation in “radiation intensity”, by which I assume you mean the temperature of the Sun, since there is nothing else on the Moon to affect that. It seems to be a pretty small variation, anyway.

Reply to  stevekj
December 20, 2024 6:50 am

What do you think the word “flux” means?

Tell you what, why don’t you start using some actual science definitions that have SI units attached.

For example, from “Technical Note: Understanding Radiance, Irradiance, and Radiant Flux”:

Radiance – The SI unit of radiance is watts per square meter per steradian [W/m2-sr].

Irradiance – Irradiance is the radiometry term for the power per unit area of electromagnetic radiation incident on a surface. The SI unit for irradiance is watts per square meter [W/m2].

Radiant Flux – Radiant flux is radiant energy per unit time, also called radiant power [W, mW or μW].

Reply to  Jim Gorman
December 23, 2024 12:55 pm

What is the point of this posting, Jim? It has no relevance to anything I told Alan. Remember, he is the one who is making up varying and contradictory definitions for “flux” as he goes along.

Reply to  Jim Gorman
December 23, 2024 1:29 pm

And as far as “actual science definitions”, Jim, can you define the word “work” for us, please?

Richard M
Reply to  stevekj
December 18, 2024 8:11 am

That was my point. They are directly measuring downwelling IR, which means the values they measure are accurate. As a result the methods/assumptions used by a radiometer are also accurate.

Reply to  Richard M
December 18, 2024 8:32 am

Richard, they are measuring accurate power values flowing from the atmosphere to a sensor at liquid nitrogen temperatures. That is very different from the power value you would get from the atmosphere to a surface at a normal Earth temperature, which would actually be negative (i.e. “upwelling” rather than “downwelling”). That is what we see with room-temperature pyrgeometers, for example.

Reply to  stevekj
December 18, 2024 10:11 am

The IR equipment we used in naval aircraft (FLIR) used liquid nitrogen as the coolant for the mirror inside the turret. It was fascinating what you could see with that equipment.

Reply to  mkelly
December 18, 2024 10:37 am

I can just imagine! Well, with some effort, that is – I can only base my imagination on what I see with room-temperature FLIR cameras, which is already very fascinating 🙂

Reply to  stevekj
December 19, 2024 5:32 am

Two can play the ad hominem game. I asked you three questions in a prior post. Are you too stupid to formulate a correct response?

What is the temperature range of an IR thermometer without a lens? Why?

Can the sensing end of a thermistor/thermopile be cooled lower than ambient just by allowing it to see a cooler object? That kinda violates heat going from hot to cold, right?

There is a reason to cool the sensing part to a very low temperature. Can you guess what it is?

Here are a couple more.

While you are at it, why don’t you show us the fundamental SI units for energy and power.

You state, “If you think energy can “move throughout the system” from a colder object to a warmer one, please demonstrate this with an experiment.”

What effect does that have on the second question above that I asked you?

Reply to  Jim Gorman
December 20, 2024 5:31 am

I didn’t see your question earlier, Jim, but I have answered it now, see above.

The SI units for energy and power are Joules and Watts, respectively. But I am not the one who doesn’t know the difference.

What do you think “work” means, Jim? Does it require the expenditure of energy, or not?

Reply to  Jim Gorman
December 23, 2024 12:53 pm

To answer your other irrelevant questions:

1) Lenses have no effect on the subject at hand, which is the direction of energy flow according to the 2nd Law of Thermodynamics. I have no idea why you want to bring lenses into this question; they are irrelevant.

2) I answered this question above

3) This question sounds like it is related to my description of the liquid nitrogen cooled AERI sensors. And I pointed out that the reason for cooling the sensor below the temperature of the target is to get a positive and accurate (high signal to noise ratio) power reading. Otherwise the reading will be zero or negative, with a low signal-to-noise characteristic, due to the small temperature differences involved, and that would be less helpful for the researchers.

Are you going to answer any of my questions? And then apologize for calling me a clown while I try to teach you some physics?


Richard M
Reply to  Andy May
December 17, 2024 5:55 pm

Andy, I agree, convection transfers a lot of energy. However, it is chaotic, unstructured. What gets the atmosphere back to average lapse rate temperatures? Radiation is always attempting to achieve an equilibrium based on the emissivity of atmospheric layers. It’s like a computer background process cleaning things up.

Reply to  Richard M
December 17, 2024 7:18 pm

“What gets the atmosphere back to average lapse rate temperatures?”

The gas laws, controlled by the gravity-based energy profiles.

Balloon data analysis shows these profiles are absolutely linear when comparing energy against molecular density in the troposphere.

Someone
Reply to  Richard M
December 18, 2024 8:28 am

So please explain why the atmosphere gets cooler as you rise in the troposphere if no energy is lost. Where did it go?

The source of heat is the ground or ocean absorbing Sun’s radiation. Air is transparent and practically does not absorb. Air is heated from the surface in the boundary layer. The efficiency of air heating from the surface depends on its density. The denser it is, the more efficiently heat is transferred to air.

So, first, the lower the air is, the closer it is to the heat source, and the warmer it is.
Second, the warm air rises and is cooled by adiabatic expansion.

This explains why it gets cooler as you rise in the troposphere without ever invoking any radiative cooling. Radiative considerations change fine details but do not determine the troposphere’s temperature drop with altitude. The latent heat of evaporated water has a much bigger effect on the lapse rate.

Richard M
Reply to  Someone
December 18, 2024 9:18 am

For every unit of air rising and cooling there is an equal amount falling and warming. There is no net effect.

You are right that the low atmosphere gets energy from the surface. This is where the dense air exists. That is why it warms the most. As you rise the air density drops and so it can’t hold as much energy. That is why it gets cooler.

However, energy must still get past the less dense air and eventually make it to space or the planet would overheat. The reason the energy is able to bypass lower density altitudes is that there are also fewer radiative gases at those altitudes.

Anthony Banton
Reply to  Andy May
December 18, 2024 12:29 am

“It is not, in fact at every location on the Earth it varies almost continuously all day, it also varies with humidity and temperature, and most importantly it is not a straight line and the curvature changes with the weather. As far as we know the global average lapse rate does not vary a lot. It is about 5, but that hides a lot of complexity under the hood.”

Andy,
Of course complexities lie under the hood – no one denies that but you end up throwing the baby out with the bath water with your argument.
It is one that turns up regularly here, uncertainties/complexities etc.

Do you disagree that the atmospheric LR must comply with -g/Cp?

Your argument boils down to: the atmosphere moves a lot and is turbulent, so that basic physics cannot apply.
Would you want the Earth’s atmosphere to be still for it to work? (Sarc).

Of course the “simple model” of obvious necessity ignores that, but the fact remains that the atmosphere must always return to -g/Cp.

Reply to  Anthony Banton
December 18, 2024 3:29 am

It is one that turns up regularly here, uncertainties/complexities etc.

What it means in practice is that how our atmosphere behaves is a question for philosophers rather than scientists, because science can’t model it from first principles.

Reply to  TimTheToolMan
December 19, 2024 4:02 am

And that is why the climate models simply suggest an assumed set of equations, by default narrow enough to make sense of it. You can use at least some first principles, but you cannot truly separate the signals, or prove correlation let alone causation. All that is left is a kind of guessed attribution system. The trouble is that they then claim a scientific connection as if those models represent a set of conclusions based on reliable equations.
That science often has discrepancies is not a problem for either science or scientists, but it clearly is for those who are claiming and pushing settled science based on a consensus. They won’t allow proper science to flourish and let the chips fall where they may. That is a giveaway.

Anthony Banton
Reply to  Anthony Banton
December 18, 2024 4:04 am

Further to the GHE theory of Held and Soden:

The level of effective LWIR transmission to space is at the height where the atmosphere’s temperature corresponds to ASR in equalling LWIR out.
Without any GHE on Earth this would be 255K.
The extra 18K is the back-radiated IR below the EEL (effective emission level), and isn’t seen from space.
As it stands this level lies just above that height, at a lower temp, and is therefore not keeping up with ASR.
This will continue, the EEL rising as it warms to 255K, until the EEI is equalised.

That there are turbulent causes that change the LR is a specious argument. Take the EEL at 280 ppm, with the height of the 255K level obviously variable from tropics to poles etc. The increase of CO2 to 420 ppm has consequently shifted that variability to a higher, cooler level.
The variability remains but the ETH has risen as an average.
That this cannot be seen is a mystery to me, beyond the usual motivations of most denizens.

Reply to  Anthony Banton
December 18, 2024 4:36 am

Without any GHE on Earth this would be 255K.

The extra 18K is the back-radiated IR below the EEL (effective emission level), and isn’t seen from space.

Here is what NASA says: What is the greenhouse effect? – NASA Science

Scientists have determined that carbon dioxide plays a crucial role in maintaining the stability of Earth’s atmosphere. If carbon dioxide were removed, the terrestrial greenhouse effect would collapse, and Earth’s surface temperature would drop significantly, by approximately 33°C (59°F).

Where does your 18°C come from? 255 + 18 = 273. That would mean the earth would be pretty much an ice ball. Show your source for the 18°C!

The extra 18K is the back-radiated IR below the EEL (effective emission level), and isn’t seen from space.

Sure it is seen from space. Do you think the earth’s surface reradiates the same frequency it absorbs? If the earth radiates with a Planck curve, most of the absorbed “back radiation” would go out through the atmospheric window as the heat is radiated from the surface. That makes “back radiation” a quickly decreasing series.

I know you have swallowed the “CO2 trapping heat” scenario hook, line AND sinker but it violates the thermodynamic law that says heat flows from hot to cold.

You need to study Planck’s Theory of Heat Radiation closely.

AlanJ
Reply to  Jim Gorman
December 18, 2024 8:26 am

Sure it is seen from space. Do you think the earth’s surface reradiates the same frequency it absorbs? If the earth radiates with a Planck curve, most of the absorbed “back radiation” would go out through the atmospheric window as the heat is radiated from the surface. That makes “back radiation” a quickly decreasing series.

The earth has an effective radiating temperature of 255K; it must necessarily be “seen” from space as an object at 255K, because that is the temperature necessary to balance absorbed solar radiation.

You are correct that the “back radiation” is a diminishing series; this is why the planet doesn’t simply heat up endlessly given an absorbing atmosphere.
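That diminishing series can be sketched numerically; the absorbed fraction and the 50/50 up/down split below are assumptions for illustration only:

```python
# Toy bookkeeping for the diminishing "back radiation" series.
a = 0.8        # assumed fraction of surface IR absorbed by the atmosphere
r = a / 2      # of that, half is assumed returned downward each round trip

term, total = 100.0, 0.0   # start with 100 W/m^2 of surface emission
for _ in range(30):        # 30 terms is effectively the infinite sum
    total += term
    term *= r
print(f"series total: {total:.1f} W/m^2")
print(f"closed form : {100.0 / (1 - r):.1f} W/m^2")  # geometric sum 1/(1-r)

# The series converges (166.7 W/m^2 here), which is the point both
# comments agree on: absorption does not heat the surface endlessly.
```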

Reply to  Anthony Banton
December 18, 2024 4:21 am

Of course complexities lie under the hood – no one denies that but you end up throwing the baby out with the bath water with your argument.

You do realize that your argument here also applies to the Global Average Temperature, right?

As you say, “Of course the “simple model” of obvious necessity ignores that …”,

Reply to  Andy May
December 18, 2024 7:00 am

The post says: “Greenhouse gases delay cooling, which is why they are important.”

Does nitrogen delay cooling?

Sparta Nova 4
Reply to  mkelly
December 18, 2024 9:56 am

Yes.

Reply to  Richard M
December 17, 2024 1:56 pm

“While I agree with much of what Andy states, he also repeats several misconceptions that are often repeated by skeptics.”

Including the 1750 and 1850 sh!te. Hello, CO2 didn’t start increasing appreciably until around the 1940s. Don’t let the climate criminals and associated toadies do that. Why do you do that?

Richard M
Reply to  philincalifornia
December 17, 2024 3:24 pm

I don’t. CO2 concentration is irrelevant once low atmosphere saturation is achieved. Of course, Earth has always had plenty of CO2 to accomplish just that.

Reply to  Richard M
December 17, 2024 11:36 pm

Yes, sorry. I was probably conflating all the posts and replies.

My comment still stands though. When a climate liar or climate crackpot talks about 1850, don’t fkn fall for it. They want to erase the 1850 to 1950 baseline, which is not flat.

Reply to  philincalifornia
December 18, 2024 3:43 am

Yes, the climate liars and crackpots want us to believe today is the warmest period in human history, but it is not. Written weather history shows it was just as warm in the recent past as it is today.

That’s why the climate liars and crackpots try to erase the written, historic temperature record with their bogus, bastardized, computer-generated Hockey Stick global “temperature” chart.

The Hockey Stick Chart is the BIG LIE that climate liars and climate crackpots tell.

The temperature baseline is definitely not flat, it is cyclical with equal high points and equal low points since the end of the Little Ice Age. The bogus Hockey Stick chart erases all that and substitutes a “hotter and hotter and hotter” temperature profile in its place.

We can thank one of the primary climate liars, Phil Jones, for the bogus Hockey Stick profile. The same Phil Jones that refuses to allow others to view the data he used to create the bogus Hockey Stick temperature profile. Phil said: “Why should I give you the data, when all you want to do is find fault with it?”

This no-data Hockey Stick chart is what the climate liars and crackpots base all their claims about CO2 and Earth’s temperatures on. Without a Hockey Stick chart, the climate liars and crackpots would have nothing to talk about or point to.

How much damage has Phil Jones’ BIG LIE caused in the world? It’s about to drive Germany and the UK into bankruptcy. How many TRILLIONS of dollars have been wasted by people who think the Hockey Stick temperature profile is real?

Look what you have done, Phil!

What did Phil do about the “land blip” and the “ocean blip”? He created a Hockey Stick chart that erased those “blips”.

And today, every temperature chart uses Phil Jones’ original bogus data to create their own version of Phil’s Hockey Stick chart.

Such a BIG LIE.

And this article here, and the comments that followed demonstrate that CO2 Climate Science is definitely NOT settled.

And there is still no evidence that CO2 is anything other than a benign gas, essential for life on Earth. There is no evidence that CO2 levels and temperatures are connected other than the bogus Hockey Stick chart, and of course, that’s no evidence at all, it’s a BIG LIE.

Anthony Banton
Reply to  Richard M
December 18, 2024 5:03 am

More misunderstanding of how the GHE works:

You need to consider path-length – the length of the path of the attenuating substance.
In this case GHGs.
There is something called the Beer-Lambert Law:

https://en.wikipedia.org/wiki/Beer%E2%80%93Lambert_law

“The Beer–Bouguer–Lambert (BBL) extinction law is an empirical relationship describing the attenuation in intensity of a radiation beam passing through a macroscopically homogenous medium with which it interacts. Formally, it states that the intensity of radiation decays exponentially in the absorbance of the medium, and that said absorbance is proportional to the length of beam passing through the medium, the concentration of interacting matter along that path, and a constant representing said matter’s propensity to interact.”
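The quoted law is a one-liner; here is a sketch with made-up values for the absorption coefficient, concentration, and path length:

```python
import math

# Beer-Lambert: transmitted fraction = exp(-k * c * L).
def transmitted(k, c, length):
    return math.exp(-k * c * length)

k, c = 0.5, 1.0                       # arbitrary illustrative units
for length in (0.1, 1.0, 5.0, 10.0):
    print(f"path {length:5.1f} -> {transmitted(k, c, length):6.1%} gets through")

# Doubling the concentration or the path length attenuates the beam
# identically, which is the "path-length" point of the analogy below.
```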

An analogy:
Shine a torch beam through a narrow band of low level fog (say) at a person on the other side.
Chances are the beam will penetrate and illuminate that person.
Now put that person a mile away in a widespread fog and shine the torch beam in his direction (of course assuming the fog particles have the same concentration).
You guessed it: he will not see it.
Path-length of the beam through the “fog”.
Same thing in the atmosphere.
That back-radiation at the surface comes from the lower boundary layer is hiding the bigger picture. Above that is another layer, and another layer, and another etc, until the EEL is reached, at which point the GHG is sufficiently thin to allow LWIR to escape to space.
If your idea were the case then above the surface layer there would be direct emission to space – at a high atmospheric temp, which therefore would not be much weaker at radiating the energy.
So that cannot happen – you cannot at the same time have back-radiation and no LWIR transmission to space from a cold layer that corresponds to ASR equals OLR (255K).
It’s happening now from a layer a little colder than that.

Richard M
Reply to  Anthony Banton
December 18, 2024 7:32 am

I guess you are in complete denial of CO2 IR emission. Everything you stated applies ONLY to absorption of radiation. It ignores emission.

Let’s use your analogy and use CO2 as the fog. It is not only absorbing the energy but also emitting energy towards the target. Don’t you think that makes a difference?

In the case of Earth’s atmosphere, CO2 is both the torch and the fog. CO2 is emitting energy from low in the atmosphere which is being absorbed by CO2 higher in the atmosphere. The CO2 at each higher layer that is absorbing energy is also emitting CO2 IR upwards.

At all upward layers of the atmosphere absorption and emission are occurring at consistently reduced rates because of the density reduction and cooling as you rise. That means each higher layer is unable to absorb all the IR coming from just the adjacent lower layer, let alone other layers below it. Hence, IR from every layer makes its way to space. This is why each layer is colder.

As you can see, your entire picture is false. Because emissions are occurring at all layers, the average emission temperature is the average of all the layers, which only changes if you add more energy.
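The layer bookkeeping being argued here can be made concrete with a toy gray-layer model; the temperatures, layer count, and per-layer emissivity below are illustrative assumptions only:

```python
# Toy N-layer gray atmosphere: each layer absorbs/emits a fraction EPS,
# and whatever a layer emits upward is attenuated by the layers above.
SIGMA = 5.670e-8
EPS = 0.3                            # assumed emissivity per layer
temps = [288, 270, 250, 230, 210]    # K: surface, then 4 layers upward

surface, layers = temps[0], temps[1:]
n = len(layers)

# Surface emission, attenuated by all n layers above it:
to_space = [SIGMA * surface**4 * (1 - EPS) ** n]
# Layer i (counted from the bottom) has n-1-i layers above it:
for i, t in enumerate(layers):
    to_space.append(EPS * SIGMA * t**4 * (1 - EPS) ** (n - 1 - i))

labels = ["surface"] + [f"layer {i+1}" for i in range(n)]
for src, f in zip(labels, to_space):
    print(f"{src:8s} contributes {f:6.1f} W/m^2 to space")
print(f"total OLR: {sum(to_space):.1f} W/m^2")

# Every layer contributes something to space, as argued above; how much
# depends on its temperature and on how much sits above it.
```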

Reply to  Richard M
December 18, 2024 8:51 pm

That means each higher layer is unable to absorb all the IR coming from just the adjacent lower layer let alone other layers below it.

If you were to consider the layers to be equally wide, then this would effectively be true. But it just means that the higher layer capable of absorbing all of the radiation from below it is wider than the one below it.

Hence IR from the lower layers doesn’t make it to space but it takes a wider layer above it to catch it.

On average it does make it to space at the effective radiative level.

Richard M
Reply to  TimTheToolMan
December 20, 2024 1:53 pm

How can “a layer above it catch it” when that layer can’t even catch all the radiation from the layer immediately below? The answer is … it can’t. This is why radiation is emitted to space from all atmospheric layers.

And, when you look at this in even more detail, you end up seeing that the amount of energy radiated to space is proportional to the change in density (aka gravity). Nowhere is the concentration of a well mixed gas important (beyond a minimal level).

When you average out all these layer-driven emissions you can compute an effective emission height. But since these emissions are independent of CO2 concentration, they cannot change.

Reply to  Anthony Banton
December 18, 2024 11:20 pm

What a pile of mental masturbatory crap.

December 17, 2024 12:59 pm

Have a look at the daily OLR from tropical warm pools. You will find values as low as 150W/m^2. That is not being released from “greenhouse gasses”. It is being released from micron size ice particles bunched up high in the atmosphere and often not visible as cloud. Those same ice particles are reflecting a significant portion of the solar EMR so it is not being thermalised within the climate system.

Ice in the atmosphere, on land and on water controls Earth’s energy balance. The basic error is the silly notion of “greenhouse gasses” playing a role beyond the ability of water to phase change.

The cloud reduction is not universal. Taking a look spatially across latitudes, clouds have increased over a small region of Antarctica and a small band just north of the Equator.

Also the heat content in the oceans is greatest at 45S and yet the northern hemisphere is warming the most. Any “greenhouse effect” has to be able to explain these observations.

Also note that most ocean heat is being retained where the net radiation is negative.

The solar intensity across the northern land masses has been increasing for 500 years. Some permanent land ice has melted so that land is taking in more heat and the NH oceans are concurrently warming up. The last time Earth was in a similar state, with low land ice and solar intensity just starting its northward progression, was 775ka.

Reply to  Andy May
December 17, 2024 3:32 pm

Anyone working seriously on climate modelling needs to know a great deal about ice wherever it forms. There is some good work being done on ice but it does not get much attention because it does not align with CO2 being the climate control knob.

Consider the terminal velocity of the 10 micron ice particles that make up cirrus cloud. They are not going to descend far during the cyclic instability observed over warm pools.

https://www.ecmwf.int/sites/default/files/elibrary/2013/9882-ice-cloud-particle-terminal-velocity-parameterizations-temperatures-0-85c.pdf

Now look at how little ice is required to dominate transmittance of OLR through the atmosphere:
https://www.montana.edu/jshaw/documents/Physics_IRCloudImaging_EJP2013.pdf

The notion of “greenhouse gasses” is so unscientific it is nothing less than utter carp.

The big change coming with ice is its accumulation on land. Its persistence is underlined by it still hanging around on a few tropical peaks despite the warming surface.

December 17, 2024 1:11 pm

Until they can get the climate sensitivity to several decimal places, the “science” is hardly settled.

Reply to  Joseph Zorzin
December 17, 2024 3:03 pm

Right now it is less than one decimal.

Chris Hanley
December 17, 2024 1:16 pm

The IPCC and the climate “consensus” believe that essentially all warming since 1750 is due to man’s emissions of CO2

And 1750 was deep in the Little Ice Age, the coldest period in the past ten thousand years.

abolition man
Reply to  Chris Hanley
December 17, 2024 1:40 pm

That must be the Climate Utopia of which they whinge! Perhaps we should call it the 1750 Project!

December 17, 2024 1:35 pm

“This suggests that the North Atlantic sea surface temperatures are related to the frequency of Los Ninos and Las Ninas in some fashion (An, Wu, Zhou, & Liu, 2021), or they both follow some other influence, like solar variability.”

Indirect solar forcing. The AMO was the coldest during the strongest solar wind states, and positive North Atlantic Oscillation regimes, in the mid-1970s, mid-1980s, and early 1990s. The AMO warmed during the negative NAO regimes of 1995-1999 and 2005-2012, during weaker solar wind states. The warm AMO phase reduces low cloud cover, so it is an additional negative feedback to lower indirect solar forcing. Also the upper ocean heat content has warmed mostly from 1995, probably due to the reduction in low cloud cover.

Notice how the AMO anomalies are always colder around sunspot minimum in a cold AMO phase, and never the coldest around sunspot minimum in a warm AMO phase. That’s because the major lows in the solar wind shifted from around the solar maximums (1969, 1979-82) to around the minimums (1997, 2009).

https://www.woodfortrees.org/graph/esrl-amo/from:1880/mean:13/plot/sidc-ssn/from:1880/normalise


[Graphic: solar wind, temperature, and pressure]
dh-mtl
December 17, 2024 2:06 pm

Andy,

Excellent paper. Thanks.

I would like to make a couple of comments:

You say, with respect to Figure 2, that the ‘qualitative pattern in OLR matches each solar cycle in shape. This suggests that the solar cycles contribute to the OLR pattern.’ In the section on ‘Clouds and ECS’ you raise the concept of ‘Energy Residence Time’ (ERT). I would like to suggest that there is a tremendous difference between the ERT of sunlight which is incident upon land (short, order of months) and sunlight which is incident upon the oceans (long, order of decades). I would further like to suggest that the ‘qualitative pattern’ of Figure 2 is a result of the sunlight with a short ERT, i.e. which is incident upon land. But, as you say in the section on EEI, ‘most solar energy absorbed on Earth’s surface is stored in the oceans (~90%)’. In such a case, the ‘qualitative pattern’ of Figure 2 would underestimate the total effect of solar cycles on the earth’s climate by an order of magnitude.

I believe that a significant source of error in climate models, that you have not addressed, is the extreme sensitivity of evaporative cooling to ocean surface temperatures higher than 25 C. This extreme sensitivity is not a function of the temperature sensitivity of the equilibrium vapor pressure of water per se, but of the self-reinforcing effect of wind and waves that drives rapidly increasing rates of evaporation at ocean surface temperatures higher than 25C. This extreme temperature sensitivity of water evaporation is shown in Figure 5 of Willis Eschenbach’s paper ‘Rainergy’ (WUWT, May 21, 2024). The result of this extreme temperature sensitivity is shown in David Shelley’s paper ‘The Geological Record of Climate Change …’ (WUWT, Nov. 2, 2024). In this paper Shelley presents a graph (from Scotese et al.) that shows that tropical temperatures (+/- 20 degrees latitude) have been ‘anchored’ at approximately 26 C across geological time. All of the temperature variations occur at higher latitudes, the higher the latitude the greater the temperature variation.

I have not seen in any discussion on climate modeling that this extreme sensitivity of evaporative cooling to water temperature in tropical oceans is even recognized, let alone accounted for. Without properly characterizing the most important climate cooling process, water evaporation in the tropics, any climate model will be of no value whatsoever.

Bob
December 17, 2024 3:12 pm

Very nice Andy, this is important information.

December 17, 2024 3:46 pm

Harold the Organic Chemist Says:

ATTN: Andy and Everyone
RE: CO2 Does Not Cause Warming Of Air

Shown in the graphic (see below) are plots of temperatures at the Furnace Creek weather station in Death Valley from 1922 to 2001. In 1922, the concentration of CO2 was 303 ppmv (0.595 g/cu. m. of air), and by 2001, it had increased to 371 ppmv (0.729 g/cu. m. of air), but there was no corresponding increase in the temperature of air. The reason there was no significant increase in temperature is quite simple: there is too little CO2 in the air to absorb any out-going infrared light from the desert surface.

The empirical temperature data from this remote and arid desert falsifies the claim by the IPCC and the coterie of collaborating and unscrupulous climate scientists (aka, the welfare queens in white coats) that CO2 causes warming of air, and hence by extension “global warming”.

The claim by the IPCC since 1988 that CO2 produced from the use of fossil fuels causes global warming is a fabrication and a deliberate lie. The purpose of this lie is to provide the UN the justification to distribute funds, via the UNFCCC and the UN COP, from the rich countries to the poor countries to help them cope with “global warming” and “climate change”. At the recent COP29 conference, the poor countries came clamoring not for billions, but for trillions, in funds.

NB: The graphic was obtained from the late John Daly’s website, “Still Waiting for Greenhouse”, available at: http://www.John-Daly.com. From the home page scroll down to the end and click on “Station Temperature Data”. On the “World Map”, click on “NA”, scroll down and click on “Pacific”. Finally, scroll down and click on “Death Valley”.
He found many weather stations that showed no warming up to ca. 2002. Go to OZ and check out the graphics for Alice Springs and Brisbane.

pblase
December 17, 2024 4:17 pm

“Thus, they break clouds up into types, some are net warming, and some are net cooling, with the overall change being positive.”
See, of course, Eschenbach’s recent articles on tropical thunderstorms. Do these guys bother studying weather?

December 18, 2024 2:49 am

Excellent article from Andy pointing out the many non sequiturs regarding CO2 and its effect on various elements and supposed mechanisms.
And it is also very apparent that in order for a model to work as assumed you need to narrow the bandwidth and create a somewhat linear function so as to make sense of the system.
And, again, fluid dynamics rather muddies the, ahem, waters, but it is essential to our planet.
Maybe everybody should simply separate H2O from greenhouse gases. Alarmists are under the flawed assumption that CO2 drives the H2O element in climate models, as the IPCC states (I think).
OLR increasing is such a strong argument against Wannabe et al. Whenever a pro-CO2 climate forcer is arguing his or her point, he or she always starts with Arrhenius and Wannabe, as if we are children to be schooled about ‘the science’. That does not work for people who know their stuff. I have yet to come across a climate scientist who actually admits that at least some of the assumptions might be at least a wee bit shaky. They inevitably refer to the ‘consensus’ idea. That is, of course, NOT an argument.

December 18, 2024 4:04 am
  1. If the lower atmosphere is opaque to outgoing IR from the surface, then it is also opaque to back radiation, meaning back radiation can’t warm the surface because it gets intercepted by the atmosphere.
  2. The assumption that OLR from CO2 goes down because it gets weaker due to being at a cooler temperature ignores the fact that increased CO2 provides *more* CO2 at altitude, thus at least offsetting part of the loss from it being cooler. I see this covered nowhere in the descriptions of the models.
  3. As the surface temperature goes up, the OLR increases by some exponent value. Stefan-Boltzmann would suggest T^4, but it’s probably less than this (see the sketch after this list). This modulates the maximum temperatures at the surface and in the atmosphere. At some point the biosphere will increase OLR enough to cap the maximum temperature. I don’t see where this is covered anywhere, especially in the climate models. This is a NEGATIVE feedback that just seems to always get overlooked. It’s why the increase in the “global average temperature” is driven mostly by minimum temperatures going up, yet the negative/positive impacts of this always seem to get ignored as well.
  4. None of the models handle measurement uncertainty properly. All that climate science, including the models, can actually say is “WE SIMPLY DON’T KNOW” what is happening and why. We can’t measure the impacting factors with sufficient resolution to actually determine anything. All we really have is guesses, and those guesses don’t seem to match reality: they all seem to run hot.
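
On item 3, the T^4 scaling is easy to quantify; a minimal sketch of the implied sensitivity, assuming an effective emission temperature of 255 K (real feedback values are diagnosed from spectra, not from this toy formula):

    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4
    for T in (255.0, 288.0):
        olr = SIGMA * T**4          # blackbody flux at temperature T
        dolr = 4 * SIGMA * T**3     # its slope: extra flux per kelvin
        print(f"T = {T:.0f} K: flux = {olr:5.1f} W/m^2, slope = {dolr:.2f} W/m^2 per K")

At 255 K the slope is about 3.8 W/m^2 per kelvin, the textbook Planck response; whether the effective exponent is really 4 depends on where in the spectrum the emission to space originates.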
Reply to  Andy May
December 18, 2024 7:46 am
  1. The new emission altitude can only be determined by including the fact that increased CO2 in the atmosphere means the amount of CO2 at the emission altitude also increases. Thus more radiation is emitted to space than an assumption of constant CO2 at altitude allows for. This modulates the emission height. The big question is whether our present instrumentation has sufficient resolution to determine the actual emission height. If not, then we are back to guesses as to what is happening.
Sparta Nova 4
Reply to  Andy May
December 18, 2024 10:06 am

Thank you for putting “thermalized” in quotations. That word has been hijacked, repurposed, and redefined.

E. Schaffer
Reply to  Tim Gorman
December 18, 2024 6:50 am

Of course there is “back radiation” from the atmosphere onto the surface. It is about the same as the amount of radiation emitted from the surface into the atmosphere, making it a zero-sum game.

You are right in one way: there is a lot more “back radiation” throughout the atmosphere getting absorbed within the atmosphere. That is at least some 50,000 W/m2, as demonstrated below. Once you are aware of it, it should be easy to understand how this is not doing anything.

https://greenhousedefect.com/basic-greenhouse-defects/a-little-thing-about-back-radiation-that-people-forget

Quondam
December 18, 2024 4:16 am

The Adiabatic Lapse Rate discussed, g/Cp, is dimensionally K/km and a function of equilibrium parameters. It has two interpretations:
1. Kelvin (1862): It is the equilibrium thermal gradient for fluids in a gravitational field.

2. Landau (1940?): It is a critical gradient triggering convection in a fluid at equilibrium, a ‘tipping’ point, or a condition for convection to then abruptly cease. A similar, well-studied process is Rayleigh–Bénard convection.
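
For reference, the quantity Quondam cites works out as follows; a minimal sketch with standard dry-air values:

    g = 9.81     # m/s^2, gravitational acceleration
    cp = 1004.0  # J/(kg K), specific heat of dry air at constant pressure
    print(f"g/Cp = {g / cp * 1000:.2f} K/km")  # ~9.77 K/km, the dry adiabat

The observed tropospheric average (~5 to 6.5 K/km) is lower because moist convection releases latent heat as air rises.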

E. Schaffer
December 18, 2024 6:37 am

I think it is great when someone reads the literature and then tries to reflect critically on it. That is what has been missing on the “critical side” all along.

However, it will not help a lot, if you do not read what is written, but instead assume things that are explicitly NOT written. HS00 give a good representation of the GHE, the chart is easy to understand, and most of all there is NO REFERENCE to “BACK RADIATION”. That is great, because it has nothing to do with it.

“Figure 1 illustrates what doubling CO2 does with a fixed and linear lapse rate in Held’s simplified model. The additional CO2 warms the surface with additional back-radiation, then the assumption of a fixed lapse rate kicks in, forcing the emission level up to a higher altitude, which means it has a lower temperature, causing it to emit less to space.”

No, just read what the paper says!

“The surface temperature is then simply Ts = Te + gamma*Ze, where gamma is the lapse rate. From this simple perspective, it is the changes in Ze, as well as in the absorbed solar flux and possibly in gamma, that we need to predict when we perturb the climate. As infrared absorbers increase in concentration, Ze increases, and Ts increases proportionally if gamma and S remain unchanged.”
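
To make the quoted one-line model concrete, a minimal sketch with purely illustrative numbers (Te, gamma, and Ze below are assumptions, not values from the paper):

    Te = 255.0    # K, effective emission temperature
    gamma = 6.5   # K/km, assumed lapse rate
    Ze = 5.0      # km, assumed effective emission height
    Ts = Te + gamma * Ze
    print(f"Ts = {Te} + {gamma} * {Ze} = {Ts} K")  # 287.5 K, near the observed mean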

Reply to  E. Schaffer
December 18, 2024 7:59 am

“As infrared absorbers increase in concentration, Ze increases, and Ts increases proportionally if gamma and S remain unchanged”

Why does Ze increase? I don’t see in the paper where this assumption is justified. The paper appears to assume that as CO2 goes up the emission height Ze goes up proportionally. It seems to be based on the assumption that the amount of radiating CO2 remains constant, which doesn’t seem to match the fact that *more* CO2 is involved. Even if Ze does go up just how much does it go up? Can we actually measure it with present instrumentation?
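
For what it is worth, the textbook geometric argument for a rising Ze can be sketched. The emission level sits roughly where the absorber column above it reaches a fixed optical depth; in an exponential atmosphere that column scales as C*exp(-z/H), so doubling C lifts the level by H*ln(2). This gray-gas toy (scale height H assumed) greatly overstates the shift compared with band-resolved radiative transfer, which averages out to the ~150 m quoted below:

    import math

    H = 8.0  # km, assumed atmospheric scale height
    # solve C*exp(-z/H) = const for z after doubling C
    dZe = H * math.log(2.0)
    print(f"Gray-gas toy: doubling the absorber lifts the emission level "
          f"by {dZe:.1f} km (band-averaged CO2 result is ~0.15 km)")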

E. Schaffer
Reply to  Tim Gorman
December 18, 2024 10:53 am

Just the next paragraph names it:

“The increase in opacity due to a doubling of CO2 causes Ze to rise by ~150 meters. This results in a reduction in the effective temperature of the emission across the tropopause by ≈ (6.5 K/km) × (150 m) ≈ 1 K, which converts to 4 W/m2 using the Stefan-Boltzmann law.”

Once you understand this, we can start discussing what is wrong with it. Before that, however, it will not make any sense.
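
The arithmetic in the quoted passage can be checked directly; a minimal sketch, assuming an effective emission temperature of 255 K:

    SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
    Te = 255.0         # K, assumed effective emission temperature
    dTe = 6.5 * 0.150  # (K/km) * km = 0.975 K, i.e. ~1 K
    dF = 4 * SIGMA * Te**3 * dTe  # linearized Stefan-Boltzmann response
    print(f"dTe = {dTe:.2f} K, dF = {dF:.1f} W/m^2")  # ~1.0 K and ~3.7 W/m^2

which is consistent with the ~1 K and ~4 W/m2 in the quotation.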

Reply to  E. Schaffer
December 18, 2024 1:34 pm

This is just more of the “more CO2 causes Ze to rise” assumption. There is no justification for this assumption in either statement.

The amount of radiation is basically “number of CO2 molecules” times “radiation intensity”. If the amount of CO2 in the atmosphere goes up then does the “number of CO2 molecules” go up as well?

“Temperatures must increase by ≈1 K to bring the system back to an equilibrium between the absorbed solar flux and the infrared flux escaping to space (Figure 1).”

Why? The number of CO2 molecules going up will increase the amount of radiation without a change in temperature. I see no consideration of this in the article. 2x CO2 *will* generate more radiation from a volume of gas.

E. Schaffer
Reply to  Tim Gorman
December 18, 2024 3:51 pm

More CO2 will indeed mean more back-and-forth radiation within the atmosphere. Rightly, no one cares about it; it does not matter. What matters is in the text.

Reply to  E. Schaffer
December 19, 2024 3:00 am

What matters *is* the text – and the unjustified assumptions made in the text! Assumptions which you haven’t justified yet.

It’s not just more back and forth radiation, it also includes the radiation that escapes to space.

Brock
December 18, 2024 8:03 am

The 11-year solar luminosity variation simulates the deposition and removal of 15 years of CO2 (~0.3 W/m2). There is a clear 11-year cycle in the downwelling radiation (Hansen found it), but no such signal exists in the EEI. Fifteen years of CO2 emissions does not make a detectable difference in the warming rate. The IPCC claims the warming effects of CO2 are amplified, but the satellite data make it quite clear the effects are strongly suppressed. Run a Fourier transform on the EEI and see if you can find an 11-year cycle; it doesn’t appear to be there, but the IPCC claims it must be.
It’s also worth noting that, once you subtract out the energy plants use to grow, the EEI was zero about a decade or so ago.
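
Brock’s proposed test is straightforward to sketch in Python; the EEI series below is random placeholder data, so a real monthly EEI record (e.g., CERES-era) would have to be substituted before drawing any conclusion:

    import numpy as np

    def power_vs_period(monthly_series):
        """Return (periods in years, power spectrum) of a monthly series."""
        x = np.asarray(monthly_series, dtype=float)
        x = x - x.mean()                        # remove the mean offset
        power = np.abs(np.fft.rfft(x)) ** 2     # power spectrum
        freq = np.fft.rfftfreq(len(x), d=1/12)  # cycles per year
        with np.errstate(divide="ignore"):
            periods = 1.0 / freq                # years; index 0 is infinite
        return periods, power

    rng = np.random.default_rng(0)
    fake_eei = rng.normal(0.8, 0.3, size=24 * 12)  # 24 years of fake monthly EEI
    periods, power = power_vs_period(fake_eei)
    band = (periods > 9) & (periods < 13)          # window around 11 years
    print("Power near an 11-year period:", power[band].sum())

A real test would also detrend the series first, since the secular rise in EEI would otherwise leak power into the low-frequency bins.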