The Problem with Climate Models

“We know there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know”  — Donald Rumsfeld

Ed Zuiderwijk, PhD

An Observation

There is something strange about climate models: they don’t converge. What I mean by that I will explain on the basis of historical determinations of what we now call the ‘Equilibrium Climate Sensitivity’ (ECS), also called the ‘Charney Sensitivity’ (ref 1), defined as the increase in temperature at the bottom of the Earth’s atmosphere when the CO2 content is doubled (after all feedbacks have worked themselves through). The early models by Plass (2), Manabe & co-workers (3) and Rowntree & Walker (4) in the 1950s, 60s and 70s gave ECS values from 2C to more than 4C. Over the past decades, these models have grown into a collection of more than 30 climate models brought together in the CMIP6 ensemble that forms the basis for the upcoming AR6 (‘6th Assessment Report’) of the IPCC. However, the ECS values still cover the interval 1.8C to 5.6C, a factor of three spread in results. So after some four decades of development, climate models have still not converged to a ‘standard climate model’ with an unambiguous ECS value; rather the opposite is the case.

What that means becomes clear when we consider what it would mean if, for instance, the astrophysicists found themselves in a similar situation with their stellar models. The analytical polytropic description of the early 20th century gave way years ago to complex numerical models that enabled the study of stellar evolution – driven by changing internal composition and the associated changes in energy generation and opacity – and which already in 1970, when I took my first steps in the subject, offered a reasonable explanation of, for example, the Hertzsprung-Russell diagram of star populations in star clusters (5). Although always subject to improvement, those models can be said to have converged to what could be called a canonical star model. The different computer codes for calculating stellar evolution, developed by groups in various countries, yield the same results for the same evolutionary phases, which also agree well with the observations. Such convergence is a hallmark of progress in the insights on which the models are based, through advancement of understanding of the underlying physics and testing against reality, and is manifest in many of the sciences and engineering disciplines where models are used.

If the astrophysicists were in the same situation as the climate model makers, they would still be working with, for example, a series of solar models that predict a value of X for the surface temperature, give or take a few thousand degrees. Or, in an engineering application, a new aircraft design would need a wing area of Y, but it could also be 3Y. You don’t have to be a genius to understand that such models are not credible.

A Thesis

So much for my observation. Now to what it means. I will present my analysis here in the form of a thesis and defend it with an appeal to elementary probability theory and a little story:

“The fact that the CMIP6 climate models show no signs of convergence means that, firstly, it is likely that none of those models represents reality well and, secondly, it is more likely than not that the true ECS value lies outside the interval 1.8-5.6 degrees.”

Suppose I have N models that all predict a different ECS value. Mother Nature is difficult, but she is not malicious: there is only one “true” value of ECS in the real world; if that were not the case, any attempt at a model would be pointless from the outset. Therefore, at best only one of those models can be correct. What, then, is the probability that none of those models is correct? We know immediately that N-1 models are not correct and that the remaining model may or may not be correct. So we can say that the a priori probability that any given model is incorrect is (N-1+0.5)/N = 1 - 0.5/N. This gives a probability that none of the models is correct of (1-0.5/N)^N, about 0.6 for N>3. So that’s odds of 3 to 2 that all models are incorrect; this 0.6 is also the probability that the real ECS value falls outside the interval 1.8C-5.6C.
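For readers who want to check the arithmetic, here is a minimal sketch of the calculation, under the stated assumptions (each model a priori incorrect with probability 1 - 0.5/N, and the models treated as independent):

```python
def p_none_correct(n: int) -> float:
    """Probability that none of n independent models is correct,
    given each is a priori incorrect with probability 1 - 0.5/n."""
    return (1 - 0.5 / n) ** n

for n in (2, 4, 8, 16, 31):
    print(f"N = {n:2d}: P(all incorrect) = {p_none_correct(n):.3f}")
# Output climbs from about 0.56 (N=2) towards exp(-0.5) ~ 0.607, hence
# 'about 0.6 for N > 3'. The same formula applies with N replaced by the
# effective number of independent models M discussed below.
```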

Now I already hear the objections. What, for example, does it mean that a model is ‘incorrect’? Algorithmic and coding errors aside, it means that the model may be incomplete, lacking elements that should be included, or, on the other hand, over-complete, containing aspects that do not belong in it (an error that is often overlooked). Furthermore, these models have an intrinsic variation in their outcomes, and they often contain the same elements, so those outcomes are correlated. And indeed the ECS results completely tile the interval 1.8C-5.6C: for every value of ECS between those limits, models can be found that produce that result. In such a case one considers the effective number of independent models M represented by CMIP6. If M = 1, it means that all models are essentially the same and the 1.8C-5.6C range is an indication of the intrinsic error. Such a model would be useless. More realistic is M ~ 5 to 9, and then you come back to the foregoing reasoning.

What rubs climatologists most is my claim that the true ECS lies outside the 1.8C-5.6C interval. There are very good observational arguments that 5.6C is a gross overestimate, so I am actually arguing that the real ECS is probably less than 1.8C. Many climatologists are convinced that 1.8C is instead a lower limit. Such a conclusion is based on a fallacy, namely the premise that there are no ‘known unknowns’ and especially no ‘unknown unknowns’, ergo that the underlying physics is fully understood. And, as indicated earlier, the absence of convergence of the models tells us that precisely that is not the case.

A Little Story


Imagine a parallel universe (theorists aren’t averse to those these days) with an alternate Earth. There are two continents, each with a team of climatologists and their models. The ‘A’ team on the landmass Laputo has 16 models that predict an ECS interval of 3.0C to 5.6C, a result, if correct, with major consequences for the stability of the atmosphere; the ‘B’ team on Oceania has 15 models that predict an ECS interval of 1.8C to 3.2C. The two teams are unaware of each other’s existence, perhaps due to political circumstances, and each is convinced that its models set hard boundaries for the true value of the ECS.

That the models of both teams give such different results is because those of the A-team have ingredients that do not appear in those of the B-team and vice versa. In fact, the climatologists on both teams are not even aware of the possible existence of such missing aspects. After thorough analysis, both teams write a paper about their findings and send them, coincidentally simultaneously, to a magazine published in Albion, a small island state renowned for its inhabitants’ strong sense of independence. The editor sees the connection between the two papers and decides to put the authors in contact with each other.

A culture shock follows. The lesser gods start a shouting match. Those in the A team call the members of the B team ‘deniers’, who in their turn shout ‘chickens’. But the more mature members of both teams realize they’ve had a massive blind spot about things the other team knew but they themselves did not, and that those ‘unknowns’ had firmly bitten both teams in the behind. And the smartest realize that the combined 31 models now form a new A team to which the foregoing applies a fortiori: there could arise a new B team somewhere with models that predict ECS values outside the 1.8C-5.6C range.

Forward Look

So it may well be, no, it is likely that once the underlying physics is properly understood, climate models will emerge that produce an ECS value considerably smaller than 1.8C. What could such a model look like? To find out we look at the main source of the variation between the CMIP6 models: the positive feedback on water vapour (AR4, refs 6,7). The idea goes back to Manabe & Wetherald (8) who reasoned as follows: a warming due to CO2 increase leads to an increase in the water vapour content. Water vapour is also a ‘greenhouse gas’, so there is extra warming. This mechanism is assumed to ‘amplify’ the primary effect of CO2 increase. Vary the strength of the coupling and add the influence of clouds and you have a whole series of models that all predict a different ECS.

There are three problems with the original idea. The first is conceptual: the proposed mechanism implies that the abundance of water vapour is determined by that of CO2 and that no other regulatory processes are involved. What, then, determined the humidity level before the CO2 content increased? The second problem is the absence of an observation: one would expect the same feedback on an initial warming due to a random fluctuation of the amount of water vapour itself, and that has never been established. The third problem is in the implicit assumption that the increased water vapour concentration significantly increases the effective IR opacity of the atmosphere in the 15 micron band. That is not the case. The IR absorption by water vapour is practically saturated which makes the effective opacity, a harmonic mean, insensitive to such variation.
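A toy calculation makes the harmonic-mean point concrete. This is purely illustrative, assuming a crude two-band atmosphere with made-up opacity numbers (it is not a line-by-line radiative transfer computation): a harmonic mean is dominated by its smallest terms, so raising the opacity of an already-opaque band barely moves it.

```python
import numpy as np

def harmonic_mean(kappa, weights):
    """Weighted harmonic mean of band opacities (Rosseland-style)."""
    kappa, weights = np.asarray(kappa, float), np.asarray(weights, float)
    return weights.sum() / (weights / kappa).sum()

window, band = 0.1, 100.0  # nearly transparent window vs. saturated band
print(harmonic_mean([window, band], [0.5, 0.5]))      # ~0.1998
print(harmonic_mean([window, 2 * band], [0.5, 0.5]))  # ~0.1999: doubling the
# saturated band's opacity changes the effective opacity by less than 0.1%
```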

Hence, the correctness of the whole concept can be doubted, to say the least. I therefore consider models in which the feedback on water vapour is negligible (and negative if you include clouds) as much more realistic. Water vapour concentration is determined by processes independent of CO2 abundance, for instance optimal heat dissipation and entropy production. Such models give ECS values between 0.5C and 0.7C. Not something to be really concerned about.

References

  1. J. Charney, ‘Carbon dioxide and climate: a scientific assessment’, Washington DC: National Academy of Sciences, 1979.
  2. G. N. Plass, ‘Infrared radiation in the atmosphere’, American Journal of Physics, 24, 303-321, 1956.
  3. S. Manabe and F. Möller, ‘On the radiative equilibrium and heat balance of the atmosphere’, Monthly Weather Review, 89, 503-532, 1961.
  4. P. Rowntree and J. Walker, ‘Carbon Dioxide, Climate and Society’, IIASA Proceedings 1978 (ed. J. Williams), pages 181-191, Pergamon, Oxford, 1978.
  5. http://community.dur.ac.uk/ian.smail/gcCm/gcCm_intro.html
  6. V. Eyring et al., ‘The CMIP6 landscape’, Nature Climate Change, 9, 727, 2019.
  7. M. Zelinka, T. Myers, D. McCoy, et al., ‘Causes of higher climate sensitivity in CMIP6 models’, Geophysical Research Letters, 47, e2019GL085782, 2020. https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019GL085782.
  8. S. Manabe and R. Wetherald, ‘Thermal equilibrium of the atmosphere with a given distribution of relative humidity’, J. Atmos. Sci., 24, 241-259, 1967.
Comments
Tom Halla
March 14, 2021 10:13 am

As Lewis and Curry, and Lord Monckton derive ECS on the order of 1.4 C with differing methods, I would agree that there is a good chance the true value is less than 1.8 C.

n.n
Reply to  Tom Halla
March 14, 2021 10:30 am

Or a single statistic, inferred with pragmatic but unrealistic assumptions/assertions, reduces the complexity of the system to a plausible absurdity that requires regular injections of brown matter and energy to force a consensus with past, present, and future observations.

TheFinalNail
Reply to  Tom Halla
March 14, 2021 11:00 am

How do L&C and Lord M account for the observation (mean of HadCRUT, GISS and NCDC) that since the start of the 20th century we are already around +1.1C warmer than what might be called ‘pre-industrial’ (1880-1900)? And we are still a long way off from equilibrium with ocean lag. There would need to be a drastic slowdown in the long term warming trend for 1.4C to be anywhere near ECS.

Tom Halla
Reply to  TheFinalNail
March 14, 2021 11:14 am

A low figure, > 1.5 C, is used in the Russian climate model (INM-CM4), which tracks UAH data better than any other model.

Jeff Alberts
Reply to  TheFinalNail
March 14, 2021 11:18 am

The problem is presenting a single number that supposedly represents a “global average”. Such things are physical nonsense. Everything else that follows from them is even more nonsensical.

Monckton of Brenchley
Reply to  Jeff Alberts
March 14, 2021 1:54 pm

A Monte Carlo distribution is one way of allowing for data uncertainties.

Tim Gorman
Reply to  Monckton of Brenchley
March 15, 2021 5:42 am

A Monte Carlo distribution developed from a model that is wrong only gives you a spread of wrong values. That’s not really very useful.

Monckton of Brenchley
Reply to  Tim Gorman
March 15, 2021 3:10 pm

Mr Gorman states the obvious. However, our approach is to take the observational data, not the data from models, and perform statistical analyses on that.

rhoda klapp
Reply to  Jeff Alberts
March 15, 2021 2:30 am

” there is only one “true” value of ECS in the real world;”

No, not even one, it is not a real thing to be measured, it is good only in order to compare model results.

Reply to  TheFinalNail
March 14, 2021 11:35 am

About .4 degree C of warming happened from 1900 to the early 1940s, and most of that was not caused by increase of greenhouse gases.

Gerald Machnee
Reply to  Donald L. Klipstein
March 14, 2021 2:59 pm

Yes, and could be lower in the LIA.

MarkW
Reply to  TheFinalNail
March 14, 2021 11:41 am

You are assuming that most if not all of the warming since the end of the Little Ice Age is due to CO2. Since most of the warming occurred prior to the big rise in CO2, that’s not possible.

philincalifornia
Reply to  MarkW
March 14, 2021 6:19 pm

Don’t go confusing him with the null hypothesis. It made Travesty Trenberth break out in herpes.

Just for sh!ts and giggles though, TFN, exactly what year was the great shift to a flat baseline?

Reply to  TheFinalNail
March 14, 2021 11:46 am

FN, even the SPM to AR4 said at least half of the value you cite, specifically the warming period from ~1920 to 1945, was natural and not AGW; CO2 forcing was way too low. So your ‘adjusted’ ~1.1C needs to be at least halved. And I would also point out that natural variability did not magically stop in the second half of the time period. How much there was, dunno. Do know that AR4 and AR5 explicitly assumed essentially zero, which cannot be correct. This attribution problem is why all the models (except the Russian one) run hot. See my several previous posts here on this issue.

fred250
Reply to  Rud Istvan
March 14, 2021 1:46 pm

Reality is that in the NH the current temperature is similar to that of around the 1940s

There has been no effective warming in the NH in the last 80 years !!

That is what enhanced atmospheric CO2 does to temperatures… ABSOLUTELY NOTHING.

Gerald Machnee
Reply to  fred250
March 14, 2021 3:02 pm

ECS is Zero +/- feedbacks which vary.

Reply to  Gerald Machnee
March 14, 2021 3:24 pm

See far below. What you assert here CANNOT be true from first principles. We can only know it is something above about 1.16, rounded to 1.2 by people far more expert and qualified than you to opine.

philincalifornia
Reply to  Rud Istvan
March 14, 2021 6:07 pm

Respectfully Rud, experts and novices can opine all they want. Real scientists show data. Real data, not molested data.

Reply to  philincalifornia
March 14, 2021 8:12 pm

Yup. Like I did extensively below.

fred250
Reply to  Rud Istvan
March 14, 2021 8:11 pm

You are forgetting all the other energy transfers within the atmosphere

Bulk energy transfers are far far greater than any internal CO2 energy transfer

rhoda klapp
Reply to  fred250
March 15, 2021 2:33 am

Emergent phenomena step in pretty quickly, in the order of minutes and hours. You can’t talk about hypothetical doublings when that is going on.

Gerald Machnee
Reply to  Rud Istvan
March 15, 2021 10:28 am

Willis has listed many assumptions in his post.
Here is one which shows the declining influence of CO2. By 400 ppm it is not far from zero, which is why I start with 0.0 +/- ?? feedbacks.
1. David Archibald shows how the effect of increasing CO2 decreases logarithmically as CO2 increases in the following:
http://wattsupwiththat.com/2013/05/08/the-effectiveness-of-co2-as-a-greenhouse-gas-becomes-ever-more-marginal-with-greater-concentration/
There is also another article on the Logarithmic heating effect of CO2:
http://wattsupwiththat.com/2010/03/08/the-logarithmic-effect-of-carbon-dioxide/
An important item to note is that the first 20 ppm accounts for over half of the heating effect to the pre-industrial level of 280 ppm, by which time carbon dioxide is tuckered out as a greenhouse gas.
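A quick, hedged illustration of the logarithmic relation these posts describe, using the widely quoted Myhre et al. (1998) approximation F = 5.35 ln(C/C0) W/m^2 (an empirical fit for the modern concentration range; it is not valid near zero CO2, so it cannot by itself verify the ‘first 20 ppm’ figure):

```python
import math

def forcing(c, c0):
    """Myhre et al. (1998) fit: radiative forcing in W/m^2 for CO2
    going from concentration c0 to c (ppm)."""
    return 5.35 * math.log(c / c0)

print(forcing(400, 280))  # ~1.9 W/m^2: pre-industrial 280 ppm to 400 ppm
print(forcing(560, 400))  # ~1.8 W/m^2: the next 160 ppm adds slightly less
print(forcing(560, 280))  # ~3.7 W/m^2: a full doubling
# Each successive increment of CO2 yields less additional forcing.
```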

Gerald Machnee
Reply to  Rud Istvan
March 15, 2021 10:37 am

The 1.16 is not measured, is it? All based on assumptions, as Willis said:

https://wattsupwiththat.com/2021/03/12/there-are-climate-models-and-there-are-climate-models/
Computer modelers, myself included at times, are all subject to a nearly irresistible desire to mistake Modelworld for the real world. They say things like “We’ve determined that climate phenomenon X is caused by forcing Y”. But a true statement would be “We’ve determined that in our model, the modeled climate phenomenon X is caused by our modeled forcing Y”. Unfortunately, the modelers are not the only ones fooled in this process.
The more tunable parameters a model has, the less likely it is to accurately represent reality. Climate models have dozens of tunable parameters. Here are 25 of them, there are plenty more.
 

fred250
Reply to  Rud Istvan
March 15, 2021 2:00 pm

First principles that ignore all the bulk transfers of energy in the atmosphere?

This is not “first principles”; that is ignorance.

TimTheToolMan
Reply to  Rud Istvan
March 23, 2021 12:54 pm

Rud writes

We can only know it is something above about 1.16, rounded to 1.2

Above? You assume feedbacks are positive over the longer term. That is not a fact.

fred250
Reply to  TheFinalNail
March 14, 2021 12:51 pm

“for the observation”

You mean the “ADJUSTMENTED” fabricated once-were-observations, right, rusty.

Real warming is probably significantly less than shown by HadCrud et al.

Does rusty have ANY EVIDENCE AT ALL that the slight and highly beneficial warming since the COLDEST period in 10,000 years has anything at all to do with atmospheric CO2?

1… Do you have any empirical scientific evidence for warming by atmospheric CO2?

2… In what ways has the global climate changed in the last 50 years , that can be scientifically proven to be of human released CO2 causation?

S.K.
Reply to  TheFinalNail
March 14, 2021 1:31 pm

I don’t know how they account for it but my opinion is data altering.

fred250
Reply to  TheFinalNail
March 14, 2021 1:48 pm

“with ocean lag.”

ROFLMAO..

Rusty has just ADMITTED that it’s not CO2 doing the slight warming.

Well done rusty ! 🙂

Monckton of Brenchley
Reply to  TheFinalNail
March 14, 2021 1:50 pm

Based on Wu et al. (2019), the anthropogenic component of the 1.04 degrees’ warming from 1850 to 2020 is 0.73 degrees.

fred250
Reply to  Monckton of Brenchley
March 14, 2021 2:12 pm

Almost all Urban induced warming.

Dave Fair
Reply to  TheFinalNail
March 14, 2021 2:48 pm

You seem to assume that global warming can only come from increasing CO2 concentrations.

TheFinalNail
Reply to  Dave Fair
March 15, 2021 2:19 am

Dave Fair

You seem to assume that global warming can only come from increasing CO2 concentrations.

Not over the short term. Over the longer term no one has so far been able to explain the observed warming without invoking man-made greenhouse gases (not just CO2) and human land use, etc.

rhoda klapp
Reply to  TheFinalNail
March 15, 2021 2:34 am

Bad premises lead to wrong conclusions.

fred250
Reply to  TheFinalNail
March 15, 2021 3:47 am

WRONG as always, corroded twit.

Not only rusty but bent and twisted and clinging onto a corroded mess of ignorance.

Warming is easily explained by the strong solar cycles and drop in tropical cloud cover.

I notice you COWARDLY squirm away from producing any answers.

1… Do you have any empirical scientific evidence for warming by atmospheric CO2?

2… In what ways has the global climate changed in the last 50 years , that can be scientifically proven to be of human released CO2 causation?

MarkW
Reply to  TheFinalNail
March 15, 2021 8:02 am

Argument from ignorance. I can’t think of anything else, so it must be this.

Beyond that, there were no greenhouse gas changes during the Holocene optimum, where temperatures were as much as 5C warmer than today, and man hadn’t developed civilization yet so there were no land use changes.

Please try to come up with a lie that is at least a little bit plausible.

DrEd
Reply to  TheFinalNail
March 15, 2021 9:34 am

Wrong. Look at the cyclical data. THAT’s not explained by CO2. Eddy ~1000 yr, Suess/DeVries ~190 yr, AMO, etc.

Gerald Machnee
Reply to  TheFinalNail
March 15, 2021 9:47 am

Explain the Little Ice Age.

TimTheToolMan
Reply to  TheFinalNail
March 23, 2021 1:03 pm

The Final Nail writes

Over the longer term no one has so far been able to explain the observed warming without invoking man-made greenhouse gases

You mean the models can’t explain it. But the models are explicitly designed not to explain it. They are designed so that control runs don’t cause climate change. Santer tells us they can’t sustain change beyond 17 years in the control runs.

Rich Davis
Reply to  TheFinalNail
March 14, 2021 3:14 pm

How do you account for the fact that the number of murders has increased and yet we know that the number of demons has not changed because they are immortal and do not procreate? We can infer with confidence that the productivity of demons has increased as CO2 has increased. Isn’t that obvious?

If you assume something that doesn’t even exist, you may convince yourself of some rather absurd ideas.

TheFinalNail
Reply to  Rich Davis
March 15, 2021 2:20 am

You should explain to Lord M, etc that he is wrong about the forcing effect of CO2 on warming. Tell him it’s a ghost or something. (I don’t believe in ghosts.)

fred250
Reply to  TheFinalNail
March 15, 2021 3:51 am

“I don’t believe in ghosts.”

But you do believe in fairy-tales..

Look at your rancid brain-hosed belief of CO2 warming, despite being totally incapable of producing one iota of empirical evidence.

That is Mills and Boon or Bros Grimm fantasy land stuff

Certainly not science

1… Do you have any empirical scientific evidence for warming by atmospheric CO2?

2… In what ways has the global climate changed in the last 50 years , that can be scientifically proven to be of human released CO2 causation?

Eric Stevens
Reply to  TheFinalNail
March 14, 2021 3:34 pm

Your question is valid only if CO2 is the only determinant of temperature. If something else is acting your question is meaningless.

Dr. Deanster
Reply to  TheFinalNail
March 14, 2021 5:56 pm

It obviously doesn’t cross your mind that HadCRUT, GISS, and NCDC are manipulated numbers intended to support the higher ECS. Those false “calculations” (not observations, as you pretend) are of no use in a discussion regarding the true ECS.

Reply to  TheFinalNail
March 14, 2021 10:04 pm

Please send us a link to the GISS/NCDC and HadCRUT data from 1880-1980, so we can all discuss this as informed adults? I signed an NDA with St. Peter, but I’ll ignore it and swop you my GISS/NCDC data set from 1645-1875… in the interest of not spewing nonsense.

Paul of Alexandria
Reply to  TheFinalNail
March 15, 2021 4:14 pm

Simple: changes in solar forcing and ocean current cycles. There are many other factors contributing to the atmospheric temperature at any given time.

Monckton of Brenchley
Reply to  Tom Halla
March 14, 2021 1:47 pm

On the basis of the latest data, we now conclude that ECS will be just 1.0 [0.8, 1.4] degrees.

fred250
Reply to  Monckton of Brenchley
March 14, 2021 2:13 pm

And as temperatures start to drop over the next several years, that “calculation” will tend towards zero.

Gerald Machnee
Reply to  fred250
March 14, 2021 3:03 pm

Yes!!

Reply to  Monckton of Brenchley
March 14, 2021 6:00 pm

Respectfully disagree, several times already, here and at Judith’s. It does us skeptics no favor to overstate the ECS case, when an irrefutable yet more moderate equivalent can make the same point.
I really like just generally winning. Absolute winning often loses.

fred250
Reply to  Rud Istvan
March 14, 2021 8:13 pm

“It does us skeptics no favor to overstate the ECS case”

CO2 is a minor player in energy transfer in the atmosphere, absolutely SWAMPED by other mechanisms.

ECS is effectively ZERO

fred250
Reply to  Rud Istvan
March 15, 2021 2:03 pm

“It does us skeptics no favor to overstate the ECS case’

Then stop doing it !!

Monckton of Brenchley
Reply to  Rud Istvan
March 15, 2021 3:16 pm

Mr Istvan says he disagrees with an analysis he may not yet have seen. However, the official estimate of the anthropogenic fraction of observed warming is 0.7, which brings down ECS quite a bit. We do not find ECS to be 1 K because that was our target: we find it to be 1 K because that is what the latest available mainstream data would indicate.

PETER D ANDERSON
Reply to  Tom Halla
March 17, 2021 3:21 am

CO2 has absolutely nothing to do with climate, so why give the idea any credence?

Gerald Browning
Reply to  Tom Halla
March 17, 2021 10:35 pm

Before worrying about the accuracy of the physical parameterizations, the climate and weather modelers must accurately approximate the correct dynamical system of equations. However, it has been mathematically proved that all global climate and weather models are based on the wrong dynamical system of equations (the so-called primitive or hydrostatic system of equations). Secondly, the continuum solution of the equations must be differentiable in order that a numerical method accurately approximate a continuous derivative. However, discontinuous parameterizations mean that the continuum solution is not differentiable, so the mathematical theory of numerical methods is violated. In order to keep the numerical solution appearing smooth, an unrealistic amount of dissipation must be applied, and this leads to the system being approximated lying far from the one intended.

The unique, well posed reduced system for atmospheric flows: Robustness in the presence of small scale surface irregularities
Dedicated to my mentor and colleague, Heinz-Otto Kreiss.

G L Browning

DAO, 91, 2020

ThinkingScientist
March 14, 2021 10:14 am

Great article, really interesting and thoughtful.

One of the fundamental issues of climate models is they depend entirely on the input forcings. Without the input forcing time series the models do nothing. The output of a climate model that can be compared to temperature only seems to work if we average lots of climate models. The mean of the models then looks like temperature.

But the priors (the input forcings) can be used with simple linear regression to predict the mean model output with surprising accuracy (R=0.96). So on that basis, all the models actually do is convert forcings in W/m^2 to temperature. Which is what linear regression does too.

Finally, if we subtract the mean of the models from each individual climate model, the residual for each model is – as far as I can tell – random noise. There is no structure and the first annual time-lag autocorrelation is effectively zero.

Climate models are simply random number generators with a trend. The trend comes only from the input forcing data provided. The a priori input forcings already correlate with temperature (R=0.92) by linear regression. So what do climate models do? Not much. But climate scientists seem blind to the fact that the output only depends on the input forcings, not on the internal workings of the model.

Lipstick on the pig
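A sketch of the check described above, with hypothetical placeholder series standing in for the real CMIP6 forcing inputs and ensemble-mean temperature (the claimed R = 0.96 and the near-zero lag-1 residual autocorrelation refer to the actual data, not to this toy):

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1900, 2021)

# Placeholder series: in the real exercise these would be the prescribed
# forcing (W/m^2) and the CMIP6 ensemble-mean temperature anomaly (K).
forcing = 0.02 * (years - 1900) + 0.1 * rng.standard_normal(years.size)
ens_mean_T = 0.5 * forcing + 0.05 * rng.standard_normal(years.size)

slope, intercept = np.polyfit(forcing, ens_mean_T, 1)
r = np.corrcoef(forcing, ens_mean_T)[0, 1]
print(f"T ~ {slope:.2f}*F + {intercept:.2f}, R = {r:.2f}")

# Residual test: subtract the fit and check the lag-1 autocorrelation
# (a value near zero is consistent with 'random noise').
resid = ens_mean_T - (slope * forcing + intercept)
lag1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(f"lag-1 autocorrelation of residuals: {lag1:.2f}")
```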

n.n
Reply to  ThinkingScientist
March 14, 2021 10:35 am

They need to follow the scientific philosophy in theory and practice, and reduce their frame of reference from universal, global, etc. to a limited frame where their models… hypotheses exhibit a consensus with observation, without regular injections of brown matter and energy, and altogether created conceptions, to steer the result.

Tim Gorman
Reply to  ThinkingScientist
March 14, 2021 10:53 am

“So what do climate models do? Not much. But climate scientists seem blind to the fact that the output only depends on the input forcings, not on the internal workings of the model.”

So true!

markl
Reply to  ThinkingScientist
March 14, 2021 11:09 am

“One of the fundamental issues of climate models is they depend entirely on the input forcings. Without the input forcing time series the models do nothing.” That’s it in a nutshell.

Reply to  ThinkingScientist
March 14, 2021 11:12 am

So what do climate models do?
They only give some scientists the impression that they can handle the real world.

Richard Page
Reply to  Krishna Gans
March 14, 2021 5:32 pm

Climate models provide employment for university graduates that otherwise would be completely unemployable in the real world. Climate activist charities exist for precisely the same reason.

March 14, 2021 10:25 am

The Dynamic Atmosphere Energy Transfer model created by me and Philip Mulholland predicts zero sensitivity from greenhouse gas changes but a minuscule rearrangement of the convective overturning system in lieu of a global temperature rise. Indiscernible from natural variability.
It also predicts features of atmospheres already observed within our solar system.

Reply to  Stephen Wilde
March 14, 2021 6:09 pm

SW, with all due respect, I have studied your alternative model and found it deficient in several physical respects. If you wish to persist, I will comment on those deficiencies in great detail in a big picture. For example, you claim CO2 is saturated—ignoring the ever elevated radiation threshold LRE saying CO2 can never saturate— but its concentration effect will always decline logarithmically, as Guy Callander said in 1938.

fred250
Reply to  Rud Istvan
March 14, 2021 11:44 pm

Actual measurements of energy absorption show it leveling off at around 280ppm


Irrespective, CO2 is a minor bit player in atmospheric energy transfer, totally SWAMPED by bulk energy movements.

Your calculated ECS value also relies on the temperature rise being caused ONLY by CO2, which is manifestly FALSE.

Reply to  Rud Istvan
March 15, 2021 1:13 am

I don’t recall even considering that aspect of CO2 since it is not relevant to our model.
We accept that ghgs would affect lapse rate slopes but that circulation changes would neutralise any thermal effect.
So you haven’t really studied it at all.

Reply to  Stephen Wilde
March 15, 2021 1:09 am

Should be Transport not Transfer

n.n
March 14, 2021 10:26 am

The very model of chaos (e.g. evolution): nonlinear, incompletely or insufficiently characterized and computationally unwieldy, thus the scientific logical domain and philosophy in the near-space and time. Unfortunately, intuitive science with secular incentives, “benefits”, is politically congruent, economically marketable, and emotionally appealing.

Robert of Texas
March 14, 2021 10:30 am

I will only point out one other assumption:
“there is only one “true” value of ECS in the real world”

If the Climate system is chaotic then I am not sure the above assumption is correct. You may get entirely different sensitivities given a doubling of CO2 depending on various other conditions, including one (or many) that resembles random chance. (The presumption is if one could know enough detail then the condition is not random, but in all practical use of a real computer model it appears random).

You *might* and probably can put limits around the ECS and maybe even build a probability curve for it.

If this were true then there are limits to predicting future climate that we just cannot go beyond. Trying to predict it using CO2 as the control knob becomes laughable.

Reply to  Robert of Texas
March 14, 2021 11:55 am

The climate system is for sure nonlinear (feedbacks), and dynamic feedbacks are lagged in time. Therefore it is mathematically chaotic. That means it has strange attractors. We know of two strong ones it oscillates between: ice ages and brief non-ice ages. The last two of those are the Eemian and the Holocene. Based on observational ECS, there is no reason not to think any Holocene ECS is a fairly stable number, since it’s ‘real’. Components like albedo, WVF and cloud feedback are observationally all fairly stable.

fred250
Reply to  Rud Istvan
March 15, 2021 2:05 pm

“Based on observational ECS”

NH 1940’s same temp as now, NO warming in 80 years

ECS = ZERO !!

Jeff Alberts
Reply to  Robert of Texas
March 14, 2021 11:56 am

I’d say there’s no global ECS in the same way there is no global temperature. Every column of air anywhere in the world is either slightly or drastically different from any other column of air.

Jim Gorman
Reply to  Jeff Alberts
March 14, 2021 2:05 pm

That’s kinda like what I was going to say. There are enough regional differences that ECS is probably different at different locations on the globe. The amount of cloud cover alone would generate different ECS’s because of different “albedos”. Trying to find an average would probably be wrong and, at the least, not a very useful number.

Reply to  Jim Gorman
March 14, 2021 3:30 pm

JG, I agree. But in fairness (see below), the warmunists do try to compensate by using ECS anomalies. Besides, if you want to engage them, only do it on skeptical provable terms. Dismissive skepticism results only in ‘denier dismissal’ rebuttals. Not useful.

Jim Gorman
Reply to  Rud Istvan
March 15, 2021 4:49 am

I appreciate your comment, however it is my understanding that ECS in the models is an emergent quantity based upon global numbers. I wouldn’t exactly call a quantity that comes out as an anomaly and that is based on temperature anomalies an anomaly itself.

Larry in Texas
Reply to  Robert of Texas
March 16, 2021 12:00 pm

I would add that various geographic features and conditions (mountains, oceans, deserts, etc.) are part of the “various other conditions” you are talking about.

Which is why I don’t even think it appropriate to try to calculate an “average global temperature” much less predict it (or any relatable concept of ECS) in the future.

Tim Gorman
Reply to  Larry in Texas
March 16, 2021 2:10 pm

Don’t forget that the *average* temperature should be taken at a specific point in time, specifically UTC time. That’s because at any specific time about half the earth is in sunshine, i.e. temps going up, and the other half is in dark, i.e. temps going down. That snapshot in time would tell you far more about “average” conditions than the so-called average mid-point temp the AGW alarmists use.

Richard M
March 14, 2021 10:30 am

It is natural to view the Earth’s climate by separating land, ocean and atmosphere. This is essentially what climate models do. Unfortunately, that leads to the creation of a massively complex interface between these systems. Turns out there’s a better way to approach the problem.

By combining the atmosphere with the skin of land and ocean areas we create a thermodynamic system that provides us with more insights. Since 99.9% of the IR energy from radiating gases only penetrates into the skin, it removes the need to consider them outside this system. The energy into this system comes from the sun and from the subsurface below. Also keep in mind, the energy in from solar is less than the total energy radiated to space by the atmosphere/skin system. A good portion of the solar energy penetrates meters into the oceans. Hence, this system cannot be warming the subsurface without violating the 2nd law.

The first significant insight from looking at the problem this way is: how can this system be warmed by a completely internal transfer of energy? That is all IR does within this system. It moves energy around. It cannot possibly warm the system without creating energy out of nothing. That would violate the 1st law. The only ways for the atmosphere/skin system to warm are 1) more solar energy (no evidence of this), 2) more subsurface Earth energy, or 3) less energy loss to space (the insulation effect).

For 3), we would need the atmosphere/skin to reduce outgoing radiation. That is, in order to adhere to basic thermodynamic law we MUST see a decrease in outgoing radiation. Has such a decrease been seen? Nope. According to CERES the planet has seen no reduction in outgoing radiation. In fact, CERES has seen an increase in the outgoing radiation.

The only way the CERES data can be explained is that 2), the subsurface oceans/land areas, are transferring more energy into the atmosphere/skin system. We know the oceans contain more than 1000 times the energy of the atmosphere. It is certainly possible for them to vary the energy transfer into the atmosphere/skin system.

If climate models used this approach they would be much simpler and they would likely all produce similar results.

John Tillman
March 14, 2021 10:36 am

AR6 model runs published so far include some with outlandishly high ECS results. As usual, the two Russian INM GCMs produce the lowest ECS estimates, ie 1.9 and 1.8 degrees C per doubling of vital plant food in the air.

From Zeke:

https://www.carbonbrief.org/cmip6-the-next-generation-of-climate-models-explained

Even the Russian results are probably too high. Until computing power increases at least a billion-fold, such GIGO computer gaming will remain worse than worthless for any useful purpose. Without actually modeling clouds, for instance, rather than parameterizing them, GCMs are a titanic waste of time and money, except to show how unsettled is “climate science”.

Ideally, grid cells would be cubes ten meters on a side, but 100 m might give meaningful outputs. However, the assumptions behind GCMs may well be fundamentally flawed, rendering the exercise invalid at any resolution.

Clyde Spencer
Reply to  John Tillman
March 14, 2021 1:46 pm

Until computing power increases at least a billion-fold, such GIGO computer gaming will remain worse than worthless for any useful purpose.

Perhaps Slartibartfast should be commissioned to build an analog computer to simulate what happens in our increasingly digital world. Why would any thinking person wear a digital watch?

philincalifornia
March 14, 2021 10:41 am

It’s zero

M Courtney
Reply to  philincalifornia
March 14, 2021 11:04 am

It may be zero relative to the noise but I doubt it’s physically unrelated.
Not philosophically zero.

Richard M
Reply to  philincalifornia
March 14, 2021 5:16 pm

It could be non zero. Maybe 0.01 C. ;)) It certainly is not a problem for anyone. The CERES data tells us all we need to know. For CO2 to create any warming there would need to be a reduction in outgoing IR. Hasn’t happened. Some might even call it proof that CO2 does not cause warming.

philincalifornia
Reply to  Richard M
March 14, 2021 6:27 pm

Could be minus 0.01 C too, given that radiation is the only way energy exits the planet. Pretty amazing that after 40 years or more, it can only be purported to be a number other than zero by mental masturbation, bloviating and fabricating data, and even with fabricated data it still hasn’t been shown to be a number other than zero.

Nick Schroeder
March 14, 2021 11:11 am

How it does not work.

By reflecting away 30% of the incoming solar radiation the albedo, which would not exist w/o the atmosphere and its so-called GreenHouse Gases, makes the earth cooler than it would be without the atmos/GHGs much like that reflective panel propped up on the car dash. Remove the atmos/GHGs and the earth becomes much like the Moon or Mercury, an arid, barren rock with a 0.1 albedo, 20% more kJ/h, hot^3 on the lit side, cold^3 on the dark.
If this is correct, the Radiative GreenHouse Effect theory fails.

For the GHGs to warm the surface with downwelling energy as advertised they must absorb/trap/delay/intercept/whatevah “extra” energy from somewhere in the atmospheric system. According to RGHE theory the source of that “extra” upwelling energy is the surface radiating as a near ideal Black Body. As demonstrated by experiment the surface cannot radiate BB because of cooling by the non-radiative heat transfer processes of the contiguous atmospheric molecules.
If this is correct, RGHE theory fails.

How it does works.

To move fluid through a hydraulic resistance requires a pressure difference.
To move current through an electrical resistance requires a voltage difference.
To move heat through a thermal resistance requires a temperature difference. (Q=UAdT)
Physics be physics.

The complex thermal resistance (R=1/U) of the atmosphere (esp albedo, net Q) is responsible for the temperature difference (dT=Tsurf-Ttoa) between the warm terrestrial surface and the cold edge of space (32 km).
And that heat transfer process involves the kinetic energy of ALL of the atmospheric molecules not just 0.04% of them.

Joel O'Bryan
Reply to  Nick Schroeder
March 14, 2021 4:01 pm

To move electromagnetic energy as photons through a vacuum only requires a temperature above absolute zero.

nick schroeder
Reply to  Joel O'Bryan
March 14, 2021 4:31 pm

Temperature requires kinetic energy AND stuff. 0 LoT.
A vacuum has neither and therefore temperature has no meaning.

nick schroeder
Reply to  nick schroeder
March 14, 2021 5:00 pm

And no thermal resistance.

commieBob
March 14, 2021 11:20 am

It is possible for a system to have more than one quasi stable condition.

Recently, in geological time, the planet has been banging in and out of glaciations.

The climate system is chaotic and apparently has at least two attractors. As far as I can tell, none of the models even consider the possibility of the next glaciation.

John Tillman
Reply to  commieBob
March 15, 2021 8:21 am

IMO, during the present ice age, there are three attractors, ie interglacial, glacial and glacial maximum.

Mr.
March 14, 2021 11:31 am

ECS is like butts – everybody has a different size one to show.

And a study of all the butts in the world would be just as useful to humanity as all the studies (conjectures) about ECS.

saveenergy
Reply to  Mr.
March 14, 2021 4:35 pm

Can I get a grant to study the female butts aged between 20 & 40 ??

Mr.
Reply to  saveenergy
March 14, 2021 5:17 pm

You’d be competing with oh, about 3 billion hetero blokes who’d do that study for free.

fred250
Reply to  saveenergy
March 15, 2021 4:02 am

We allocate you the 70-and-over butts, will that do ?

March 14, 2021 11:31 am

Regarding “The third problem is in the implicit assumption that the increased water vapour concentration significantly increases the effective IR opacity of the atmosphere in the 15 micron band. That is not the case. The IR absorption by water vapour is practically saturated which makes the effective opacity, a harmonic mean, insensitive to such variation.”: Absorption of thermal infrared by water vapor from 8 to 15 microns is not saturated.

Ed Zuiderwijk
Reply to  Donald L. Klipstein
March 14, 2021 11:53 am

Indeed. There is little absorption in that spectral region. That means that the bulk of the radiative heat loss takes place in that spectral region, not through the adjacent CO2 and water-vapour bands. It also means that that heat loss is little affected by changes in the adjacent spectral regions, be it of CO2 or water vapour.

Tim Gorman
Reply to  Donald L. Klipstein
March 14, 2021 12:29 pm

Your links says “By contrast, the greenhouse gases capture 70-85% of the energy in upgoing thermal radiation emitted from the Earth surface.”

The electromagnetic energy that is absorbed is either lost to collisions with other molecules or radiated away. How can it be “captured”? If it isn’t “captured” then how does it contribute to opacity?

Clyde Spencer
Reply to  Donald L. Klipstein
March 14, 2021 1:58 pm

Perhaps I’m missing something, but it looks to me like the combined CO2 and H20 absorption features are indeed saturated, while the passband between the two major absorption features never will be saturated because there can only be absorption as the wings of the 8 and 15 micron features encroach on the relatively transparent region. What am I missing?

Richard M
Reply to  Donald L. Klipstein
March 14, 2021 5:31 pm

Increased absorption will not have any effect even if it occurred. The energy is eventually radiated somewhere else within the atmosphere/surface skin thermodynamic system or lost to space. Since the average of all emission paths is upwards to space, the energy is eventually lost.

DMacKenzie
March 14, 2021 11:32 am

The models fail to accurately predict the effect of clouds on the system. They solve equations for convective clouds but parameterize advective cloud fronts, which, as one can see daily at https://epic.gsfc.nasa.gov/, are the largest contributor to cloud cover.
The Albedo of clouds is .5 to .8, ocean is .06, land is .12… Because clouds randomly cover about 55% of the planet, the Earth’s Albedo averages 0.3. The average radiative temperature of the Earth is about the temperature half way up the troposphere, interestingly about the average cloud tops. That equation is T = 278(1-Albedo)^.25
For Earth’s A = .3, the temp is 254 K. If A = .06 for ocean, that temp is 274 K; .12 for land gives 269 K; and say .7 for a cloud-covered planet gives 206 K, which is much colder. You can add approximately 35 C to those temps to get the resultant surface temp for those Albedo extremes, assuming a lapse rate similar to today.
The point is that increased SST causes more evaporation… causes more clouds… causes more reflection of incoming sunlight… causes surface temperature to drop. You can quite easily do a little spreadsheet and calculate how little cloud increase offsets a couple of Watts of CO2 forcing. And you probably won’t even get to the bit about how 1 sq.M of 1 degree warmer SST can make about 10 times as many sq.M of cloud cover…
The inescapable conclusion is:
CLOUDS and the vapor pressure of water at any given SST are what control the planet’s temperature; CO2 is only a minor player that minorly affects the elevation at which clouds form.
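A minimal sketch of the spreadsheet exercise suggested above, using the commenter’s relation T = 278(1-A)^0.25 and the ~35 C surface offset (illustrative numbers only):

```python
# Radiative temperature for several albedos, per the relation quoted above.
for label, albedo in [("Earth", 0.30), ("ocean", 0.06),
                      ("land", 0.12), ("full cloud", 0.70)]:
    t_rad = 278.0 * (1.0 - albedo) ** 0.25
    print(f"{label:10s} A = {albedo:.2f}: T_rad ~ {t_rad:.0f} K, "
          f"surface ~ {t_rad + 35.0:.0f} K (assuming today's lapse rate)")
# Reproduces the comment's figures: 254 K, 274 K, 269 K and 206 K.
```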

Ed Zuiderwijk
Reply to  DMacKenzie
March 14, 2021 5:49 pm

The effect of clouds and more generally what controls the (relative) humidity is one of the ‘known unknowns’.

March 14, 2021 11:36 am

There are things to like and things not to like in this post, IMO.

A thing to like is an ECS likely below the 1.8 lower CMIP6 model bound. There are several ways to derive an estimate without relying on CMIP6.

Guy Callendar’s 1938 curve implies 1.67 (method and calculation in essay Sensitive Uncertainty in ebook Blowing Smoke).

Observational energy budget methods (e.g. Lewis and Curry) produce about 1.55-1.65 depending on comparison time periods used.

And Bode feedback analysis: starting from a no-feedbacks, CO2-only value (more follows below) of ~1.2, the AR4 “ECS likely 3” implies a Bode feedback fraction f of 0.65, with 0.5 from water vapour feedback directly derivable from the AR4 text, so an implicit additive 0.15 from cloud feedback. Despite his claims otherwise, Dessler’s 2010 paper actually showed zero cloud feedback. And the fact that the AR4 models have about half of observed rainfall implies their modeled WVF is ~2x too high, meaning a ‘correct’ Bode f is only about 0.25. Plug that into Lindzen’s curve based on 1.2 and ECS is about 1.65.

There are strong reasons to think ECS must be greater than about 1.2, therefore I do NOT like arguments for something about 0.5 as here.

First, we can compute from first principles and accepted experimentally determined inputs that the zero-feedback (zf) value is something between 1.1 and 1.2. Two examples. (1) Judith Curry had several posts estimating this value in the early days of Climate Etc. (2) Using Monckton’s ‘famous’ equation, the zf value computes to 1.16. Lindzen rounds up to 1.2 in his several papers from 2010-2012, such as his UK Parliament briefing in 2011.

Second, we know observationally that WVF must be positive rather than strongly negative (as it would have to be if 0.5 were correct), so ECS must be above ~1.2. The observation is a simple one to make. When I visit the ‘deserts’ (OK, Phoenix and Palm Springs) the night-time temperature drops sharply in summer because of the relatively low humidity. Here on the beach in Fort Lauderdale it doesn’t, because of the relatively high humidity. Ergo, WVF must be globally net positive, since 71% of Earth’s surface is water.
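The feedback arithmetic in this line of argument can be made explicit. A sketch, using the standard Bode relation ECS = ECS_zf / (1 - f) with the numbers quoted above (the values themselves are the commenter’s, not settled figures):

```python
ecs_zf = 1.2  # zero-feedback sensitivity, C per CO2 doubling (quoted above)

for f in (0.65, 0.25):  # implied AR4 feedback sum vs. the 'corrected' value
    print(f"f = {f:.2f}: ECS = {ecs_zf / (1 - f):.2f} C")
# f = 0.65 gives ~3.4 C, in the neighbourhood of the AR4 'likely 3';
# f = 0.25 gives 1.6 C, matching the ~1.65 cited above.
```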

DMacKenzie
Reply to  Rud Istvan
March 14, 2021 12:01 pm

WVF is positive until you hit the point where clouds form. Then you quite suddenly go from a few watts positive to hundreds of watts negative in the localized area where clouds are forming. Colloquially hot and muggy to cloudy in an hour or two.

Reply to  DMacKenzie
March 14, 2021 12:48 pm

Clouds are not so simple. Depends on type and altitude. For example, high thin cirrus warms rather strongly since the ice crystals are transparent to incoming solar radiation but opaque to outgoing IR. A part of Lindzen’s adaptive iris hypothesis.

Dessler’s paper using two years of TOA IR global satellite observations comparing clear sky to all sky (all with whatever cloud) showed a net zero cloud feedback overall, an almost perfect scatter centered on zero, with an r^2 of 0.02. Were there a positive cloud feedback, there should have been a clear negative TOA IR relationship with all sky having less observed TOA IR and clear sky more, with a significant r^2.

DMacKenzie
Reply to  Rud Istvan
March 14, 2021 3:11 pm

Not to be argumentative, Rud, yes clouds are not so simple, but two strongly opposing feedbacks can easily be “net zero” at some balance point, and lead one to the false yet “verifiable” conclusion that these feedbacks are unimportant. Such is the case with the “a cloud reflects 50 to 90% of sunlight back into space” and “clouds are a net +ve feedback” (or slightly -ve or zero depending on source). The latter conclusion actually is the result of 2 offsetting variables and starting at the point where those variables balance each other.
Regarding Cirrus, if you add a cirrus cloud to the sky, it still reflects more of the 1000 watts/ sq.M of sunlight at that altitude away during the day, than it retains longwave at night. Again starting at a point where everything is in balance causes errors in the conclusion.

Reply to  DMacKenzie
March 14, 2021 3:34 pm

You might be right. Dunno. But no less than a weather expert far more renowned than you, Richard Lindzen, prof emeritus at MIT, says you are just wrong about cirrus. Read his papers. Learn.

DMacKenzie
Reply to  Rud Istvan
March 14, 2021 6:59 pm

Lindzen is a thermodynamicist, and one of the best there ever has been with atmospheric phenomena. IMHO I’m not saying anything he would disagree with. His Iris effect includes the effect of improved IR transmission to outer space as Cirrus clouds decline which is to be expected. His mechanism for decline of Cirrus as humidity increases is a hard one to confirm.

John Tillman
Reply to  Rud Istvan
March 14, 2021 1:15 pm

On a homeostatic water planet, net feedbacks are more likely to be negative than positive. Thus at this point in the Holocene, ECS between 0.0 and 1.2 degrees C per doubling of essential plant food in our air is not only possible but probable, IMO.

Besides clouds, the GCMs ignore or downplay such non-radiative, negative feedbacks as convective and evaporative cooling.

Reply to  John Tillman
March 14, 2021 3:05 pm

It is not true that climate models ‘ignore’ (your words) convection or evaporation. The problem is they cannot ‘model’ them, so they are parameterized, which drags in the attribution problem. See my post here from years ago, “The Trouble with Global Climate Models”, for the specifics of which you are apparently unaware.

Please show your homework math supporting your claims. I explained mine (with references), which you have not refuted. And more generally, to the ‘IR saturation’ crowd above: try reading essay ‘Sensitive Uncertainties’ for a layman’s explanation of why that argument fails as a matter of fundamental physics. Heck, I even gave you all the Modtran spectral analysis.

I tire of these contentless, nonsense ‘no ECS’ true denier opinions here at WUWT. As has been said, “You are certainly entitled to your own opinions, but NOT to your own facts.” Those just are.

John Tillman
Reply to  Rud Istvan
March 14, 2021 4:23 pm

Parameterization is totally worthless, pretend input. Meaningless, as the values are whatever the GIGO computer gamer wants them to be.

As for evaporative and convective cooling homework, I refer you to the late, great Father of Hurricanology Bill Gray, a pdf:

A Personal View of the Advancements and Failures in Tropical Meteorology

Or consider this discussion of what GCMs ignore in a Royal Society paper:

https://royalsocietypublishing.org/doi/10.1098/rsta.2015.0146#d3e1120

From past climate, it can be approximated by relating equilibrium warming to radiative forcing. In global climate models (GCMs), climate sensitivity is normally not tuned, but it results from aggregating or parametrizing small-scale processes and ignoring long-term ones (red ellipse in figure 2). GCM-based estimates of TCR and ECS ignore certain processes even within the time frames they consider (grey bars within the red ellipse)…

Next to the evaluation of the full-blown feedback processes in the models, a key challenge is to study the limits of using the linear framework discussed in this paper. How far can one push a GCM into being very sensitive or very insensitive to explore the range of plausible magnitudes of feedbacks and their rate of change? Do cloud, convection and aerosol parametrizations bias GCMs to be too sensitive, or not sensitive enough? For which purposes can we safely use the effective radiative forcing estimates of the linear regression methods? Over which time frames is the assumption of a constant λ justified? Can GCMs serve as a perfect model test bed for simple frameworks, as shown in figure 4? For which climatic base states, feedbacks and their interaction would it be wise to include nonlinear descriptions? For which temperatures, forcing scenarios, and locations does the rate of change of the feedback term matter? When is using a certain fit to estimate the global or regional temperature response justified? How does the coupling of ocean, atmosphere and sea ice determine the evolution of surface temperature patterns enhancing different feedback processes? How can we understand uncertainty propagation in nonlinear systems, with correlated uncertainties, and using computationally expensive climate models? In the light of all these questions, we argue to further explore various uses of feedback frameworks rather than squeezing them into a one-fits-all-concept, and to carefully explore the applicability and predictive capacity of each concept for a range of purposes.

John Tillman
Reply to  John Tillman
March 15, 2021 8:28 am

It’s possible that increased CO2 even cools parts of the planet, as has been suggested for the hottest moist tropical areas and Antarctica. There has been no warming at the South Pole since record-keeping began there in 1958, yet given how dry the air is there, that’s where the GHE of more CO2 should be most pronounced.

A single global number for ECS must combine the GHE for polar, temperate and tropical zones. The effect appears greatest in the north polar zone, yet there are few actual stations there with long continuous records, so the temperature “data” there are largely made up.

Gerald Machnee
Reply to  Rud Istvan
March 15, 2021 9:58 am

Willis has listed many assumptions in his post.
Here is one which shows the declining influence of CO2. By 400 ppm it is not far from zero, which is why I start with 0.0 +/- ?? feedbacks.

1. David Archibald shows how the effect of increasing CO2 decreases logarithmically as CO2 increases in the following:
http://wattsupwiththat.com/2013/05/08/the-effectiveness-of-co2-as-a-greenhouse-gas-becomes-ever-more-marginal-with-greater-concentration/
There is also another article on the Logarithmic heating effect of CO2:
http://wattsupwiththat.com/2010/03/08/the-logarithmic-effect-of-carbon-dioxide/
An important item to note is that the first 20 ppm accounts for over half of the heating effect to the pre-industrial level of 280 ppm, by which time carbon dioxide is tuckered out as a greenhouse gas.
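
One way to see the log decay is the standard logarithmic forcing approximation; a minimal Python sketch (the 5.35 coefficient is the Myhre et al. 1998 fit, an assumption on my part rather than a number from the linked posts):

import math

def co2_forcing(c_new, c_old, alpha=5.35):
    # Standard logarithmic approximation (Myhre et al. 1998), W/m^2; the alpha
    # coefficient is an assumption here, not a number from the linked posts.
    return alpha * math.log(c_new / c_old)

for lo, hi in [(20, 40), (120, 140), (260, 280), (400, 420)]:
    print(f"{lo} -> {hi} ppm adds {co2_forcing(hi, lo):.2f} W/m^2")
# Each successive 20 ppm adds less: ~3.71, ~0.82, ~0.40, ~0.26 W/m^2.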

Clyde Spencer
Reply to  Rud Istvan
March 14, 2021 2:02 pm

Using Monckton’s ‘famous’ equation, the zf value computes to 1.16. Lindzen rounds up to 1.2 in his several papers from 2010-2012 …

Which is probably appropriate since Monckton has shown frequently that he pays little attention to significant figures in his calculations.

Rud Istvan
Reply to  Clyde Spencer
March 14, 2021 3:37 pm

I have disagreed with him several times over at Judith’s, and he has always courteously replied. Look it up. Never over significant figures (precision), because we both know we are deep into fairly uncertain approximations. Nice try. FAIL.

Clyde Spencer
Reply to  Rud Istvan
March 14, 2021 5:53 pm

And he has been courteous to me in the past — when I have agreed with him. That sounds like someone who does not take criticism well, even when it is meant constructively. Perhaps it is the phase of the moon.

Monckton of Brenchley
Reply to  Clyde Spencer
March 14, 2021 4:41 pm

In response to the needlessly but characteristically churlish Mr Spencer, our paper is constantly updated with the latest mainstream climatological values. The mean doubled-CO2 radiative forcing in the CMIP6 models for AR6 is 3.52 Watts per square meter. The product of that forcing and the Planck sensitivity parameter 0.299 K/W/m2 is 1.053 degrees, which is thus the current midrange estimate of reference sensitivity to doubled CO2.
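
That product is easy to check; a minimal Python sketch using the rounded figures quoted above (the small difference from the quoted 1.053 presumably reflects unrounded inputs):

delta_f = 3.52   # mean doubled-CO2 forcing in the CMIP6 models, W/m^2, as quoted
planck = 0.299   # Planck sensitivity parameter, K per W/m^2, as quoted

ref = delta_f * planck
print(f"{ref:.3f} K")   # 1.052 K from these rounded inputs
print(f"{ref:.1f} K")   # 1.1 K when rounded to one decimal place for reporting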

Our approach to rounding is straightforward. All the input data are available to at least 1 decimal place of precision, but we perform all the calculations to extended precision of 16 decimal places. The bottom-line result, like all our calculations, is displayed in the computer output table to three decimal places, but is rounded to one decimal place when reported. All the calculations are then run through a Monte Carlo simulation to derive the error bars.

Whether Mr Spencer likes it or not, that approach is standard, and is unobjectionable to all but vacuous nit-pickers. It has not met with any objection from reviewers on grounds of failure to comply with some kindergarten rule of thumb or another about rounding. One of our collaborators is a Professor of Statistics, who kindly keeps us straight on these matters.

Clyde Spencer
Reply to  Monckton of Brenchley
March 14, 2021 5:03 pm

characteristically churlish?

Surely you must be confusing me with one of the resident trolls!

Clyde Spencer
Reply to  Monckton of Brenchley
March 14, 2021 5:32 pm

You claim, “that approach is standard, and is unobjectionable to all but vacuous nit-pickers.”

Au contraire! It is commonly accepted that the number of significant figures implies the precision of a particular measurement, and that in the absence of a formal uncertainty evaluation, such as a propagation-of-error calculation or at least the calculation of a standard deviation of a primary measurement, the uncertainty is +/- 5 units to the right of the last significant figure. If the formal uncertainty is known, then it should be stated along with the mean value, and if it is a standard deviation, whether it represents 1-sigma or 2-sigma.

It is also commonly held that a final answer, involving multiplication/division or exponentiation, should retain no more significant figures than the least precise number, albeit sometimes an extra guard digit is justified if it is an intermediate answer to be used for further calculations. Using 16 digits for calculation is both unnecessary and unwarranted, although one has little control over how the computer performs floating-point calculations, unless you are using a language that lets you choose the floating-point precision.

My primary objection is that you frequently present ‘albedo’ as having a value of 0.3. The commonly accepted implication is that the value you provide is 0.3 +/- 0.05, and any resultant calculation should show no more than one significant figure. If albedo is known to 2 or 3 significant figures, then it should be shown as 0.30 or 0.300, respectively, with a notation that the zeroes are indeed significant. Although, I doubt that the continuously varying cloud cover allows any certainty beyond one significant figure.
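
A minimal sketch of why that implied spread matters, propagating 0.3 +/- 0.05 through the textbook effective-temperature formula (the 1361 W/m^2 solar constant is my assumption, not a figure from the discussion above):

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0              # solar constant, W m^-2 (assumed value)

def t_effective(albedo):
    # Effective radiating temperature for a given Bond albedo.
    return (S0 * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

# Reading '0.3' as 0.3 +/- 0.05, per the convention above:
for a in (0.25, 0.30, 0.35):
    print(f"albedo {a:.2f}: T_eff = {t_effective(a):.1f} K")   # spread of ~9 K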

It is you who is being defensive and churlish rather than admit you are being careless.

Clyde Spencer
Reply to  Monckton of Brenchley
March 14, 2021 6:12 pm

The problem with statisticians and mathematicians in general is that their behavior towards calculations is often cavalier because, it would seem, they so frequently work with exact numbers, and don’t live in the same sphere as metrologists, physicists, and chemists.

In any event, Rice University hardly qualifies as kindergarten!
http://www.ruf.rice.edu/~kekule/SignificantFigureRules1.pdf

John Shotsky
March 14, 2021 11:38 am

If you want climate models to be closer to reality, remove CO2 from them.
Remember, 95+% of CO2 is completely natural, and happens every year. Humans only add about 24 ppm total, which is minuscule compared to natural CO2, and even then CO2 is a trace gas which cannot possibly have any effect on climate.

Ronald Havelock
Reply to  John Shotsky
March 14, 2021 3:01 pm

This message somehow must be socially amplified until it gets to western political leaders like Biden. Take CO2 out of the equation. There is not only zero evidence that it plays any measurable role in any climate statistic, but the idea that it might do so, given the numbers you state, is laughable. When will this madness get its required response? This is not, or should not be, a political issue. It is environmental fanatics against reality!

Gerald Machnee
Reply to  John Shotsky
March 14, 2021 3:10 pm

CO2 causes the largest error since the ASSUMPTION of its effect is much too large. Will anyone admit it?

Donald L. Klipstein
March 14, 2021 11:47 am

I see one problem in general with climate models that causes them to generally overstate ECS. There are enough climate models that are close enough to their average in terms of stating or indicating ECS to give somewhat of a consensus, and that consensus overstates ECS. The problem I see is that they’re selected and/or tuned to hindcast the past, especially the 30 years before their hindcast-forecast transition years.

For the CMIP3, CMIP5 and CMIP6 models, the 30-year periods before their hindcast-forecast transition years had part of their warming caused by the upswing phase of multidecadal oscillations, which these models did not consider. For CMIP5 models, the last year of hindcasting (“historical”) is 2005 and the first year of forecasting (“projections”) is 2006. According to some Fourier work I did on HadCRUT3, about 0.2 degree C of the warming reported by HadCRUT3 was from a periodic cycle that sustained its periodic nature for two cycles, from a peak in 1877 to a peak in 2005, with a period of 64 years and a peak-to-peak amplitude of 0.218 degree C.

Because the CMIP5 models were done without consideration of multidecadal oscillations, they hindcasted about 0.2 degree C more warming from positive feedbacks to effects of increase of manmade greenhouse gases (especially the water vapor feedback) than actually happened. Overdoing the water vapor feedback causes models to show a greatly excessive degree of the tropical upper-tropospheric warming hotspot. And lately I have noticed denial of the Atlantic Multidecadal Oscillation being favored by some climate scientists, including Dr. Michael Mann.
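
A minimal sketch of the cycle described above, using only the quoted parameters (peaks at 1877, 1941 and 2005; 64-year period; 0.218 C peak-to-peak):

import math

def multidecadal(year, peak=2005.0, period=64.0, p2p=0.218):
    # Cosine with peaks at 1877, 1941, 2005 and 0.218 C peak-to-peak, as quoted.
    return 0.5 * p2p * math.cos(2.0 * math.pi * (year - peak) / period)

# Contribution between the ~1973 trough and the 2005 peak:
print(multidecadal(2005.0) - multidecadal(1973.0))   # 0.218 C, the ~0.2 C cited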

Gerald Machnee
Reply to  Donald L. Klipstein
March 14, 2021 3:11 pm

The hindcast assumes too much influence by CO2.

Joseph Zorzin
March 14, 2021 11:58 am

“Suppose I have N models that all predict a different ECS value. Mother Nature is difficult, but she is not malicious: there is only one “true” value of ECS in the real world”

Why does ECS have to be fixed – one true value? Can’t it be a variable – assuming all other potential forcings aren’t changing? If it’s a variable, perhaps all those numbers are right given certain conditions. Which would mean something fundamental is missing from all the models.

OK, I claim to not understand most of the conversations here – other than about forests and wood energy – but I do try to follow the discussions and have learned a lot from this site.

LOL@Klimate Katastrophe Kooks
March 14, 2021 12:01 pm

The climate models fail to converge because the climastrologists refuse to even consider that CO2 climate sensitivity could be less than zero.

The effective emission height is ~5.105 km.

7 – 13 µm: >280 K (near-surface).
>17 µm: ~260 – ~240 K (~5km in the troposphere).
13 – 17 µm: ~220 K (near the tropopause).

TOA (emission height) is that altitude at which the atmosphere effectively becomes transparent to any given wavelength of radiation… and for some wavelengths, TOA is very near the surface. The emission profile is equivalent to a blackbody with a temperature of 255 K, and thus an effective emission height of 5.105 km.

Combine that 255 K effective emission height temperature with the lapse rate to get surface temperature, and you’ll find there is no “greenhouse effect”, thus no CAGW.

The lapse rate is said to average ~6.5 K / km. 6.5 K / km * 5.105 km = 33.1825 K. That is not the ‘greenhouse effect’, that is the tropospheric lapse rate. The climate loons have conflated the two. Polyatomic molecules such as CO2 and H2O reduce the adiabatic lapse rate, not increase it (for example: dry adiabatic lapse rate: ~9.81 K / km; humid adiabatic lapse rate: ~3.5 to ~6.5 K / km).

9.81 K / km * 5.105 km = 50.08005 K dry adiabatic lapse rate (due to homonuclear diatomics and monoatomics… see below), which would give a surface temperature of 255 + 50.08005 = 305.08005 K. Sans CO2, that number would be even higher (on the order of 314 K).

Water vapor (primarily) reduces that to 272.8675 K – 288.1825 K, depending upon humidity. Other polyatomics (such as CO2) also contribute to the cooling, to a much lesser extent. The higher the concentration of polyatomics, the more vertical the lapse rate, the cooler the surface. Also remember that the atmosphere is stable as long as the actual lapse rate is less than the adiabatic lapse rate… and a greater concentration of polyatomic molecules reduces the adiabatic lapse rate… thus convection increases.
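
A minimal sketch that just reproduces the arithmetic above, pairing the 255 K emission temperature and 5.105 km emission height with each quoted lapse rate:

T_EMISSION = 255.0   # K, effective emission temperature quoted above
Z_EMISSION = 5.105   # km, effective emission height quoted above

for label, gamma in [("dry adiabatic", 9.81), ("humid (low end)", 3.5), ("mean", 6.5)]:
    surface = T_EMISSION + gamma * Z_EMISSION
    print(f"{label}: {gamma} K/km -> {surface:.4f} K at the surface")
# Reproduces the 305.0800 K, 272.8675 K and 288.1825 K figures above.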

This occurs because polyatomics transit more energy from surface to upper atmosphere by dint of their higher specific and latent heat capacity, which acts (all else held constant) to reduce temperature differential between different altitudes. One would think this would cause an upper-atmospheric ‘hot-spot’, and indeed the climastrologists originally claimed that such a hot-spot would be a signature of CAGW… but that hot-spot was never found, and for a very good reason… because the increased atmospheric CO2 concentration augers more energy from surface to upper atmosphere (an upper atmosphere warming effect) while also more effectively radiatively cooling the upper atmosphere… and the radiative cooling effect far overwhelms the convective warming effect. That’s why the upper atmosphere has exhibited a long-term and dramatic cooling.

Near-surface extinction depth is ~10.4 m at current CO2 concentration, and a doubling of CO2 concentration would reduce that to ~9.7 m. The troposphere is essentially opaque to 13.98352 µm to 15.98352 µm (to account for the absorption shoulders of CO2) radiation. In fact, it’s opaque to that radiation right up to ~15 – 20 km (TOA for that wavelength of radiation). That’s where the emission height of CO2 is.

Tropospheric thermalization by CO2 is effectively saturated… but upper atmosphere radiative cooling by CO2 is not saturated.

Thus, tropospheric CO2 thermalization only serves to increase CAPE (Convective Available Potential Energy), regardless of CO2’s atmospheric concentration. This increases convection, which is a cooling process (it transits energy from surface to upper atmosphere, then radiatively emits it… more CO2 will transit and emit more energy).

It is the homonuclear diatomics and monoatomics which are the ‘greenhouse gases’… they can receive energy via conduction by contacting the surface just as the polyatomics do, they can convect just as the polyatomics do, but in order to emit their energy, they must have their net-zero magnetic dipole moment perturbed via collision. But in the upper atmosphere, collisional processes take place far less often due to the reduced atmospheric density, and any collision is more likely to thermalize the energy, rather than emit it.

Homonuclear diatomic vibrational mode quantum states are relatively long-lived and meta-stable, and so the majority of that energy is thermalized via v-t (vibrational-translational) collisional processes.

Monoatomics have no vibrational mode quantum states (so they cannot contribute to thermalization warming), but their lower specific energy acts to convectively transit less energy per parcel of air than would more-complex molecules such as polyatomics.

Remember that radiative emission is the sole means by which our planet can shed energy to space. Remember also that CO2 is the primary radiative coolant in the upper atmosphere.

Thus, in an atmosphere consisting solely or primarily of homonuclear diatomics, radiative emission to space would be reduced. The upper atmosphere would be warmer (because it could not as effectively radiatively cool), thus the lower atmosphere would have less buoyancy, thus convection would decrease, thus the surface would be warmer.

In addition, because polyatomic molecules make the lapse rate more vertical, a paucity of polyatomic molecules would make the lapse rate less vertical (less energy would be transited from surface to upper atmosphere per parcel of air, temperature differential between altitudes would be greater, thus the lapse rate would force surface temperature to be much higher).

On a world without water (ie: an atmosphere consisting solely or primarily of homonuclear diatomics), the surface would be much warmer. On our mostly-water world, a decrease in atmospheric CO2 content would cause a similar effect, which would be compensated by an increase in atmospheric water vapor content (the warming due to lower CO2 atmospheric content would cause more evaporation of water, humid air is more buoyant than dry air, so convection would increase, thus the warming due to less atmospheric CO2 concentration would be compensated by cooling due to more water vapor content), which would again make the lapse rate more vertical by transiting more energy from surface to upper atmosphere.

Polyatomic molecules act to increase thermodynamic coupling between heat source (in this case, the surface) and heat sink (in this case, space) (as compared to homonuclear diatomics and monoatomics). They thermalize energy in the lower atmosphere, convect it to the upper atmosphere, and radiatively emit it.

Homonuclear diatomics act to thermalize energy picked up via conduction with the surface (and energy picked up via collision, and energy picked up via the odd collisionally-induced radiative absorption), but cannot as effectively radiatively emit that energy. They also act to reduce the energy content of any parcel of air (as compared to a similar parcel of air with the homonuclear diatomics replaced with polyatomics), thus less energy is convectively transited from surface to upper atmosphere.

Monoatomics act to reduce the energy content of any parcel of air (as compared to a similar parcel of air with the monoatomics replaced with homonuclear diatomics or polyatomics), thus less energy is convectively transited from surface to upper atmosphere.

If CO2 was such a great ‘heat-trapping’ gas, it would be used as a filler gas between double-pane windows… it’s not because window manufacturers know monoatomics with low specific heat capacity reduce thermodynamic coupling between heat source and heat sink.

mkelly
Reply to  LOL@Klimate Katastrophe Kooks
March 14, 2021 12:59 pm

Nice write up. Thanks.

fred250
Reply to  LOL@Klimate Katastrophe Kooks
March 14, 2021 2:01 pm

“it would be used as a filler gas between double-pane windows…”

There was an experiment done that showed that using CO2 as a filler in double glazed windows, gave INCREASED energy transfer compared to normal air.

Mike
Reply to  fred250
March 14, 2021 7:36 pm

Fred, do you have a link for this?
Was there increased flow both ways?

fred250
Reply to  Mike
March 14, 2021 11:46 pm

Link was on my old computer.. sorry.

DMacKenzie
Reply to  LOL@Klimate Katastrophe Kooks
March 14, 2021 3:26 pm

Very good LOL, 8 out of 10, add in cloud albedo and you’ll get 10.

Macha
Reply to  LOL@Klimate Katastrophe Kooks
March 14, 2021 4:03 pm

Great write up indeed. I’m sure someone has estimated the amount of ground water pulled to the surface for irrigation use… now that might change broad regional surface temps.

LOL@Klimate Katastrophe Kooks
Reply to  Macha
March 15, 2021 4:35 pm

I’m sure it does, given the high latent heat of vaporization of water.

In fact, if the climate loons were so awfully worried about global warming (whatever the cause), they’d be advocating for hitting the heat where it hurts using a method that’s effective… in other words, they’d be advocating for water misters on tall buildings in cities.

That’d quash the UHI effect, cool the air in cities so people wouldn’t suffer as much during heat waves, and reduce cooling costs.

Where I grew up, we had something similar… except the buildings were fir trees. My friend’s dad had a landscaping business, so I purchased a couple thousand feet of small plastic tubing, T-fittings and small misters, and Ty-Rap’d (my dad is an electrician, so he had plenty of Ty-Raps, the precursor to zip-ties) them in the trees around the northern and western perimeter of the wind-break (in summer, wind generally blew from the West-Northwest). If it got too hot, I’d go open the hose faucet, the trees would get wet, the evaporative cooling made a nice cool breeze.

It needn’t be a large flow of water, either.

Let us assume each building emits 20 L (5.283 US gallons) of water in a fine mist per hour, and let us assume that the misters automatically kick on at 300 K (80.33 F, 26.85 C).

At 300 K, ∆H_vap = 2437300 J/kg or 677.02777778 Wh/L

So each building would be providing cooling equivalent to running a 13540 W A/C unit.

That doesn’t sound like a lot, until you get a thousand buildings doing the same, amounting to 13.54 MW of cooling equivalent.
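
That arithmetic, as a minimal Python sketch (20 L/h and the 300 K latent heat as assumed above, with 1 L of water taken as ~1 kg):

DELTA_H_VAP = 2_437_300.0   # J/kg at 300 K, as above

def cooling_power_watts(litres_per_hour):
    # Evaporating 1 L (~1 kg) of water per hour absorbs latent heat continuously.
    return litres_per_hour * DELTA_H_VAP / 3600.0

print(cooling_power_watts(20.0))               # ~13540 W per building
print(cooling_power_watts(20.0) * 1000 / 1e6)  # ~13.54 MW for a thousand buildings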

The people on the street wouldn’t notice water misting down on them, the mist would be fine enough to almost immediately evaporate… all the people would notice is cool air flowing down the sides of the buildings.

But let’s instead destroy society, destroy capitalism, impoverish hundreds of millions of people, force hundreds of millions of people into government dependency, destroy our sovereignty, enslave ourselves to an unaccountable and unelected cabal of communist UN profiteers… yeah, that’s what the liberal climate loons are shilling for. LOL

Richard M
Reply to  LOL@Klimate Katastrophe Kooks
March 14, 2021 7:44 pm

Yes, more simply stated, the radiating gases allow energy to be distributed (through absorption/collision) into the atmosphere to molecules that are already gravitationally distributed. That is the reason for the lapse rate and the entire reason the surface is 33 C warmer than it would be without those radiating gases. There is no greenhouse effect.

Downwelling radiation is real, but that energy is simply being moved from one location to another one. When all emission paths are averaged out, you find the average is upward. Again, due to the distribution of mass. The probability of absorption is higher where there is more mass so the path downward is shorter. This means the atmosphere is supporting an outward, constant energy flux where some amount of that energy temporarily energizes the gases, but the energy **effectively** goes in one direction, to space.

Computing some warming effect from looking only at downwelling energy is a mental trap that many skeptics have accepted. In reality, it’s all part of the same outward flux when averaged out over the trillions of emission events.

LOL@Klimate Katastrophe Kooks
Reply to  Richard M
March 15, 2021 4:19 am

I’m not so sure this claimed ‘backradiation’ is of sufficient radiant intensity to even have much of an effect.

If ‘backradiation’ from CO2 atmospheric emission causes CAGW, where is that ‘backradiation’ coming from?

Near-surface extinction depth is ~10.4 m at current CO2 concentration, and a doubling of CO2 concentration would reduce that to ~9.7 m. The troposphere is essentially opaque to 13.98352 µm to 15.98352 µm (to account for the absorption shoulders of CO2) radiation. In fact, it’s opaque to that radiation right up to ~15 – 20 km (TOA for that wavelength of radiation). That’s where the effective emission height of CO2 is.

So this ‘backradiation’ must be coming from that ultra-thin ~10.4 m layer of atmosphere immediately above the surface, in order to even reach the surface… except that’s the same layer of atmosphere which is thermalizing nearly all of that 13.98352 µm to 15.98352 µm (to account for the absorption shoulders of CO2) radiation.

CO2’s absorption of IR in the troposphere only has the effect of thermalizing that radiation and thus increasing CAPE (Convective Available Potential Energy), which increases convection of air to the upper atmosphere (carrying with it the latent and specific heat of polyatomic molecules… more polyatomic molecules will carry more energy and will more readily emit that energy in the upper atmosphere), which is a cooling process.

Mean free path length for radiation decreases exponentially with decreasing altitude (and vice versa) because air density changes exponentially with altitude; therefore the net vector for radiation in the 13.98352 µm to 15.98352 µm band is upward. So the majority of ‘backradiation’ which could possibly reach the surface would come from that very thin layer of atmosphere within ~10.4 m of the surface, and the great majority of that energy is being thermalized and convected. So where is this ‘backradiation’ energy coming from that’s going to cause catastrophic anthropogenic global warming, especially considering that the maximum able to be absorbed by CO2 is 8.1688523 W/sr-m^2, and the maximum able to be absorbed by anthropogenic CO2 is 0.29652933849 W/sr-m^2?

At 287.64 K (the latest stated average temperature of Earth) and an emissivity of 0.93643 (calculated from NASA’s ISCCP program from data collected 1983-2004), at a photon wavelength of 14.98352 µm (the primary spectral absorption wavelength of CO2), the spectral radiance is only 5.43523 W / m^2 / sr / µm (integrated radiance from 13.98352 µm – 15.98352 µm of 10.8773 W/sr-m^2 to fully take into account the absorption shoulders of CO2).
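
That figure can be reproduced from Planck’s law; a minimal sketch (the last line is a rough flat-band estimate of the quoted 10.8773 integral, not the full integration):

import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m / s
KB = 1.380649e-23    # Boltzmann constant, J / K

def planck_per_um(wavelength_um, temp_k, emissivity=1.0):
    # Spectral radiance in W / m^2 / sr / um.
    wl = wavelength_um * 1e-6
    b = 2.0 * H * C**2 / wl**5 / (math.exp(H * C / (wl * KB * temp_k)) - 1.0)
    return emissivity * b * 1e-6

b = planck_per_um(14.98352, 287.64, 0.93643)
print(f"{b:.4f} W/m^2/sr/um")     # ~5.435, the figure quoted above
print(f"{2.0 * b:.3f} W/sr-m^2")  # ~10.87 over the 2-um band, near the quoted 10.8773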

Thus the maximum that CO2 could absorb in the troposphere would be 10.8773 W/sr-m^2, if all CO2 were in the CO2{v20(0)} vibrational mode quantum state.

While the Boltzmann Factor calculates that 10.816% of CO2 are excited in one of its {v2} vibrational mode quantum states at 288 K, the Maxwell-Boltzmann Speed Distribution Function shows that ~24.9% are excited. This is higher than the Boltzmann Factor calculated for CO2 because faster molecules collide more often, weighting the reaction cross-section more toward the higher end.

Thus that drops to 8.1688523 W/sr-m^2 able to be absorbed. Remember, molecules which are already vibrationally excited cannot absorb radiation with energy equivalent to the vibrational mode quantum state energy at which they are already excited. That radiation passes the vibrationally excited molecule by.

That’s for all CO2, natural and anthropogenic… anthropogenic CO2 accounts for ~3.63% (per IPCC AR4) of total CO2 flux, thus anthropogenic CO2 can only absorb 0.29652933849 W/sr-m^2.

CO2 absorbs ~50% within 1 meter, thus anthropogenic CO2 will absorb 0.148264669245 W/m^2 in the first meter, and the remainder 0.148264669245 W/m^2 within the next ~9 meters.
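
A minimal Beer-Lambert sketch, assuming ‘extinction depth’ is read as the depth of near-total (not 1/e) absorption, which is the only reading that reconciles the ~50%-per-metre and ~10.4 m figures:

def fraction_absorbed(z_metres, half_depth=1.0):
    # Beer-Lambert absorption with ~50% absorbed per metre, as quoted above.
    return 1.0 - 0.5 ** (z_metres / half_depth)

for z in (1.0, 5.0, 10.4):
    print(f"{z} m: {100.0 * fraction_absorbed(z):.2f}% absorbed")
# 10.4 m -> ~99.93% absorbed, i.e. 'extinction' as near-total absorption.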

CO2 absorbs this radiation regardless of any increase in atmospheric concentration… extinction depth is ~10.4 m at 14.98352 µm wavelength. A doubling of CO2 atmospheric concentration would reduce that to ~9.7 m. Thus any tropospheric thermalization which would occur at a higher CO2 atmospheric concentration is already taking place at the current CO2 atmospheric concentration. Thus the net effect of CO2 thermalization is an increase in CAPE (Convective Available Potential Energy), which increases convective transport to the upper atmosphere, which is a cooling process.

Tropospheric thermalization is effectively saturated. A doubling of CO2 doesn’t appreciably reduce extinction depth at the band centered around 14.98352 µm. But upper-atmospheric radiative shedding of energy to space is not saturated… and more CO2 molecules will cause more upper-atmospheric cooling, increasing buoyancy of lower-atmosphere air and thus increasing convection. IOW, polyatomic molecules (such as CO2) increase thermodynamic coupling between heat source (in this case, the surface) and heat sink (in this case, space) due to the fact that they have higher specific heat capacity than the monoatomics (Ar) and homonuclear diatomics (N2, O2).

An increased CO2 atmospheric concentration will emit more radiation in the upper atmosphere (simply because there are more molecules absorbing energy in the lower atmosphere, more molecules convectively transporting energy to the upper atmosphere and advectively transporting energy poleward, and more molecules capable of emitting radiation in the upper atmosphere), thus more radiation will be emitted to space, and that represents a loss of energy to the system known as ‘Earth’, which is a cooling process.

This illustrates what I’m stating:
http://imgur.com/Zxq4KlB.png
That’s a MODTRAN plot at 287.64 K for 415 ppm vs. 830 ppm CO2 for 13.98352 µm to 15.98352 µm radiation (to fully account for the absorption shoulders of CO2). It assumes no water vapor, no CH4, no O3 present. Note that the troposphere plots aren’t appreciably different, whereas the 100 km plots (ie: at the edge of space) are appreciably different.

IOW, a doubling of CO2 atmospheric concentration doesn’t appreciably change the upward or downward radiative flux in the troposphere (because the extinction depth for those wavelengths at 415 and 830 ppm is low enough that it’s thermalizing nearly all of that radiation, the net effect being an increase in CAPE (Convective Available Potential Energy), which increases convection, which is a cooling process), but it does appreciably change how much energy is exiting the system known as ‘Earth’, and that represents a cooling process. One can clearly see the effect of CO2 upon energy emission to space, as delineated by the shoulders of the emission spectrum of CO2 in the 100 km plots.

That cools the upper atmosphere, and since the lapse rate is ‘anchored’ at TOA and since the heat transfer equation must (eventually) balance, that means the lower atmosphere must cool toward the temperature of the upper atmosphere (because a higher concentration of polyatomic molecules shifts the lapse rate vertically, and radiatively cools the upper atmosphere faster than the lower atmosphere can convectively warm it), and thus the surface must cool with an increasing CO2 atmospheric concentration.

This is what is taking place; we’re just working through the humongous thermal capacity of the planet, which warmed due to a now-ended long series of stronger-than-usual solar cycles (the Modern Grand Maximum), but it is cooling (in fact, it’s projected that we’re slipping into a Solar Grand Minimum which will rival the Dalton Minimum, and may rival the Maunder Minimum).

fred250
Reply to  LOL@Klimate Katastrophe Kooks
March 15, 2021 2:16 pm

[image attachment]

LOL@Klimate Katastrophe Kooks
Reply to  fred250
March 15, 2021 9:26 pm

Hey Fred, maybe you can help me out here… I’m attempting to calculate the adiabatic lapse rate for CO2 alone, but I’m getting a number that seems too high. I suspect it’s because I’ve not accounted for the low atmospheric concentration of CO2, but my brain’s not working right now, so I can’t figure out how to do that. Need sleep.

I did it once before, but apparently I didn’t keep the calculations.

Your input would be appreciated.

g = gravitational acceleration
c_p = Specific Heat Capacity at Constant Pressure

R = gas constant = 8.31 J K-1 mol-1

M = mean molecular mass = 44.0095 amu x 0.001 kg mol-1 = 0.0440095 kg mol-1

UNITS:
c_p: (kg m2 s-2 K-1 mol-1) / (kg mol-1) = m2 s-2 K-1
dT/dz = g/c_p: (m s-2) / (m2 s-2 K-1) = K m-1

c_p = (9/2) R / M = (9/2) × 8.31 J K-1 mol-1 / 0.0440095 kg mol-1 = 849.70290505459048614503686704007 m2 s-2 K-1

dT/dz = -g/c_p

9.80665 m s-2 / 849.70290505459048614503686704007 m2 s-2 K-1 = 0.01154126923853456344431073672951 K m-1

0.01154126923853456344431073672951 K m-1 * 1000 m = 11.54126923853456344431073672951 K km-1

Like I said, that seems high. Perhaps because I’ve not accounted for the low concentration of CO2 in the atmosphere.
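
A minimal sketch of one way to fold the low concentration in: the adiabatic lapse rate is g over the mixture’s c_p, so a trace gas enters through a mass-weighted average of heat capacities rather than through its own pure-gas rate (the dry-air values and ~420 ppmv are my assumptions):

R = 8.314          # gas constant, J / (K mol)
G = 9.80665        # gravitational acceleration, m / s^2
M_CO2 = 0.0440095  # molar mass of CO2, kg / mol

c_p_co2 = 4.5 * R / M_CO2     # the (9/2)R/M estimate used above, ~850 J/(kg K)
print(G / c_p_co2 * 1000.0)   # ~11.5 K/km for a hypothetical pure-CO2 atmosphere

# The rate depends only on g and the mixture's c_p, so a trace gas enters via a
# mass-weighted average of heat capacities, not via its own pure-gas rate:
M_AIR, CP_AIR = 0.028964, 1004.0   # dry air: kg/mol and J/(kg K) (assumed values)
w = 420e-6 * M_CO2 / M_AIR         # ~420 ppmv CO2 as a mass fraction (assumption)
cp_mix = (1.0 - w) * CP_AIR + w * c_p_co2
print(G / cp_mix * 1000.0)         # ~9.77 K/km, barely different from dry air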

LOL@Klimate Katastrophe Kooks
Reply to  LOL@Klimate Katastrophe Kooks
March 15, 2021 9:39 pm

I have a vague recollection of contrasting the current mean molecular mass of air to that of CO2… I’ll work on it tomorrow. Time to sleep.

fred250
Reply to  LOL@Klimate Katastrophe Kooks
March 16, 2021 12:12 pm

Much more sensible to just calculate the change in lapse rate due to, say, doubling or quadrupling the current CO2 concentration.

You are also using numbers that are far more “accurate” than is logical. Stick to a couple of decimal places.

Apart from that, your calcs look reasonable.

LOL@Klimate Katastrophe Kooks
Reply to  fred250
March 16, 2021 5:33 pm

Yeah, that’s my ultimate goal, calculate for a 1,000,000 ppm CO2 concentration atmosphere, then back-engineer to whatever ppm concentration I want to use.

Actually, we could do this with every gas in the atmosphere, and come up with a much more exact answer than the climastrologists do… in large part, it is the lapse rate, after all, which determines surface temperature.

As for the decimal points, I’ve gotten in arguments with pedantic liberal kooks who claimed that even a fourth decimal point rounding means the entire answer is incorrect, so I’ve gotten into the habit of making the answer as exact as possible. It drives the liberal kooks mad, because they have nothing to nitpick. LOL

LOL@Klimate Katastrophe Kooks
Reply to  LOL@Klimate Katastrophe Kooks
March 15, 2021 5:11 pm

A correction / addendum (the addition is in parentheses below):

Thus that drops to 8.1688523 W/sr-m^2 able to be absorbed. Remember, molecules which are already vibrationally excited cannot absorb radiation with energy equivalent to the vibrational mode quantum state energy at which they are already excited. That radiation passes the vibrationally excited molecule by (unless there are degenerate vibrational mode quantum states… there are three for CO2):
CO2{v21(1)}: 667.4 cm-1, 14.98352 µm
CO2{v22(2)}: 667.8 cm-1, 14.97454 µm
CO2{v23(3)}: 668.1 cm-1, 14.96782 µm

In such a case, CO2 would simply absorb / thermalize energy a maximum of three times, increasing the convective cooling effect of CO2 by increasing CAPE (Convective Available Potential Energy).

Tom in Toronto
March 14, 2021 12:09 pm

I was a statistics and economics dual major during my university years, and the one thing that climate modellers have no choice but to conveniently forget is that every time they run the model they lose a degree of freedom (i.e. the equivalent of one data point no longer being accessible).
Even if an individual forecaster is scrupulous enough to account for these trial runs, when the academic community is only presenting the ‘best’ models, they aren’t accounting for every ‘failed’ model that was used by others and didn’t yield results good enough to publish in a journal.
So this leaves cherry-picked results that have essentially zero (realistically, negative by the thousands) degrees of freedom, given the paucity of accurate temperature data, and no way to ‘re-run’ the experiment.
The estimates for error bars rely on the degrees of freedom, and once you get close to zero, your sigmas are virtually infinitely large (i.e. the results are worthless).
This is precisely the same reason that economic forecasts can be wildly incorrect.

“Climate Science” is a new field that doesn’t have the hundreds of years of failure behind it that Economic Forecasting does, and the data comes out at a far slower pace than most economic data.

Reply to  Tom in Toronto
March 15, 2021 1:52 am

So, what you are saying is, climastrologists are a bunch of venal liars using “statistics” to justify their right to utter false prophesy so as to mislead the general public, rob them blind, and enslave their children, all done from the elevated heights of a religious platform no sinner may criticise, because everything they say is too complex for mere mortals to understand?
Because that’s what Economic Forecasting has delivered so far…

Greg
March 14, 2021 12:18 pm

“there is only one “true” value of ECS in the real world; if that were not the case, any attempt at a model would be pointless from the outset.”

The whole concept of ECS is based on the concept of an “average global temperature”. Averaging temperature readings is physically meaningless since temperature is an intensive, not an extensive, property: it cannot be added and thus cannot be meaningfully averaged.

An average temperature is a statistic, but it is physically meaningless. So yes, the whole effort is pointless (from a scientific point of view). But we all know that this is not about science; it is a dishonest pseudo-scientific dressing for a political agenda.

Rud Istvan
Reply to  Greg
March 14, 2021 12:57 pm

Greg, that is one way to look at it, but I think erroneously. The obviously varying locational actual temperatures are washed out by computing the anomaly for each location. It is only the location anomalies that are ‘averaged’, and the ECS refers to the expected change in that anomaly average per CO2 doubling. That is mathematically sound. The confusion arises because everybody refers to ECS as if it were a new equilibrium ‘temperature’, when it is only indirectly related to one.

Tim Gorman
Reply to  Rud Istvan
March 14, 2021 1:23 pm

I disagree. You can no more “average” anomalies than you can absolutes. Uncertainties in the absolute temperatures grow as you increase the number of data points being “averaged”. Those uncertainties carry over to the anomalies as well. You can’t decrease uncertainty by subtracting two values. Therefore the uncertainty in the “average” grows as you add more data values.

It’s just like laying multiple boards end-to-end, each with its own uncertainty value. The more boards you add the higher the overall uncertainty becomes in the final length. It’s no different with temperatures laid end-to-end – i.e. added together. It doesn’t take long for the uncertainties to become larger than the differences in the “average” you are trying to discern.
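
For reference, a minimal sketch of the two textbook propagation rules this disagreement turns on (the 0.5-unit per-board uncertainty is just an example figure):

import math

def sum_uncertainty(u_each, n, independent=True):
    # Uncertainty of n lengths (or readings) added together: independent random
    # errors add in quadrature; a shared systematic error adds linearly.
    return u_each * math.sqrt(n) if independent else u_each * n

for n in (1, 10, 100):
    print(n, sum_uncertainty(0.5, n), sum_uncertainty(0.5, n, independent=False))
# For an average, divide by n: random error then shrinks as 1/sqrt(n), while a
# shared systematic error stays at u_each no matter how many values are averaged.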

Mark Pawelek
March 14, 2021 12:22 pm

What is this model fetish, and why does it waste so much WUWT time? There’s not a single greenhouse gas climate model with half-decent empirical tests, validations or attempted falsifications. Climate modelling is nearly all fake science. The probability that all the CMIP6 climate models are incorrect is 1. Or 100%.

That is not only not right; it is not even wrong!
– Wolfgang Pauli

Reply to  Mark Pawelek
March 14, 2021 1:14 pm

MP, it is because only in a future predicted by climate models are there any climate concerns worth worrying about. The CAGW castle is built on model sand. And the IPCC therefore does its level best to pretend the many billions spent on the models were good money rather than bad, lest the whole climate enterprise collapse.
The beauty of CMIP6 is that AR6 has dug itself an even bigger hole, harder to climb out of.

Gerald Machnee
Reply to  Mark Pawelek
March 14, 2021 3:14 pm

Willis explained the problem with models very well.

Mark Pawelek
Reply to  Gerald Machnee
March 16, 2021 7:39 pm

Even were scientists able to model the effect of clouds ‘correctly’, the models would still be wrong, and would still be biased towards man-made climate change. Political groups, E-NGOs and billionaires are biased to see man-made climate change. Forty or more years of climate models, and the testimony of climate modellers, tell us so. Tens of billions of dollars have been spent trying to prove the sky pixies say so. Only scientists who comply with the funding bias are funded.

Without the scientific method, the models will always be very wrong. That’s why skeptics should concentrate on empirical testing, validation, and falsification. Without it, there is no science. Our intelligence could double, our AI and machine-learning algorithms could be orders of magnitude better – but the models will still be wrong, and no amount of logical refutation can make models better. Only empirical falsification can help. It’s the only thing we have to stop bad science.

Reply to  Mark Pawelek
March 15, 2021 2:21 am

“Modelling” is what the enemy uses to bullshit us common folk. If we come right out and call them bullshitters, we are rude denialists reverting to ad hominems to discredit our betters.
Reprinting their Holy Writ, and throwing it before us swine, we all get a go at analysing said Word of Gawd as well as each of us is able or willing. “Death by a thousand paper cuts”, I think someone called it? So we spend a lot of time on their models. It is the only thing they offer us swine; they have nothing else… Pure fear porn.
That said, now you also know why every second troll writes such beautiful prose on the impropriety of non-experts commenting on the Holy Writ as delivered by the Holy Profits of climaskatology, it really gets their gall, their Montessori education was all about “consensus Truth”, “perception is reality” and “believe the science”.
Not that I have much to offer, mind; I’m just here for the awesome stuff the people all over this site teach me daily.

Burl Henry
March 14, 2021 12:29 pm

Earth’s climate is totally controlled by varying amounts of dimming sulfur dioxide aerosols in the atmosphere, primarily from random volcanic eruptions, and as such can NEVER be modeled, unless the modelers provide solutions for controlling temperatures by adjusting atmospheric SO2 aerosol levels.

Those who maintain that the ECS for CO2 is zero are the only correct voices on this blog.

fred250
Reply to  Burl Henry
March 14, 2021 12:55 pm

That requires repeating and highlighting…

Those who maintain that the ECS for CO2 is zero are the only correct voices on this blog.

DMacKenzie
Reply to  Burl Henry
March 14, 2021 7:16 pm

In order… sunshine, clouds, evaporation and condensation of rain, net ground/sky IR, convection… aerosols are somewhere about 1/10 of the smallest of these… just sayin’

Burl Henry
Reply to  DMacKenzie
March 15, 2021 1:50 pm

DMacKenzie:

The sun is not a variable, but the amount of sunshine reaching the Earth’s surface is what drives Earth’s temperatures, and that amount is controlled by the varying amounts of dimming SO2 aerosols in the atmosphere. Increase them and it cools down. Decrease them and it warms up.

http://www.skepticmedpublishers.com/article-in-press-journal-of-earth-science-and-climatic-change/

Joel O'Bryan
March 14, 2021 12:33 pm

Title: “The Problem with Climate Models”
There is not a singular problem with climate models.
The problems with climate models are manifold. The problems are many, and they are all entangled, wrapped around each other like a proud dung beetle rolling up his ever-growing ball of manure prize. The models and their problems are like that ball of manure that forms a massive junk ensemble – it stinks. It’s an ensemble the CMIP’ers like to publish in an “Emperor’s New Clothes” Fallacy. Only those with divine climate training and appreciation of junk science can see and smell the goodness.

Ed identifies one of those problems here: convergence. It is really a lack of convergence, better seen as a divergence. The more models the Climate Dowsing community produces, the wider the ECS upper and lower bound estimate gets, rather than converging as n increases.

Another massive problem in the climate models is that they are iterative input-value error propagation machines, producing statistical error that quickly overwhelms any conclusions that can be drawn from the output. This is the problem with the GCMs that Pat Frank identified. This problem I think of as a sticky glue that holds the junk model outputs together and makes unwinding the ensemble, a turd ball the CMIP’ers create, an intractable problem. Better to just toss them all in the junk bin than to try to parse and tease out nuggets of goodness from them. Junk all the way down.

Another major problem with too-hot-running climate models, at least in their current implementations, is that they all predict the mid-tropospheric tropical hot spot. No one in the climate modeling community wants to talk about this elephant in the room anymore. They all hand-wave it away and try to ignore it, as they have for over 20 years. The lack of observation of this prediction after almost 30 years of looking would, in other science disciplines, have been the basis for either tossing out the hypothesis completely, or realizing the strong water vapor feedback part of GHG theory is likely incorrect and implementing a weak GHG theory. But then the modellers’ political paymasters would be unhappy.

So those 3 problems (divergence, iterative error propagation, and failure of a major prediction) tell us that the CMIP3/5/6 ensembles are merely junk science. That the climate modelers all assemble every 4-6 years, put the climate model outputs together into an ensemble, and proudly roll out their predicted ECS range makes them not much more than dung beetles. At least the dung beetle, though, knows what he has is manure.

S.K.
Reply to  Joel O'Bryan
March 14, 2021 1:02 pm

Excellent analogy.


S.K.
March 14, 2021 12:35 pm

The different computer codes for calculating stellar evolution, developed by groups in various countries, yield the same results for the same evolutionary phases, which also agree well with the observations. Such convergence is a hallmark of the progress of the insights on which the models are based, through advancement of understanding of the underlying physics and testing against reality, and is manifest in many of the sciences and techniques where they are used.

The IPCC alters the data to support their thesis and their models without any effort to understand the underlying physics. They are a political organization who produce propaganda and have zero credibility in the science community.

Why is there no research on producing experimental evidence quantifying CO2 climate sensitivity?

Climate science is in its absolute infancy; it is extremely complex, and the complexity ensures it will always be changing. Any individual or organization that asserts they have the ability to model or predict the climate is ignorant of climate science. The science community needs more data and fewer computer models.

March 14, 2021 12:47 pm

“Such models give ECS values between 0.5C and 0.7C. Not something to be really concerned about.”
– Ed Zuiderwijk, PhD

In the many decades that I have been involved in the “global warming” / “climate change” / “wilder weather” / “we’re all gonna die from false fabricated climate BS” debate, I’ve watched rationally calculated ECS decline by almost an order of magnitude, from almost 10C to about 1C/(2xCO2).

My own calculations of ECS range from plus 1C/doubling to minus 1C/doubling, based on the ASSUMPTION that increasing atmospheric CO2 drives temperature, which is probably FALSE – unless the future can cause the past, which is extremely improbable in our current space-time continuum. I suggest that ECS, at best, should be treated as an “imaginary number”. 🙂

The following post is from 2013.

https://wattsupwiththat.com/2013/09/19/uh-oh-its-models-all-the-way-down/#comment-1108144
[To spare our hardworking moderators, I’ve deleted links – see the original post for links.]
[excerpt]

One could also say “an infinitude of worthless climate models”, programmed by “an infinitude of dyslexic climate modellers”, yielding “an infinitude of exaggerated global warming predictions” (er, sorry, “projections”).

John said above:
David Appell had the first comment on Judith’s blog entry, which is entitled “Consensus Denialism.” Here is what he said about Judith:
“The distressing thing is how some people are all ready to attack models, instead of helping make them better.”

OK David, here are some helpful suggested steps to make the climate models better:
1. Adjust the Surface Temperature (ST) database downward by about 0.05 to 0.07C per decade, back to about 1940, to correct for a probable strong warming bias in the ST data.
2. Decrease the ECS (sensitivity) to about 1/10 of its current level, to between 0.0 and 0.5C. If ECS exists, it is much smaller than current estimates.
3. Eliminate the fabricated aerosol data that enabled the false high ECS values used in the climate models. The aerosol data was always provably false (Google “DV Hoyt” ClimateAudit).
4. Include a strong natural cyclical variation based on either the PDO (~60 years) or the Gleissberg Cycle (~90 years) – see which one fits the ST data best.

Other than that, the models are great! Actually no, not so great – the models have probably “put the cart before the horse” – we know that the only clear signal in the data is that CO2 LAGS temperature (in time) at all measured time scales, from a lag of about 9 months in the modern database to about 800 years in the ice core records – so the concept of “climate sensitivity to CO2” (ECS) may be incorrect, and the reality may be “CO2 sensitivity to temperature”. See work by me and Murry Salby (and Hermann Harde and Ed Berry).
BTW, this does not preclude the possibility that increases in atmospheric CO2 over the past ~century are primarily due to human combustion of fossil fuels, but there are other plausible causes (Google “mass balance argument” Engelbeen Courtney).

So good luck with those models David – hope this helps to make them better. 🙂

Regards to all, Allan

Steve Case
March 14, 2021 12:55 pm

After a short search on “Climate Models Clouds” this comes up:

Clouds, Arctic Crocodiles and a New Climate Model January 2020

From NASA no less and this quote:

   “It stands to reason that any computer model that hopes to explain
   past climates or forecast how ours will change would have to take
   clouds into account. And that’s primarily where climate models fall
   short.”

Words mean things, and the quote says “primarily”, which means clouds aren’t the only shortcoming.

So why is so much stock put with climate models?

John Tillman
Reply to  Steve Case
March 14, 2021 1:25 pm

Above I mention the grid scale resolution needed actually to model clouds, tens to hundreds of meters rather than hundreds of kilometers, as now.

There presently isn’t enough computing power in Christendom to achieve this, nor in any and all other -doms.

Clyde Spencer
March 14, 2021 1:29 pm

So after some 4 decades of development climate models have still not converged to a ‘standard climate model’ with an unambiguous ECS value; rather the opposite is the case.

Convergence is just some 40 years in the future — and always will be.

Clyde Spencer
March 14, 2021 1:36 pm

… there is only one “true” value of ECS in the real world;

Might it be that ECS is not a constant, but instead varies with other parameters such as the global mean temperature or greenhouse gas concentration?

Tim Gorman
Reply to  Clyde Spencer
March 14, 2021 1:55 pm

go here: https://climate.nasa.gov/vital-signs/carbon-dioxide/

The canard that CO2 is well-mixed globally doesn’t jibe with what NASA shows at this site. Not only is the CO2 not well-mixed, it shows the US, Siberia, and China as having very high CO2 concentrations. Yet the US and Siberia are two regions that have been seeing cooling, not warming. How can CO2 then be the control knob?

Clyde Spencer
Reply to  Tim Gorman
March 14, 2021 5:43 pm

Tim
I’m afraid that “well-mixed,” like beauty, is in the eye of the beholder. I have never seen a formal definition of “well-mixed,” and probably never will. As commonly used, it is as ambiguous as “catastrophic warming” or “becoming more acidic,” which is how the game is played by alarmists.

Carlo, Monte
March 14, 2021 1:52 pm

Is the AR6 equivalent of Table 9.5 in the AR5 report publicly available yet? It has the ECS and TCR results from individual models.

Ossqss
March 14, 2021 2:10 pm

C’mon man, all the talking doomsayer “climate heads” in the news use RCP 8.5.

That’s got to be accurate, right? >sarc

fred250
March 14, 2021 2:10 pm

And the saying that “some models are useful” does NOT apply to climate models.

Their use is actually a REAL DANGER to society and the world, just like using a “wrong” engineering model would be.

“Believing” these erroneous models is doing GREAT DAMAGE to many parts of global infrastructure and badly affecting many societies, especially those in the developing world.

It is causing the delay of reliable electricity infrastructure to those countries, while at the same time causing great UNRELIABILITY and INSTABILITY in the electricity supplies of developed societies.

Not to mention the political, societal and civil unrest it is creating.

CO2isLife
March 14, 2021 2:32 pm

Any real scientist would be able to identify the flaws in the Climate Models. The fact that they aren’t fixing the obvious problems pretty much proves they benefit from the flawed output.
1) The change in W/m^2 per unit of CO2 shows a log decay; the climate models assume a linear relationship
2) Temperature responds to W/m^2, not to CO2 directly; the climate models treat CO2 as a linear driver instead of treating its W/m^2 contribution as a log-decay relationship
3) The only mechanism by which CO2 can affect climate change is through the effect of 15 micron LWIR. 15 micron LWIR has a blackbody temperature of -80C, and won’t warm water (see the sketch at the end of this comment)
4) The oceans are warming, and CO2 and 15 micron LWIR won’t warm water
5) The poles, the N and S hemispheres (land and ocean), and the lower third, equator, and upper third of the globe all have identical CO2 yet different temperature trends. Equal amounts of CO2 can’t cause the temperature differentials
6) The UHI and water vapor corrupt the temperatures. If you control for the UHI and water vapor and isolate the impact of CO2 on temperature, you find that CO2 doesn’t cause warming. The models focus on warming.

Simply look at a desert location to isolate the impact of CO2 on temperatures. You will see that CO2 doesn’t cause warming.

Alice Springs (23.8S, 133.88E) ID:501943260000 https://data.giss.nasa.gov/cgi-bin/gistemp/stdata_show_v3.cgi?id=501943260000&dt=1&ds=5
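
For item 3 above, the -80C figure follows from Wien’s displacement law; a one-line Python sketch:

WIEN_B = 2897.77   # Wien displacement constant, um K

# Temperature whose blackbody emission peaks at 15 um:
print(WIEN_B / 15.0 - 273.15)   # ~-80 C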

Jim Gorman
March 14, 2021 2:35 pm

Warmists never want to deal with the radiation H2O absorbs directly from the sun in the near IR spectrum. It is substantial. I don’t know for sure, but a lot of them mistake this for “feedback” from CO2. One only has to look at this to see something is out of whack in the “radiation budget” because this is never accounted for.

[attached image: IMG_0250.JPG]
Clyde Spencer
Reply to  Jim Gorman
March 15, 2021 7:59 am

Jim
If this is your personal graphic, there are a couple of typos that should be corrected:
“Readiation” and “Wavelenght”

Jim Gorman
Reply to  Clyde Spencer
March 18, 2021 6:03 am

Clyde, not my graphic. Found on Bing images.

RickWill
March 14, 2021 2:52 pm

I have an accurate climate model grounded in sound physics. It is light years ahead of the other climate models, which are founded on a fairy tale.

The average surface temperature of the Earth is set by thermostatic limits at the poles of -2C and the tropical warm pools at 30C. Hence:
Average Global Surface temperature = {30 + (-2)}/2 = 14C or 57F

Glaciation occurs when the Atlantic warm pool fails to reach the 30C limit annually, resulting in ice accumulation on adjacent land.

RickWill
Reply to  RickWill
March 14, 2021 3:11 pm

Something that has bugged me for a while is finding the missing heat in the deep oceans. We are told that a key indicator of global warming is observed in ocean heat uptake. The top 100m of the ocean has reportedly warmed 0.57C in the past 65 years, the top 700m 0.22C, and the top 2000m 0.11C. That means the zone from 700 to 2000m has warmed about 0.06C.
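
A minimal sketch of that layer arithmetic, taking layer heat as thickness times average warming (it lands near, though slightly under, the quoted 0.06C):

warming = {100: 0.57, 700: 0.22, 2000: 0.11}   # average warming of 0..depth, C

deep = (2000 * warming[2000] - 700 * warming[700]) / (2000 - 700)
print(f"{deep:.3f} C")   # ~0.051 C for the 700-2000 m layer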

To conduct the amount of heat needed to cause a 0.06C rise in the 1300m below 700m, from the surface mixed layer down to 700m, would take 430 years. So the heating of the deep ocean in 65 years requires a different process.

Given that the thermocline is a function of heat conduction from the top mixed layer and cooling by upward flow from the cool zone, fed from the poles, I determine that the only way to get “rapid” warming at depth is to slow the rate of evaporation so the upward rate of flow is reduced. That is detailed in the attached charts.

To get close to the temperature changes observed in the different layers requires the change in net evaporation rate to occur at least 100 years before the start of the deep ocean record in 1955.

A reduced net evaporation rate is consistent with increase in the area of warm pools, where the surface insolation is reduced and there is convergence of moist air so the net evaporation rate is negative over the warm pools.

[attached chart: Ocean_Temp_Profile.png]
Richard M
Reply to  RickWill
March 14, 2021 8:24 pm

This agrees with the increase in salinity seen over the last 400 years. Higher salinity would slow the rate of evaporation.

Loydo
Reply to  RickWill
March 16, 2021 8:57 pm

What about convection, turbulence and up and down-welling?

RelPerm
March 14, 2021 2:59 pm

The title “The Problem with Climate Models” implies only one problem with climate models? Methinks they have many more problems than just ECS!

March 14, 2021 3:15 pm

An ECS around 0.7 C/doubling will work very nicely I suspect.

That is my finding from 250 years of HadCET, and the findings of Lindzen and Spencer from the AMSU data.

Of course this requires the Svensmark Mechanism to be recognized as major driver of global temperature. Which would not be popular with lucratively paid government climate scientists, who would then lose their jobs.

(I also think TCR and ECS are fairly close to each other, otherwise the ~60 year cycle wouldn’t be so pronounced in HadCRUT 3 and the AMO.)

RickWill
March 14, 2021 3:19 pm

I did an exercise looking at Australia’s CSIRO CMIP3 and CMIP5 models over the same region. In the 4 years between the data sets, they actually COOLED the Nino34 region by 0.8C.

Of course this region has had constant temperature for the last 4 decades, so the only way you get a rising trend and have the model accurate at its time of generation is to cool the past.

[attached chart: Nino34_CSIRO-NCEP.png]
RelPerm
Reply to  RickWill
March 14, 2021 3:43 pm

…so the only way you get a rising trend and have the model accurate at time of generation is to cool the past.

Easy to do when alarmists are in charge of maintaining historical temperature record. Models hind-casting garbage data are sure to forecast garbage, no matter what ECS they assume.

fred250
Reply to  RelPerm
March 14, 2021 11:49 pm

” Models hind-casting garbage data are sure to forecast garbage”

This is a point I have made MANY times !!

FAKE trend in the hind-cast data => FAKE trend in the forecasting.

Chris Hanley
March 14, 2021 3:27 pm

Absence of progress is mentioned as one indicative characteristic of a pseudoscience, among others that would also apply to climate change research over-reliant on computer models.

March 14, 2021 4:13 pm

“…none of those models represent reality well …” One of the difficulties appears to be in grasping that the flow of energy gets redirected wrt wavenumber.

Weekly_rise
March 14, 2021 4:36 pm

“ We know immediately that N-1 models are not correct and that the remaining model may or may not be correct. So we can say that the a priori probability that any model is incorrect is [(N-1+0.5)/N] = 1–0.5/N. This gives a probability that none of the models is correct from (1-0.5/N)^N, about 0.6 for N>3. So that’s odds of 3 to 2 that all models are incorrect”

This is all fine and well if the outcome is considered as a binary event – absolutely right or absolutely wrong. But surely we can agree that a “true” ECS of 2 degrees would not be functionally distinguishable from, say, a modeled ECS of 2.0001 degrees, even though the modeled result would be wrong, strictly speaking.
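For concreteness, the quoted arithmetic takes only a few lines of Python (a direct transcription of the formula under discussion, nothing more):

# Each of N models is incorrect with a priori probability 1 - 0.5/N,
# so the chance that none of the N models is correct is (1 - 0.5/N)**N.
for n in (4, 10, 30):
    print(f"N = {n:2d}: P(all models incorrect) = {(1 - 0.5 / n) ** n:.3f}")

# Prints ~0.586, ~0.599, ~0.604 - all close to the 0.6 quoted above.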

fred250
Reply to  Weekly_rise
March 14, 2021 11:51 pm

Again, you show your TOTAL LACK OF COMPREHENSION of error propagation.

As expected.

Is that your Weekly-FAIL, or do you have more to come?!

Mike Maguire
March 14, 2021 6:24 pm

One problem with the massively wide range in models is that you can dial in positive and negative feedbacks based on assumptions that the next modeler will disagree with, dialing in their own speculative feedbacks to represent the unknowns. 1.8C to 5.6C is an insanely wide range considering we’ve had over 100 years’ worth of surface measurements with CO2 going up and various other factors changing. To think that one model would have triple the warming of another model of the same planet, with what should be the same conditions, tells you something isn’t right.

In models, you have to try to capture everything known accurately and estimate the unknowns using speculation. The range of speculation is what causes the range in sensitivity and wide range in temperature projections that grow to huge values 100 years from now.

But why not give MUCH more weight to the observations on the real planet (recorded empirical data) and just project that trend out, making only slight adjustments for speculative items?
If the trend has been 1.4 deg C/century, assume the trend going forward will continue to be very close to 1.4 deg C/century. The trend is your friend.

Even if there were several unknown variables/factors that had an influence on the rate of warming over the last century, it doesn’t matter, because the data is the data and it measured the REAL warming, with every one of those factors, known and unknown, in there. You don’t have to know and represent every factor.

You can be just as ignorant of those same factors going forward, but using the trend will dial them in automatically. Yes, they could change, but if you don’t even know them to begin with, how do you accurately determine how they might change?

You can’t.

The trend and change over the last 100 years has every single element, knowns and unknowns, baked in.

It doesn’t necessarily tell you the sensitivity to CO2 with accuracy, because of the unknowns that you don’t know, which means your attempts to accurately project the next 100 years will be flawed if you base them on speculative guesses at CO2 sensitivity.

But using the trend allows you to project the next 100 years based on a trend that had every single unknown in it by definition.

All these brilliant PhD scientists are beating their heads against the wall because they need thousands of mathematical equations to accurately represent the physics of all sorts of dynamics, to try to pin down the CO2 sensitivity and project the global temperature for the next 100 years.
And the wide range of projections tells you how speculative and unknown the unknowns are.

Then an operational meteorologist, armed only with empirical data that accurately measured the last 100 years, can project it out for the next 100 years and likely get much closer to reality than the vast majority of the climate model projections.

There’s just way too much weighting on modeling equations and guessed-at sensitivity, and not enough weight given to real-world empirical data and observations.

Tim Gorman
Reply to  Mike Maguire
March 15, 2021 4:26 am

If you look just at the model outputs, after about 20 years they all turn into basically linear trends of the form y = mx + b. Simple projections of a linear trend. And almost all of them have a higher “m” than the real world: your 1.4 C/century.
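A minimal Python sketch of the projection both comments describe, with made-up numbers standing in for a real temperature series:

import numpy as np

# Hypothetical series: ~1.4 C/century trend plus noise.
years = np.arange(1921, 2021)
temps = 0.014 * (years - 1921) + np.random.normal(0.0, 0.1, years.size)

# Fit y = m*x + b by least squares and extend the line.
m, b = np.polyfit(years, temps, 1)
print(f"fitted trend: {m * 100:.2f} C per century")
print(f"projected anomaly in 2100: {m * 2100 + b:.2f} C above the 1921 level")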

Sage
March 14, 2021 8:29 pm

Normally, science looks for convergence to verify a hypothesis. In the “settled science” of Climate Change, divergence is considered verification of the hypothesis.

March 14, 2021 9:54 pm

How many people have taken the trouble to go back and look in detail at the original Manabe and Wetherald (M&W) model and their underlying assumptions? [M&W, 1967] They started by ASSUMING an equilibrium average climate. This idea goes back to Pouillet in 1836 and comes from a fundamental misunderstanding of climate energy transfer [Pouillet 1836]. Conservation of energy for a stable climate on planet earth requires an approximate long term planetary energy balance between the absorbed solar flux and the long wave IR flux returned to space. Using an average solar flux of 1368 W m-2, an albedo (reflectivity) of 0.3 and an illumination area ratio (sphere to disk) of 4, the average LWIR flux is about 240 W m-2. (The exact number depends on satellite calibration). Simple inspection of the CERES IR images gives a value of about 240 ±100 W m-2 [CERES, 2011].  There is NO exact short term energy balance. 
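As a quick sanity check of those numbers:

# Absorbed solar flux that the average outgoing LWIR must balance.
S0, albedo, area_ratio = 1368.0, 0.3, 4.0   # W m-2, reflectivity, sphere/disk
print(f"{S0 * (1 - albedo) / area_ratio:.0f} W m-2")   # ~239 W m-2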
 
Furthermore, the spectral distribution of the outgoing longwave radiation (OLR) at the top of the atmosphere (TOA) is not that of a blackbody near 255 K. The OLR consists of the upward emission of the LWIR flux from many different levels in the atmosphere. The emission from each level is modified by the absorption and emission of the levels above. The OLR does not define an ‘effective emission temperature’.  It is just a cooling flux. There is no 255 K temperature that can be subtracted from an ‘average’ surface temperature of 288 K to give a ‘greenhouse effect’ temperature of 33 K [Taylor, 2006]. 
 
Thermal equilibrium means that the rate of heating equals the rate of cooling.  The lunar surface under solar illumination is in thermal equilibrium so that the absorbed solar flux is re-radiated back to space as LWIR radiation as it is received. There is almost no time delay. The earth is very different from the moon. It has an atmosphere with IR active species (‘greenhouse gases’), mainly H2O and CO2. It also has oceans that cover about three quarters of the surface. In addition, the period of rotation is also faster, 24 hours instead of 27.3 days.  On the real planet earth there are significant time delays between the absorption of the solar flux and the emission of the LWIR flux. This is irrefutable evidence of non-equilibrium energy transfer. Diurnal time delays or phase shifts between the peak solar flux at local noon and the surface temperature response can easily reach 2 hours and the seasonal phase shift at mid latitudes for the ocean surface temperature may reach 8 weeks. This is not new physics. The phase shift for the subsurface ground temperature was described by Fourier in 1824 [Fourier, 1824]. It has been ignored for almost 200 years. Similar non-equilibrium phase shifts are also found in other energy storage devices such as capacitors in AC electronic circuits. 
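A minimal sketch of that linear-response lag, with illustrative (assumed) parameter values rather than anything fitted to the real ocean:

import math

# C*dT/dt = F0*cos(w*t) - lam*T has a steady-state lag phi = atan(w*C/lam),
# the same form as the phase shift across an RC circuit.
C = 1.3e8                              # assumed heat capacity, J m-2 K-1 (~30 m of water)
lam = 15.0                             # assumed net restoring coefficient, W m-2 K-1
w = 2 * math.pi / (365.25 * 86400.0)   # annual angular frequency, s-1

phi = math.atan(w * C / lam)
print(f"seasonal lag = {phi / w / 86400:.0f} days")   # ~61 days, roughly the 8 weeks cited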
 
The surface temperature is determined at the surface by the interaction of four main time dependent flux terms with the surface thermal reservoir. These are the absorbed solar flux, the net LWIR emission, the moist convection and the subsurface transport. (This does not include rainfall or freeze/thaw effects). The fluxes are interactive and should not be separated and analyzed independently of each other. A change in surface temperature requires the calculation of the change in heat content or enthalpy of the surface reservoir divided by the local heat capacity [Clark, 2013]. The (time dependent) downward LWIR flux from the lower troposphere to the surface limits the surface cooling by net LWIR emission. In order to dissipate the excess solar heat, the surface warms up until the excess heat is removed by moist convection. This is the real source of the so-called greenhouse effect. The ocean-air and land-air interfaces have different energy transfer properties and have to be analyzed separately. In addition, for the oceans, long range transport by ocean currents is important.
 
The M&W ‘model’ has nothing to do with planet earth. It was simply a mathematical platform for the development and evaluation of atmospheric radiative transfer algorithms. M&W left physical reality behind as soon as they made their first assumption of an exact flux balance between an average absorbed solar flux and the LWIR flux returned to space. They started with a static air column divided into 9 or 18 layers. The IR species were CO2, H2O and O3 simulated using the spectroscopic constants available in 1967. The surface was a blackbody surface with zero heat capacity. This absorbed all of the incident radiation and converted it to blackbody LWIR emission. To simulate the atmospheric temperature profile they fixed the relative humidity in each air layer. The water vapor concentration therefore changed with temperature as the surface and layer temperatures changed. The model was run iteratively until the absorbed solar flux matched the outgoing LWIR flux. It took about a year of model time (step time multiplied by the number of steps) to reach equilibrium. Actual computation time was of course much less. In 1967, getting such a model to run at all and then reach equilibrium was a major achievement. However, the effects of surface heat capacity, ocean evaporation and convection were ignored. When the CO2 concentration in the M&W model was increased, there was a decrease in the LWIR flux emitted at the top of the atmosphere. In order to reach a new ‘equilibrium state’ the surface temperature and the tropospheric temperatures had to increase. As the temperature increased, the water vapor concentration also increased. This then ‘amplified’ the surface warming produced by the CO2. All of this was a mathematical artifact of the input modeling assumptions. There is no equilibrium climate on planet earth. 
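A toy caricature of that iterate-to-balance procedure (a zero-dimensional stand-in, not the actual M&W column code; the effective emissivity below is an assumed tuning knob, not a derived quantity):

SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m-2 K-4
absorbed = 239.0     # absorbed solar flux, W m-2
eps = 0.61           # assumed effective emissivity of the surface-atmosphere column

T = 250.0            # initial guess, K
for _ in range(10000):
    T += 0.01 * (absorbed - eps * SIGMA * T ** 4)   # relax toward flux balance
print(f"'equilibrium' surface temperature = {T:.1f} K")   # ~288 K, by construction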
 
Unfortunately the ‘global warming apocalypse’ predicted by the M&W model became a lucrative source of research funds that was too good to give up. Two early climate ‘bandwagons’ were created. First, the radiative transfer algorithms could be improved with better spectroscopic constants and more IR species. Second, a large number of M&W ‘unit’ models could be incorporated into a global circulation model. In addition, everyone needed the biggest and fastest computer available. No one tried to calculate the change in surface temperature from first principles or otherwise independently validate the M&W model. Global warming had been created by model definition. Do not kill the goose that lays the golden eggs. By 1975, M&W had created a ‘highly simplified’ global circulation model that still produced ‘global warming’ and by 1978, eleven more (minor) IR species had been added to the M&W model [M&W, 1975; Ramanathan and Coakley, 1978].
 
Instead of correcting the equilibrium assumption, three additional invalid assumptions were added to the M&W model by Hansen and his group in 1981 [Hansen et al, 1981]. First, the ‘blackbody surface’ was replaced by a 2 layer ‘slab’ ocean. This was used to add heat capacity and a delayed time response but little else to the ‘model’. The ocean surface energy transfer, particularly the wind driven evaporation (latent heat flux) was ignored. Second, the effect of a ‘doubling’ of the atmospheric CO2 concentration on an ‘equilibrium average climate’ was discussed as though it applied to planet earth. The mathematical warming artifacts created by the equilibrium model were presented as though they were real. On planet earth, the changes in LWIR are far too small to have any effect on surface temperature. Third, the weather station temperature was substituted for the surface or skin temperature. The flux terms interact with the surface. The weather or meteorological surface air temperature (MSAT) is measured in a ventilated enclosure located 1.5 to 2 m above the ground. This was a fundamental ‘bait and switch’ change made to the observables that were ‘predicted’ by the ‘model’ without any change to the model calculations. How did the ‘blackbody surface’ turn into a weather station? Furthermore, one of the real causes of climate change, the Atlantic Multi-decadal Oscillation (AMO) was clearly visible in the temperature plots shown by Hansen et al, but they chose to ignore reality and called these temperature variations ‘noise’. The only change that has been made to the basic equilibrium climate ‘model’ since 1981 was the addition of ‘efficacies’ to the radiative forcing terms by Hansen et al in 2005 [Hansen et al, 2005].
 
Since the start of the industrial revolution around 1800, the atmospheric concentration of CO2 has increased from about 280 to 400 ppm. This has produced a decrease in the LWIR flux at TOA of approximately 2 W m-2 with a similar increase in the downward LWIR flux to the surface [Harde, 2017]. At present, the average CO2 concentration is increasing by about 2.4 ppm per year, which corresponds to a change in LWIR flux near 0.034 W m-2 per year. The effect of an increase in CO2 concentration on surface temperature has to be determined by calculating the effect of the increase in LWIR flux on the change in heat content of the surface thermal reservoir after a thermal cycle with and without the change in flux. This is simply too small to measure. 
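Those two figures can be cross-checked against the widely used logarithmic rule of thumb dF = 5.35*ln(C/C0) W m-2 (a common approximation, not the Harde [2017] calculation cited above):

import math

print(f"280 -> 400 ppm: {5.35 * math.log(400 / 280):.2f} W m-2")       # ~1.9
print(f"+2.4 ppm at 400 ppm: {5.35 * 2.4 / 400:.3f} W m-2 per year")   # ~0.032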
 
The decrease in LWIR flux at TOA has been turned into a ‘radiative forcing’ and an elaborate climate modeling ritual has been developed to describe the effect of a hypothetical ‘CO2 doubling’ on a fictional equilibrium average climate [Ramaswamy et al, 2019; IPCC, 2013; Hansen, 2005]. In order to understand what really happens on planet earth, the ‘radiative forcing’ has to be converted back into a change in the rate of heating at different levels in the atmosphere [Feldman et al. 2008]. For CO2, the ‘radiative forcing’ is a wavelength-specific decrease in the LWIR flux in the P and R branches of the main CO2 emission band at TOA, produced by absorption at lower levels in the atmosphere. This results in a slight warming in the troposphere and a cooling in the stratosphere. (There is also a smaller effect for the CO2 overtone bands). For a ‘CO2 doubling’, the maximum warming rate in the troposphere is less than 0.1 K per day [Iacono et al, 2008]. This is simply dissipated by the normal convective motion in the troposphere. There is a very small increase in emission from the H2O band and a small increase in the gravitational potential energy. The lapse rate is not a mathematical function; it is a real vertical motion of the air in the troposphere – upwards and downwards. At an average lapse rate of -6.5 K km-1 a temperature increase of 0.1 K is produced by a descent of 15 m. This is equivalent to riding an elevator down about 4 floors. The dissipation of the radiative forcing is illustrated schematically in Figure 1 (attached). The slight heating effect is illustrated in Figure 2 (attached).
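The elevator arithmetic, for the record:

# Descent needed to warm a parcel by 0.1 K at a 6.5 K/km lapse rate.
print(f"{0.1 / 6.5 * 1000:.0f} m")   # ~15 m, about four floors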
 
In addition, the LWIR flux in the atmosphere is produced by many thousands of overlapping molecular lines. In the lower troposphere, these are pressure broadened and overlap to produce a quasi-continuum within the main H2O and CO2 absorption emission bands. About half of the downward LWIR flux reaching the surface from the troposphere originates from within the first 100 m layer above the surface and almost all of the downward LWIR flux originates from within the first 2 km layer. Any ‘radiative forcing’ at TOA from a decrease in LWIR flux cannot couple to the surface and cause any kind of temperature change [Clark, 2013]. 
 
The global warming in the climate models has been created by ‘tuning’ the models to match the ‘global average temperature anomaly’ such as the HadCRUT4 temperature series from the UK Met. Office [HadCRUT4, 2019]. The climate warming has been produced by a combination of the warming phase of the Atlantic Multi-decadal Oscillation (AMO) and various ‘adjustments’ to the temperature record [Andrews, 2017a; 2017b; 2017c; D’Aleo, 2010; NOAA, AMO, 2019]. The HadCRUT4 climate series was used by Otto et al [2013] to create a pseudoscientific equilibrium climate sensitivity (ECS) and transient climate response (TCR) using the correlation between HadCRUT4 and a set of contrived ‘radiative forcings’. In reality, the downward LWIR component of the forcings from the lower troposphere to the surface cannot couple below the ocean surface. They are absorbed within the first 100 micron layer and fully mixed with the much larger and more variable wind-driven evaporation. The two cannot be separated and analyzed independently of each other. Figure 3a (attached) shows the HadCRUT4 data used by Otto et al and Figure 3b shows the radiative forcings. Figure 3c shows the HadCRUT4 data set from Figure 3a overlapped with the AMO. From 1850 to 1970, there is a good match between the two, including both the nominal 60 year oscillation and the short term ‘fingerprint’ variations. After 1970 there is an offset of approximately 0.3 C. This requires further investigation, but is probably related to ‘adjustments’ during the climate averaging process. The correlation coefficient between the two data sets is 0.8. The linear slope is the temperature recovery from the Little Ice Age. Figure 3d shows the tree-ring reconstruction of the AMO from 1567 by Gray et al [Gray et al, 2004; Gray.NOAA, 2021]. The instrument record from 1850 is also shown. The variations in the AMO have no relationship to changes in the atmospheric CO2 concentration.
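The overlap comparison described above is straightforward to reproduce; a sketch, assuming the HadCRUT4 and AMO annual series have been saved locally as two-column (year, value) text files (the file names below are placeholders):

import numpy as np

hadcrut4 = np.loadtxt("hadcrut4_annual.txt")[:, 1]   # placeholder file name
amo = np.loadtxt("amo_annual.txt")[:, 1]             # placeholder file name

n = min(hadcrut4.size, amo.size)                     # align the overlap period
r = np.corrcoef(hadcrut4[:n], amo[:n])[0, 1]
print(f"correlation coefficient = {r:.2f}")          # the comment reports ~0.8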
  
The increase in the surface temperature of the N. Atlantic Ocean is transported over land by the prevailing weather systems and coupled to the weather station record through the diurnal convection transition temperature. The land surface temperature is reset each day by the local temperature at which the land and air temperatures equalize. Changes in this transition temperature are larger than any possible changes that can be produced by the observed increase in atmospheric CO2 concentration. Temperature changes produced by downslope winds and ‘blocking’ high pressure systems can easily reach 10 C over the course of a few days.
 
The forcings, feedbacks and climate sensitivities found in the climate models can be traced back to the mathematical artifacts created by the original M&W model. There is no equilibrium average climate that can be perturbed by an increase in atmospheric CO2 concentration.   
 
References
 
Andrews, R., 2017a, Energy Matters Sept 14, 2017, ‘Adjusting Measurements to Match the Models – Part 3: Lower Troposphere Satellite Temperatures’. http://euanmearns.com/adjusting-measurements-to-match-the-models-part-3-lower-troposphere-satellite-temperatures/#more-19464
Andrews, R., 2017b, Energy Matters Aug 2, 2017, ‘Making the Measurements Match the Models – Part 2: Sea Surface Temperatures’. http://euanmearns.com/making-the-measurements-match-the-models-part-2-sea-surface-temperatures/
Andrews, R., 2017c, Energy Matters July 27, 2017, ‘Adjusting Measurements to Match the Models – Part 1: Surface Air Temperatures’. http://euanmearns.com/adjusting-measurements-to-match-the-models-part-1-surface-air-temperatures/
CERES 2011, CERES OLR Image, March 8 2011, Aqua Mission (EOS/PM-1), https://earth.esa.int/web/eoportal/satellite-missions/a/aqua
Clark, R., 2013, Energy and Environment 24(3, 4) 319-340 (2013), ‘A dynamic coupled thermal reservoir approach to atmospheric energy transfer Part I: Concepts’.
https://doi.org/10.1260/0958-305X.24.3-4.319
Energy and Environment 24(3, 4) 341-359 (2013), ‘A dynamic coupled thermal reservoir approach to atmospheric energy transfer Part II: Applications’.
https://doi.org/10.1260/0958-305X.24.3-4.341
D’Aleo, J. ‘Progressive Enhancement of Global Temperature Trends’, Science and Public Policy Institute, July 2010. http://scienceandpublicpolicy.org/science-papers/originals/progressive-enhancement
Feldman D.R., Liou K.N., Shia R.L. and Yung Y.L., J. Geophys. Res. 113 D1118 pp1-14 (2008), ‘On the information content of the thermal IR cooling rate profile from satellite instrument measurements’. https://doi.org/10.1029/2007JD009041
Fourier, B. J. B., Annales de Chimie et de Physique, 27, pp. 136–167 (1824), ‘Remarques générales sur les températures du globe terrestre et des espaces planétaires’. https://gallica.bnf.fr/ark:/12148/bpt6k65708960/f142.image# English translation:
http://fourier1824.geologist-1011.mobi/
Gray, S. T.; L. J. Graumlich, J. L. Betancourt and G. T. Pederson, Geophys. Res. Letts, 31 L12205, pp1-4 (2004) doi:10.1029/2004GL019932, ‘A tree-ring based reconstruction of the Atlantic Multi-decadal Oscillation since 1567 A.D.’. http://www.riversimulator.org/Resources/ClimateDocs/GrayAMO2004.pdf
Gray.NOAA, 2021, Gray, S.T., et al. 2004, Atlantic Multi-decadal Oscillation (AMO) Index Reconstruction, IGBP PAGES/World Data, Center for Paleoclimatology, Data Contribution Series #2004-062, NOAA/NGDC Paleoclimatology Program, Boulder CO, USA.
ftp://ftp.ncdc.noaa.gov/pub/data/paleo/treering/reconstructions/amo-gray2004.txt
HadCRUT4, 2019, https://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.6.0.0.annual_ns_avg.txt
Harde, H., Int. J. Atmos. Sci. 9251034 (2017), ‘Radiation Transfer Calculations and Assessment of Global Warming by CO2’. https://doi.org/10.1155/2017/9251034
Hansen, J. et al., (45 authors), J. Geophys Research 110 D18104 pp1-45 (2005), ‘Efficacy of climate forcings’. https://pubs.giss.nasa.gov/docs/2005/2005_Hansen_ha01110v.pdf
Hansen, J.; D. Johnson, A. Lacis, S. Lebedeff, P. Lee, D. Rind and G. Russell, Science 213 957-966 (1981), ‘Climate impact of increasing carbon dioxide’.
https://pubs.giss.nasa.gov/docs/1981/1981_Hansen_ha04600x.pdf
Iacono, M. J.; J. S. Delamere, E. J. Mlawer, M. W. Shephard, S. A. Clough, and W. D. Collins, J. Geophys. Res., 113, D13103, pp 1-8 (2008), ‘Radiative forcing by long-lived greenhouse gases: Calculations with the AER radiative transfer models’. https://doi.org/10.1029/2008JD009944
IPCC, 2013: Myhre, G., D. Shindell, F.-M. Bréon, W. Collins, J. Fuglestvedt, J. Huang, D. Koch, J.-F. Lamarque, D. Lee, B. Mendoza, T. Nakajima, A. Robock, G. Stephens, T. Takemura and H. Zhang, ‘Anthropogenic and Natural Radiative Forcing’. In: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, Chapter 8, Radiative Forcing, 1535 pp, doi:10.1017/CBO9781107415324. http://www.climatechange2013.org/report/full-report/
Knutti, R. and G. C. Hegerl, Nature Geoscience 1 735-743 (2008), ‘The equilibrium sensitivity of the Earth’s temperature to radiation changes’. https://www.nature.com/articles/ngeo337
Manabe, S. and R. T. Wetherald, J. Atmos. Sci. 32(1) 3-15 (1975), ‘The effects of doubling the CO2 concentration in the climate of a general circulation model’. https://journals.ametsoc.org/view/journals/atsc/32/1/1520-0469_1975_032_0003_teodtc_2_0_co_2.xml?tab_body=pdf
Manabe, S. and R. T. Wetherald, J. Atmos. Sci., 24 241-249 (1967), ‘Thermal equilibrium of the atmosphere with a given distribution of relative humidity’. http://www.gfdl.noaa.gov/bibliography/related_files/sm6701.pdf
NOAA, AMO, 2019 https://www.esrl.noaa.gov/psd/data/correlation/amon.us.long.mean.data
Otto, A., F. E. L. Otto, O. Boucher, J. Church, G. Hegerl, P. M. Forster, N. P. Gillett, J. Gregory, G. C. Johnson, R Knutti, N. Lewis, U. Lohmann, J. Marotzke, G. Myhre, D. Shindell, B. Stevens and M. R. Allen, Nature Geoscience, 6 (6). 415 – 416 (2013). ISSN 1752-0894, ‘Energy budget constraints on climate response’. http://eprints.whiterose.ac.uk/76064/7/ngeo1836(1)_with_coversheet.pdf
Otto, A., F. E. L. Otto, O. Boucher, J. Church, G. Hegerl, P. M. Forster, N. P. Gillett, J. Gregory, G. C. Johnson, R Knutti, N. Lewis, U. Lohmann, J. Marotzke, G. Myhre, D. Shindell, B. Stevens and M. R. Allen, Nature Geoscience, 6 (6). 415 – 416 (2013). ISSN 1752-0894, ‘Energy budget constraints on climate response’, Supplementary Material. content.springer.com/esm/art%3A10.1038%2Fngeo1836/MediaObjects/41561_2013_BFngeo1836_MOESM299_ESM.pdf
Pouillet, M., in: Scientific Memoirs selected from the Transactions of Foreign Academies of Science and Learned Societies, edited by Richard Taylor, 4 (1837), pp. 44-90. ‘Memoir on the solar heat, on the radiating and absorbing powers of the atmospheric air and on the temperature of space’
http://nsdl.library.cornell.edu/websites/wiki/index.php/PALE_ClassicArticles/archives/classic_articles/issue1_global_warming/n2-Poulliet_1837corrected.pdf
Ramanathan, V. and J. A. Coakley, Rev. Geophysics and Space Physics 16(4)465-489 (1978), ‘Climate modeling through radiative convective models’. https://doi.org/10.1029/RG016i004p00465;  
Ramaswamy, V.; W. Collins, J. Haywood, J. Lean, N. Mahowald, G. Myhre, V. Naik, K. P. Shine, B. Soden, G. Stenchikov and T. Storelvmo, Meteorological Monographs Volume 59 Chapter 14 (2019), ‘Radiative Forcing of Climate: The Historical Evolution of the Radiative Forcing Concept, the Forcing Agents and their Quantification, and Applications’. https://doi.org/10.1175/AMSMONOGRAPHS-D-19-0001.1
Taylor, F. W., Elementary Climate Physics, Oxford University Press, Oxford, 2006, Chapter 7
 

Figs1 thru 3.jpg
Tim Gorman
Reply to  Roy Clark
March 15, 2021 4:44 am

Roy,

Very nice treatise!

The only thing I would add is that LWIR from the atmosphere toward the Earth is merely a reflection of LWIR radiating from the Earth. The atmosphere by itself is not a heat generator. It can only reflect what it receives. When the Earth radiates LWIR away it cools. When the atmosphere reflects part of that back the Earth re-warms part way back to its starting point. And around and around it goes. The net effect is that the Earth cools because it never gets back as much as it radiates. The only question is then – how much does the Earth cool? If it doesn’t cool as much at night because the CO2 LWIR reflection slows down the loss to space then we would see Tmin go up, not Tmax. And that seems to be what is driving the so-called “global average temperature” to go up.

But Tmin going up is mostly beneficial so the CAGW alarmists attempt to sow alarm by claiming that it is Tmax going up instead. We are all going to fry under a blanket of CO2. In other words, propagation of a fraud!

Clyde Spencer
Reply to  Tim Gorman
March 15, 2021 8:10 am

Tim
Yes, Tmin is increasing the most. Yet the climastrology alarmists, and those riding on their coattails with ecological prophecies, commonly use the global average from an RCP 8.5 scenario rather than deal with Tmin and Tmax independently. The average looks scarier than Tmin, while it is Tmax that is most likely to be a biological stressor, and thus should be the focus.

Mark Pawelek
Reply to  Tim Gorman
March 17, 2021 10:03 am

“It does not cool as much at night”
Or maybe the apparent increase in minimum night temperature simply reflects a higher urban heat island effect, from so many land surface weather stations located too close to human habitation?

Tim Gorman
Reply to  Mark Pawelek
March 17, 2021 1:05 pm

Mark,

That could very well be the case. But in that case we would see Tmax going up as well and, at least in the US, Tmax isn’t going up, Tmin is.

Mark Pawelek
Reply to  Roy Clark
March 17, 2021 9:59 am

Thanks, Roy, for another excellent contribution. I’ve re-blogged your post on my website.

Lindsay Moore
March 14, 2021 11:16 pm

All climate models are predicated on the basic assumption that CO2, as a greenhouse gas, traps heat. The quantum is the issue.
The scientifically classic way to test a hypothesis or assumption is to run a controlled experiment.
The IPCC asserted early on that you can’t use this method in climate science, an assertion that seems to have been accepted by the whole scientific community, and that has resulted in the reliance on modelling to resolve issues such as climate sensitivity – with the obvious variations in output of these tools owing to different assumptions about the “unknowns”.
Back to basics. IMHO you can run a controlled experiment to determine the influence of CO2 on climate; or, more specifically, such trials have already been run, albeit unknowingly!
Consider the temperature records from the Giles weather station in remote Western Australia, set up as a manned station in 1956 (and still manned today), where, I would contend, the only thing to have changed in a climate-influencing sense is the rise in CO2.
Temperature records, available on the BOM website, show NO evidence of heat trapping!
No rise in minimum temperatures, no reduction in diurnal spread.
With the only identifiable variable input being the universal rise in CO2 over the 65 years of the record, is this not a controlled experiment? And aren’t the observations a valid contribution to the science?

Schrodinger's Cat
Reply to  Lindsay Moore
March 15, 2021 5:40 am

The amplification of CO2-induced warming by water vapour partly obscures the fact that any warming could, in theory, produce more water vapour and subsequent warming. But there is no evidence that such amplification actually happens, and, as is common in climate science, there are other possible mechanisms acting in the opposite direction. For example, more water vapour could lead to more convection, carrying water vapour to higher altitudes where condensation releases latent heat to space. More water vapour could produce more clouds, with all the cooling possibilities that flow from that.
Also common in climate science is the tendency for warming scenarios to be favoured by the climate scientists. This bias has existed for decades and is evident in the climate models that exhibit the same bias. For how long will this charade continue?
Climate models are now predicting high end levels of warming that are not credible. It is increasingly difficult to explain actual temperatures without resorting to unrealistic cooling factors. If the real climate continues to cool and models continue to become ever hotter, something will have to give.
It would seem that the obvious way to improve the models is to remove the positive feedbacks. These seldom exist in nature for good reason and the incredible stability of our climate is all the proof we need. Nature has a whole range of tools for tweaking our climate. Carbon dioxide is just one of these. Like most things in nature its effect is limited. Its absorption bands are saturated and further CO2 emissions will make little difference. The science is clear. It is the determination of climate scientists to avoid the obvious that is the problem. 

Tim Gorman
Reply to  Schrodinger's Cat
March 15, 2021 9:01 am

Even if there is some positive feedback, it probably only exists within a narrow confine, e.g. a non-linear feedback mechanism that decreases the amount of feedback as the driving input (i.e. the system output) goes up.

Reply to  Schrodinger's Cat
March 15, 2021 11:47 am

>>
The amplification of CO2 induced warming by water vapour . . . .
<<

What’s interesting is if this is true, then pan evaporation should be increasing. In fact, pan evaporation is decreasing. It’s called the “pan evaporation paradox.” Of course, climatologists are writing papers to show that there is no paradox–nothing to see here, move along. Still . . . .

Jim

Burl Henry
Reply to  Lindsay Moore
March 15, 2021 2:24 pm

Lindsay Moore:

There have been other “unknown trials” of the climatic effect of changing CO2 levels which show just the opposite: that it appears to have no effect.

https://www.osf.io/f8d62/

Burl Henry
Reply to  Lindsay Moore
March 15, 2021 4:53 pm

Lindsay Moore:

You suggest that a “controlled” experiment to observe the effect of CO2 in the atmosphere may have been run unknowingly, and that it supports the CO2 hypothesis.

There have been other “unknown experiments” run that have also addressed the effect of CO2 in the atmosphere, but instead they have shown no climatic effect.

http://www.scholink.org/ojs/php/se/.

Lindsay Moore
Reply to  Lindsay Moore
March 15, 2021 11:17 pm

Apparently this is a bit too simple for most people to digest.
Put simply, a “controlled experiment” carried out over 65 years at a site where the only variable was the CO2 level (280 to 400 ppm) showed no evidence of any extra heat being “trapped”.
A simple and unambiguous explanation of why climate models fail!
i.e. the most basic assumption, that the EGE (enhanced greenhouse effect) traps measurable heat, is not supported.
How simple is that?

Burl Henry
Reply to  Lindsay Moore
March 16, 2021 7:24 pm

Lindsay Moore:

CO2 was NOT the only variable. Every VEI4 or larger volcanic eruption affects temperatures, because their SO2 aerosol pollution reflects sunlight, cooling the earth to various degrees.

Matthew Sykes
March 15, 2021 1:14 am

I fully agree. WV is also not increasing over land.

“One would expect the same feedback on initial warming due to a random fluctuation of the amount of water vapour itself” – or an El Niño, or a recovery from a volcano, etc. Any warming could have triggered a runaway positive feedback.

The fact is positive feedbacks are very rare because they are self destructive. If anything the act of water vapour in the tropics acts as an upper limit to global temperature.

CO2 is not only safe but beneficial; we should try to get it to 1000 ppm.

igsy
March 15, 2021 6:14 am

Excellent article, a very enjoyable read.
The more data we get, the more evident it becomes that sensitivity is 2C at the very most. My own favourite back-of-the-envelope is to take the UAH trend (0.0137444 C per year, currently) and match the start and end trendline delta against the Mauna Loa delta (0.581C per 80 ppm). Ratioing up to 280 ppm puts sensitivity at 2.03C.
But we must remember that, according to the theory, the lower troposphere warms at a faster rate than the surface. A round-number stab at the equivalent surface warming would then be 0.5C per 80 ppm, which gives 1.75C of warming for a doubling of CO2. And that is based on a period which must have included a rebound from the fall in global temperatures between WW2 and the end of the 70s, so it is likely an overestimate.
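The same back-of-envelope in Python, reproducing the comment’s linear ratio (a rough shortcut, not a formal ECS estimate):

# Trendline change per 80 ppm of Mauna Loa CO2, scaled to 280 ppm (a doubling).
uah_delta = 0.581    # C per 80 ppm, lower troposphere (UAH)
sfc_delta = 0.5      # C per 80 ppm, round-number surface equivalent
print(f"lower troposphere: {uah_delta * 280 / 80:.2f} C per doubling")   # 2.03
print(f"surface: {sfc_delta * 280 / 80:.2f} C per doubling")             # 1.75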
Realistically, we are never going to get to 560ppm anyway – regardless of Green Deals, subsidies and the like – due to the inherent incentives in the capitalist, free market system (or what’s left of it) to maximise energy use productivity.

Steve Z
March 15, 2021 7:28 am

Some climate models assume that relative humidity remains constant if the atmosphere is warmed by CO2.

But since warm air can hold more water vapor than cold air, a constant relative humidity results in an increase in absolute humidity. The extra water vapor must come from somewhere, most likely from increased evaporation from a body of water (ocean, lake, pond, etc.).

Due to the high heat of vaporization of water, if we take a volume of air in contact with a body of water, warm the air by 1 C, and hold the relative humidity constant, the heat of vaporization results in cooling the air by about 0.5 to 0.7 C (depending on the initial temperature and humidity).

This negative feedback is often overlooked by climate models, which take into account “amplification” due to IR absorption by additional water vapor, but neglect the heat required to force more water vapor into the atmosphere. Failure to take into account this negative feedback would lead to over-estimating the climate “sensitivity”.
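A rough sketch of that latent-heat bookkeeping, using the Magnus formula for saturation vapour pressure. The settings are illustrative, and the answer is very sensitive to the assumed starting temperature: cool air lands near the 0.5 to 0.7 C quoted above, while warm tropical air gives considerably more:

import math

L_VAP, CP, P = 2.5e6, 1005.0, 1013.25   # J/kg, J/(kg K), surface pressure in hPa

def mixing_ratio(t_c, rh):
    e = rh * 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))   # vapour pressure, hPa
    return 0.622 * e / (P - e)                                # kg water per kg dry air

for t in (5.0, 15.0, 25.0):
    dq = mixing_ratio(t + 1.0, 0.7) - mixing_ratio(t, 0.7)    # extra vapour per 1 C
    print(f"{t:4.0f} C: evaporative cooling = {L_VAP * dq / CP:.2f} C")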

Clyde Spencer
Reply to  Steve Z
March 15, 2021 8:15 am

Steve
You said, “… the heat of vaporization results in cooling the air …”

I thought that it was the surface of the body of water that was cooled as the molecules with the most kinetic energy were removed, taking the kinetic energy with them.

Jim Whelan
March 15, 2021 9:42 am

Too many words for a simple explanation (thanks to Richard Feynman): if the observations don’t support the theory, then the theory is wrong. No amount of mucking with models based on a wrong theory, or with their parameters, will get past that basic fact.

Reply to  Jim Whelan
March 15, 2021 10:43 am

>>
If the observations don’t support the theory then the theory is wrong.
<<

Actually, Feynman was talking about laws, but his statement also applies to theories.

Jim

Jim Whelan
Reply to  Jim Masterson
March 15, 2021 3:40 pm

I believe his actual word was “guess”.

Reply to  Jim Whelan
March 15, 2021 5:48 pm

And “guess” would annoy those naive scientific neophytes who believe laws are proven theories.

Jim

Jim Whelan
Reply to  Jim Masterson
March 15, 2021 6:49 pm

True enough, but in the lecture I remember Feynman says, “Science starts with a guess.” The class laughed at that but Feynman went on to say that no matter how it’s derived or what you call it, it’s really just a “guess” about how the world works which must then be tested.

LOL@Klimate Katastrophe Kooks
Reply to  Jim Masterson
March 17, 2021 9:31 pm

I do believe it’s time to establish the definitions:

The words “fact”, “theory”, “hypothesis” and “law” have very specific definitions in science:
———-
Hypothesis: A tentative explanation of an empirical observation that can be tested. It is merely an educated guess.
—–
Working hypothesis: A conjecture which has little empirical validation. A working hypothesis is used to guide investigation of the phenomenon being investigated.

Scientific hypothesis: In order for a working hypothesis to be a scientific hypothesis, it must be testable, falsifiable and it must be able to definitively assign cause to observed effects.

Null hypothesis: Also known as nullus resultarum. In the case of climate science, the null hypothesis should be that CO2 does not cause global warming.

A Type I error occurs when the null hypothesis is rejected erroneously when it is in fact true.

A Type II error occurs if the null hypothesis is not rejected when it is in fact false.
—–

Fact: An empirical observation that has been confirmed so many times that scientists can accept it as true without having to retest its validity each time they experience whatever phenomenon they’ve empirically observed.

Law: A mathematically rigorous description of how some aspect of the natural world behaves.

Theory: An explanation of an empirical observation which is rigorously substantiated by tested hypotheses, facts and laws.

Laws describe how things behave in the natural world, whereas theories explain why they behave the way they do.

For instance, we have the law of gravity which describes how an object will behave in a gravitational field, but we’re still looking for a gravitational theory which fits into quantum mechanics and the Standard Model and explains why objects behave the way they do in a gravitational field.

Reply to  LOL@Klimate Katastrophe Kooks
March 20, 2021 3:54 pm

>>
Law: A mathematically rigorous description of how some aspect of the natural world behaves.
<<

This is not exactly true. A law (or principle–they are often interchangeable terms in science) can also be a statement. It need not be a mathematical description. For example, the following laws (principles) are from geology:

The law of faunal succession
The law of original horizontality
The law of superposition
The law of cross-cutting relationships

Also, a law need not be anything more than a simple guess. Whether it really describes the “natural world” depends on experiment.

>>
. . . we have the law of gravity which describes how an object will behave in a gravitational field . . . .
<<

Einstein was able to revise Newton’s law so it is invariant WRT inertial and accelerating frames of reference. It also corrects Newton’s law for the bending of light and the precession of Mercury’s orbit (which also occurs for the orbits of Venus and the Earth, but to a much lesser degree). The speculation that there is a quantum gravity theory is just that–speculation.

Also notice that Special Relativity corrects Maxwell’s equations. SR makes Maxwell’s equations invariant WRT inertial frames of reference. However, QED (Quantum Electrodynamics) combines quantum theory with SR and replaces Maxwell’s equations. And Maxwell’s equations were based on older laws by Gauss, Ampere, Coulomb, and Faraday.

The problem with QED is the infinities that have to be “re-normalized” out of the equations–a problem that annoyed and annoys many, including Feynman.

Jim

Tim Gorman
Reply to  Jim Masterson
March 20, 2021 8:02 pm

I was taught in my engineering courses that a law is always able to be described by mathematics – e.g. Gauss’ Law, the laws of thermodynamics, Coulomb’s Law, the three laws of motion, and Hooke’s Law. There are some things that are called Principles, but they are still described mathematically – e.g. Bernoulli’s Principle and Archimedes’ Principle.

Reply to  Tim Gorman
March 21, 2021 1:22 pm

LOL’s definition: “A mathematically rigorous description . . . .” is not exactly true. There are many laws that aren’t defined by “mathematically rigorous descriptions.” However, your statement: “. . . a law is always able to be described by mathematics . . . .” is not as strong. I’m not exactly sure what you mean by it.

If you mean that every law can be described by a mathematical expression or series of mathematical expressions then I disagree. I would like to see the mathematical expressions represented by those four geological laws I referred to previously.

Or if you mean that every law can be described by a numeric value or a mathematical range of numeric values, then that may be true.

Then again, you may be referring to something I haven’t thought of.

Jim

Tim Gorman
Reply to  Jim Masterson
March 21, 2021 3:55 pm

Let’s just take one, the law of faunal succession. That’s an observation that may or may not be true. It’s no different than the observation that the sun rises in the east. Neither of them is a “Law”. The difference is that when and where the sunrise occurs is a matter of mathematics, namely orbital mechanics. So the “Law” of Sunrise can be mapped out mathematically. The “Law” of faunal succession is nothing more than a historical observation. Just like the observation that “no government lasts forever” or the “Law” of Generations – one generation follows another. Those observations can’t be proven mathematically and someday they may not even be true.

Reply to  Tim Gorman
March 21, 2021 4:59 pm

>>
That’s an observation that may or may not be true.
. . .
Those observations can’t be proven mathematically and someday they may not even be true.
<<

I believe you’ll find that these statements apply to every theory and law in science–with the possible exception of “climate science.” (It’s why I consider “climate science” an oxymoron.)

As Feynman said in the video we were discussing earlier, some ideas can last for centuries until they are shown to be incorrect–like Newton’s Law of Gravity.

And the fact that you can “prove” something mathematically doesn’t mean you’ve proven it scientifically–that’s something that can’t be done.

Jim

Tim Gorman
Reply to  Jim Masterson
March 22, 2021 5:38 am

Gauss’ Law is not likely to ever become untrue – unless the fundamental makeup of the universe changes. In which case no one will care because the human race likely won’t survive. Same for the Laws of thermodynamics and the laws of motion.

And since when was Newton’s Law of Gravity proven wrong? It has been superseded by Einstein’s General Relativity but Newton’s Law still works unless you are concerned with extreme conditions (e.g. black holes or neutron stars).

If you can prove it mathematically then you *have* proven it scientifically. The tunneling effect of electrons across an energy barrier (e.g. in a transistor) was shown mathematically using quantum mechanics – thus proving what empirical observations had seen.

Results derived from observations are subjective, and therefore are subject to being wrong. Being able to describe reality in terms of math, e.g. Gauss’ Law, is not subjective.

Reply to  Tim Gorman
March 22, 2021 1:31 pm

>>
Gauss’ Law is not likely to ever become untrue – unless the fundamental makeup of the universe changes. In which case no one will care because the human race likely won’t survive.
<<

Let’s see: Maxwell’s Equations include Gauss’s law, Gauss’s law for magnetism, Faraday’s law with Maxwell’s modification, and Ampere’s law with Maxwell’s modification. Although Maxwell’s Equations correctly predicted electromagnetic waves before they were discovered (one of those subjective observations, no doubt), they have been a thorn in the side of physics ever since.

First there was the problem of the speed-of-light the equations contained. Attempts to remove it led to modifications of the arbitrary constant in Coulomb’s law. This led to two metric systems: CGS (centimeters-grams-seconds) where the constant’s magnitude is 1 (with appropriate units), and MKS (meters-kilograms-seconds) where the constant has the value of 1/(4*pi*e0). This didn’t work, because the speed-of-light, though hidden, was still there, c = 1/sqrt(e0*u0).
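The hidden speed of light is easy to exhibit:

import math

e0 = 8.8541878128e-12   # permittivity of free space, F/m
u0 = 4e-7 * math.pi     # permeability of free space (classical definition), H/m
print(f"c = 1/sqrt(e0*u0) = {1 / math.sqrt(e0 * u0):.4e} m/s")   # ~2.9979e8 m/s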

Next came the ether theory. This did work, and it had the added benefit of giving something for EM waves to wave. Unfortunately, Michelson-Morley 1887 ruined the ether theory (with another lousy subjective observation apparently). Attempts to repair the ether theory ran into problems with things like stellar aberration. Apparently observations, like Airy’s water-filled telescopes (darn those subjective observations), didn’t support these modifications.

Lorentz’s transformations solved the invariant problem of Maxwell’s Equations, but it took Einstein’s Special Relativity to explain Lorentz’s transformations and explain things like stellar aberration.

So far, so good, but Maxwell’s Equations don’t explain photon–photon scattering, “nonclassical light,” quantum entanglement, the photoelectric effect, Planck’s law, the Duane–Hunt law, single-photon light detectors, the Casimir effect, and so on.

It appears that Maxwell’s Equations are a classical approximation of QED (Quantum Electrodynamics). So what exactly isn’t correct in Maxwell’s Equations? And gee, the human race is still here.

>>
Same for the Laws of thermodynamics and the laws of motion.
<<

I once heard a physicist say that he didn’t think the laws of thermodynamics would ever be found to be incorrect–but he was hedging his bet.

Newton’s second law is F = dp/dt, that is, force is equal to the rate-of-change of momentum WRT time. Linear momentum is defined as: p = m*v. Using the classical assumption that mass is a constant, we get the familiar expression for force: F = m*a. However, if you stick the Lorentz expression for mass into the equation, you get an extremely messy expression for relativistic force. Now which expression do you think most physicists would use? F = m*a? Or the messy but more correct expression for relativistic force? It should be obvious they would use the less precise F = m*a, knowing that in some cases it might give a wrong value.

>>
And since when was Newton’s Law of Gravity proven wrong? It has been superseded by Einstein’s General Relativity but Newton’s Law still works unless you are concerned with extreme conditions (e.g. black holes or neutron stars).
<<

Or Mercury’s orbit and bending light. Newton’s gravity force law requires the speed-of-gravity to be infinite. Is the speed-of-gravity infinite? Attempts to utilize Newton’s law with a finite speed always fail. If gravity travels at an infinite speed, then there can’t be gravity waves with a finite speed.

>>
If you can prove it mathematically then you *have* proven it scientifically.
<<

Nonsense. There is not a single mathematical system. Mathematics is an axiomatic system. That means you prove theorems based on a certain number of axioms. Change those axioms, and you change the system. In geometry, there are postulates. Change those postulates, and you change the geometry. In Euclidean geometry, parallel lines never meet. However, on a sphere, the analog to a straight line is a great circle. Great circles (not co-located) will intersect exactly twice. In the algebra you learned in grade school and high school, 1 + 1 = 2. In Boolean algebra, 1 + 1 = 1. In fact, in Boolean algebra, 1 + 1 + 1 + . . . + 1 = 1. There are different algebras like there are different geometries. So which geometry does the Universe belong to?
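A two-line illustration of that point, with Python’s bitwise ‘or’ standing in for Boolean addition:

print(1 + 1)              # 2 under the ordinary arithmetic axioms
print(1 | 1, 1 | 1 | 1)   # 1 and 1 under Boolean 'or'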

>>
Results derived from observations are subjective, and therefore are subject to being wrong. Being able to describe reality in terms of math, e.g. Gauss’ Law, is not subjective.
<<

I guess Galileo’s move from Aristotle’s thinking about how the world works to doing experiments to show how the world actually works is lost on you. Observations ARE science. Mathematics is an approximation. I remember my professors reminding me that the Ideal Gas Law only applies to ideal gases. Many mathematical formulas in science are idealized. One must be careful in claiming mathematical proof where it only applies to our assumptions.

Jim

Jim Whelan
Reply to  Jim Masterson
March 22, 2021 3:01 pm

Well stated.

The math has to be checked against reality. For example, if I have a plot of land which is 10 meters by 10 meters, I can mathematically calculate the area as 10+10 and claim it is 20 because I “did the math”.

The only immutable laws are those that are essentially definitions. But even those are subject to change should other definitions be found useful.

An example is the laws of conservation of energy and momentum which can be mathematically shown to be laws of invariability over time and space. However, the real meaning of that mathematics is that any deviation in the laws of the universe over time can be expressed as an energy conservation law and that any variability in space can be expressed as a conservation of momentum law. So the conservation laws are preserved by “making up” new kinds of energy or momentum.

Even then it’s not clear that the equations hold if the variations are non-continuous. Quantum mechanics exhibits some discontinuity and even chaotic behaviors.

Tim Gorman
Reply to  Jim Masterson
March 22, 2021 4:19 pm

You jumped from Gauss’ Law to Maxwell’s equations – i.e. you are guilty of equivocation, an argumentative fallacy.

Gauss’ Law has never been modified or disproved. It is very simple.

And it is not the same for the laws of thermodynamics. You saying so doesn’t make it so.

Newton’s Law of Gravity is still true except on the edges. Why does a planet bend light? You use that as an excuse for Newton’s Law of Gravity being wrong but give no explanation showing how gravity doesn’t affect light. As for instantaneous speed of light – why does that impact Newton’s Law? Newton’s Law doesn’t say *when* the forces impinge on each mass; there is no “time” in the equation. That is a subjective opinion from you; it isn’t implicit in the math.

Not all math is axiomatic. Geometry is. Algebra is not. A + B = C doesn’t depend on any assumptions or axioms.

Observations are not science, MEASUREMENTS are.

Jim Whelan
Reply to  Tim Gorman
March 22, 2021 10:07 pm

Strange. I learned algebra as axiomatic. You know, those funny little things like the associative, distributive and commutative properties of addition and multiplication.

Reply to  Tim Gorman
March 23, 2021 9:00 am

>>
Not all math is axiomatic. Geometry is. Algebra is not. A + B = C doesn’t depend on any assumptions or axioms.
<<

Now you are making silly statements. Basic algebra depends on at least five axioms–look it up.

>>
You jumped from Gauss’ Law to Maxwell’s equations – i.e. you are guilty of equivocation, an argumentative fallacy.

Gauss’ Law has never been modified or disproved. It is very simple.
<<

That might be true if Maxwell’s Equations didn’t include Gauss’s law as is. So now you are saying we can’t combine laws? That’s news to me.

>>
Observations are not science, MEASUREMENTS are.
<<

I consider measurements to be observations. I don’t understand your distinction with these terms.

>>
And it is not the same for the laws of thermodynamics. You saying so doesn’t make it so.
<<

?

>>
Newton’s Law of Gravity is still true except on the edges.
<<

Newton’s law has edges?

>>
Why does a planet bend light?
<<

Why does it?

>>
You use that as an excuse for Newton’s Law of Gravity being wrong but give no explanation showing how gravity doesn’t affect light.
<<

?

>>
As for instantaneous speed of light – why does that impact Newton’s Law?
<<

I guess you’re not reading what I wrote. I said speed-of-gravity was infinite with Newton’s law. I said nothing about the speed-of-light.

>>
Newton’s Law doesn’t say *when* the forces imping on each mass, there is no “time” in the equation. That is a subjective opinion from you, it isn’t implicit in the math.
<<

Actually it does. Pick up any book on orbital mechanics. Newton’s law is all about time and when. Unfortunately, the integrals of the two-body problem do not give position as an explicit function of time; what is missing is where an object is in its orbit at a given moment. The solution to this is Kepler’s Equation, but that equation–being transcendental–can’t be solved in closed form.

Jim

Paul of Alexandria
March 15, 2021 4:11 pm

“I consider therefore models in which the feedback on water vapour is negligible (and negative if you include clouds) as much more realistic.” As Willis points out in an adjacent article, and has for a while.

rah
March 16, 2021 2:25 am

I’m a truck driver and have no PhD. But my opinion is that the first and foremost problem with the most publicized climate models is that they are formulated, maintained, and reported on by dishonest hacks with a bias towards showing warming, who treat their own work and its results as the climate gospel.

The Fringe
March 16, 2021 8:13 am

The big question is how much of that 1.2-1.8 has already been realized, if indeed CO2 is taken into account. Chaos and natural fightback à la Le Chatelier, along with the Joel Myers rubber-band-snapping theory (only so much you can take, then the system cascades back the other way faster than it got pulled the initial way), can come into it. In the end the old forecasting joke, “Anything can happen and probably will”, may be the best bet. Peace.

Larry in Texas
March 16, 2021 12:19 pm

As a layman, the question I always try to get my fellow laymen to ask is: how much carbon dioxide in the atmosphere is too much?

That is a question no scientist, climatologist, etc. has been able to answer to my satisfaction. It remains (like the debate over ECS), and is going to remain, a matter of debate and dispute for a long time. The science is never “settled.”

And it does no good to point to confined spaces as a way of looking at the question, because our planet is NOT the kind of “confined space” one can analogize to.

In the meantime, the planet serenely goes on in its mystical, sometimes magical ways, day after day, variation after variation, imagined crisis after imagined crisis.

Drachir Thennek
March 18, 2021 1:35 am

The problem with climate models is that they have been proven to be very accurate and they don’t agree with what deniers want.

Tim Gorman
Reply to  Drachir Thennek
March 18, 2021 5:21 am

You have to be kidding, right? Every year the outputs of the models diverge further and further away from reality. How can they then be accurate?

Ulric Lyons
March 18, 2021 6:44 am

The biggest problem with climate models is that their internal variability is not internal. AO/NAO anomalies driving heat and cold waves are discretely solar-driven by daily-weekly changes in the solar wind temperature/pressure. ENSO and the AMO act as negative feedbacks to changes in the solar wind temperature/pressure, and control changes in low cloud cover and lower-troposphere water vapour. Very strong solar wind states in the 1970s drove colder ocean phases and global cooling, and weaker solar wind states since 1995 have driven warmer ocean phases and global warming, self-amplified by decreasing low cloud cover.
