“We know there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know” — Donald Rumsfeld
Ed Zuiderwijk, PhD
An Observation
There is something strange about climate models: they don’t converge. What I mean by that I will explain on the basis of historical determinations of what we now call the ‘Equilibrium Climate Sensitivity’ (ECS), also called the ‘Charney Sensitivity’ (ref 1), defined as the increase in temperature at the bottom of the Earth’s atmosphere when the CO2 content is doubled (after all feedbacks have worked themselves through). The early models by Plass (2), Manabe & co-workers (3) and Rowntree & Walker (4) in the 1950s, 60s and 70s gave ECS values from 2 degrees Centigrade to more than 4C. Over the past decades, these models have grown into a collection of more than 30 climate models brought together in the CMIP6 ensemble that forms the basis for the upcoming AR6 (‘6th Assessment Report’) of the IPCC. However, the ECS values still cover the interval 1.8C to 5.6C, a factor of 3 spread in results. So after some four decades of development, climate models have still not converged to a ‘standard climate model’ with an unambiguous ECS value; rather the opposite is the case.
What that means becomes clear when we consider what it would mean if, for instance, the astrophysicists found themselves in a similar situation with, for example, their stellar models. The analytical polytropic descriptions of the early 20th century gave way years ago to complex numerical models that enabled the study of stellar evolution, driven by changing internal composition and the associated changes in energy generation and opacity, and which, in 1970, when I took my first steps in the subject, already offered a reasonable explanation of, for example, the Hertzsprung-Russell diagram of star populations in star clusters (5). Although always subject to improvement, those models can be said to have converged to what could be called a canonical star model. The different computer codes for calculating stellar evolution, developed by groups in various countries, yield the same results for the same evolutionary phases, which also agree well with the observations. Such convergence is a hallmark of progress in the insights on which the models are based, achieved through a better understanding of the underlying physics and through testing against reality, and it is manifest in many of the sciences and engineering disciplines where models are used.
If the astrophysicists were in the same situation as the climate model makers, they would still be working with, for example, a series of solar models that predict a value of X for the surface temperature, give or take a few thousand degrees. Or that, in an engineering application, a new aircraft design should have a wing area of Y, but it could also be 3Y. You don’t have to be a genius to understand that such models are not credible.
A Thesis
So much for my observation. Now to what it means. I will present my analysis here in the form of a thesis and defend it with an appeal to elementary probability theory and a little story:
“The fact that the CMIP6 climate models show no signs of convergence means that, firstly, it is likely that none of those models represent reality well and, secondly, it is more likely than not that the true ECS value lies outside the interval 1.8-5.6 degrees.”
Suppose I have N models that all predict a different ECS value. Mother Nature is difficult, but she is not malicious: there is only one “true” value of ECS in the real world; if that were not the case, any attempt at a model would be pointless from the outset. Therefore, at best only one of those models can be correct. What, then, is the probability that none of those models is correct? We know immediately that N-1 models are not correct and that the remaining model may or may not be correct. So we can say that the a priori probability that any given model is incorrect is (N-1+0.5)/N = 1 - 0.5/N. This gives a probability that none of the models is correct of (1 - 0.5/N)^N, about 0.6 for N > 3. So that’s odds of 3 to 2 that all models are incorrect; this 0.6 is also the probability that the real ECS value falls outside the interval 1.8C-5.6C.
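A minimal numerical sketch of this argument, assuming the models can be treated as independent with the a priori probability given above (illustrative only):

```python
# Sketch of the probability argument above (illustrative, not the author's code).
# Each of N models is a priori incorrect with probability 1 - 0.5/N; treating
# the models as independent, the chance that none is correct is (1 - 0.5/N)**N.
import math

for n in (2, 3, 5, 10, 30):
    p_none_correct = (1.0 - 0.5 / n) ** n
    print(f"N = {n:2d}: P(no model correct) = {p_none_correct:.3f}")

# For large N this tends to exp(-0.5), roughly 0.61, i.e. odds of about 3 to 2.
print(f"limit exp(-0.5) = {math.exp(-0.5):.3f}")
```

The same formula applies with the effective number of independent models M, discussed next, in place of N.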
Now I already hear the objections. What, for example, does it mean that a model is ‘incorrect’? Algorithmic and coding errors aside, it means that the model may be incomplete, lacking elements that should be included, or, on the other hand, that it is over-complete, containing aspects that do not belong in it (an error that is often overlooked). Furthermore, these models have an intrinsic variation in their outcome, and they often contain the same elements, so those outcomes are correlated. And indeed the ECS results completely tile the interval 1.8C-5.6C: for every value of ECS between the given limits a model can be found that produces that result. In such a case one considers the effective number of independent models M represented by CMIP6. If M = 1 it means that all models are essentially the same and the 1.8C-5.6C spread is an indication of the intrinsic error; such a model would be useless. More realistic is M of about 5 to 9, and then you come back to the foregoing reasoning.
What rubs climatologists most the wrong way is my claim that the true ECS is outside the 1.8C-5.6C interval. There are very good observational arguments that 5.6C is a gross overestimate, so I am actually arguing that the real ECS is probably less than 1.8C. Many climatologists are convinced that that is instead a lower limit. Such a conclusion is based on a fallacy, namely the premise that there are no ‘known unknowns’ and especially no ‘unknown unknowns’, ergo that the underlying physics is fully understood. And, as indicated earlier, the absence of convergence of the models tells us that precisely that is not the case.
A Little Story
Imagine a parallel universe (theorists aren’t averse to that these days) with an alternate Earth. There are two continents, each with a team of climatologists and their models. The ‘A’ team on the landmass Laputo has 16 models that predict an ECS interval 3.0C to 5.6C, a result, if correct, with major consequences for the stability of the atmosphere; the ‘B’ team at Oceania has 15 models that predict an ECS interval 1.8C to 3.2C. The two teams are unaware of each other’s existence, perhaps due to political circumstances, and are each convinced that their models set hard boundaries for the true value of the ECS.
That the models of both teams give such different results is because those of the A-team have ingredients that do not appear in those of the B-team and vice versa. In fact, the climatologists on both teams are not even aware of the possible existence of such missing aspects. After thorough analysis, both teams write a paper about their findings and send it, coincidentally simultaneously, to a magazine published in Albion, a small island state renowned for its inhabitants’ strong sense of independence. The editor sees the connection between the two papers and decides to put the authors in contact with each other.
A culture shock follows. The lesser gods start a shouting match: those in the A team call the members of the B team ‘deniers’, who in their turn shout ‘chickens’. But the more mature members of both teams realize they have had a massive blind spot about things the other team knew but they themselves did not, and that those ‘unknowns’ had firmly bitten both teams in the behind. And the smartest realize that the combined 31 models now form a new A team to which the foregoing applies a fortiori: somewhere there could arise a new B team with models that predict ECS values outside the 1.8C-5.6C range.
Forward Look
So it may well be, no, it is likely that once the underlying physics is properly understood, climate models will emerge that produce an ECS value considerably smaller than 1.8C. What could such a model look like? To find out we look at the main source of the variation between the CMIP6 models: the positive feedback on water vapour (AR4, refs 6,7). The idea goes back to Manabe & Wetherald (8) who reasoned as follows: a warming due to CO2 increase leads to an increase in the water vapour content. Water vapour is also a ‘greenhouse gas’, so there is extra warming. This mechanism is assumed to ‘amplify’ the primary effect of CO2 increase. Vary the strength of the coupling and add the influence of clouds and you have a whole series of models that all predict a different ECS.
There are three problems with the original idea. The first is conceptual: the proposed mechanism implies that the abundance of water vapour is determined by that of CO2 and that no other regulatory processes are involved. What then determined the humidity level before the CO2 content increased? The second problem is the absence of an observation: one would expect the same feedback on initial warming due to a random fluctuation of the amount of water vapour itself, and that has never been established. The third problem is in the implicit assumption that the increased water vapour concentration significantly increases the effective IR opacity of the atmosphere in the 15 micron band. That is not the case. The IR absorption by water vapour is practically saturated which makes the effective opacity, a harmonic mean, insensitive to such variation.
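To illustrate the harmonic-mean point, here is a toy two-band calculation with made-up weights and opacities (a sketch, not a radiative transfer calculation): once one band is effectively opaque, piling more opacity into it leaves the harmonic mean almost unchanged, because the mean is dominated by the more transparent band.

```python
# Toy two-band harmonic (Rosseland-style) mean opacity; weights and opacity
# values are made up for illustration only.
def effective_opacity(weights, opacities):
    """Harmonic mean: 1/k_eff = sum(w_i / k_i), with the w_i summing to 1."""
    return 1.0 / sum(w / k for w, k in zip(weights, opacities))

weights = [0.3, 0.7]   # a nearly saturated band and a relatively transparent window
k_window = 0.5         # window-band opacity (arbitrary units)

for k_saturated in (50.0, 100.0, 200.0):   # keep increasing the saturated band
    k_eff = effective_opacity(weights, [k_saturated, k_window])
    print(f"k_saturated = {k_saturated:6.1f}  ->  k_eff = {k_eff:.3f}")
```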
Hence, the correctness of the whole concept can be doubted, to say the least. I therefore consider models in which the feedback on water vapour is negligible (and negative if you include clouds) to be much more realistic. In such models the water vapour concentration is determined by processes independent of CO2 abundance, for instance optimal heat dissipation and entropy production. They give ECS values between 0.5C and 0.7C. Not something to be really concerned about.
References
1. J. Charney, ‘Carbon dioxide and climate: a scientific assessment’, Washington DC: National Academy of Sciences, 1979.
2. G. N. Plass, ‘Infrared radiation in the atmosphere’, American Journal of Physics, 24, 303-321, 1956.
3. S. Manabe and F. Möller, ‘On the radiative equilibrium and heat balance of the atmosphere’, Monthly Weather Review, 89, 503-532, 1961.
4. P. Rowntree and J. Walker, in ‘Carbon Dioxide, Climate and Society’, IIASA Proceedings 1978 (ed. J. Williams), pages 181-191, Pergamon, Oxford, 1978.
5. http://community.dur.ac.uk/ian.smail/gcCm/gcCm_intro.html
6. V. Eyring et al., ‘The CMIP6 landscape’, Nature Climate Change, 9, 727, 2019.
7. M. Zelinka, T. Myers, D. McCoy, et al., ‘Causes of higher climate sensitivity in CMIP6 models’, Geophysical Research Letters, 47, e2019GL085782, 2020. https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019GL085782
8. S. Manabe and R. Wetherald, ‘Thermal equilibrium of the atmosphere with a given distribution of relative humidity’, J. Atmos. Sci., 24, 241-259, 1967.
As Lewis and Curry, and Lord Monckton derive ECS on the order of 1.4 C with differing methods, I would agree that there is a good chance the true value is less than 1.8 C.
Or a single statistic, inferred with pragmatic but unrealistic assumptions/assertions, reduces the complexity of the system to a plausible absurdity that requires regular injections of brown matter and energy to force a consensus with past, present, and future observations
How do L&C and Lord M account for the observation (mean of HadCRUT, GISS and NCDC) that since the start of the 20th century we are already around +1.1C warmer than what might be called ‘pre-industrial’ (1880-1900)? And we are still a long way off from equilibrium with ocean lag. There would need to be a drastic slowdown in the long term warming trend for 1.4C to be anywhere near ECS.
A low figure, > 1.5 C is used in the Russian climate model (INM-CM4) which tracks UAH data better than any other models.
The problem is presenting a single number that supposedly represents a “global average”. Such things are physical nonsense. Everything else that follows from them is even more nonsensical.
A Monte Carlo distribution is one way of allowing for data uncertainties.
A Monte Carlo distribution developed from a model that is wrong only gives you a spread of wrong values. That’s not really very useful.
Mr Gorman states the obvious. However, our approach is to take the observational data, not the data from models, and perform statistical analyses on that.
” there is only one “true” value of ECS in the real world;”
No, not even one, it is not a real thing to be measured, it is good only in order to compare model results.
About .4 degree C of warming happened from 1900 to the early 1940s, and most of that was not caused by increase of greenhouse gases.
Yes, and could be lower in the LIA.
You are assuming that most if not all of the warming since the end of the Little Ice Age is due to CO2. Since most of the warming occurred prior to the big rise in CO2, that’s not possible.
Don’t go confusing him with the null hypothesis. It made Travesty Trenberth break out in herpes.
Just for sh!ts and giggles though, TFN exactly what year was the great shift to a flat baseline?
FN, even the SPM to AR4 said at least half of the value you cite, specifically the warming period from ~1920 to 1945, was natural and not AGW; CO2 forcing was way too low. So your ‘adjusted’ ~1.1C needs to be at least halved. And I would also point out that natural variability did not magically stop in the second half of the time period. How much there was, dunno. Do know that AR4 and AR5 explicitly assumed essentially zero, which cannot be correct. This attribution problem is why all the models (except the Russian one) run hot. See my several previous posts here on this issue.
Reality is that in the NH the current temperature is similar to that of around the 1940s
There has been no effective warming in the NH in the last 80 years !!
That is what enhanced atmospheric CO2 does to temperatures… ABSOLUTELY NOTHING.
ECS is Zero +/- feedbacks which vary.
See far below. What you assert here CANNOT be true from first principles. We can only know it is something above about 1.16, rounded to 1.2 by people far more expert and qualified than you to opine.
Respectfully Rud, experts and novices can opine all they want. Real scientists show data. Real data, not molested data.
Yup.Like I did extensively below.
You are forgetting all the other energy transfers within the atmosphere
Bulk energy transfers are far far greater than any internal CO2 energy transfer
Emergent phenomena step in pretty quickly, in the order of minutes and hours. You can’t talk about hypothetical doublings when that is going on.
Willis has listed many assumptions in his post.
Here is one which shows the declining influence of CO2. By 400 ppm it is not far from zero, which is why I start with 0.0 +/- ?? feedbacks.
1. David Archibald shows how the effect of increasing CO2 decreases logarithmically as CO2 increases in the following:
http://wattsupwiththat.com/2013/05/08/the-effectiveness-of-co2-as-a-greenhouse-gas-becomes-ever-more-marginal-with-greater-concentration/
There is also another article on the Logarithmic heating effect of CO2:
http://wattsupwiththat.com/2010/03/08/the-logarithmic-effect-of-carbon-dioxide/
An important item to note is that the first 20 ppm accounts for over half of the heating effect to the pre-industrial level of 280 ppm, by which time carbon dioxide is tuckered out as a greenhouse gas.
The 1.16 is not measured is it? All based on assumptions as Willis said:
https://wattsupwiththat.com/2021/03/12/there-are-climate-models-and-there-are-climate-models/
• Computer modelers, myself included at times, are all subject to a nearly irresistible desire to mistake Modelworld for the real world. They say things like “We’ve determined that climate phenomenon X is caused by forcing Y”. But a true statement would be “We’ve determined that in our model, the modeled climate phenomenon X is caused by our modeled forcing Y”. Unfortunately, the modelers are not the only ones fooled in this process.
• The more tunable parameters a model has, the less likely it is to accurately represent reality. Climate models have dozens of tunable parameters. Here are 25 of them, there are plenty more.
First principles that ignore all the bulk transfers of energy in the atmosphere?
This is not “first principles”.. that is ignorance.
Rud writes
Above? You assume feedbacks are positive over the longer term. That is not a fact.
You mean the “ADJUSTMENTED” fabricated once-were-observations, right, rusty.
Real warming is probably significantly less than shown by HadCrud et al.
Does rusty have ANY EVIDENCE AT ALL this the slight and highly beneficial warming since the COLDEST period in 10,000 years, has anything at all to do with atmospheric CO2?
1… Do you have any empirical scientific evidence for warming by atmospheric CO2?
2… In what ways has the global climate changed in the last 50 years , that can be scientifically proven to be of human released CO2 causation?
I don’t know how they account for it but my opinion is data altering.
ROFLMAO..
Rusty has just ADMITTED that its not CO2 doing the slight warming.
Well done rusty ! 🙂
Based on Wu et al. (2019), the anthropogenic fraction of the 1.04 degrees’ warming from 1850 to 2020 is 0.73 degrees.
Almost all Urban induced warming.
You seem to assume that global warming can only come from increasing CO2 concentrations.
Dave Fair
Not over the short term. Over the longer term no one has so far been able to explain the observed warming without invoking man-made greenhouse gases (not just CO2) and human land use, etc.
Bad premises lead to wrong conclusions.
WRONG as always, corroded twit.
Not only rusty but bent and twisted and clinging onto a corroded mess of ignorance.
Warming is easily explained by the strong solar cycles and drop in tropical cloud cover.
I notice you COWARDLY squirm away from producing any answers.
1… Do you have any empirical scientific evidence for warming by atmospheric CO2?
2… In what ways has the global climate changed in the last 50 years , that can be scientifically proven to be of human released CO2 causation?
Argument from ignorance. I can’t think of anything else, so it must be this.
Beyond that, there were no greenhouse gas changes during the Holocene optimum, where temperatures were as much as 5C warmer than today, and man hadn’t developed civilization yet so there were no land use changes.
Please try to come up with a lie that is at least a little bit plausible.
Wrong. Look at the cyclical data. THAT’s not explained by CO2. Eddy ~1000 yr, Suess/DeVries ~190 yr, AMO, etc.
Explain the Little Ice Age.
The Final Nail writes
You mean the models can’t explain it. But the models are explicitly designed not to explain it. They are designed so that control runs don’t cause climate change. Santer tells us they can’t sustain change beyond 17 years in the control runs.
How do you account for the fact that the number of murders has increased and yet we know that the number of demons has not changed because they are immortal and do not procreate? We can infer with confidence that the productivity of demons has increased as CO2 has increased. Isn’t that obvious?
If you assume something that doesn’t even exist, you may convince yourself of some rather absurd ideas.
You should explain to Lord M, etc that he is wrong about the forcing effect of CO2 on warming. Tell him it’s a ghost or something. (I don’t believe in ghosts.)
But you do believe in fairy-tales..
Look at your rancid brain-hosed belief of CO2 warming, despite being totally incapable of producing one iota of empirical evidence.
That is Mills and Boon or Bros Grimm fantasy land stuff
Certainly not science
1… Do you have any empirical scientific evidence for warming by atmospheric CO2?
2… In what ways has the global climate changed in the last 50 years , that can be scientifically proven to be of human released CO2 causation?
Your question is valid only if CO2 is the only determinant of temperature. If something else is acting your question is meaningless.
It obviously doesn’t cross your mind that HadCRUT, GISS, and NCDC are manipulated numbers intended to support the higher ECS. Using those false “calculations” .. (not observations as you pretend), are of no use in a discussion regarding the true ECS.
Please send us a link to the GISS/NCDC and HadCRUT data from 1880-1980, so we can all discuss this as informed adults? I signed an NDA with St. Peter, but I’ll ignore it and swop you my GISS/NCDC data set from 1645-1875… in the interest of not spewing nonsense.
Simple: changes in solar forcing and ocean current cycles. There are many other factors contributing to the atmospheric temperature at any given time.
On the basis of the latest data, we now conclude that ECS will be just 1.0 [0.8, 1.4] degrees.
And as temperature start to drop over the next several years, that “calculation” will tend towards zero.
Yes!!
Respectfully disagree, several times already, here and at Judths. It does us skeptics no favor to overstate the ECS case, when an irrefutable yet more moderate equivalent can make the same point.
I really like just generally winning. Absolute winning often loses.
CO2 is a minor player in energy transfer in the atmosphere, absolutely SWAMPED but other mechanisms
ECS is effectively ZERO
Then stop doing it !!
Mr Istvan says he disagrees with an analysis he may not yet have seen. However, the official estimate of the anthropogenic fraction of observed warming is 0.7, which brings down ECS quite a bit. We do not find ECS to be 1 K because that was our target: we find it to be 1 K because that is what the latest available mainstream data would indicate.
CO2 has absolutely nothing to do with climate, so why give the idea any credence?
Before worrying about the accuracy of the physical parameterizations, the climate and weather modelers must accurately approximate the correct dynamical system of equations. However, it has been mathematically proved that all global climate and weather models are based on the wrong dynamical system of equations (the so-called primitive or hydrostatic system of equations). Secondly, the continuum solution of the equations must be differentiable in order for a numerical method to accurately approximate a continuous derivative. However, discontinuous parameterizations mean that the continuum solution is not differentiable, so the mathematical theory of numerical methods is violated. In order to keep the numerical solution appearing smooth, an unrealistic amount of dissipation must be applied, and this pushes the system actually being approximated far from the one intended.
The unique, well posed reduced system for atmospheric flows: Robustness in the presence of small scale surface irregularities
Dedicated to my mentor and colleague, Heinz-Otto Kreiss.
G L Browning
DAO, 91, 2020
Great article, really interesting and thoughtful.
One of the fundamental issues of climate models is they depend entirely on the input forcings. Without the input forcing time series the models do nothing. The output of a climate model that can be compared to temperature only seems to work if we average lots of climate models. The mean of the models then looks like temperature.
But the priors (the input forcings) can be used with simple linear regression to predict the mean model output with surprising accuracy (R=0.96). So on that basis, all the models actually do is convert forcings in W/m^2 to temperature. Which is what linear regression does too.
Finally, if we subtract the mean of the models from the individual climate models for each climate model residual we get – random noise as far as I can tell. There is no structure and the first annual time lag autocorrelation is effectively zero.
Climate models are simply random number generators with a trend. The trend comes only from the input forcing data provided. The a priori input forcings already correlate with temperature (R=0.92) by linear regression. So what do climate models do? Not much. But climate scientists seem blind to the fact that the output only depends on the input forcings, not on the internal workings of the model.
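The regression described here is easy to sketch in outline; the forcing and model-mean series below are synthetic stand-ins, and the R values of 0.96 and 0.92 quoted above come from the commenter's own analysis, not from this toy example:

```python
# Sketch of the regression described above; the forcing and model-mean series
# here are synthetic stand-ins, not CMIP data.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1880, 2021)
forcing = 0.02 * (years - 1880) + 0.3 * np.sin((years - 1880) / 11.0)      # W/m^2, made up
model_mean_temp = 0.5 * forcing + 0.05 * rng.standard_normal(years.size)   # K, made up

# Ordinary least squares: temperature ~ a * forcing + b
a, b = np.polyfit(forcing, model_mean_temp, 1)
r = np.corrcoef(forcing, model_mean_temp)[0, 1]
print(f"slope = {a:.2f} K per W/m^2, correlation R = {r:.2f}")

# The residuals of an individual model about the multi-model mean would then be
# checked for structure (e.g. a lag-1 autocorrelation near zero, as claimed above).
```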
Lipstick on the pig
They need to follow the scientific philosophy in theory and practice, and reduce their frame of reference from universal, global, etc. to a limited frame where their models… hypotheses exhibit a consensus with observation, without regular injections of brown matter and energy, and altogether created conceptions, to steer the result.
“So what do climate models do? Not much. But climate scientists seem blind to the fact that the output only depends on the input forcings, not on the internal workings of the model.”
So true!
“One of the fundamental issues of climate models is they depend entirely on the input forcings. Without the input forcing time series the models do nothing” That’s it in a nutshell.
So what do climate models do?
Give only some scientists the impression to handle the real world.
Climate models provide employment for university graduates that otherwise would be completely unemployable in the real world. Climate activist charities exist for precisely the same reason.
The Dynamic Atmosphere Energy Transfer model created by me and Philip Mulholland predicts zero sensitivity from greenhouse gas changes but a minuscule rearrangement of the convective overturning system in lieu of a global temperature rise. Indiscernible from natural variability.
it also predicts features of atmospheres already observed within our solar system.
SW, with all due respect, I have studied your alternative model and found it deficient in several physical respects. If you wish to persist, I will comment on those deficiencies in great detail in a big picture. For example, you claim CO2 is saturated—ignoring the ever elevated radiation threshold LRE saying CO2 can never saturate— but its concentration effect will always decline logarithmically, as Guy Callander said in 1938.
Actual measurements of energy absorption show it leveling off at around 280ppm
Irrespective, CO2 is a minor bit player in atmospheric energy transfer, totally SWAMPED by bulk energy movements.
Your calculated ECS value also relies on the temperature rise being caused ONLY by CO2, which is manifestly FALSE.
I don’t recall even considering that aspect of CO2 since it is not relevant to our model.
We accept that ghgs would affect lapse rate slopes but that circulation changes would neutralise any thermal effect.
So you haven’t really studied it at all.
Should be Transport not Transfer
The very model of chaos (e.g. evolution): nonlinear, incompletely or insufficiently characterized and computationally unwieldy, thus the scientific logical domain and philosophy in the near-space and time. Unfortunately, intuitive science with secular incentives, “benefits”, is politically congruent, economically marketable, and emotionally appealing.
I will only point out one other assumption:
“there is only one “true” value of ECS in the real world”
If the Climate system is chaotic then I am not sure the above assumption is correct. You may get entirely different sensitivities given a doubling of CO2 depending on various other conditions, including one (or many) that resembles random chance. (The presumption is if one could know enough detail then the condition is not random, but in all practical use of a real computer model it appears random).
You *might* and probably can put limits around the ECS and maybe even build it a probability curve.
If this were true then there are limits to predicting future climate that we just cannot go beyond. Trying to predict it using CO2 as the control knob becomes laughable.
The climate system is for sure nonlinear (feedbacks) and dynamic, and its feedbacks are lagged in time. Therefore it is mathematically chaotic. That means it has strange attractors. We know of two strong ones it oscillates between: ice ages and brief non-ice-ages. The last two of those are the Eemian and the Holocene. Based on observational ECS, there is no reason not to think any Holocene ECS is a fairly stable number, since it’s ‘real’. Components like albedo, WVF and cloud feedback are observationally all fairly stable.
NH 1940’s same temp as now, NO warming in 80 years
ECS = ZERO !!
I’d say there’s no global ECS in the same way there is no global temperature. Every column of air anywhere in the world is either slightly or drastically different from any other column of air.
That’s kinda like what I was going to say. There are enough regional differences that ECS is probably different at different locations in the globe. Simple amount of cloud cover would generate different ECS’s because of different “albedos”. Trying to find an average would be probably wrong and at the least, a not very useful number.
JG, I agree. But in fairness, (see below) the warmunists do try to compensate by using ECS anomalies. Besides, if you want to engage them, only do it on skeptical provable terms. Dismissive skeptical results only in ‘Denier dismissal’ rebuttals. Not useful.
I appreciate your comment, however it is my understanding that ECS in the models is an emergent quantity based upon global numbers. I wouldn’t exactly call a quantity that comes out as an anomaly and that is based on temperature anomalies an anomaly itself.
I would add that various geographic features and conditions (mountains, oceans, deserts, etc.) are part of the “various other conditions” you are talking about.
Which is why I don’t even think it appropriate to try to calculate an “average global temperature” much less predict it (or any relatable concept of ECS) in the future.
Don’t forget that the *average* temperature should be taken at a specific point in time, specifically UTC time. That’s because at any specific time about half the earth is in sunshine, i.e. temps going up, and the other half is in dark, i.e. temps going down. That snapshot in time would tell you far more about “average” conditions than the so-called average mid-point temp the AGW alarmists use.
It is natural to view the Earth’s climate by separating land, ocean and atmosphere. This is essentially what climate models do. Unfortunately, that leads to the creation of a massively complex interface between these systems. Turns out there’s a better way to approach the problem.
By combining the atmosphere with the skin of land and ocean areas we create a thermodynamic system that provides us with more insights. Since 99.9% of the IR energy from radiating gases only penetrates into the skin, it removes the need to consider them outside this system. The energy into this system comes from the sun and from the subsurface below. Also keep in mind, the energy in from solar is less than the total energy radiated to space by the atmosphere/skin system. A good portion of the solar energy penetrates meters into the oceans. Hence, this system cannot be warming the subsurface without violating the 2nd law.
The first significant insight from looking at the problem this way is how can this system be warmed from a completely internal transfer of energy? That is all IR does within this system. It moves around. It cannot possibly warm the system without creating energy out of nothing. That would violate the 1st law. The only ways for the atmosphere/skin system to warm is for 1) more solar energy (no evidence of this) or 2) more subsurface Earth energy or 3) less energy loss to space (the insulation effect).
For 3), we would need the atmosphere/skin to reduce outgoing radiation. That is, in order to adhere to basic thermodynamic law we MUST see a decrease in outgoing radiation. Has such a decrease been seen? Nope. According to CERES the planet has seen no reduction in outgoing radiation. In fact, CERES has seen an increase in the outgoing radiation.
The only way the CERES data can be explained is that 2), the subsurface oceans/land areas, are transferring more energy into the atmosphere/skin system. We know the oceans contain more than 1000 times the energy of the atmosphere. It is certainly possible for them to vary the energy transfer into the atmosphere/skin system.
If climate models used this approach they would be much simpler and they would likely all produce similar results.
AR6 model runs published so far include some with outlandishly high ECS results. As usual, the two Russian INM GCMs produce the lowest ECS estimates, ie 1.9 and 1.8 degrees C per doubling of vital plant food in the air.
From Zeke:
https://www.carbonbrief.org/cmip6-the-next-generation-of-climate-models-explained
Even the Russian results are probably too high. Until computing power increases at least a billion-fold, such GIGO computer gaming will remain worse than worthless for any useful purpose. Without actually modeling clouds, for instance, rather than parameterizing them, GCMs are a titanic waste of time and money, except to show how unsettled is “climate science”.
Ideally, grid cells would be cubes ten meters on a side, but 100 m might give meaningful outputs. However, the assumptions behind GCMs may well be fundamentally flawed, rendering the exercise invalid at any resolution.
Perhaps Slartibartfast should be commissioned to build an analog computer to simulate what happens in our increasingly digital world. Why would any thinking person wear a digital watch?
It’s zero
It may be zero relative to the noise but I doubt it’s physically unrelated.
Not philosophically zero.
It could be non zero. Maybe 0.01 C. ;)) It certainly is not a problem for anyone. The CERES data tells us all we need to know. For CO2 to create any warming there would need to be a reduction in outgoing IR. Hasn’t happened. Some might even call it proof that CO2 does not cause warming.
Could be minus 0.01 C too, given that radiation is the only way energy exits the planet. Pretty amazing that after 40 years or more, it can only be purported to be a number other than zero by mental masturbation, bloviating and fabricating data, and even with fabricated data it still hasn’t been shown to be a number other than zero.
How it does not work.
By reflecting away 30% of the incoming solar radiation the albedo, which would not exist w/o the atmosphere and its so-called GreenHouse Gases, makes the earth cooler than it would be without the atmos/GHGs much like that reflective panel propped up on the car dash. Remove the atmos/GHGs and the earth becomes much like the Moon or Mercury, an arid, barren rock with a 0.1 albedo, 20% more kJ/h, hot^3 on the lit side, cold^3 on the dark.
If this is correct, the Radiative GreenHouse Effect theory fails.
For the GHGs to warm the surface with downwelling energy as advertised they must absorb/trap/delay/intercept/whatevah “extra” energy from somewhere in the atmospheric system. According to RGHE theory the source of that “extra” upwelling energy is the surface radiating as a near ideal Black Body. As demonstrated by experiment the surface cannot radiate BB because of cooling by the non-radiative heat transfer processes of the contiguous atmospheric molecules.
If this is correct, RGHE theory fails.
How it does work.
To move fluid through a hydraulic resistance requires a pressure difference.
To move current through an electrical resistance requires a voltage difference.
To move heat through a thermal resistance requires a temperature difference. (Q=UAdT)
Physics be physics.
The complex thermal resistance (R=1/U) of the atmosphere (esp albedo, net Q) is responsible for the temperature difference (dT=Tsurf-Ttoa) between the warm terrestrial surface and the cold edge of space (32 km).
And that heat transfer process involves the kinetic energy of ALL of the atmospheric molecules not just 0.04% of them.
To move electromagnetic energy as photons through a vacuum only requires a temperature above absolute zero.
Temperature requires kinetic energy AND stuff. 0 LoT.
A vacuum has neither and therefore temperature has no meaning.
And no thermal resistance.
It is possible for a system to have more than one quasi stable condition.
Recently, in geological time, the planet has been banging in and out of glaciations.
The climate system is chaotic and apparently has at least two attractors. As far as I can tell, none of the models even consider the possibility of the next glaciation.
IMO, during the present ice age, there are three attractors, ie interglacial, glacial and glacial maximum.
ECS is like butts – everybody has a different size one to show.
And a study of all the butts in the world would be just as useful to humanity as all the studies (conjectures) about ECS.
Can I get a grant to study the female butts aged between 20 & 40 ??
You’d be competing with oh, about 3 billion hetero blokes who’d do that study for free.
We allocate you the 70-and-over butts, will that do ?
Regarding “The third problem is in the implicit assumption that the increased water vapour concentration significantly increases the effective IR opacity of the atmosphere in the 15 micron band. That is not the case. The IR absorption by water vapour is practically saturated which makes the effective opacity, a harmonic mean, insensitive to such variation.”: Absorption of thermal infrared by water vapor from 8 to 15 microns is not saturated.
Indeed. There is little absorption in that spectral region. That means that that spectral region carries the bulk of the radiative heat loss, not the adjacent CO2 and water vapour bands. It also means that that heat loss is little affected by changes in the adjacent spectral regions, be it of CO2 or water vapour.
Your link says “By contrast, the greenhouse gases capture 70-85% of the energy in upgoing thermal radiation emitted from the Earth surface.”
The electromagnetic energy that is absorbed is either lost to collisions with other molecules or radiated away. How can it be “captured”? If it isn’t “captured” then how does it contribute to opacity?
Perhaps I’m missing something, but it looks to me like the combined CO2 and H2O absorption features are indeed saturated, while the passband between the two major absorption features never will be saturated, because there can only be absorption as the wings of the 8 and 15 micron features encroach on the relatively transparent region. What am I missing?
Increased absorption will not have any effect even if it occurred. The energy is eventually radiated somewhere else within the atmosphere/surface skin thermodynamic system or lost to space. Since the average of all emission paths is upwards to space, the energy is eventually lost.
The models fail to accurately predict the effect of clouds on the system. They solve equations for convective clouds but parameterize advective cloud fronts, which, as one can see daily at https://epic.gsfc.nasa.gov/, are the largest contributor to cloud cover.
The albedo of clouds is 0.5 to 0.8, ocean is 0.06, land is 0.12. Because clouds randomly cover about 55% of the planet, the Earth’s albedo averages 0.3. The average radiative temperature of the Earth is about the temperature halfway up the troposphere, interestingly about the level of the average cloud tops. That equation is T = 278(1-Albedo)^0.25.
For Earth’s A = 0.3 the temperature is 254 K; if A = 0.06 for ocean, that temperature is 274 K; 0.12 for land gives 269 K; and say 0.7 for a cloud-covered planet gives 206 K, which is much colder. You can add approximately 35 C to those temperatures to get the resultant surface temperature for those albedo extremes, assuming a lapse rate similar to today’s.
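A quick check of that arithmetic, using the stated relation with a constant of 278.6 (which the comment rounds to 278) and the commenter's assumed +35 C lapse-rate offset:

```python
# Quick check of the albedo arithmetic above. T_eff = 278.6 * (1 - A)**0.25 K,
# where 278.6 = (S0 / (4 * sigma))**0.25 with S0 ~ 1361 W/m^2 (the comment rounds
# this to 278); the +35 K offset is the commenter's assumed lapse-rate adjustment.
albedos = {
    "Earth (about 55% cloud)": 0.30,
    "ocean": 0.06,
    "land": 0.12,
    "fully cloud-covered": 0.70,
}

for name, A in albedos.items():
    t_eff = 278.6 * (1.0 - A) ** 0.25
    print(f"{name:24s} A = {A:.2f}  T_eff = {t_eff:5.1f} K  T_surf ~ {t_eff + 35.0:5.1f} K")
```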
The point is that increased SST causes more evaporation….causes more clouds….causes more reflection of incoming sunlight….causes surface temperature to drop. You can quite easily do a little spreadsheet and calculate how little cloud increase offsets a couple of Watts of CO2 forcing. And you probably won’t even get to the bit about how 1 sq.M of 1 degree warmer SST can make about 10 times as many sq.M of cloud cover…
The inescapable conclusion is:
CLOUDS and the vapor pressure of water at any given SST are what control the planet’s temperature. CO2 is only a minor player that minorly affects the elevation at which clouds form….
The effect of clouds and more generally what controls the (relative) humidity is one of the ‘known unknowns’.
There are things to like and things not to like in this post, IMO.
A thing to like is ECS likely below 1.8 lower CMIP6 model bound. There are several ways to derive an estimate without relying on CMIP6.
Guy Callendar’s 1938 curve implies 1.67 (method and calculation in essay Sensitive Uncertainty in ebook Blowing Smoke).
Observational energy budget methods (e.g. Lewis and Curry) produce about 1.55-1.65 depending on comparison time periods used.
And Bode feedback analysis: a no-feedbacks, CO2-only value of ~1.2 (more follows below) together with the AR4 “ECS likely 3” implies a Bode feedback sum f of about 0.65, with 0.5 from water vapor feedback directly derivable from the AR4 text, so an implicit additive 0.15 from cloud feedback. Despite his claims otherwise, Dessler’s 2010 paper actually showed zero cloud feedback. And the fact that the AR4 models produce about half of observed rainfall implies their modeled WVF is ~2x too high, meaning a ‘correct’ Bode f is only about 0.25. Plug that into Lindzen’s curve based on 1.2 and ECS is about 1.65.
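Reading those numbers as the usual Bode amplification ECS = ECS_zf/(1-f), a short sketch with the values quoted in the comment (treating f = 0.65 as the feedback sum 0.5 + 0.15 is an interpretation of the comment, not a derivation):

```python
# Bode-style amplification, ECS = ECS_zf / (1 - f), evaluated with the
# zero-feedback value (~1.2 K) and the feedback sums quoted in the comment.
ecs_zero_feedback = 1.2  # K per doubling of CO2, no-feedbacks value cited above

for label, f in [("AR4-implied (0.5 WVF + 0.15 cloud)", 0.65),
                 ("halved WVF, zero cloud feedback", 0.25)]:
    ecs = ecs_zero_feedback / (1.0 - f)
    print(f"f = {f:.2f} ({label}): ECS ~ {ecs:.1f} K")
```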
There are strong reasons to think ECS must be greater than about 1.2, therefore I do NOT like arguments for something about 0.5 as here.
First we can compute from first principles and accepted experimentally determined inputs that the zero feedback (zf) value is something between 1.1 and 1.2. Two examples. (1) Judith Curry had several posts estimating this value in the early days of Climate Etc. (2) Using Monckton’s ‘famous’ equation, the zf value computes to 1.16. Lindzen rounds up to 1.2 in his several papers from 2010-2012 such as his UK Parliament briefing in 2011.
Second, we know observationally that WVF must be positive rather than strongly negative (if 0.5 were to be correct) so ECS must be above ~1.2. The observation is a simple one to make. When I visit the ‘deserts’ (ok Phoenix and Palm Springs) the night time temperature drops sharply in summer because of the relatively low humidity. Here on the beach in Fort Lauderdale it doesn’t because of the relatively high humidity. Ergo, WVF must be globally net positive since 71% of Earth’s surface is water.
WVF is positive until you hit the point where clouds form. Then you quite suddenly go from a few watts positive to hundreds of watts negative in the localized area where clouds are forming. Colloquially hot and muggy to cloudy in an hour or two.
Clouds are not so simple. Depends on type and altitude. For example, high thin cirrus warms rather strongly since the ice crystals are transparent to incoming solar radiation but opaque to outgoing IR. A part of Lindzen’s adaptive iris hypothesis.
Dessler’s paper using two years of TOA IR global satellite observations comparing clear sky to all sky (all with whatever cloud) showed a net zero cloud feedback overall, an almost perfect scatter centered on zero, with an r^2 of 0.02. Were there a positive cloud feedback, there should have been a clear negative TOA IR relationship with all sky having less observed TOA IR and clear sky more, with a significant r^2.
Not to be argumentative, Rud, yes clouds are not so simple, but two strongly opposing feedbacks can easily be “net zero” at some balance point, and lead one to the false yet “verifiable” conclusion that these feedbacks are unimportant. Such is the case with the “a cloud reflects 50 to 90% of sunlight back into space” and “clouds are a net +ve feedback” (or slightly -ve or zero depending on source). The latter conclusion actually is the result of 2 offsetting variables and starting at the point where those variables balance each other.
Regarding Cirrus, if you add a cirrus cloud to the sky, it still reflects more of the 1000 watts/ sq.M of sunlight at that altitude away during the day, than it retains longwave at night. Again starting at a point where everything is in balance causes errors in the conclusion.
You might be right. Dunno. But no less than a weather expert far more renowned than you, Richard Lindzen, prof emeritus at MIT, says you are just wrong about cirrus. Read his papers. Learn.
Lindzen is a thermodynamicist, and one of the best there ever has been with atmospheric phenomena. IMHO I’m not saying anything he would disagree with. His Iris effect includes the effect of improved IR transmission to outer space as Cirrus clouds decline which is to be expected. His mechanism for decline of Cirrus as humidity increases is a hard one to confirm.
On a homeostatic water planet, net feedbacks are more likely to be negative than positive. Thus at this point in the Holocene, ECS between 0.0 and 1.2 degrees C per doubling of essential plant food in our air is not only possible but probable, IMO.
Besides clouds, the GCMs ignore or downplay such non-radiative, negative feedbacks as convective and evaporative cooling.
It is not true that climate models ‘ignore’ (your words) convection or evaporation. The problem is they cannot ‘model’ them, so they are parameterized, which drags in the attribution problem. See my post here from years ago, “The Trouble with Global Climate Models”, for the specifics of which you are apparently unaware.
Please show your homework math supporting your claims. I explained mine (with references), which you have not refuted. And more generally, to the ‘IR saturation’ crowd above: try reading essay ‘Sensitive Uncertainties’ for a layman’s explanation of why that argument fails as a matter of fundamental physics. Heck, I even gave you all the Modtran spectral analysis.
I tire of these contentless, nonsense ‘no ECS’ true denier opinions here at WUWT. As has been said, “You are certainly entitled to your own opinions, but NOT to your own facts.” Those just are.
Parameterization is totally worthless, pretend input. Meaningless, as the values are whatever the GIGO computer gamer wants them to be.
As for evaporative and convective cooling homework, I refer you to the late, great Father of Hurricanology Bill Gray, a pdf:
A Personal View of the Advancements and Failures in Tropical Meteorology
Or consider this discussion of what GCMs ignore in a Royal Society paper:
https://royalsocietypublishing.org/doi/10.1098/rsta.2015.0146#d3e1120
From past climate, it can be approximated by relating equilibrium warming to radiative forcing. In global climate models (GCMs), climate sensitivity is normally not tuned, but it results from aggregating or parametrizing small-scale processes and ignoring long-term ones (red ellipse in figure 2). GCM-based estimated of TCR and ECS ignore certain processes even within the time frames they consider (grey bars within the red ellipse)…
Next to the evaluation of the full-blown feedback processes in the models, a key challenge is to study the limits of using the linear framework discussed in this paper. How far can one push a GCM into being very sensitive or very insensitive to explore the range of plausible magnitudes of feedbacks and their rate of change? Do cloud, convection and aerosol parametrizations bias GCMs to be too sensitive, or not sensitive enough? For which purposes can we safely use the effective radiative forcing estimates of the linear regression methods? Over which time frames is the assumption of a constant λ justified? Can GCMs serve as a perfect model test bed for simple frameworks, as shown in figure 4? For which climatic base states, feedbacks and their interaction would it be wise to include nonlinear descriptions? For which temperatures, forcing scenarios, and locations does the rate of change of the feedback term matter? When is using a certain fit to estimate the global or regional temperature response justified? How does the coupling of ocean, atmosphere and sea ice determine the evolution of surface temperature patterns enhancing different feedback processes? How can we understand uncertainty propagation in nonlinear systems, with correlated uncertainties, and using computationally expensive climate models? In the light of all these questions, we argue to further explore various uses of feedback frameworks rather than squeezing them into a one-fits-all-concept, and to carefully explore the applicability and predictive capacity of each concept for a range of purposes.
It’s possible that increased CO2 even cools parts of the planet, as has been suggested for the hottest moist tropical areas and Antarctica. There has been no warming at the South Pole since record-keeping began there in 1958, yet given how dry the air is there, that’s where the GHE of more CO2 should be most pronounced.
A single global number for ECS must combine the GHE for polar, temperate and tropical zones. The effect appears greatest in the north polar zone, yet there are few actual stations there with long continuous records, so the temperature “data” there are largely made up.
Which is probably appropriate since Monckton has shown frequently that he pays little attention to significant figures in his calculations.
I have disagreed with him several times over at Judith’s, and he has always courteously replied. Look it up. Never over significant figures (precision), because we both know we are deep into fairly uncertain approximations. Nice try. FAIL.
And he has been courteous to me in the past — when I have agreed with him. That sounds like someone who does not take criticism well, even when it is meant constructively. Perhaps it is the phase of the moon.
In response to the needlessly but characteristically churlish Mr Spencer, our paper is constantly updated with the latest mainstream climatological values. The mean doubled-CO2 radiative forcing in the CMIP6 models for AR6 is 3.52 Watts per square meter. The product of that forcing and the Planck sensitivity parameter 0.299 K/W/m2 is 1.053 degrees, which is thus the current midrange estimate of reference sensitivity to doubled CO2.
Our approach to rounding is straightforward. All the input data are available to at least 1 decimal place of precision, but we perform all the calculations to extended precision of 16 decimal places. The bottom-line result, like all our calculations, is displayed in the computer output table to three decimal places, but is rounded to one decimal place when reported. All the calculations are then run through a Monte Carlo distribution to derive the error bars.
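In outline, the midrange arithmetic and the Monte Carlo step described above look like this; the central values are those quoted in the comment, while the spreads are illustrative placeholders rather than the paper's actual input uncertainties:

```python
# Outline of the reference-sensitivity arithmetic and the Monte Carlo step
# described above. Central values are those quoted in the comment; the spreads
# are illustrative placeholders, not the paper's actual input uncertainties.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
forcing_2xco2 = rng.normal(3.52, 0.2, n)   # W/m^2; spread assumed for illustration
planck_param = rng.normal(0.299, 0.01, n)  # K per W/m^2; spread assumed for illustration

ref_sensitivity = forcing_2xco2 * planck_param
lo, hi = np.percentile(ref_sensitivity, [2.5, 97.5])
print(f"midrange: {3.52 * 0.299:.2f} K")
print(f"Monte Carlo: mean {ref_sensitivity.mean():.2f} K, 95% range {lo:.2f} - {hi:.2f} K")
```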
Whether Mr Spencer likes it or not, that approach is standard, and is unobjectionable to all but vacuous nit-pickers. It has not met with any objection from reviewers on grounds of failure to comply with some kindergarten rule of thumb or another about rounding. One of our collaborators is a Professor of Statistics, who kindly keeps us straight on these matters.
Surely you must be confusing me with one of the resident trolls!
You claim, “that approach is standard, and is unobjectionable to all but vacuous nit-pickers.”
Au contraire! It is commonly accepted that the number of significant figures implies the precision of a particular measurement and that in the absence of a formal uncertainty evaluation such as a propagation-of-error calculation, or at least the calculation of a standard deviation of a primary measurement, the uncertainty is +/- 5 units to the right of the last significant figure. If the formal uncertainty is known, then it should be stated along with the mean value and if it is a standard deviation, whether it represents 1-sigma or 2-sigma.
It is also commonly held that a final answer, involving multiplication/division or exponentiation, should retain no more significant figures than the least precise number, albeit sometimes an extra guard digit is justified if it is an intermediate answer to be used for further calculations. Using 16 digits for calculation is both unnecessary and unwarranted, although one has little control over how the computer performs floating-point calculations, unless you are using a language that allows an optional double-precision.
My primary objection is that you frequently present ‘albedo’ as having a value of 0.3. The commonly accepted implication is that the value you provide is 0.3 +/- 0.05, and any resultant calculation should show no more than one significant figure. If albedo is known to 2 or 3 significant figures, then it should be shown as 0.30 or 0.300, respectively, with a notation that the zeroes are indeed significant. Although, I doubt that the continuously varying cloud cover allows any certainty above one significant figure.
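To put a number on the objection: an albedo read as 0.3 +/- 0.05 propagates to a spread of several kelvin in the effective temperature (a sketch using the same T = 278.6(1-A)^0.25 relation discussed elsewhere in the thread; the +/- 0.05 is the implied one-significant-figure uncertainty, not a measured value):

```python
# Propagate the one-significant-figure reading of albedo (0.3 +/- 0.05) through
# the effective-temperature relation T_eff = 278.6 * (1 - A)**0.25 used in the thread.
def t_eff(albedo):
    return 278.6 * (1.0 - albedo) ** 0.25

for A in (0.25, 0.30, 0.35):
    print(f"A = {A:.2f}: T_eff = {t_eff(A):.1f} K")
# The ~9 K spread is why the number of trailing zeroes (0.3 vs 0.30 vs 0.300) matters.
```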
It is you who is being defensive and churlish rather than admit you are being careless.
The problem with statisticians and mathematicians in general is that their behavior towards calculations is often cavalier because, it would seem, they so frequently work with exact numbers, and don’t live in the same sphere as metrologists, physicists, and chemists.
In any event, Rice University hardly qualifies as kindergarten!
http://www.ruf.rice.edu/~kekule/SignificantFigureRules1.pdf
If you want climate models to be closer to reality, remove CO2 from them.
Remember 95+% of CO2 is completely natural, and happens every year. Humans only add about 24ppm total, which is miniscule compared to natural CO2, and even then CO2 is a trace gas which cannot possibly have any effect on climate.
This message somehow must be socially amplified until it gets to western political leaders like Biden. Take CO2 out of the equation. There is not only zero evidence that it plays any measurable role in any climate statistic, but the idea that it might do so, given the numbers you state, is laughable. When will this madness get its required response? This is not, or should not be, a political issue. It is environmental fanatics against reality!
CO2 causes the largest error since the ASSUMPTION of its effect is much too large. Will anyone admit it?
I see one problem in general with climate models that causes them to generally overstate ECS. There are enough climate models close enough to their average, in terms of stating or indicating ECS, to give somewhat of a consensus, and that consensus overstates ECS. The problem I see is that they are selected and/or tuned to hindcast the past, especially the 30 years before their hindcast-forecast transition years. For the CMIP3, CMIP5 and CMIP6 models, the 30-year periods before their hindcast-forecast transition years had part of their warming caused by the upswing phase of multidecadal oscillations, which these models did not consider. For CMIP5 models, the last year of hindcasting (“historical”) is 2005 and the first year of forecasting (“projections”) is 2006. According to some Fourier work I did on HadCRUT3, about .2 degree C of the warming reported by HadCRUT3 was from a periodic cycle whose periodic nature was sustained for two cycles, from a peak in 1877 to a peak in 2005, with a period of 64 years and a peak-to-peak amplitude of .218 degree C. Because the CMIP5 models were done without consideration of multidecadal oscillations, they hindcasted about .2 degree C more warming from positive feedbacks to effects of increase of manmade greenhouse gases (especially the water vapor feedback) than actually happened. Overdoing the water vapor feedback causes models to show a greatly excessive tropical upper-tropospheric warming hotspot. And lately I have noticed that denial of the Atlantic Multidecadal Oscillation is being favored by some climate scientists, including Dr. Michael Mann.
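The adjustment described is straightforward to sketch; the temperature series below is a synthetic stand-in, while the 64-year period, 0.218 C peak-to-peak amplitude and 1877/2005 peak timing are the commenter's figures:

```python
# Sketch of removing a 64-year oscillation (peaks 1877 and 2005, 0.218 C
# peak-to-peak, per the comment) before attributing the remaining trend.
# The "observed" temperature series below is a synthetic stand-in.
import numpy as np

years = np.arange(1850, 2021)
period, peak_year, p2p = 64.0, 2005.0, 0.218
oscillation = 0.5 * p2p * np.cos(2.0 * np.pi * (years - peak_year) / period)

temps = 0.008 * (years - 1850) + oscillation   # made-up series, deg C
adjusted = temps - oscillation                 # what is left for GHG/other attribution

raw = temps[years == 2005][0] - temps[years == 1975][0]
adj = adjusted[years == 2005][0] - adjusted[years == 1975][0]
print(f"1975-2005 warming, raw: {raw:.2f} C, oscillation removed: {adj:.2f} C")
```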
The hindcast assumes too much influence by CO2.
“Suppose I have N models that all predict a different ECS value. Mother Nature is difficult, but she is not malicious: there is only one “true” value of ECS in the real world”
Why does ECS have to be a fixed – one true value? Can’t it be a variable- assuming all other potential forcings aren’t changing? If it’s a variable, perhaps all those numbers are right given certain conditions. Which would mean something fundamental is missing from all the models.
OK, I claim not to understand most of the conversations here (other than about forests and wood energy), but I do try to follow the discussions and have learned a lot from this site.
The climate models fail to converge because the climastrologists refuse to even consider that CO2 climate sensitivity could be less than zero.
The effective emission height is ~5.105 km.
7 – 13 µm: >280 K (near-surface).
>17 µm: ~260 – ~240 K (~5km in the troposphere).
13 – 17 µm: ~220 K (near the tropopause).
TOA (emission height) is that altitude at which the atmosphere effectively becomes transparent to any given wavelength of radiation… and for some wavelengths, TOA is very near the surface. The emission profile is equivalent to a blackbody with a temperature of 255 K, and thus an effective emission height of 5.105 km.
Combine that 255 K effective emission height temperature with the lapse rate to get surface temperature, and you’ll find there is no “greenhouse effect”, thus no CAGW.
The lapse rate is said to average ~6.5 K / km. 6.5 K / km * 5.105 km = 33.1825 K. That is not the ‘greenhouse effect’, that is the tropospheric lapse rate. The climate loons have conflated the two. Polyatomic molecules such as CO2 and H2O reduce the adiabatic lapse rate, not increase it (for example: dry adiabatic lapse rate: ~9.81 K / km; humid adiabatic lapse rate: ~3.5 to ~6.5 K / km).
9.81 K / km * 5.105 km = 50.08005 K dry adiabatic lapse rate (due to homonuclear diatomics and monoatomics… see below), which would give a surface temperature of 255 + 50.08005 = 305.08005 K. Sans CO2, that number would be even higher (on the order of 314 K).
Water vapor (primarily) reduces that to 272.8675 K – 288.1825 K, depending upon humidity. Other polyatomics (such as CO2) also contribute to the cooling, to a much lesser extent. The higher the concentration of polyatomics, the more vertical the lapse rate, the cooler the surface. Also remember that the atmosphere is stable as long as the actual lapse rate is less than the adiabatic lapse rate… and a greater concentration of polyatomic molecules reduces the adiabatic lapse rate… thus convection increases.
This occurs because polyatomics transit more energy from surface to upper atmosphere by dint of their higher specific and latent heat capacity, which acts (all else held constant) to reduce the temperature differential between different altitudes. One would think this would cause an upper-atmospheric ‘hot-spot’, and indeed the climastrologists originally claimed that such a hot-spot would be a signature of CAGW… but that hot-spot was never found, and for a very good reason: the increased atmospheric CO2 concentration conveys more energy from surface to upper atmosphere (an upper-atmosphere warming effect) while also more effectively radiatively cooling the upper atmosphere… and the radiative cooling effect far overwhelms the convective warming effect. That’s why the upper atmosphere has exhibited a long-term and dramatic cooling.
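For anyone who wants to check the lapse-rate arithmetic a couple of paragraphs up, here is a minimal Python sketch. It simply combines the 255 K effective emission temperature and 5.105 km emission height quoted above with the assumed lapse-rate values stated in the comment; it is not a model of anything.

```python
# Minimal sketch of the lapse-rate arithmetic above (values as stated in the comment,
# not authoritative): surface temperature = emission-height temperature + lapse rate * height.

T_EMISSION_K = 255.0      # effective emission temperature (K)
H_EMISSION_KM = 5.105     # effective emission height (km)

lapse_rates_K_per_km = {
    "average (~6.5 K/km)": 6.5,
    "dry adiabatic (~9.81 K/km)": 9.81,
    "humid adiabatic, low end (~3.5 K/km)": 3.5,
}

for name, gamma in lapse_rates_K_per_km.items():
    t_surface = T_EMISSION_K + gamma * H_EMISSION_KM
    print(f"{name}: surface temperature ~ {t_surface:.2f} K")
# average  -> ~288.18 K
# dry      -> ~305.08 K
# humid lo -> ~272.87 K
```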
Near-surface extinction depth is ~10.4 m at current CO2 concentration, and a doubling of CO2 concentration would reduce that to ~9.7 m. The troposphere is essentially opaque to 13.98352 µm to 15.98352 µm (to account for the absorption shoulders of CO2) radiation. In fact, it’s opaque to that radiation right up to ~15 – 20 km (TOA for that wavelength of radiation). That’s where the emission height of CO2 is.
Tropospheric thermalization by CO2 is effectively saturated… but upper atmosphere radiative cooling by CO2 is not saturated.
Thus, tropospheric CO2 thermalization only serves to increase CAPE (Convective Available Potential Energy), regardless of CO2’s atmospheric concentration. This increases convection, which is a cooling process (it transits energy from surface to upper atmosphere, then radiatively emits it… more CO2 will transit and emit more energy).
It is the homonuclear diatomics and monoatomics which are the ‘greenhouse gases’… they can receive energy via conduction by contacting the surface just as the polyatomics do, they can convect just as the polyatomics do, but in order to emit their energy, they must have their net-zero magnetic dipole moment perturbed via collision. But in the upper atmosphere, collisional processes take place far less often due to the reduced atmospheric density, and any collision is more likely to thermalize the energy, rather than emit it.
Homonuclear diatomic vibrational mode quantum states are relatively long-lived and meta-stable, and so the majority of that energy is thermalized via v-t (vibrational-translational) collisional processes.
Monoatomics have no vibrational mode quantum states (so they cannot contribute to thermalization warming), but their lower specific energy acts to convectively transit less energy per parcel of air than would more-complex molecules such as polyatomics.
Remember that radiative emission is the sole means by which our planet can shed energy to space. Remember also that CO2 is the primary radiative coolant in the upper atmosphere.
Thus, in an atmosphere consisting solely or primarily of homonuclear diatomics, radiative emission to space would be reduced. The upper atmosphere would be warmer (because it could not as effectively radiatively cool), thus the lower atmosphere would have less buoyancy, thus convection would decrease, thus the surface would be warmer.
In addition, because polyatomic molecules make the lapse rate more vertical, a paucity of polyatomic molecules would make the lapse rate less vertical (less energy would be transited from surface to upper atmosphere per parcel of air, temperature differential between altitudes would be greater, thus the lapse rate would force surface temperature to be much higher).
On a world without water (ie: an atmosphere consisting solely or primarily of homonuclear diatomics), the surface would be much warmer. On our mostly-water world, a decrease in atmospheric CO2 content would cause a similar effect, which would be compensated by an increase in atmospheric water vapor content (the warming due to lower CO2 atmospheric content would cause more evaporation of water, humid air is more buoyant than dry air, so convection would increase, thus the warming due to less atmospheric CO2 concentration would be compensated by cooling due to more water vapor content), which would again make the lapse rate more vertical by transiting more energy from surface to upper atmosphere.
Polyatomic molecules act to increase thermodynamic coupling between heat source (in this case, the surface) and heat sink (in this case, space) (as compared to homonuclear diatomics and monoatomics). They thermalize energy in the lower atmosphere, convect it to the upper atmosphere, and radiatively emit it.
Homonuclear diatomics act to thermalize energy picked up via conduction with the surface (and energy picked up via collision, and energy picked up via the odd collisionally-induced radiative absorption), but cannot as effectively radiatively emit that energy. They also act to reduce the energy content of any parcel of air (as compared to a similar parcel of air with the homonuclear diatomics replaced with polyatomics), thus less energy is convectively transited from surface to upper atmosphere.
Monoatomics act to reduce the energy content of any parcel of air (as compared to a similar parcel of air with the monoatomics replaced with homonuclear diatomics or polyatomics), thus less energy is convectively transited from surface to upper atmosphere.
If CO2 were such a great ‘heat-trapping’ gas, it would be used as a filler gas between double-pane windows… it’s not, because window manufacturers know monoatomics with low specific heat capacity reduce thermodynamic coupling between heat source and heat sink.
Nice write up. Thanks.
There was an experiment done that showed that using CO2 as a filler in double glazed windows, gave INCREASED energy transfer compared to normal air.
Fred, do you have a link for this?
Was there increased flow both ways?
RE: CO2 glazing: https://escholarship.org/uc/item/6sn232sk
Link was on my old computer… sorry.
https://principia-scientific.com/industry-experts-co2-worse-useless-trapping-heatdelaying-cooling-2/
Very good LOL, 8 out of 10, add in cloud albedo and you’ll get 10.
Great write up indeed. I’m sure someone has estimated the amount of ground water pulled to the surface for irrigation use… now that might change broad regional surface temps.
I’m sure it does, given the high latent heat of vaporization of water.
In fact, if the climate loons were so awfully worried about global warming (whatever the cause), they’d be advocating for hitting the heat where it hurts using a method that’s effective… in other words, they’d be advocating for water misters on tall buildings in cities.
That’d quash the UHI effect, cool the air in cities so people wouldn’t suffer as much during heat waves, and reduce cooling costs.
Where I grew up, we had something similar… except the buildings were fir trees. My friend’s dad had a landscaping business, so I purchased a couple thousand feet of small plastic tubing, T-fittings and small misters, and Ty-Rap’d (my dad is an electrician, so he had plenty of Ty-Raps, the precursor to zip-ties) them in the trees around the northern and western perimeter of the wind-break (in summer, wind generally blew from the West-Northwest). If it got too hot, I’d go open the hose faucet, the trees would get wet, the evaporative cooling made a nice cool breeze.
It needn’t be a large flow of water, either.
Let us assume each building emits 20 L (5.283 US gallons) of water in a fine mist per hour, and let us assume that the misters automatically kick on at 300 K (80.33 F, 26.85 C).
At 300 K, ∆H_vap = 2437300 J/kg or 677.02777778 Wh/L
So each building would be providing cooling equivalent to running a 13540 W A/C unit.
That doesn’t sound like a lot, until you get a thousand buildings doing the same, amounting to 13.54 MW of cooling equivalent.
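A quick sketch of that cooling arithmetic, taking the comment’s assumed figures (20 L/h per building, ∆H_vap at 300 K, and a nominal 1,000 buildings) at face value:

```python
# Sketch of the evaporative-cooling arithmetic above; inputs are the comment's
# assumed values, not measured data.

LITRES_PER_HOUR = 20.0          # water misted per building (L/h), assumed
H_VAP_J_PER_KG = 2_437_300.0    # latent heat of vaporization of water at ~300 K (J/kg)
KG_PER_LITRE = 1.0              # approximate density of water
N_BUILDINGS = 1_000             # assumed number of participating buildings

energy_per_hour_J = LITRES_PER_HOUR * KG_PER_LITRE * H_VAP_J_PER_KG
cooling_power_W = energy_per_hour_J / 3600.0   # joules per hour -> watts

print(f"Per building: ~{cooling_power_W:,.0f} W of cooling equivalent")          # ~13,540 W
print(f"{N_BUILDINGS} buildings: ~{cooling_power_W * N_BUILDINGS / 1e6:.2f} MW")  # ~13.54 MW
```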
The people on the street wouldn’t notice water misting down on them, the mist would be fine enough to almost immediately evaporate… all the people would notice is cool air flowing down the sides of the buildings.
But let’s instead destroy society, destroy capitalism, impoverish hundreds of millions of people, force hundreds of millions of people into government dependency, destroy our sovereignty, enslave ourselves to an unaccountable and unelected cabal of communist UN profiteers… yeah, that’s what the liberal climate loons are shilling for. LOL
Yes, more simply stated, the radiating gases allow energy to be distributed (through absorption/collision) into the atmosphere to molecules that are already gravitationally distributed. That is the reason for the lapse rate and the entire reason the surface is 33 C warmer than it would be without those radiating gases. There is no greenhouse effect.
Downwelling radiation is real, but that energy is simply being moved from one location to another one. When all emission paths are averaged out, you find the average is upward. Again, due to the distribution of mass. The probability of absorption is higher where there is more mass so the path downward is shorter. This means the atmosphere is supporting an outward, constant energy flux where some amount of that energy temporarily energizes the gases, but the energy **effectively** goes in one direction, to space.
Computing some warming effect from looking only at downwelling energy is a mental trap that many skeptics have accepted. In reality, it’s all part of the same outward flux when averaged out over the trillions of emission events.
I’m not so sure this claimed ‘backradiation’ is of sufficient radiant intensity to even have much of an effect.
If ‘backradiation’ from CO2 atmospheric emission causes CAGW, where is that ‘backradiation’ coming from?
Near-surface extinction depth is ~10.4 m at current CO2 concentration, and a doubling of CO2 concentration would reduce that to ~9.7 m. The troposphere is essentially opaque to 13.98352 µm to 15.98352 µm (to account for the absorption shoulders of CO2) radiation. In fact, it’s opaque to that radiation right up to ~15 – 20 km (TOA for that wavelength of radiation). That’s where the effective emission height of CO2 is.
So this ‘backradiation’ must be coming from that ultra-thin ~10.4 m layer of atmosphere immediately above the surface, in order to even reach the surface… except that’s the same layer of atmosphere which is thermalizing nearly all of that 13.98352 µm to 15.98352 µm (to account for the absorption shoulders of CO2) radiation.
CO2’s absorption of IR in the troposphere only has the effect of thermalizing that radiation and thus increasing CAPE (Convective Available Potential Energy), which increases convection of air to the upper atmosphere (carrying with it the latent and specific heat of polyatomic molecules… more polyatomic molecules will carry more energy and will more readily emit that energy in the upper atmosphere), which is a cooling process.
Mean free path length for radiation decreases roughly exponentially with decreasing altitude (and vice versa), because air density changes roughly exponentially with altitude; therefore the net vector for radiation in the 13.98352 µm to 15.98352 µm band is upward. So the majority of ‘backradiation’ which could possibly reach the surface would be from that very thin layer of atmosphere within ~10.4 m of the surface, and the great majority of that energy is being thermalized and convected. So where is this ‘backradiation’ energy coming from that’s going to cause catastrophic anthropogenic global warming, especially considering that the maximum able to be absorbed by CO2 is 8.1688523 W/sr-m^2, and the maximum able to be absorbed by anthropogenic CO2 is 0.29652933849 W/sr-m^2?
At 287.64 K (the latest stated average temperature of Earth) and an emissivity of 0.93643 (calculated from NASA’s ISCCP program from data collected 1983-2004), at a photon wavelength of 14.98352 µm (the primary spectral absorption wavelength of CO2), the spectral radiance is only 5.43523 W / m^2 / sr / µm (integrated radiance from 13.98352 µm – 15.98352 µm of 10.8773 W/sr-m^2 to fully take into account the absorption shoulders of CO2).
Thus the maximum that CO2 could absorb in the troposphere would be 10.8773 W/sr-m^2, if all CO2 were in the CO2{v20(0)} vibrational mode quantum state.
While the Boltzmann Factor calculates that 10.816% of CO2 are excited in one of its {v2} vibrational mode quantum states at 288 K, the Maxwell-Boltzmann Speed Distribution Function shows that ~24.9% are excited. This is higher than the Boltzmann Factor calculated for CO2 because faster molecules collide more often, weighting the reaction cross-section more toward the higher end.
Thus that drops to 8.1688523 W/sr-m^2 able to be absorbed. Remember, molecules which are already vibrationally excited cannot absorb radiation with energy equivalent to the vibrational mode quantum state energy at which they are already excited. That radiation passes the vibrationally excited molecule by.
That’s for all CO2, natural and anthropogenic… anthropogenic CO2 accounts for ~3.63% (per IPCC AR4) of total CO2 flux, thus anthropogenic CO2 can only absorb 0.29652933849 W/sr-m^2.
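As a check on the radiance chain above, here is a rough Python sketch: Planck spectral radiance at 287.64 K scaled by the quoted emissivity, integrated over the 13.98352–15.98352 µm band, then reduced by the quoted 24.9% excited fraction and the 3.63% anthropogenic share. All of those inputs are the comment’s figures, not independently verified.

```python
import math

# Sketch of the radiance arithmetic in the comment above. The emissivity, band limits,
# excited-state fraction (24.9%) and anthropogenic share (3.63%) are the comment's
# figures, not independently verified here.

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def planck_spectral_radiance(wavelength_m: float, temperature_k: float) -> float:
    """Blackbody spectral radiance B_lambda in W / (m^2 sr m)."""
    x = H * C / (wavelength_m * KB * temperature_k)
    return (2.0 * H * C**2) / (wavelength_m**5 * (math.exp(x) - 1.0))

T = 287.64            # stated average surface temperature (K)
EMISSIVITY = 0.93643  # stated surface emissivity
LAM_LO, LAM_HI = 13.98352e-6, 15.98352e-6  # band edges (m)

# Integrate over the band with a simple midpoint rule.
n = 2000
dlam = (LAM_HI - LAM_LO) / n
band_radiance = 0.0
for i in range(n):
    lam_mid = LAM_LO + (i + 0.5) * dlam
    band_radiance += EMISSIVITY * planck_spectral_radiance(lam_mid, T) * dlam

absorbable = band_radiance * (1.0 - 0.249)       # exclude already-excited CO2 (comment's 24.9%)
anthro = absorbable * 0.0363                     # comment's anthropogenic share

print(f"Band radiance 13.98-15.98 um: ~{band_radiance:.2f} W/(sr m^2)")  # ~10.9
print(f"Absorbable by CO2:            ~{absorbable:.2f} W/(sr m^2)")     # ~8.2
print(f"Anthropogenic share (3.63%):  ~{anthro:.3f} W/(sr m^2)")         # ~0.30
```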
CO2 absorbs ~50% within 1 meter, thus anthropogenic CO2 will absorb 0.148264669245 W/m^2 in the first meter, and the remainder 0.148264669245 W/m^2 within the next ~9 meters.
CO2 absorbs this radiation regardless of any increase in atmospheric concentration… extinction depth is ~10.4 m at 14.98352 µm wavelength. A doubling of CO2 atmospheric concentration would reduce that to ~9.7 m. Thus any tropospheric thermalization which would occur at a higher CO2 atmospheric concentration is already taking place at the current CO2 atmospheric concentration. Thus the net effect of CO2 thermalization is an increase in CAPE (Convective Available Potential Energy), which increases convective transport to the upper atmosphere, which is a cooling process.
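And a small Beer-Lambert sketch of that saturation point, using the comment’s ~50%-absorbed-per-metre figure as the only input (treated here as an assumed effective absorption coefficient, not a line-by-line result):

```python
import math

# Beer-Lambert sketch using the comment's figure that ~50% of the 15 um band
# is absorbed within the first metre at current CO2 levels. The per-metre fraction
# is the comment's number, not an independently derived absorption coefficient.

ABSORBED_PER_METRE = 0.50
k = -math.log(1.0 - ABSORBED_PER_METRE)   # effective absorption coefficient (1/m)

for depth_m in (1, 2, 5, 10.4):
    transmitted = math.exp(-k * depth_m)
    print(f"{depth_m:5.1f} m: {100 * (1 - transmitted):6.2f} % absorbed")
# Within ~10 m essentially all of the band is extinguished, which is the sense in
# which the lower troposphere is described above as 'saturated' for these wavelengths.
```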
Tropospheric thermalization is effectively saturated. A doubling of CO2 doesn’t appreciably reduce extinction depth at the band centered around 14.98352 µm. But upper-atmospheric radiative shedding of energy to space is not saturated… and more CO2 molecules will cause more upper-atmospheric cooling, increasing buoyancy of lower-atmosphere air and thus increasing convection. IOW, polyatomic molecules (such as CO2) increase thermodynamic coupling between heat source (in this case, the surface) and heat sink (in this case, space) due to the fact that they have higher specific heat capacity than the monoatomics (Ar) and homonuclear diatomics (N2, O2).
An increased CO2 atmospheric concentration will emit more radiation in the upper atmosphere (simply because there are more molecules absorbing energy in the lower atmosphere, more molecules convectively transporting energy to the upper atmosphere and advectively transporting energy poleward, and more molecules capable of emitting radiation in the upper atmosphere), thus more radiation will be emitted to space, and that represents a loss of energy to the system known as ‘Earth’, which is a cooling process.
This illustrates what I’m stating:
http://imgur.com/Zxq4KlB.png
That’s a MODTRAN plot at 287.64 K for 415 ppm vs. 830 ppm CO2 for 13.98352 µm to 15.98352 µm radiation (to fully account for the absorption shoulders of CO2). It assumes no water vapor, no CH4, no O3 present. Note that the troposphere plots aren’t appreciably different, whereas the 100 km plots (ie: at the edge of space) are appreciably different. IOW, a doubling of CO2 atmospheric concentration doesn’t appreciably change the upward or downward radiative flux in the troposphere (because the extinction depth for those wavelengths at 415 and 830 ppm is low enough that it’s thermalizing nearly all of that radiation, the net effect being an increase in CAPE (Convective Available Potential Energy), which increases convection, which is a cooling process), but it does appreciably change how much energy is exiting the system known as ‘Earth’, and that represents a cooling process. One can clearly see the effect of CO2 upon energy emission to space, as delineated by the shoulders of the emission spectrum of CO2 in the 100 km plots.

That cools the upper atmosphere, and since the lapse rate is ‘anchored’ at TOA and since the heat transfer equation must (eventually) balance, that means the lower atmosphere must cool toward the temperature of the upper atmosphere (because a higher concentration of polyatomic molecules shifts the lapse rate vertically, *and* radiatively cools the upper atmosphere faster than the lower atmosphere can convectively warm it), and thus the surface must *cool* with an increasing CO2 atmospheric concentration. This is what is taking place; we’re just working through the humongous thermal capacity of the planet, which warmed due to a now-ended long series of stronger-than-usual solar cycles (the Modern Grand Maximum), but it is cooling (in fact, it’s projected that we’re slipping into a Solar Grand Minimum which will rival the Dalton Minimum, and may rival the Maunder Minimum).
Hey Fred, maybe you can help me out here… I’m attempting to calculate the adiabatic lapse rate for CO2 alone, but I’m getting a number that seems too high. I suspect it’s because I’ve not accounted for the low atmospheric concentration of CO2, but my brain’s not working right now, so I can’t figure out how to do that. Need sleep.
I did it once before, but apparently I didn’t keep the calculations.
Your input would be appreciated.
g = gravitational acceleration
c_p = Specific Heat Capacity at Constant Pressure
R = gas constant = 8.31 J K-1 mol-1
M = mean molecular mass = 44.0095 amu x 0.001 kg mol-1 = 0.0440095 kg mol-1
UNITS:
cp: (kg m2 s-2 K-1 mole-1 )/(kg mole-1) = m2 s-2 K-1
dT/dz: g/cp – m s-2 /(m2 s-2 K-1) = K m-1
c_p = (9/2) R / M = (9/2 × 8.31 J K-1 mol-1) / 0.0440095 kg mol-1 = 849.70290505459048614503686704007 m2 s-2 K-1
dT/dz = -g/c_p
9.80665 m s-2 / 849.70290505459048614503686704007 m2 s-2 K-1 = 0.01154126923853456344431073672951 K m-1
0.01154126923853456344431073672951 K m-1 * 1000 m = 11.54126923853456344431073672951 K km-1
Like I said, that seems high. Perhaps because I’ve not accounted for the low concentration of CO2 in the atmosphere.
I have a vague recollection of contrasting the current mean molecular mass of air to that of CO2… I’ll work on it tomorrow. Time to sleep.
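Here is the same Γ = g/c_p arithmetic as a small Python sketch, with dry air added for comparison; the 9/2·R heat-capacity approximation and the constants are the ones used in the comment above, and the air figures are standard textbook values.

```python
# Sketch of the dry adiabatic lapse rate calculation above (Gamma = g / c_p).
# The 9/2*R approximation for CO2's molar heat capacity is the comment's choice;
# dry air is included for comparison using standard values.

G = 9.80665          # gravitational acceleration (m/s^2)
R = 8.31             # gas constant as used in the comment (J / (K mol))

def dry_lapse_rate_K_per_km(molar_cp_J_per_mol_K: float, molar_mass_kg_per_mol: float) -> float:
    c_p = molar_cp_J_per_mol_K / molar_mass_kg_per_mol   # specific heat, J / (kg K)
    return G / c_p * 1000.0                              # K per km

co2 = dry_lapse_rate_K_per_km(4.5 * R, 0.0440095)   # ~11.54 K/km, as computed above
air = dry_lapse_rate_K_per_km(3.5 * R, 0.0289644)   # ~9.77 K/km, close to the usual ~9.8 K/km

print(f"Pure CO2 dry lapse rate: ~{co2:.2f} K/km")
print(f"Dry air lapse rate:      ~{air:.2f} K/km")
# For a trace constituent like CO2 (~0.04% of air), the bulk lapse rate is set by the
# mixture's mass-weighted c_p, so the pure-CO2 number doesn't apply directly to the atmosphere.
```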
Much more sensible to just calculate the change in lapse rate due to, say, doubling or quadrupling the current CO2 concentration.
You are also using numbers that are far more “accurate” than is logical… stick to a couple of dps.
Apart from that, your calcs look reasonable.
Yeah, that’s my ultimate goal, calculate for a 1,000,000 ppm CO2 concentration atmosphere, then back-engineer to whatever ppm concentration I want to use.
Actually, we could do this with every gas in the atmosphere, and come up with a much more exact answer than the climastrologists do… in large part, it is the lapse rate, after all, which determines surface temperature.
As for the decimal points, I’ve gotten in arguments with pedantic liberal kooks who claimed that even a fourth decimal point rounding means the entire answer is incorrect, so I’ve gotten into the habit of making the answer as exact as possible. It drives the liberal kooks mad, because they have nothing to nitpick. LOL
A correction / addendum (bolded):
Thus that drops to 8.1688523 W/sr-m^2 able to be absorbed. Remember, molecules which are already vibrationally excited cannot absorb radiation with energy equivalent to the vibrational mode quantum state energy at which they are already excited. That radiation passes the vibrationally excited molecule by **(unless there are degenerate vibrational mode quantum states… there are three for CO2)**:
CO2{v21(1)}: 667.4 cm-1, 14.98352 µm
CO2{v22(2)}: 667.8 cm-1, 14.97454 µm
CO2{v23(3)}: 668.1 cm-1, 14.96782 µm
In such a case, CO2 would simply absorb / thermalize energy a maximum of three times, increasing the convective cooling effect of CO2 by increasing CAPE (Convective Available Potential Energy).
I was a statistics and economics dual major during my university years, and the one thing that climate modellers have no choice but to conveniently forget is that every time they run the model they lose a degree of freedom (i.e. the equivalent of one data point no longer being accessible).
Even if an individual forecaster is scrupulous enough to account for these trial runs, when the academic community is only presenting the ‘best’ models, they aren’t accounting for every ‘failed’ model that was used by others and didn’t yield results good enough to publish in a journal.
So this leaves cherry-picked results that have essentially zero degrees of freedom (realistically, negative by the thousands) given the paucity of accurate temperature data, and no way to ‘re-run’ the experiment.
The estimates for error bars rely on the degrees of freedom, and once you get close to zero, your sigmas are virtually infinitely large (i.e. the results are worthless).
This is precisely the same reason that economic forecasts can be wildly incorrect.
“Climate Science” is a new field that doesn’t have the hundreds of years of failure behind it that Economic Forecasting does, and the data comes out at a far slower pace than most economic data.
So, what you are saying is, climastrologists are a bunch of venal liars using “statistics” to justify their right to utter false prophecy so as to mislead the general public, rob them blind, and enslave their children, all done from the elevated heights of a religious platform no sinner may criticise, because everything they say is too complex for mere mortals to understand?
Because that’s what Economic Forecasting has delivered so far…
The whole concept of ECS is based on the concept of an “average global temperature”. Averaging temperature readings is physically meaningless since temperature is not an extensive property: it cannot be added and thus cannot be averaged.
An average temperature is a statistic, but it is physically meaningless. So yes, the whole effort is pointless (from a scientific point of view). But we all know that this is not about science; it is a dishonest pseudo-scientific dressing for a political agenda.
Greg, that is one way to look at it, but I think erroneously. The obviously varying locational actual temperatures are washed out by computing the anomaly for each location. It is only the location anomalies that are ‘averaged’, and the ECS refers to the expected change in that anomaly average per CO2 doubling. That is mathematically sound. The confusion arises because everybody refers to ECS as if it were a new equilibrium ‘temperature’, when it is only indirectly related to one.
I disagree. You can no more “average” anomalies than you can absolutes. Uncertainties in the absolute temperatures grow as you increase the number of data points being “averaged”. Those uncertainties carry over to the anomalies as well. You can’t decrease uncertainty by subtracting two values. Therefore the uncertainties in the “average” grow as you add more data values.
It’s just like laying multiple boards end-to-end, each with its own uncertainty value. The more boards you add, the higher the overall uncertainty becomes in the final length. It’s no different with temperatures laid end-to-end, i.e. added together. It doesn’t take long for the uncertainties to become larger than the differences in the “average” you are trying to discern.
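For what it’s worth, the boards-end-to-end picture can be sketched with root-sum-square propagation for a sum of independent measurements; the board length and per-board uncertainty below are purely illustrative numbers, not anything from the post.

```python
import math

# Sketch of the boards-laid-end-to-end analogy: root-sum-square propagation for a
# sum of independent measurements. Assumes independent, uncorrelated uncertainties;
# the board length and sigma are illustrative, hypothetical values.

BOARD_LENGTH_M = 2.0     # hypothetical board length (m)
SIGMA_M = 0.005          # hypothetical per-board uncertainty (5 mm)

for n_boards in (1, 4, 16, 64):
    total_length = n_boards * BOARD_LENGTH_M
    total_sigma = math.sqrt(n_boards) * SIGMA_M   # RSS of n independent uncertainties
    print(f"{n_boards:3d} boards: {total_length:7.2f} m +/- {total_sigma:.3f} m")
# The uncertainty of the summed length grows as sqrt(N) * sigma.
```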
What is this model fetish and why does it waste so much WUWT time? There’s not a single greenhouse gas climate model with half-decent empirical tests, validations or attempted falsifications. Climate modelling is nearly all fake science. The probability that all the CMIP6 climate models are incorrect is 1. Or 100%.
“Das ist nicht nur nicht richtig; es ist nicht einmal falsch!” (“That is not only not right; it is not even wrong!”)
– Wolfgang Pauli
MP, it is because only in a future predicted by climate models are there any climate concerns worth worrying about. The CAGW castle is built on model sand. And the IPCC therefore does its level best to pretend the many billions spent on them was good money rather than bad, lest the whole climate enterprise collapse.
The beauty of CMIP6 is that AR6 has dug itself an even bigger hole, harder to climb out of.
Willis explained the problem with models very well.
Even were scientists able to model the effect of clouds ‘correctly’, the models would still be wrong, and would still be biased towards man-made climate change. Political groups, E-NGOs and billionaires are biased to see man-made climate change. Forty or more years of climate models, and the testimony of climate modellers, tell us so. Tens of billions of dollars spent trying to prove the sky pixies say so. Only scientists who comply with the funding bias are funded.
Without the scientific method, the models will always be very wrong. That’s why skeptics should concentrate on empirical testing, validation, and falsification. Without it, there is no science. Our intelligence could double, our AI and machine-learning algorithms could be orders of magnitude better – but the models will still be wrong, and no amount of logical refutation can make models better. Only empirical falsification can help. It’s the only thing we have to stop bad science.
“Modelling” is what the enemy uses to bullshit us common folk. If we come right out and call them bullshitters, we are rude denialists reverting to ad hominems to discredit our betters.
Reprinting their Holy Writ, and throwing it before us swine, we all get a go at analysing said Word of Gawd as well as each of us is able or willing. “Death by a thousand paper cuts” I think someone called it? So we spend a lot of time on their models. It is the only thing they offer us swine, they have nothing else… pure fear porn.
That said, now you also know why every second troll writes such beautiful prose on the impropriety of non-experts commenting on the Holy Writ as delivered by the Holy Profits of climaskatology, it really gets their gall, their Montessori education was all about “consensus Truth”, “perception is reality” and “believe the science”.
Not that I have much to offer, mind, I’m just here for the awesome stuff the people all over this site teach me daily.
Earth’s climate is totally controlled by varying amounts of dimming sulfur dioxide aerosols in the atmosphere, primarily from random volcanic eruptions, and as such can NEVER be modeled, unless the modelers provide solutions for controlling temperatures by adjusting atmospheric SO2 aerosol levels.
Those who maintain that the ECS for CO2 is zero are the only correct voices on this blog.
That requires repeating and highlighting…
Those who maintain that the ECS for CO2 is zero are the only correct voices on this blog.
In order: sunshine, clouds, evaporation and condensation of rain, net ground/sky IR, convection… aerosols are somewhere about 1/10 of the smallest of these… just sayin’.
DMacKenzie:
The sun is not a variable, but the amount of sunshine reaching the Earth’s surface is what drives Earth’s temperatures, and that amount is controlled by the varying amounts of dimming SO2 aerosols in the atmosphere. Increase them and it cools down. Decrease them and it warms up.
http://www.skepticmedpublishers.com/article-in-press-journal-of-earth-science-and-climatic-change/
Title: “The Problem with Climate Models”
There is not a singular problem with climate models.
The problems with climate models are manifold, and they are all entangled, wrapped around each other like a proud dung beetle rolling up his ever-growing ball of manure prize. The models and their problems are like that ball of manure: a massive junk ensemble, and it stinks. It’s an ensemble the CMIP’ers like to publish in an “Emperor’s New Clothes” fallacy. Only those with divine climate training and an appreciation of junk science can see and smell the goodness.
Ed identifies one of those problems here: convergence. It is really a lack of convergence, better seen as a divergence. The more models the Climate Dowsing community produces, the wider the spread between the upper and lower ECS estimates gets, rather than converging as n increases.
Another massive problem with the climate models is that they are iterative input-value error-propagation machines, producing statistical error that quickly overwhelms any conclusions that can be drawn from the output. This is the problem with the GCMs developed by Pat Frank. I think of this problem as a sticky glue that holds the junk model outputs together and makes unwinding the ensemble, the turd ball the CMIP’ers create, an intractable problem. Better to just toss them all in the junk bin than to try to parse and tease out nuggets of goodness from them. Junk all the way down.
Another major problem with the too-hot-running climate models, at least in their current implementations, is that they all predict the mid-tropospheric tropical hot spot. No one in the climate modeling community wants to talk about this elephant in the room anymore. They all hand-wave it away and try to ignore it, as they have for over 20 years. The failure to observe this predicted feature after almost 30 years of looking would, in other science disciplines, have been the basis either for tossing out the hypothesis completely, or for recognizing that the strong water vapor feedback part of GHG theory is likely incorrect and adopting a weak GHG theory. But then the modellers’ political paymasters would be unhappy.
So those three problems, divergence, iterative error propagation, and the failure of a major prediction, tell us that the CMIP3/5/6 ensembles are merely junk science. That the climate modelers all assemble every 4-6 years, put the climate model outputs together into an ensemble and proudly roll out their predicted ECS range makes them not much more than dung beetles. At least the dung beetle knows that what he has is manure.
Excellent analogy.
The different computer codes for calculating stellar evolution, developed by groups in various countries, yield the same results for the same evolutionary phases, which also agree well with the observations. Such convergence is a hallmark of the progress of the insights on which the models are based, through advancement of understanding of the underlying physics and testing against reality, and is manifest in many of the sciences and techniques where they are used.
The IPCC alters the data to support their thesis and their models without any effort to understand the underlying physics. They are a political organization who produce propaganda and have zero credibility in the science community.
Why is there no research on producing experimental evidence quantifying CO2 climate sensitivity?
Climate science is in its absolute infancy; it is extremely complex, and the complexity ensures it will always be changing. Any individual or organization that asserts they have the ability to model or predict the climate is ignorant of climate science. The science community needs more data and fewer computer models.
“Such models give ECS values between 0.5C and 0.7C. Not something to be really concerned about.”
– Ed Zuiderwijk, PhD
In the many decades that I have been involved in the “global warming” / “climate change”/ “wilder weather”/ “we’re all gonna die from false fabricated climate BS” / debate, I’ve watched rationally calculated ECS decline by almost an order of magnitude, from almost 10C to about 1C/(2xCO2).
My own calculations of ECS range from Plus1C/doubling to Minus1C/doubling, based on the ASSUMPTION that increasing atmospheric CO2 drives temperature, which is probably FALSE – unless the future can cause the past, which is extremely improbable in our current space-time continuum. I suggest that ECS, at best, should be treated as an “Imaginary Number”. 🙂
The following post is from 2013.
https://wattsupwiththat.com/2013/09/19/uh-oh-its-models-all-the-way-down/#comment-1108144
[To spare our hardworking moderators, I’ve deleted links – see the original post for links.]
[excerpt]
One could also say “an infinitude of worthless climate models”, programmed by “an infinitude of dyslexic climate modellers”, yielding “an infinitude of exaggerated global warming predictions” (er, sorry, “projections”).
John said above:
David Appell had the first comment on Judith’s blog entry, which is entitled “Consensus Denialism.” Here is what he said about Judith:
“The distressing thing is how some people are all ready to attack models, instead of helping make them better.”
OK David, here are some helpful suggested steps to make the climate models better:
1. Adjust the Surface Temperature (ST) database downward by about 0.05 to 0.07C per decade, back to about 1940, to correct for a probable strong warming bias in the ST data.
2. Decrease the ECS (sensitivity) to about 1/10 of its current level, to between 0.0 and 0.5C. If ECS exists, it is much smaller than current estimates.
3. Eliminate the fabricated aerosol data to enable the false high ECS values used in the climate models. The aerosol data was always provably false (Google “DV Hoyt” ClimateAudit).
4. Include a strong natural cyclical variation based on either the PDO (~60 years) or the Gleissberg Cycle (~90 years) – see which one fits the ST data best.
Other than that, the models are great! Actually no, not so great – the models have probably “put the cart before the horse” – we know that the only clear signal in the data is that CO2 LAGS temperature (in time) at all measured time scales, from a lag of about 9 months in the modern database to about 800 years in the ice core records – so the concept of “climate sensitivity to CO2” (ECS) may be incorrect, and the reality may be “CO2 sensitivity to temperature”. See work by me and Murry Salby (and Hermann Harde and Ed Berry).
BTW, this does not preclude the possibility that increases in atmospheric CO2 over the past ~century are primarily due to human combustion of fossil fuels, but there are other plausible causes – (Google “mass balance argument” Engelbeen Courtney).
So good luck with those models David – hope this helps to make them better. 🙂
Regards to all, Allan