by Rud Istvan,
EPA administrator Pruitt wants to “Red Team” the catastrophic anthropogenic global warming (CAGW) consensus best reflected in the IPCC assessment reports (AR). At its core, CAGW rests on just three propositions:
1. CO2 is a ‘greenhouse’ gas retarding radiative cooling. This should not be in serious dispute since Tyndall experimentally proved it in 1859.
2. The Earth is warming. Although the details are in dispute because of temperature data quality problems and ‘adjustments’, the general fact is not. The Earth has been intermittently warming since the Little Ice Age (LIA) ended. For example, the last Thames Ice Fair was in 1814.
3. CO2 and its knock-on effects caused the recent warming, and climate models (such as the CMIP5 archive for IPCC AR5) predict this will continue to catastrophic levels. This is an extremely dubious proposition.
This guest post addresses proposition 3. It does so in a short sound bite ‘abstract’ useful for debating warmunists, and then in a typical WUWT full climate science guest post. It is a modest Red Team contribution.
Sound bite ‘abstract’
Climate models have run hot since 2000. Except for the now fully cooled 2015-16 El Niño blip, there has been no warming this century except by Karlization or (newly) Mearsation. Yet this century comprises about 35% of the total increase in atmospheric CO2 since 1958 (Keeling Curve). The climate models went wrong on attribution. The warming of ~1920-1945 is essentially indistinguishable from that of ~1975-2000. AR4 figure SPM.4 said the earlier period was mostly natural (because there was not enough change in CO2). The CMIP5 archive assumes the latter period is mostly CO2 (and other GHGs). That assumption is fatally flawed; natural variation did not magically cease in 1975.
Fully documented post
CMIP5 climate models have run hot since before 2000, and the divergence of CMIP5 from observations is highly statistically significant. Details are in Dr. Christy’s 29 March 2017 Congressional written testimony (available online), from which Figure 2 provides sufficient up-to-date evidence.
This divergence is rooted in the attribution problem between natural and anthropogenic warming. It is unavoidably inherent in CMIP5 models for a very basic reason.
To properly model essential climate features like convection cells (thunderstorms), a grid cell needs to be less than 4km on a side. The finest resolution in CMIP5 is 110km at the equator; the typical resolution is 280km. This is because halving grid size requires an order of magnitude more computation. So adequately simulating such atmospheric processes from first principles is computationally intractable. Details are in my 8/9/2015 WUWT guest post “The Trouble with Climate Models”.
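As a back-of-the-envelope illustration, here is that cost arithmetic in Python. The exponents are the generic CFL-type scaling argument (double the horizontal points in x and y, halve the time step), not the cost model of any particular GCM:

```python
# Rough cost scaling for refining the horizontal grid spacing of a 3-D
# atmospheric model. Halving dx doubles the point count in both x and y
# (4x), and the CFL stability condition forces the time step to halve
# (another 2x), giving ~8x per halving before any vertical refinement.
# These exponents are the generic argument, not numbers from a specific GCM.

def relative_cost(refinement: float) -> float:
    """Cost multiplier for refining horizontal grid spacing by `refinement`."""
    horizontal = refinement ** 2   # points in x and y
    time_steps = refinement        # CFL: dt shrinks with dx
    return horizontal * time_steps

print(relative_cost(2))         # halve dx once: 8x
print(relative_cost(280 / 4))   # 280 km -> 4 km: ~343,000x
```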
The solution is to parameterize such processes (for example, put a number on the probability of how many thunderstorms per grid cell per time step – a conceptual rather than actual example, as parameters are a bit more complicated). Parameters are obviously just guesses. So they are tuned to give the best hindcast compared to observations; for CMIP5 the ‘experimental design’ was from yearend 2005 back three decades to 1975.[1]
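A toy illustration of the tuning problem (everything here is invented for exposition; real GCM parameterizations and tuning targets are far more elaborate):

```python
import numpy as np

# Toy hindcast tuning: a one-parameter "model" is fitted to observations
# that contain BOTH a forced trend and a natural oscillation. The tuned
# parameter soaks up the natural component too -- the attribution problem
# in miniature. All series here are invented for illustration.

rng = np.random.default_rng(0)
years = np.arange(1975, 2006)
forcing = 0.01 * (years - 1975)                            # hypothetical forced trend
natural = 0.1 * np.sin(2 * np.pi * (years - 1975) / 60)    # hypothetical natural cycle
observed = forcing + natural + rng.normal(0, 0.02, years.size)

# Model: temperature = k * forcing. Least-squares fit of k over the
# 1975-2005 tuning window.
k = np.dot(forcing, observed) / np.dot(forcing, forcing)
print(f"tuned k = {k:.2f} (true forced value is 1.00)")
# k > 1 here: the upswing of the natural cycle is attributed to forcing.
```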
Parameter tuning implicitly drags the attribution problem into CMIP5.
Dr. Richard Lindzen, MIT professor emeritus, first made the observation that the period of warming from ~1920-1945 is essentially indistinguishable from that from ~1975-2000. This is readily apparent visually, and is also true statistically.
IPCC AR4 WG1 figure SPM.4 and the accompanying text make clear that the earlier period (circled in blue) was mostly natural; there simply was insufficient change in atmospheric CO2 to explain the rise in temperature without natural variability. A portion of figure SPM.4 (readily available via the IPCC) is reproduced below as sufficient evidence.
The IPCC intent of AR4 WG1 figure SPM.4 was to convince policy makers that the second warming (circled in red) had to be AGW. But that IPCC logic is fatally flawed. The SPM did not tell policy makers about model parameter tuning, which clearly drags natural variation into the model parameter tuning period assumed by IPCC to be AGW. So the warming is falsely attributed only to CO2. Note also the subtle “cheat” in Fig SPM.4 of models using only natural forcings. The issue is not guessed natural forcings. We do not know why natural variation occurs, only that it does (there is no model of ENSO periodicity, for example). Natural temperature variation, not ‘forcings’, is the proper statement of the attribution problem. The AR5 WG1 SPM makes IPCC’s erroneous and unscientific belief explicit:
§D.3 This evidence for human influence has grown since AR4. It is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century. [Bold mine]
Natural variation did NOT stop in the mid-20th century. And that is why CMIP5 models now run hot.
[1] Taylor et al., “An Overview of CMIP5 and the Experiment Design,” BAMS 93: 485–498 (2012).
Has anyone seen this yet?
Found it on GreenieWatch.
Their model is able to explain the temperature of rocky celestial bodies and show that atmospheric composition isn’t much of a factor.
A key entailment from the model is that the atmospheric ‘greenhouse effect’ currently viewed as a radiative phenomenon is in fact an adiabatic (pressure-induced) thermal enhancement analogous to compression heating and independent of atmospheric composition.
https://www.omicsonline.org/open-access/New-Insights-on-the-Physical-Nature-of-the-Atmospheric-Greenhouse-Effect-Deduced-from-an-Empirical-Planetary-Temperature-Model.pdf
Very interesting; this is what we’ve been screaming about in regards to Venus ever since Hansen created the sci-fi meme that Venus was hot because of CO2. You’d think composition would still have a measurable effect, especially when water vapor is present.
And maybe the water vapor is exactly why Earth is slightly warmer than the best of their models predict. This paper is a big fat nail ready to seal the climate cult coffin forever.
The first reference is to Volokin and ReLlez, which has been withdrawn. Nothing new here, anyway; this was all published previously. Calling these curves “models” is a stretch. Four coefficients, only five planets. The whole thing is out on a limb. A twig.
It was withdrawn because the authors used pseudonyms to get it published. Now how is the paper wrong? Can you show how their four coefficient models fail to predict the temperature of any rocky planet or satellite?
At your link (the bodies considered are Venus, Earth, Moon, Mars, Titan and Triton):
“The average near-surface atmospheric densities (ρ, kg m-3) of planetary bodies were calculated from reported means of total atmospheric pressure (P), molar mass (M, kg mol-1) and temperature (Ts) using the Ideal Gas Law, i.e….”
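For reference, that quoted step is just the ideal gas law rearranged for density. A minimal check with standard textbook values for Earth (my inputs, not the paper’s):

```python
# Near-surface air density from the ideal gas law, rho = P*M/(R*T),
# as in the quoted passage. Input values are standard textbook figures
# for Earth, not taken from the paper.

R = 8.314          # J/(mol K), universal gas constant
P = 101_325        # Pa, mean sea-level pressure
M = 0.02896        # kg/mol, molar mass of dry air
T = 288.0          # K, mean surface temperature

rho = P * M / (R * T)
print(f"rho = {rho:.3f} kg/m^3")   # ~1.225 kg/m^3, the standard value
```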
It’s my opinion that when the basis of an approach relies so much on the ideal gas law, it’s going nowhere.
Long-term natural cycles run hot and cold too. Gee, I wonder how they relate?
“greenhouse” gas retarding radiative cooling…. experimentally proved
Proved in isolation, then extrapolated to global extremes. This is why the models, “hypotheses”, demonstrate low skill in reproducing, forecasting, let alone predicting, temperature or climate change in the past, present, and future.
The system is incompletely and, in fact, insufficiently characterized, and unwieldy. Hence the need for a scientific philosophy and method where accuracy is inversely proportional to the time and space offsets from an established frame of reference (i.e. a scientific logical domain).
Why the continued use of this chart?
The hind-casting of Mount Pinatubo makes the models look more accurate than they really are. The apparent correctness is fake. A better graph would be the difference of the mean model from observed temperature over time. Another approach would be to just smooth out the hind-casting dip. I’ve seen this chart a hundred times and it disappoints me each time.
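A minimal sketch of that suggested difference plot, with synthetic placeholder arrays standing in for the model mean and observations:

```python
import numpy as np
import matplotlib.pyplot as plt

# Sketch of the suggested diagnostic: plot (model mean - observations)
# over time instead of overlaying the two curves. Arrays are synthetic
# placeholders; real use would load a CMIP5 ensemble mean and an
# observational series on a common baseline.

years = np.arange(1979, 2017)
model_mean = 0.03 * (years - 1979)    # placeholder model trend
observed = 0.015 * (years - 1979)     # placeholder observed trend

plt.plot(years, model_mean - observed)
plt.axhline(0, color="k", lw=0.5)
plt.xlabel("Year")
plt.ylabel("Model minus observed (K)")
plt.title("Divergence shows directly; hindcast dips cancel out")
plt.show()
```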
Interesting point about the average grid resolution of the models being 280 km or so. There are two issues arising that I can see. I speak from experience gained learning the hard way when first using CFD software to model flow around a ship hull (I am a naval architect by profession).
Firstly, that distance is way larger than the size of tropical storms and cumulus cloud formations, which are the mechanisms that actually deliver much of the heat-water vapour LHV energy transfer in the atmosphere. The ‘models’ cannot therefore be ‘modelling’ the actual phenomena using the Navier-Stokes equations, so must just be using a ‘fudge factor’ driven simplistic formula to mimic nature, fitting her with a fig leaf and then asserting the size and details of what cannot be seen. The model I was using only looked at ‘potential flow’ (i.e. flow of a frictionless fluid) and ignored turbulence, as it occurs at much smaller scales than required to look at wave wake formation and the curvature of the hull shape (calculation times would not double but increase by 2-3 orders of magnitude).
Secondly, there is a potential ‘false convergence’ problem with a model mesh size that is larger than the fluctuations of the phenomena you are ‘modelling’, in my case the wave wake formed by the ship moving through the water. My mesh size was too coarse to properly model the steeper bow waves, and the code converged to a false solution that yielded a much higher bow wave. Sound familiar?
I was able to pick it very quickly because I had photographs of the vessel operating at service speed and knew more or less what the result should be (this was a job to model new bulbous bows on the vessel).
Clearly, committed CAGWarmistas would have just laughed me out of the place and concluded that the photographs needed some touching up to reflect virtual reality.
Add that to the so-called surface temperature record raw data simply being utterly unfit for purpose due to the UHI effect and all the ‘data from a bucket’ biases, and there is not much left of the CAGWarmista case. I don’t generally defer to the lawyers but gee whiz, I really would like to see a top shelf prosecutor have these swine in the witness box.
Your ‘fudge factors’ are the GCM parameterizations that then get tuned to hindcast.
Regarding tuning, IPCC uses fine words to describe an ugly practice:
«When initialized with states close to the observations, models ‘drift’ towards their imperfect climatology (an estimate of the mean climate), leading to biases in the simulations that depend on the forecast time. The time scale of the drift in the atmosphere and upper ocean is, in most cases, a few years. Biases can be largely removed using empirical techniques a posteriori. The bias correction or adjustment linearly corrects for model drift. The approach assumes that the model bias is stable over the prediction period (from 1960 onward in the CMIP5 experiment). This might not be the case if, for instance, the predicted temperature trend differs from the observed trend. It is important to note that the systematic errors illustrated here are common to both decadal prediction systems and climate-change projections. The bias adjustment itself is another important source of uncertainty in climate predictions. There may be nonlinear relationships between the mean state and the anomalies, that are neglected in linear bias adjustment techniques.»
(Ref: Contribution from Working Group I to the fifth assessment report by IPCC; 11.2.3 Prediction Quality; 11.2.3.1 Decadal Prediction Experiments)
I wish politicians could understand the meaning of that section.
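For the curious, here is a minimal sketch of the lead-time-dependent bias adjustment the quoted passage describes. The array shapes and data are illustrative assumptions, not CMIP5 output:

```python
import numpy as np

# Lead-time-dependent bias adjustment as described in the quoted IPCC
# text: average the model-minus-observation error at each forecast lead
# time over many hindcast start dates, then subtract it from a new
# prediction. Shapes and data are invented for illustration.

n_starts, n_leads = 30, 10
rng = np.random.default_rng(1)
drift = np.linspace(0, 0.5, n_leads)   # model drifts warm with lead time
hindcasts = rng.normal(0, 0.1, (n_starts, n_leads)) + drift
observations = rng.normal(0, 0.1, (n_starts, n_leads))

mean_bias = (hindcasts - observations).mean(axis=0)   # one value per lead time
prediction = hindcasts[-1]                            # stand-in for a new forecast
adjusted = prediction - mean_bias

# The adjustment assumes the bias is stable over the prediction period --
# exactly the caveat the IPCC text flags.
print(np.round(mean_bias, 2))
```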
It seems clear then that the ‘hindcasting’ is either not run very well, or not run far enough back that it has to deal with inconvenient truths. That said, hindcasting to what? ‘Adjusted’ surface data records using an unfit-for-purpose set of instruments? It is one thing to let people know what their local temperature is where they live, UHI effects and all, but quite another to use that to accurately estimate a global value to a precision such as to reveal meaningful variations. Maybe it’s the tuning that is the problem. That said, a coarse-mesh model has its fundamental inadequacies in modelling nature, and the result is still fudge factors. The issue is how accurate the ‘fudge factors’ / ‘GCM parameterizations’ are.
“Dr. Richard Lindzen, MIT professor emeritus, first made the observation that the period of warming from ~1920-1945 is essentially indistinguishable from that from ~1975-2000. This is readily apparent visually, and is also true statistically.”
This fact means that the warming to date provides no evidence to support AGW, no evidence to support the greenhouse effect and no evidence to suggest any future climate trend.
In fact, it does more than that. The hiatus suggests that there is no greenhouse effect. While we cannot provide evidence that the later warming is natural, AGW or a mixture, the balance of evidence is that GHGs have no effect.
The entire global warming hypothesis is based on what evidence?
“To properly model essential climate features like convection cells (thunderstorms), a grid cell needs to be less than 4km on a side. “
According to this post, the main reason behind the fact that we cannot predict future climate by computer modeling is that there is weather on the Earth. I find this rather funny.
“According to this post, the main reason behind the fact that we cannot predict future climate by computer modeling is that there is weather on the Earth.”
The reason they don’t use “a grid cell less than 4km on a side” in their models, is because of a lack of computing power.
Warmists could try a different kind of climate modeling, except current GCMs suit climate alarmists for at least 2 reasons: (1) they claim their models are simulations; (2) projections from the models have long tails reaching as far as 9ºC/10ºC climate sensitivity for the hotter running models. The curve may have a very low probability there, but they still claim there’s a ‘chance’ that the climate could send us to hell. Indeed an alarmist visited this site a few weeks ago and said just that. I think Lindzen makes this point too in his essay in Climate Change: The Facts (2015). I make the point as well: they have to pose an existential threat to humanity, otherwise we wouldn’t even remember their names. We are very sensitive to threats against our existence.
A famous Nobel Prize winner said that when experimental observations contradict theory, the theory is wrong.
The great man: Richard Feynman.
Nitpick: Dick Lindzen was not the first person to note the similarity of the early 20th century warming to the 1976-97 period. Anyone with eyes could see it, even before the end of the second warming.
From page 55 of Sound and Fury, published in 1992 and written by yr. obt. svt. in 1991:
“In the overall record, the most prominent feature of the past century is a rapid warm-up of 0.5°C (0.9°F) that took place between 1915 and 1930. Because that increase occurred so early, it could hardly have anything to do with the enhanced greenhouse. Thus, the “natural” variability–how much climate changes with or without human influence–of the Northern Hemisphere must be on the order of at least 0.4°C…”
And I am 100% certain several others had previously written or said something similar. We really don’t know who first noted this, but it is rather obvious.
PJM
Thanks for the note. Have the temperature models been changing that early warming to minimize or eliminate it?
I am referring to the manipulations of the raw data (BEST, HADCRUT, etc.)
Indeed it is obvious! I made the point half a dozen years ago independently, noting that the slope of the warming trend ~1975-2000 was the same as the one 60 years earlier, indicating that any CO2 warming was modest indeed. I’m not a climate scientist but felt I’d joined the club!
“Nitpick: Dick Lindzen was not the first person to note the similarity of the early 20th century warming to the 1976-97 period. Anyone with eyes could see it, even before the end of the second warming.”
This Hansen 1999 U.S. surface temperature chart shows a much better, and truer view of the time period in question, than does the bastardized Hockey Stick chart:
The C-AGW argument requires a chain of arguments, all of which must be true or they don’t get the full C, A, G and W effect. Nobody should dispute that CO2 impedes long wave IR escaping into space. Energy from the Sun is mostly received at the surface. Energy is radiated into space from the top of the atmosphere. The more IR-opaque gas there is in between, the larger the temperature delta necessary to transport IR from the surface to the top (excluding transport via evaporation/condensation).
It is not clear to me that most people are careful to distinguish the effect of CO2 in dry air versus CO2 in water-vapor-saturated air, but it appears that some people handle this appropriately. As such, I am inclined to think that CO2 doubling has little effect in the humid tropics, but a larger one in dry 40° latitude desert areas.
In any case, the direct effect of CO2 doubling is not large enough to account for observed warming + adjustment effects, so a large amplification feedback is invoked.
The claim is that a weak CO2 effect controls a strong feedback for a significant impact in GW.
To make this argument, it must be claimed that there is no natural variability, which would also get amplified by strong feedbacks, as the proposed feedback mechanism is influenced by temperature and is not exclusive to CO2. Hence the miracle hockey stick, which wiped away natural variation, making Mann the hero/savior/Christ figure to the C-AGW zealots.
To further deny the existence of natural variability, the sun must be nearly constant. By narrowly looking at TSI over a limited period, and by averting their gaze from other impacts, the conclusion is that the Sun cannot be responsible for observed temperature between 1970-2000. Of course, TSI may not be nearly as steady as claimed over longer periods, and besides, would not small TSI variations also be amplified? Then the AGW theorists further ignore non-TSI solar effects, i.e. the stuff that might impact cloud formation etc., be it Svensmark’s cosmic rays or Corbyn’s SLAT.
Another part of the C-AGW claim of proof is climate models. Here on this site, many people denigrate computer models, and even the C-AGW side is beginning to downplay the contribution of models. I would suggest not blaming the computer, which only does what it is programmed to do. What needs to be evaluated is whether the “model” correctly models important effects, including which effects are not modeled at all, and of course: are the input parameters correct? Otherwise, the model output will correctly assert that adding heat and insulation leads to warming, but not necessarily that CO2 is the climate control knob.
Joe, collisions are much more frequent than photon emissions by excited CO2. This can then lead to the relatively huge number of water vapour molecules sharing the heat and radiating it to space. Where is this taken into account?
That, I think, is really the nub. I have a longish post coming up on it, if it ever gets through the reviewing queue.
As per quantum mechanics, there is a probability of absorption, a probability of emission, a probability of direction, and a probability of multiple reabsorptions. Those probabilities are not functionally additive, and combining them mathematically has been beyond me since I hit 60.
It’s not apparent to me why too low CO2 C.S. implies negative feedback. Not unless 1) you exclude all other factors apart from man-made warming, 2) you conveniently discover that your GCMs have long tails predicting a tiny chance of catastrophe. Those are both political factors. Scientists here underestimate the role of politics in how it channels climate alarmists to present their argument in the form that it’s made.
Much like Linear No Threshold (and the demonization of anyone who contradicted it) seem, to me, an essential part of the anti-nuclear movement. Greens will always pose their strongest arguments as existential threats to humanity.
That’s not what Gavin says. There is a new article on Bloomberg…
“Gavin Schmidt, director of NASA’s Goddard Institute for Space Studies, said that new paper provides independent backing for recent work from his unit and elsewhere. He concludes that projections of future warming derived from recorded temperature trends “are biased low.””
https://www.bloomberg.com/news/articles/2017-07-06/global-warming-might-be-speeding-up
That is where Gavin’s bread is buttered.
Gavin also said:
“The refusal to acknowledge that the model simulations are affected by the (partially overestimated) forcing in CMIP5 as well as model responses is a telling omission.”
Gavin Schmidt (Climatologist, climate modeler and Director of the NASA Goddard Institute for Space Studies (GISS) in New York). (See Comment 17)
Rulers run hot too if you don’t know what you’re doing with them.
The entire Red Team vs Blue Team exercise would be fantastic, but it answers itself once it’s described as such:
On one side we have actual science that shows:
– Carbon Dioxide is the only throttle in the Carbon Cycle of Life. (All of the carbon in the carbon cycle funnels through Carbon Dioxide.)
– Carbon Dioxide is the base of the food chain for all Carbon Based Life Forms
– Carbon Based Life Forms require Carbon
And on the other side we have:
– the invented term ‘Settled Science’ as a way to construct an argument when no actual science is available
– no laws, no axioms, no postulates, no formulae, nothing to reason with
So, one side explicitly proves that Carbon Dioxide is essential to life: more CO2, more potential life. It should take compelling science to convince humanity to restrict global CO2. Settled Science is not compelling science.
Main reasons why!
1) CO2 being a greenhouse gas behaves differently between atmosphere, land and oceans.
2) CO2 requires circular reasoning with positive feedback that does not exist.
3) Solar energy greatly influences all SST’s. (differences down to ocean circulation and upwelling)
4) Oceans are the only/main thing that can disguise solar influences.
5) The main greenhouse cause is not actually a gas (even water vapour), but heat retention in the oceans.
6) Solar influences on short term cycles never reach equilibrium.
Ignoring a lot of science against their views in determining no/little natural influence shows up clearly in the models. They have only been looking for AGW influence, not natural causes, and their judgment on anything else has made them a bunch of charlatans.
As a chemist I know all about IR spectroscopy, but it is a big leap from there to accepting global warming. However, I realise that “the science is settled” on this one.
Looked at another way, what is causing the hiatus? Apart from a blip due to a known effect, it looks as though the temperature is returning to its current “normal” level.
Either there is a natural climate driver that is negating global warming due to increased carbon dioxide, or there is nothing happening.
Oh dear.
Indeed we may start with the opposite theory: all warming/pause is natural variability with a wavelength of 60-80 years (PDO?). The only influence of the extra CO2 then is in the difference in slope: while the increases 1910-1945 and 1975-2000 are near identical, in 1945-1975 there was a small cooling, while 2000-2014 was flat.
Thus the difference in slope for the two “pauses” may be the small influence of the increase in CO2…
may also be a different set of ENSO
The other part of Point 3. which needs to be considered is, “predict this will continue to catastrophic levels.”
I’m happy to concede that without CO2 in the atmosphere the global average temperature would be cooler than now. Some estimates say 30 deg C cooler; Lindzen has suggested only 2.5 deg C cooler. If Lindzen is right then a bit of extra CO2, even a doubling, will produce only a small increase in temperature. If 30 deg C is correct then we have had a massive temperature change without any catastrophic climatic events, unless you view the world’s current climate as catastrophic (I don’t). Either way, why should it suddenly become catastrophic, and for whom? It’s different to, say, the catastrophic breaking of a piece of wood being bent until it suddenly snaps in a mathematically catastrophic way.
The catastrophic part is ‘necessary’ to force unviable mitigation solutions onto the public while extorting $100 billion/year for the Green Climate Fund. Except no part of the catastrophic stuff is credible. Covered sudden SLR, ocean acidification, extinctions, and extremes in several essays in my most recent ebook, Blowing Smoke.
Thanks Rud, Blowing Smoke is now on my tablet awaiting my holiday. Cheers
CO2 is not the only IR-absorptive gas in our atmosphere. The calculated 30 K is still contested by some. There are some who believe that atmospheric weight can be shown to provide the effect we see.
PV = nRT, where P = pressure (1 bar at the surface, close to 0 at TOA), V = volume (assumed constant), n = 1 mole of gas, R = the gas constant, and T = temperature. Then T = PV/(nR), so T is proportional to P: as pressure rises, temperature rises.
There are a huge number of assumptions in this equation, and calculating the effect precisely is difficult. Hence the controversy.
Imagine an earth without atmosphere; add one non-IR-absorptive molecule and it will bounce about between the ground and TOA with an average kinetic energy, maximum at the surface and zero at TOA. Then add another, and keep adding. The surface temperature will not rise with each addition. Eventually you will have a full atmosphere with no warming of the surface and a higher TOA at 0 K. If this atmosphere causes 30 deg C warming at the surface, please explain the mechanism.
Rud
A very nice clear easy to read post – thank you.
With all of the cash thrown at supporting the theory of CO2 warming, one would have thought that a mere pittance could be used to prove the cornerstone theories that their forecasting is based on. See my comments above.
Regards
OzoneBust, thanks for the compliment. Nice to be appreciated. I am trying to be of service here.
As for your above comments on saturation, please read essay Sensitive Uncertainty. The CO2 greenhouse effect NEVER saturates, because the effective IR escape level (the so-called effective radiative level, ERL) can always go higher. Higher is also colder, which is the physics behind the log CO2 doubling relationship.
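The usual simplified expression for that log relationship is dF = 5.35 ln(C/C0) W/m2 (Myhre et al. 1998); a quick illustration:

```python
import math

# Simplified CO2 forcing expression, dF = 5.35 * ln(C/C0) W/m^2
# (Myhre et al. 1998). Illustrates why each doubling adds the same
# forcing: the ERL rises to a higher, colder level rather than the
# absorption "saturating".

def co2_forcing(c: float, c0: float = 280.0) -> float:
    return 5.35 * math.log(c / c0)

for c in (280, 400, 560, 1120):
    print(f"{c:4d} ppm: {co2_forcing(c):5.2f} W/m^2")
# 560 -> 1120 ppm adds the same ~3.71 W/m^2 as 280 -> 560 ppm.
```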
Regards. You are almost a climate Jedi.
Rud
My comments on CO2 saturation are focused only on the atmospheric CO2 density, not the actual greenhouse effect of CO2 saturation.
Eventually I will put together a post to identify what happens to relative saturation during temperature movements within equilibriums. Currently working on sea ice, now that is an interesting one.
Also a report on 2016’s Hurricane Matthew, and why it weakened approaching Florida.
Always enjoy your posts and comments.
Does that mean I get a green saber?
Regards
I completely agree. It was informative and a pleasure to read.
They run “hot” because they are politically forced to do so. End of story.
I was recently reviewing some old WUWT articles that I had saved for future contemplation, particularly this one from Tom Vonk, and it finally clicked with me.
If you look at the outgoing TOA spectrum, it is clear that there is a good sized chunk of the outgoing spectrum taken out by CO2.
http://www.xylenepower.com/Mars_EarthM.gif
All things being equal, it certainly appears necessary that, if that radiation is not getting out, the entire curve must move higher, i.e. surface temperatures must rise, in order to get the same area under the curve one would get if the divot were not there, i.e., to establish equilibrium of outgoing radiative energy with incoming.
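That restoration argument is one line of Stefan-Boltzmann arithmetic. A sketch assuming the textbook 255 K effective emission temperature and a nominal 3.7 W/m2 blocked flux (my illustrative numbers, not values read off the linked spectrum):

```python
# Temperature rise needed to restore outgoing flux after a "divot" of
# blocked radiation, via sigma*T^4. The 255 K effective emission
# temperature and the 3.7 W/m^2 blocked flux (one nominal CO2 doubling)
# are illustrative textbook values, not derived from the linked spectrum.

SIGMA = 5.670e-8   # W/(m^2 K^4), Stefan-Boltzmann constant
T0 = 255.0         # K, effective emission temperature
blocked = 3.7      # W/m^2, flux removed by the "divot"

T1 = ((SIGMA * T0**4 + blocked) / SIGMA) ** 0.25
print(f"dT = {T1 - T0:.2f} K")   # ~1.0 K, the classic no-feedback response
```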
So, clearly, the GHE must exist, and CO2 must contribute to it.
But, what does this say of the dependence of the GHE upon the concentration of CO2? Is it a monotonic dependence in which more CO2 will always take out more of the spectrum, requiring greater surface temperature to preserve the integrated area? Or, is there a point of diminishing returns? Perhaps even an inflection point, where the divot starts to shrink, and the impact of greater CO2 concentration actually produces net cooling? Is the functional dependence perhaps something like this:
http://i1136.photobucket.com/albums/n488/Bartemis/co2surftemp_zpsd400ef15.jpg
whereby the effect is always positive (secant line) but locally negative for a given atmospheric state (tangent line)?
In Vonk’s write-up, he explains that the cartoon explanation of the GHE, that the surface radiates IR energy up, which is blocked by the CO2, and half of which then radiates back down, is wrong. (Actually, we can see immediately from the spectrum above that this is incorrect, as the divot could never be deeper than 50% below the asymptotic blackbody curve, yet here, it is much deeper).
The intercepted surface radiation is generally not directly re-radiated. The mean time to re-radiation is much longer (by orders of magnitude) than the time to collision with other atmospheric particles. When collisions occur, energy is exchanged, and excited CO2 molecules pass their energy along to the other atmospheric constituents. Some of the intercepted surface radiation is re-radiated, but most is passed along to other atmospheric molecules.
But, this is a two way street – energetic atmospheric molecules pass on their energy to excite the IR emitting modes of the CO2 molecules. When those CO2 molecules so excited emit, they are acting to cool the atmosphere.
Ultimately, for a given solar input, surface radiation, and atmospheric composition, a steady state is reached. In this steady state, CO2 molecules are both heating and cooling the atmosphere – heating it by intercepting surface radiation, and passing it along to other atmospheric constituents, and cooling it by accepting energy from those atmospheric constituents and radiating it away.
So, if we have reached a steady state, and we increase CO2 concentration, what happens? Increasing the concentration increases the amount of surface radiation intercepted. But, it also provides more outlets for other atmospheric constituents to relieve themselves of their pent-up energy. Actually, more so, because the surface radiation is only coming from one direction, while the other atmospheric constituents are coming in from all sides.
So, do we get greater heating, or greater cooling? Or, might we be at a point where there is no net change, where the greater heating and cooling potentials essentially cancel each other out?
I suspect it is the last, that we are at a point where they essentially cancel out. I will give my reasons in another post sometime, but will put this out for others to ponder for now.
Bart,
In general, I do follow your reasoning with only a few points where I have comments:
According to Jeff Id, the ratio of re-emission to collision for excited CO2 at 1 bar air pressure is about 1:10,000:
https://noconsensus.wordpress.com/2010/08/17/molecular-radiation-and-collisional-lifetime/
At lower air pressures that shifts towards fewer collisions and thus more re-emissions.
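Taking that cited 1:10,000 ratio at face value, the fraction of absorbed quanta re-emitted before collisional transfer follows directly:

```python
# Fraction of excited CO2 molecules that radiate before a collision
# de-excites them, using the ~1:10,000 emission-to-collision ratio
# cited above for 1 bar. Pure ratio arithmetic, nothing more.

ratio = 1 / 10_000                   # emissions per collision at 1 bar
p_emit = ratio / (1 + ratio)
print(f"re-emitted: {p_emit:.4%}")   # ~0.01%: almost all energy thermalizes
# At lower pressure collisions are rarer, so the ratio (and p_emit) rises,
# which is why emission to space dominates high in the atmosphere.
```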
That there is increased backradiation by increased CO2 was actually measured at Barrow and Oklahoma:
http://newscenter.lbl.gov/2015/02/25/co2-greenhouse-effect-increase/
and outward, more IR retention in the CO2 band was measured by satellites. Thus the effect goes both ways, not necessarily 50/50.
Whether the absorption/radiation/collision by CO2 is actually cooling or heating the atmosphere is mainly a matter of the re-radiation/collision ratio: more collisions at lower heights (the troposphere) and more (outgoing) radiation in the stratosphere.
More CO2 in general will give more warming, as can be seen in the outgoing spectra of Modtran. Modtran is a reduced resolution calculation program based on Hitran. The latter was developed for the calculation of the absorption/transmission of any mix of GHGs at any height of the atmosphere and is based on data obtained in laboratory conditions. The total calculation is from using the layer by layer (=pressure) spectra up to the desired height. One can make the calculations by choosing a lot of parameters of greenhouse gases, water vapor feedback, clouds or not, rain or not and the “1976 standard atmosphere”, that is with all the measured parameters of clouds and rain of the year 1976. See:
http://climatemodels.uchicago.edu/modtran/
For 280 ppmv, standard atmosphere, no clouds or rain, fixed RH:
upward at 70 km height: 268.564 W/m2
For 560 ppmv, standard atmosphere, no clouds or rain, fixed RH:
upward at 70 km height: 265.613 W/m2
2.9 W/m2 more is retained in the cloud-free atmosphere for the same ground temperature for a CO2 doubling. By increasing the ground temperature by about 1 K, one reaches the same outgoing energy as for 280 ppmv CO2…
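As a cross-check on those two Modtran numbers, using the conventional no-feedback Planck response of roughly 3.2 W/m2 per K (an assumed textbook value, not a Modtran output):

```python
# Check of the Modtran numbers quoted above: the retained flux for a
# doubling, and the surface warming needed to restore it using the
# conventional no-feedback Planck response (~3.2 W/m^2 per K -- an
# assumed textbook value, not a Modtran output).

f_280 = 268.564   # W/m^2 upward at 70 km, 280 ppmv
f_560 = 265.613   # W/m^2 upward at 70 km, 560 ppmv

retained = f_280 - f_560
print(f"retained: {retained:.3f} W/m^2")   # ~2.95 W/m^2
print(f"dT ~ {retained / 3.2:.2f} K")      # ~0.9 K, matching 'about 1 K'
```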
Thanks for looking, Ferdinand. I am trying to reason this out. I am certain the impact is essentially nil for reasons you and I do not need to go into in this discussion 😉 So, the question becomes, why is it essentially nil?
It seems pretty straightforward to me – if greater concentration is taking out increasing amounts of TOA radiation, then there must be heating. I think this is why the AGW advocates hold onto the hypothesis so tenaciously. They are just certain that, if less radiation is making it to space in that band, there must be heating. And, I would have to agree at this point in time. There does not appear to be a way around it.
So, the question then becomes, what are the weak points in that argument? The most obvious, and first I want to plumb, is that while a given level of CO2 produces a particular gap, it is an assumption that increasing concentration will produce a greater gap, and it is projected to do so in logarithmic fashion. But, it is not proven that this is so, or at least not yet to my satisfaction.
There are studies in which this has been tested, to some degree or another, so what do they say?
You bring up the one that is looking at the downwelling radiation at the surface. But, it is TOA that we are interested in. It stands to reason that, if other atmospheric components are thermalizing CO2 in greater profusion, then there should be more downwelling radiation in any case. So, this test is not really applicable.
I don’t think the question can be resolved by models, because they are based on present knowledge that assumes monotonic increase in the gap.
I have some papers that have looked at actual TOA radiation, e.g., here. This work appears to show increasing attenuation of TOA radiation in the CO2 band.
One objection that could be raised is that it covers only one area of the globe in the central Pacific. But, there are some other suspicions I have regarding the way the data are processed. I am not yet ready, however, to air those suspicions. If I make a breakthrough before this thread gets locked out, I will post it, so you may want to check back from time to time.
Any other comments are welcome.
Thanks, Ferdinand. I have a longer response, but it is held up in the queue. If and when it appears, this comment will make more sense. If this comment gets held up, well then, it will all come together.
In summary, I consider the ground test to be inapplicable – we need TOA differentials. Nor can I rely upon analysis tools which assume the relationship to be proved – that results in a circular argument.
I pointed out a paper that does purport to show changes in TOA radiation in line with expectations, but there is a problem – it is not a controlled experiment, in which concentration and temperature can be varied independently. So, the cause of the observations could be the concentration, or it could be the rising temperature, and the sensitivity of the emissivity curve with respect to it. If it is the former, and the concentration causes the temperature to rise, then that makes things even more difficult to separate.
Ideally, what we would like to see is a measurement in which the two quantities are going in opposite directions. Unfortunately, the last time that happened, between about 1940 and 1970, we didn’t have the means to make measurements. Perhaps, in the years to come, if we get the cyclical downturn in temperatures I am expecting, we will be able to get information that will allow us to separate the two variables.
Climate models are a trick of maths. Getting that maths trick into the model, in the way they did, explains all one needs to know about climate models.
There is a real danger with complex models that the output will conform to preconceived expectations and adjustments made to underlying data and assumptions until the “correct” result is produced.
I would be interested to understand projections made with slightly skewed but entirely plausible data and assumptions. Running the model with a small range of inputs to get a range of futures does not eliminate bias.
Having said that, consuming in a few hundred years fossil fuels (oil, gas, coal) which took several hundred million years to lay down seems likely to have an impact on climate. Whether we should seek to reduce or globally mandate carbon and other emissions reductions (thus far unworkable) or simply adapt is a different issue.
“Why Climate Models Run Hot”?
Take a clue from Henry “Henny” Youngman, who once asked,
Q: “Why do husbands die before their wives?”
A: “Because they want to!”
CACA acolytes like it hot, since the real range of ECS from 0.0 to 2.0 degrees C isn’t scary enough to maintain funding levels, hence the range has to stay where it has been since first invented 40 years ago, i.e. 1.5 to 4.5 degrees C.
I like Rud’s approach but would expand it a little.
The road to global warming has a number of “Iffy Bits”. Here are some.
1. The difference between the Boltzmann-inferred earth temperature and the actual one (see the quick calculation after this list). We have several alternative explanations such as the atmosphere-gravity effect and work done by convection.
2. The effect of collisions and water vapour on CO2 GHG effect.
3. The magnification of warming by water vapour, humidity evidence and hot spot evidence. Positive vs Negative feedbacks.
4. The observational warming/ fiddled raw data/massaged data/UHI/etc
5. Attribution
6. Why are the failed models not binned?
7. Why is there a hiatus?
8. What are the predictions, timescales and consequences of failure?
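On item 1, the standard Stefan-Boltzmann calculation behind the inferred-versus-actual gap, using textbook inputs (a sketch of the arithmetic, not a claim about which explanation is right):

```python
# Item 1 above: effective (Stefan-Boltzmann) temperature of Earth versus
# the observed surface mean. Solar constant and albedo are standard
# textbook values.

SIGMA = 5.670e-8   # W/(m^2 K^4)
S = 1361.0         # W/m^2, solar constant
ALBEDO = 0.30

t_eff = (S * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25
print(f"T_eff = {t_eff:.0f} K")                       # ~255 K
print(f"gap vs 288 K surface: {288 - t_eff:.0f} K")   # the disputed ~33 K
```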
Stay tuned for another guest post tomorrow, per CtM, that fleshes out some of your line of thought. CtM has noodled on it for years, and my draft has him thinking of an SKS antidote website, perhaps another subsection here like sea ice.
Normally, to find the effect of one parameter you can choose two methods. Both lead to isolation of either the parameter you want to measure or the interference parameters. A little like finding a new planet from the existing orbit variations of a nearby planet from Einstein’s general relativity.
So find a low humidity region and measure the effects or look for a small effect in a high humidity region (bloody difficult). Or think outside the box and measure the effects on another terrestrial body.
There are much simpler, more macro sound bites. Which bite much harder.
Since there is zero correlation between geological-history levels of CO2 and temperature (i.e. the 800-year lag), Tyndall’s experiment proved nothing.
“For example, the last Thames Ice Fair was in 1814.”
On a warmist site, I read a comment by one of the biggies there that the Thames now flows more swiftly at the ice fair location site due to man-made changes to it. (Please correct this if it is wrong.)
It’s true. The rebuilding of London Bridge.
Didn’t that relate to bridge piers being closer together in the past than today’s modern designs that have a larger span between piers? The older designs would be easier to block with ice, at least so went the theory. (You could also just as easily claim similar volumes of water pushed through a smaller gap would move faster, thus making ice build-up less likely, could you not?)
Absolutely correct. The fact that this is ignored when attempting to ‘disappear’ the Little Ice Age is quite telling.
“Didn’t that relate to bridge piers being closer together in the past than today’s modern designs that have a larger span between piers?”
Yes, that’s what I remember reading.
CheshireRed July 6, 2017 at 1:53 pm
“Didn’t that relate to bridge piers being closer together in the past than today’s modern designs that have a larger span between piers? The older designs would be easier to block with ice, at least so went the theory. (You could also just as easily claim similar volumes of water pushed through a smaller gap would move faster, thus making ice build-up less likely, could you not?)”
However, the gaps were so small that they could easily be blocked by ice washed down from upstream, as a contemporary illustration from 1814 shows.