Guest Opinion: Dr. Tim Ball
In his recent article on WUWT titled “HADCRU Power and Temperature” Andy May refers to the challenges of modelling the atmosphere. He wrote,
The greenhouse effect (GHE), when calculated this way, shows an imbalance of 390-239=151 W/m2. Kiehl and Trenberth, 1997 calculated a similar overall forcing of 155 W/m2 using the same procedure. This GHE calculation makes a lot of assumptions, not the least of which is assuming the Earth has an emissivity of 1 and is a blackbody. But, here we want to consider the problem of using a global average temperature (T) for the Earth, which is a rotating sphere, with only one-half of the sphere facing the Sun at any one time.
Models vary from hardware models, which are simple scaled-down versions of reality, to complete abstractions. A model car is an example of the former; a mathematical formula with symbols replacing variables is an example of the latter. The problem with hardware models is that many things cannot be scaled down, because the physical properties change. For example, it is impossible to reproduce in a hardware model the change of ice from solid and rigid to plastic and flowing, as occurs in an alpine glacier. In the abstract model, each variable loses most of its real-world properties.
Climate models are abstract models, except that they are made up of a multitude of models all interacting with each other. Those interactions bear little resemblance to reality.
In summary,
- We have virtually no data.
- This is true even for fundamental variables like temperature, precipitation, atmospheric pressure, and atmospheric water content.
- Data is replaced by symbols that eliminate most of the properties of the natural variable.
- In many cases the “data” is generated in another model and used as ‘real’ data in the larger model.
- The models are essentially static representations of average conditions. The one thing we know with certainty is that the Earth’s atmospheric system is dynamic, changing daily, seasonally and constantly over the course of time.
- The models consistently fail the standard test of scientific understanding and accuracy by producing inaccurate predictions.
Initially, I learned the basics of weather and especially the forecasting necessary for aviation. These were expanded when I gave lectures on aviation weather as an operations officer on an anti-submarine squadron flying over the North Atlantic, and then for a search and rescue squadron flying in northern and Arctic Canada. I recall one search out of Fort Chipewyan, northern Alberta, when we observed first-hand the severe limitations of knowledge and therefore of forecast skill. We took an Environment Canada forecaster with us to provide more local information. He was unable to come even close using the data originating from Edmonton. Eventually, we listened to his forecast and then took him flying with us to show him the reality.
After I left the military (I lost my flying category and, as they say, I didn't want to fly a desk), these experiences prompted my return to university to try to determine the limitations of knowledge about weather and climate.
When I began studying them from an academic perspective, the first thing I realized was the near-complete paucity of data. This was reinforced when I learned about the work of H.H. Lamb, who set up the CRU with the realization that without data, no understanding of the mechanisms was possible and accurate forecasting was beyond hope.
“…it was clear that the first and greatest need was to establish the facts of the past record of the natural climate in times before any side effects of human activities could well be important.”
I include this quote in as many articles as logic allows because things are worse now. Sadly, this is due to many graduates of the CRU and their disciples, like Gavin Schmidt.
This awareness led to my doctoral thesis that involved reconstruction of weather and climate patterns for central Canada over a 300-year span. Serendipitously, it was while in places like Fort Chipewyan that I became aware of the remarkable weather and meteorological journals of the Hudson’s Bay Company.
In basic climatology and some meteorology texts of the time, the student learned about a single cell atmospheric model (Figure 1).
Figure 1
The objective is to show that in a non-rotating world a simple, single-cell system would exist. The strength of the basic circulation pattern is a function of the temperature difference (gradient) between the Equator and the Poles. This is a lesson for the Intergovernmental Panel on Climate Change (IPCC), who claim that the Poles will warm more than the Equator, thus weakening the system. The reality is that the difference is almost totally a function of the amount of insolation received, not of differences in greenhouse gases, especially CO2.
The next problem that climate change theorists and modelers face is that the Earth is rotating, changing how it presents itself to the insolation over time. George Hadley's 1735 determination of the existence of a single tropical cell, now named after him (Figure 2), from winds recorded by British sailing ships is a classic example of inductive reasoning.
Figure 2
This gradually evolved over the next 200 years into a three-cell system (Figure 3).
Figure 3
Note that this diagram appeared in a text titled Tropical Meteorology published in 2011. A similar pattern is used in the UK Met Office (UKMO) website diagram (Figure 4). The caption reads
Circulating Cells: The Hadley cells have the most regular pattern of air movement, and produce extreme wet weather at the equator and extreme aridity on the deserts. The polar cells are the least well-defined.
The terminology starts the misunderstanding. The word extreme is wrong; the wet conditions at the equator and the dry conditions in the deserts are normal. The polar cell is well-defined, as evidenced by the over 90 percent persistence of the polar easterly winds, the Polar Front that separates cold polar air from warm subtropical air, and the circumpolar vortex (jet stream) (Figure 5).
Figure 4
Figure 5 (source: author)
Questions began to emerge about the existence of the Ferrel cell at about the time the IPCC was forming in the late 1980s. William Ferrel proposed its existence in 1856 to explain newly measured wind speeds, especially the mid-latitude westerly winds. The Encyclopedia Britannica is better informed than the UKMO because they write,
Ferrel cell, model of the mid-latitude segment of Earth’s wind circulation, proposed by William Ferrel (1856). In the Ferrel cell, air flows poleward and eastward near the surface and equatorward and westward at higher altitudes; this movement is the reverse of the airflow in the Hadley cell. Ferrel’s model was the first to account for the westerly winds between latitudes 35° and 60° in both hemispheres. The Ferrel cell, however, is still not a good representation of reality because it requires that the upper-level mid-latitude winds flow westward; actually the eastward-flowing surface winds become stronger with height and reach their maximum velocities around the 10-km (6-mile) level in the jet streams.
The problem is that they, like the UKMO and the IPCC, do not provide an alternative model. The reason is that they have no data, as the IPCC explains in AR5.
In the past few years, interest in an accurate depiction of upper air winds has grown, as they are essential for estimating the state and changes of the general atmospheric circulation and for explaining changes in the surface winds (Vautard et al., 2010).
We also learned in AR4 that,
Due to the computational cost associated with the requirement of a well-resolved stratosphere, the models employed for the current assessment do not generally include the QBO.
From a March 2015 conference in Victoria BC, we learned that
The Quasi-Biennial Oscillation is one of the most remarkable phenomena in the Earth’s atmosphere. High above the equator, in the stratosphere, strong zonal winds blow in a continuous circuit around the Earth. At a given altitude, the winds might start as westerlies, but over time they weaken and eventually reverse, becoming strong easterlies.
They asked,
Why is the QBO important? It is certainly relevant for seasonal prediction, where the state of stratospheric winds affects interactions between the tropics and the mid-latitudes, and may also affect the tropical troposphere directly and possibly how the solar cycle interacts with the atmosphere.
They concluded that,
The poor representation of the QBO in climate change models means that no-one knows what will happen to the QBO in the decades ahead – will it remain largely unchanged, will its period lengthen, or will it change more radically?
The question is how and on what structure are the climate models built? The most recent representation I am familiar with is in Figure 6:
How do you build a computer model to represent this structure and all the mechanisms it encompasses? But the challenge is much greater than that, because the diagram is a representation of the average, which is a fixed statistical condition. In reality, it is an extremely dynamic system that changes on an almost infinite number of time scales, from hourly to millions of years. Even if you can approximate the data and mechanisms with a mathematical formula, there is the problem mathematician and philosopher A. N. Whitehead identified:
“There is no more common error than to assume that, because prolonged and accurate mathematical calculations have been made, the application of the result to some fact of nature is absolutely certain.”
The IPCC acknowledged this in the Third Assessment Report when they wrote,
In climate research and modeling, we should recognize that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.
There is one major component of computer models that distorts and masks reality, and that allows models which consistently fail to nonetheless drive policy. Pierre Gallois (1911-2010) summarized it in his comment that
If you put tomfoolery into a computer, nothing comes out but tomfoolery. But this tomfoolery, having passed through a very expensive machine, is somehow ennobled and no-one dares criticize it.
This parallels the well-known but somehow ignored acronym GIGO: Garbage In, Garbage Out. Somebody perceptively said that in climate models it is Gospel In, Gospel Out. However, in the case of climate computer models, the garbage in includes not only the input, or what the IPCC calls forcings, but the very structure.
I collect quotations that appear to epitomize a period, such as a decade, a century, or a millennium. My prime candidate so far for the 21st century is from Alan Greenspan, former Chairman of the US Federal Reserve and thereby de facto architect of US financial policy. When Greenspan appeared before a congressional hearing into the financial collapse of 2008 and was asked what went wrong, he simply replied, "My model was wrong." When asked how long he had been using it, he replied, "40 years."
If we consider the model developed at NOAA's Geophysical Fluid Dynamics Laboratory (GFDL) in the late 1960s (1967?) to be the first meaningful climate model, because it included atmosphere and ocean processes, then it is 50 years.
Now consider the IPCC situation, where they take the worst of both worlds by combining the output of climate and economic models. They have done this now for 27 years (1990-2017), and despite supposed updates and improvements their predictions, or projections, are still wrong, yet are still being used to determine global environmental and energy policy.
Andy May correctly identifies that the problems of climate modeling are much greater than the Kiehl-Trenberth energy balance diagram and its numbers. They are so fundamental, as some of us have pointed out for decades, that it is remarkable the IPCC managed to fool the world and all the scientists affiliated with or supporting that agency and its work. It is why my book is titled "Human Caused Global Warming: The Biggest Deception in History."
I posted this in 2006
You posted what, exactly, in 2006?
Ya, what?
Are you the Rick Sanchez, violinist, with a grad degree from LSU c. 40 years ago? JMW
A fair supposition of why the climate models do not reflect reality. If the actual behavior of the atmosphere is not known all that well, a simplified computer model will exaggerate that faulty understanding.
Many "simplified" models get quite close to reality, and often far closer than much more complex models. Indeed, where our understanding of the fundamental processes is limited, simple models usually do much better. But all such models have only a 70-80% accuracy rate.
That high? Or are you talking about one week out?
. . . like?
Weather models working for a week show that the structure and movements of the atmosphere are well known. Search for my comment below mentioning multidecadal oscillations for my explanation of what’s most wrong with climate models.
If the models were good, we would know beforehand the exact path that the recent hurricanes would take. The models do no such things accurately, only approximately.
As an engineer I would not even remotely consider some mathematical construct 'a model' unless I had a very clear understanding of the mechanisms in play and their relative significance. Otherwise, whatever so-called 'experts' might assert, the mathematical device is really, at best, just some sort of data-fit formula, probably just a partly informed stab in the dark, and may even be just rent-seeking voodoo.
Nicely said, thank you.
I can produce a model that takes the amount of change in my pocket, the winning lottery numbers, the serial number on my dry-cleaning receipt and today’s temperature and claim it predicts tomorrow’s wind speed (given a suitable number of parameters — e.g. coefficients and exponents in a polynomial).
Even if the parameters are “tuned” to produce a good fit with past observations, any belief that it will somehow provide any insight into the future is misguided at best.
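A minimal sketch of that tuning trap, with every number invented (nothing here comes from any real dataset): give a polynomial as many free parameters as there are past observations and it will hindcast them perfectly, while its "forecast" for the next point is nonsense.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(8.0)                      # eight time steps
past = rng.uniform(0.0, 30.0, size=8)   # eight made-up "observations"

# Exact interpolation: 8 coefficients for 8 points (Vandermonde solve).
coeffs = np.linalg.solve(np.vander(t), past)
fitted = np.polyval(coeffs, t)

print(np.max(np.abs(fitted - past)))    # ~0: a "perfect" hindcast
print(np.polyval(coeffs, 8.0))          # the "forecast": typically absurd
```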
So now if I indulge in the same exercise, but rename the variables and parameters with scientific-sounding designations, and choose their values from actual observations with similar names, what have I succeeded in proving? Is my model necessarily correct because I have given it the aura of scientific respectability by using a little Latin and some other esoteric terminology? There’s no reason to believe that my “new” model is any more representative of the physical world than my old model was.
If I go further and say that the difference between what my "model" says and the real-world observations of that which I am claiming to predict is due to the boogey-man, and I just happen to need trillions of dollars to "fight" this boogey-man, would I have any shred of credibility among clear-thinking people?
Could I call myself a “scientist” and still hold my head high?
Another wonderful essay Dr. Ball. Thanks for taking the time and effort to write it.
I am of the opinion that as long as climate “science” is politicized we will never figure the climate cycles out. I am also of the opinion that as long as climate “scientists” credit CO2 with being able to “warm the surface by back radiation” we will not see science make any progress. It is the most anti-science idea I have seen in 6 decades of life.
Again, great essay.
+1
John Doran,
Mike,
The first thing you need to do is let go of the idea of "heat" and "heat flows". Heat is just the human perception of the energy level of an object; in this case the air around us. Try instead to frame everything as energy levels and flows. Energy, in the macro sense, moves from areas that have higher levels to those with lower levels. Energy can be transferred in a number of ways: via radiation, conduction, convection, phase changes, and nuclear fission/fusion. The AGW proponents only seem to focus on the radiation; however, a tremendous amount of energy is transferred to the top of the atmosphere via conduction and convection (aided by phase changes of H2O). Even if additional CO2 in the lower atmosphere can slow down radiative cooling, these other mechanisms will mostly compensate (as the laws of thermodynamics say they must), resulting in a very modest increase in the total average energy held by the lower atmosphere.
Think about it: what affects the total energy of any gas the most? Incoming energy, total mass, density, or composition? The gas laws tell us that the first three dominate to such a degree that the last one, composition, can be virtually ignored. The only exception to that is H2O because of the phase changes. And even then, it is still a distant fourth.
@markstoval
I’m not all that well educated in thermodynamics so I would be grateful if you could clear a few things up for me (since you mentioned at least one of them).
Is the claim of the ability of “back-radiation” to warm the surface credible? To me the claim sounds like “the surface warms the atmosphere and the atmosphere, in turn, warms the surface”, which I have a hard time grasping. What have I missed?
Another mystery (since I've started) is about the hypothetical "parcel of air rising and adiabatically cooling" that I have come across from time to time. Wouldn't this "parcel of air" begin to rise because it is less dense (warmer/lower pressure, assuming the mass of the "parcel" is static) than the surrounding air, and wouldn't it rise to a level at which its density was matched? And if it were adiabatically cooling as it rose, would that not imply it was expanding (and thus becoming less dense), as it seems to me by my admittedly vague understanding of the word "adiabatic" that it must? How would that work, exactly? Is equilibrium impossible in this case, or is there some force other than the convective that would make a "parcel of air" rise?
As strange as my questions may sound in a scientific forum such as this, I’m sincere in asking them — I really do not know the answers.
Thank you so very much for any insight you may be able to provide.
Mike, see my comment in response to nickreality65: https://wattsupwiththat.com/2017/09/16/climate-models-cant-even-approximate-reality-because-atmospheric-structure-and-movements-are-virtually-unknown/comment-page-1/#comment-2612528.
The equation for radiation between 2 surfaces is Q = k (Thot^4 - Tcold^4), somewhat oversimplifying what k involves. The Thot^4 part is always bigger or we wouldn't call it the hot surface! And heat naturally flows from hot to cold. The "back radiation" is actually the smaller -Tcold^4 part, where Tcold is the average temperature of the "sky". The sky is colder than the ground but warmer than outer space, partly because of clouds and partly because of infrared radiative gases like water vapour and CO2 giving the sky a "temperature". So you can see from the equation that if a constant watts/sq. meter heat source, say an electric blanket, is radiating infrared heat upwards and you put something of intermediate temperature between the heat source and outer space, then the only way the same watts can be radiated is if the surface temperature of the heat source gets hotter somehow. Since our heat source is the ground being warmed by the Sun, it can just get a couple of degrees warmer tomorrow if the Sun's heat can't escape today, and bring everything back into radiative equilibrium.
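To put rough numbers on that equation (a sketch only; the emissivity and view factor are taken as 1, and the 250 K "sky" is an arbitrary illustrative value, not a measurement):

```python
# Net radiative exchange per the equation above, Q = k(Thot^4 - Tcold^4),
# with k reduced to the Stefan-Boltzmann constant for simplicity.
SIGMA = 5.670e-8  # W/(m^2 K^4)

def net_flux(t_hot, t_cold):
    """Net radiation from a hot surface to colder surroundings, W/m^2."""
    return SIGMA * (t_hot**4 - t_cold**4)

ground = 288.0                   # K, roughly the global mean surface
print(net_flux(ground, 3.0))     # vs deep space: ~390 W/m^2 net loss
print(net_flux(ground, 250.0))   # vs a 250 K "sky": ~169 W/m^2 net loss
# The cold term never reverses the direction of flow; it only shrinks
# the net loss, which is why the surface must warm to shed the same power.
```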
Hope this helps you and Markstoval…
@DMacKenzie That seems to make sense, thank you.
@Gary Young Kerkin — Thanks Gary.
I appreciate the time each of you invested in replying, and thank you both again.
@Paul Penrose — I’m sorry, I overlooked your reply and thus neglected to thank you.
Please accept both my apologies for my rudeness and my thanks for investing the time and effort in replying.
I’m honestly trying to learn, but the fact is that I’m not particularly well educated nor am I that well-suited to comprehending what little education I do have.
Your clear explanation was very valuable in advancing my tiny glimpse of the science. Thank you again.
1. The Ideal Gas Law is a limit-case model. Real gases in the lab and, especially, real gases in the open atmosphere do not have to conform to the Ideal Gas Law relation by logical necessity. For this reason, the Ideal Gas Law is an approximation. For some uses, this is good enough. For high-resolution work, needed for highly accurate forecasts, you can't use the Ideal Gas Law. You must 'correct' it for the fact that the Ideal Gas Law is true only for the conditions used to create it (see the numerical sketch after this list). Similarly, the Henry's Law relation for the dissolution of gases in liquids must be corrected for any chemical reaction within the system beyond the simple solvation (for water, that'd be hydration, where the gas intercalates within the loose hydrogen-bonded water clusters).
2. Heat is not light and light is not heat. Heat is a concept. Reality is energy: kinetic, potential, chemical, and more. The thermodynamic temperature is the mean (a statistic) of a defined sample of matter's internal kinetic energy and only its kinetic energy (the energy of motion in 3D space, rotation, vibration). The GMST is an average of averages and essentially useless for proper calorimetry. {How are we going to do that with respect to the whole Earth (not just its surface and atmosphere) given that we are in this system? /rhetorical} Well, at least, don't extrapolate beyond the limits that your data and knowledge have, and be aware that 'heat' can be converted to light and vice versa. Said conversion is not the only one possible. It may not be dominant at any one place or time in the here-and-now; so the inversion of color or brightness temperature to the thermodynamic one may be off, may not be off, is subject to change, and you may not have enough data to properly specify your model.
3. From the kinetic theory of matter, all samples of matter above absolute zero (thermodynamic temperature) must contain internal kinetic energy. From that, radiation will be emitted from the sample and absorbed by the sample. Depending on the details, no power may be transferred. Think on that a minute. The narrower the wavelength absorbed or emitted, the less power is transferred. Thus, in the limit of a single frequency of light, absorption and emission imply no possible change in the amount of kinetic energy in the sample. Of course in the real world, this situation is not possible, so kinetic energy can and does undergo the conversion.
4. To ‘heat’ something, that something must have its internal kinetic energy increased. That’s not the same thing as slowing the decrease in internal kinetic energy via conversion of kinetic energy into any other form. Thus, ‘back radiation’ exists, but that does not mean that ‘heat’ will flow from cold to hot, net. Indeed, it can’t in the real world. The hotter a sample is, thermodynamically, the more radiation that sample can emit relative to the cooler one. Net: hot -> cold, even though radiation is going both ways.
5. Increasing the kinetic energy increases the velocity of motion. Since gases are not condensed, they can move very quickly (compared to everyday human motion) such that at typical Earth surface temperatures and pressures, the velocities are on the order of one kilometer per second.
6. From number 5, all gases will mix thoroughly, given enough time, particularly when not confined by impermeable barriers. Nevertheless, if there are chemical reactions taking place, the net concentration will be a function of the rate kinetics of the sources and sinks. Since these will not be uniform everywhere and at all times, localized differential partial pressures will exist. Whether the differences are enough to measure is another question.
7. Interestingly enough, outer space is not cold given that the matter present is moving very rapidly. It is ‘cold’ only in the sense that the high vacuum contains so little matter in any one defined volume. Small mass, small heat capacity in the extensive sense.
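As flagged in point 1, here is a minimal numerical sketch of how the Ideal Gas Law drifts from a corrected equation of state. The van der Waals constants for CO2 are standard textbook values; the point is only that "ideal" is an approximation that breaks down away from ideal conditions.

```python
# Ideal Gas Law vs. a van der Waals correction for one mole of CO2.
R = 0.083145            # L*bar/(mol*K)
a, b = 3.640, 0.04267   # van der Waals constants for CO2

def p_ideal(n, V, T):
    """Ideal Gas Law pressure, bar (V in litres, T in kelvins)."""
    return n * R * T / V

def p_vdw(n, V, T):
    """van der Waals pressure: finite molecular volume + attraction."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

for V in (22.4, 1.0, 0.1):                 # litres, 1 mol at 300 K
    print(V, p_ideal(1, V, 300), p_vdw(1, V, 300))
# At 22.4 L the two agree to about half a percent; at 0.1 L they differ
# by a factor of ~3.5 -- the correction matters exactly where the gas
# stops behaving ideally.
```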
Mike Shlamby: See also https://skepticalscience.com/Second-law-of-thermodynamics-greenhouse-theory.htm. Read the Basic tabbed pane, then watch the video at the bottom, then read the Intermediate tabbed pane.
“we will not see science make any progress”
Meh. The goal is to keep the taxpayer money flowing to the correct people, not actually make scientific progress (unless you mean progressive scientific progress of course).
The goal is to force governments to make widespread changes to their economies and societies. And for some individuals to satisfy their Messiah complexes and to save the world.
The ultimate goal is a global vegan commune of approximately one billion souls, run by some subset of the tinpot despots represented in the UN General Assembly.
I believe the broadest goal is to overthrow the modern “Western” model of rule by consent of the governed/the rule of law and not men, because it’s incompatible with the desires of a hyper-wealthy psychopathic criminal “elite”, who want to dominate people and play God and so on . . same old same old, basically ; )
I think many over-complicate the matter by taking the BS our would-be man-gods pay to have generated and disseminated too seriously, when it’s really just whatever seems to work toward undermining “Western” society/civilization, and establishing an authoritarian wonderland for some hedonistic degenerate creeps.
References:
Trenberth et al 2011jcli24 Figure 10
This popular balance graphic and its assorted variations are based on a power flux, W/m^2. A W is not energy but energy per unit time, i.e. 3.412 Btu/h (English) or 3.6 kJ/h (SI). The 342 W/m^2 ISR is determined by spreading the average discular 1,368 W/m^2 solar irradiance/constant over the spherical ToA surface area (1,368/4 = 342). There is no consideration of the elliptical orbit (perihelion = 1,415 W/m^2 to aphelion = 1,323 W/m^2), or day and night, or seasons, or tropospheric thickness, or energy diffusion due to oblique incidence, etc. This popular balance models the earth as a ball suspended in a hot fluid, with heat/energy/power entering evenly over the entire ToA spherical surface. This is not even close to how the real earth energy balance works. Everybody uses it. Everybody should know better.
An example of a real heat balance based on Btu/h is as follows: (Incoming Solar Radiation spread obliquely over the earth's cross-sectional area, Btu/h) = (U*A*dT et al. leaving the lit side perpendicular to the spherical ToA surface, Btu/h) + (U*A*dT et al. leaving the dark side perpendicular to the spherical ToA surface, Btu/h). The atmosphere is just a simple HVAC/heat flow/balance/insulation problem.
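For what it's worth, the figures quoted above can be checked in a few lines (the /4 disc-to-sphere ratio and the standard 1/r^2 scaling for an orbital eccentricity of 0.0167; a sketch, not a full orbital calculation):

```python
# Reproducing the spherical average and perihelion/aphelion figures above.
S0 = 1368.0               # W/m^2, quoted solar "constant" at 1 AU

print(S0 / 4)             # 342 W/m^2: disc intercept (pi r^2) spread
                          # over the whole sphere (4 pi r^2)

e = 0.0167                # Earth's orbital eccentricity
print(S0 / (1 - e)**2)    # perihelion: ~1415 W/m^2
print(S0 / (1 + e)**2)    # aphelion:   ~1323 W/m^2
# The 342 is a time-and-area average; no point on the planet ever
# actually receives that flux, which is the commenter's complaint.
```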
David Middleton will disagree with you; a few days ago he was using miles per hour to indicate how powerful a hurricane was. I seem to agree with you that power needs a watt in there somewhere as an indicator, e.g. of how powerful the sun is at warming our planet.
Ron,
Good grief. "Power" is a word used in many forms. In physics, power is measured in watts and is the rate at which energy is generated or consumed over time.
Miles per hour when discussing hurricanes is a measurement of their wind speed, and thus the FORCE they exert, NOT how much energy they generate or consume.
David Middleton wasn’t talking about the “watts” in a hurricane because NO ONE talks about them that way. Why would they?
You can’t talk about the power of a hurricane (in W) until it blows things around. It has a certain amount of energy over its life that could cause movement of objects but it doesn’t all get used up in one short time span.
Dr. Ball, thanks for citing my post and expanding on it so eloquently. This bit is very important, at least to me,
“But the challenge is much greater than that because the diagram is a representation of the average, which is a fixed statistical condition. In reality, it is an extremely dynamic system that changes on an almost infinite number of time scales from hourly to millions of years.”
As an experienced Earth scientist with some knowledge of chemistry and physics, and a newbie to climate, I am amazed at the complexity of climate science. How some modelers (I hesitate to call them scientists) think they can package all of that into a computer algorithm, with the data we have available now, is beyond my comprehension. As Anthony might say: "the stupidity, it burns."
Andy. The thought that keeps running through my mind as I wander through this interesting thread is that modeling can be a great tool for seeing how things work and what one understands and what one doesn't. Problems arise only when one decides one's model is more or less complete, reasonably accurate (within identifiable limits), and useful, i.e. that the model is "finished".
There's a major step in going from cocktail napkin to useful model called "validation". That's where you test your model against reality and fix the flaws. The five-day or week or ten-day weather models have been undergoing validation for decades. Overall, they seem far from perfect, especially on the longer end. They still need additional fixing, and everybody knows it. But on the whole they seem substantially better than nothing.
The GCMs on the other hand appear to be completely unvalidated. AFAICS one is probably better off using the Old Farmer’s Almanac for long term planning. If nothing else, the OFA is cheap. $7.95 (paperback) at Amazon.
The OFA, btw, claims to have a secret, centuries old, algorithm for predicting weather-climate. I have always suspected that the “algorithm” involves the consumption of copious amounts of ethanol and use of darts.
The OFA has made correct long term (yearly) predictions over two thirds of the time over the last century. The national weather service long range predictions have been correct only around one third of the time over that same span. Which one should we pay attention to? Which one is more scientific?
People mistake men wearing lab coats holding beakers of mysteriously bubbling fluids for scientists. People mistake complicated mathematics and Latin terms for science.
Arguments from authority are so common because they work on so many people. And, of course, in most of the world ‘authority’ means men with clubs and guns.
I need to consult with those well known climate experts J Lawrence and Robert de Niro before I respond to this article.
Pfff! Earthquakes weren’t even mentioned once.
How dare you leave out noted extreme event expert Irwin Allen? His body of work has been admired by millions!
A very nice guest post, meteorology driven. For a different perspective on why climate models are gibberish, see my two previous guest posts here, 'The Trouble with Models' and 'Why Models are Wrong'. In sum, computational resolution => parameterization => attribution, and indistinguishability between ~1920-1945 and ~1975-2000.
Climate models are wrong because they cannot be right: if you start from first principles as we currently understand them, you get nowhere near reality. That's why they have to "tune" them.
And they are then wrong because we have no initial state that we can use as the beginning of a model run. If I want to send a rocket to Mars, I know where the rocket is and where Mars is when I model it, but we simply don't know what "state" the millions of bits of the climate are in at any one time.
Yup, but climate modelers still allow people to think/claim that 'taking an average' somehow reduces the problem, when it doesn't. Hence we get shown an ensemble average of IPCC models, as if the mere inclusion of a number of favored models didn't affect the average! Is there a model that the IPCC ever rejected? And on what grounds?
Modelers certainly ought to know all this by now, so I do seriously wonder if they are ever even taught this at all, or if it is something much worse.
Michael Mann stated something very close to that in his Harvey editorial. He implied that climate scientists like him are right because they understand the "fundamental physical principles" of Earth. He's SUCH a mathematician!!! He thinks there's an algorithm for Earth, and that if you add the average of this factor to the average of that factor, you're going to get an average answer of X every single time. He simply cannot even fathom a system that does not conform to his limited understanding, let alone build a model that represents such a system.
Aphan – do you mean mathemagician?
Lol! Great word- mathemagician!
Except that a good magician needs style, keen observation skills, and great sleight of hand. Mann is dull, blind, and clumsy.
Since I don’t think I see it mentioned in the article, has any climate model, a priori, ever replicated the QBO?
By which I mean the model produced something that looked like the QBO, rather than being forced to do so simply by a process of eliminating model runs which the investigator didn’t much like.
Test
I’d say, very successful.
Seconded!
Great article! I would love to see the recent CERN CLOUD experiment results added to the discussion. I think the CERN results are important but have yet to see any critical analysis within the broad context of climate modeling addressed so well here.
My take on climate modeling can be expressed as a thought experiment. Think of the earth’s climate system as a black box. The output of the black box is the earth’s temperature. Climate research should focus first on analyzing the output. If one cannot analyze the output of a complex system, what is the likelihood that the complex system itself can be successfully modeled?
An example of output is the HadCRUT4 time-temperature data. A simple numerical analysis of the data indicates a high likelihood of the beginning of an absolute decline in the global mean surface temperature trend line within the next decade. The first derivative of the temperature trend line is positive but has decreased in value every month for the past 20 years or so. The derivative is likely to become negative in the mid-2020s and increase in negative value well into the 2030s, i.e., the mean global surface temperature will decline.
Is this analysis of any practical use? Probably not. Any projection of the temperature trend can only be expressed as a range of values, probably with only short-term credibility. A more sophisticated analysis might reduce the range and extend the time frame enough for a result to be useful for long-term planning purposes. I leave that problem for interested students to pursue.
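For any interested students, a sketch of the method described above, run on synthetic data (I am deliberately not reproducing the commenter's HadCRUT4 numbers, only the technique: fit a smooth trend, then inspect its first derivative):

```python
import numpy as np

# A made-up "temperature" series: slow signal plus noise.
t = np.linspace(0.0, 40.0, 480)                  # 40 "years", monthly
signal = 0.5 * np.sin(0.05 * t + 1.0)
rng = np.random.default_rng(1)
series = signal + rng.normal(0.0, 0.1, t.size)

# Fit a low-order trend and differentiate it analytically.
coeffs = np.polyfit(t, series, deg=3)
deriv = np.polyval(np.polyder(coeffs), t)        # first derivative

print(deriv[0], deriv[-1])   # slope of the trend at the two ends
# Whether (and when) the derivative crosses zero depends strongly on
# the window and polynomial order chosen -- one reason any such
# extrapolation has only short-term credibility, as noted above.
```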
I’ve never been very good with acronyms but I’ve finally figured out what CAGW stands for – Computer Aided Global Warming!
I thought it meant – Citizens Against Government Waste, a group against the Paris Accord and other green movements. Both usages make sense.
Am I wrong in my belief that all of these computer climate models have CO2 as a positive forcer to temperature? If so, doesn’t that invalidate them as a method to prove CO2 forcing temperature?
Not only that, but they have parameterized the sensitivity.
“US Standard Atmosphere Model & Observations Prove Maxwell’s Mass/Gravity/Pressure Theory of the ‘Greenhouse Effect’ is Correct & Falsifies CAGW”
The above is an introduction to the US Standard Atmosphere that guided our missions to the moon. It was the result of the best and brightest trying to understand the atmosphere of the earth.
If you don’t know this story, you can’t understand the horror of the CO2 bull.
http://hockeyschtick.blogspot.com/2014/12/why-us-standard-atmosphere-model.html
I have read it. And no, it does not dispute global warming by the greenhouse effect. Using the logic in your reference, the atmosphere should continue to cool above a height of 20 km, whereas above this height it warms up due to the greenhouse effect. The effect is present lower than this as well, in case you jump to another misconclusion.
It is not the air, it is the oceans. That is where the energy is. The air is in the fourth decimal place.
And the oceans are all but opaque to DWLWIR.
Due to the omni-directional nature of DWLWIR and its wavelength, it does not penetrate more than about 6 microns, where, if it does anything at all, it is likely only to power evaporation from the top micron layer.
…and since the models also cannot model individual thunderstorms and small circulations, and per this post cannot accurately model or project large circulation changes, they essentially cannot model the power, flux, and speed of the earth's hydrological cycle, and therefore are simply not fit for purpose.
@rverney
Exactly. The physics of water should be nailed to the door, à la Martin Luther, to lead a climate reformation.
The current insanity would have us believe the impossible, including:
1. A cold CO2 molecule can heat warmer water by a process that is unable to penetrate below the first 6 microns
2. Water vapor, which loses energy upon evaporation and releases more to space somehow gains energy magically to provide a positive feedback
3. The most critical input for all life on earth, CO2, which used to be abundant in our early atmosphere and is critically low in our current atmosphere, is a pollutant.
I believe the alarmists should be mandated to assert the exact level in ppm that is "correct" for CO2 in our atmosphere, and to prove under laboratory settings at what level it becomes a pollutant by demonstrating its deleterious effects on plants and humans.
If it is a pollutant, like indoor smoke, this should be easy to demonstrate.
They should also be required to demonstrate how CO2 enriched cold air, when blown across the surface of water heats it.
Lastly, they should also be able to demonstrate how rain at a temperature below the surface of water can increase its temperature.
If they can prove these impossibilities they profess, then they deserve increased funding.
The structure and movements of the atmosphere are well known, as evidenced by how well the better weather models work. The main problem with climate models is that they were selected and/or tuned to hindcast the past, especially 1975-2005, without considering all known factors that affect global temperature. The main missing factor is multidecadal oscillations. Those were on an upswing during most of 1975-2005 and accounted for about 0.2 degree C of the rapid warming during that time. So the climate models have excessive climate sensitivity to change of CO2, in order to have increasing CO2 causing about 0.2 degree C more warming during that time than it actually did.
The models aren't tuned to hindcast the past. They are programmed to replicate the past in order to imply some predictive value that simply doesn't exist.
Ron,
Apparently you don’t understand hindcasting.
Hindcasting is a test to see how accurate a model is. It takes a model and starts it at a past date and lets it run through to the present to see if the model produces the same results as the actual temperature record did.
If output of the model does not match past real-life observations, then the parameters are altered…TUNED…to better fit reality/observations.
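For concreteness, here is a toy sketch of that alter-and-retest loop. Everything is invented: a one-parameter "model" is tuned until its hindcast of a made-up record is as good as possible.

```python
import numpy as np

rng = np.random.default_rng(2)
forcing = np.cumsum(rng.normal(0.02, 0.05, 100))       # fake forcing history
observed = 0.8 * forcing + rng.normal(0.0, 0.05, 100)  # fake "record"

def model(sensitivity):
    return sensitivity * forcing          # the entire "physics"

# Tuning: keep the parameter value with the smallest hindcast error.
candidates = np.linspace(0.1, 2.0, 200)
errors = [np.mean((model(s) - observed) ** 2) for s in candidates]
best = candidates[int(np.argmin(errors))]

print(best)   # ~0.8: the model now hindcasts well, but matching the
              # past this way is not by itself evidence the parameter
              # captures a real mechanism -- the nub of the dispute here.
```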
If you are stating that all models are programmed to replicate the past, rather than actually tested with hindcasting, you’d better have the evidence to prove it.
…which, as far as the future is concerned, is a distinction without a difference, as there are many parameters to choose from, and so there are numerous scenarios dependent on assumptions that can yield the past, yet have no predictive value. This is particularly true when both the past and the present are poorly known.
I am not so pessimistic as to say we have practically no data. We have pretty good data, if we think of the numbers for the key climate figures averaged over the year and the globe. I have used the following observed data for validating my spectral calculations:
1) According to the synthesis analysis of Stephens et al. (2012), the measured downward fluxes vary between 309.2 and 326 W/m2 in 13 independent studies, the average value being 314.2 W/m2. The same flux value is 310.9 W/m2 using the spectral analysis tool applied in the alternative warming theory of my model (Ollila, 2017). The conclusion is that the analysis method used in calculating the warming effects of GH gases gives results close to the measured values of the real atmosphere, the difference being on average 1.0%. This difference is well inside the ±10 W/m2 error margin estimated for measured LW fluxes.
2) The LW radiation flux at the TOA in clear-sky conditions according to the spectral analysis calculations (Ollila, 2014) is 265.3 W/m2. According to the NASA CERES (2017) satellite observations from 2000 to 2010, this flux has been 265.8 W/m2. The difference is only 0.19%.
3) The emitted LW flux of the Earth's surface is about 396 W/m2, corresponding to an average global temperature of 15.9 degrees. The total absorption by the atmosphere is 396-239 = 157 W/m2. The observed values and the calculated values are almost the same.
I think that the skeptics cannot win this fight by questioning the observed measurements. What we should do is question the IPCC's simple model. That model is not correct, and that is why the climate sensitivity is only 0.6 degrees.
Dr. Antero Ollila
Phil Jones (Ex head of CRU) admitted that southern hemisphere temperature data was mostly made up.
What data are you referring to? Satellite data covers about 40 years. Earth is 4.5 billion years old. Yes, we have no data in that context.
Dr., I am not able to do the math, but from my reading and guessing I have come to the conclusion that climate sensitivity is indeed below 1 C. I am certain the climate change people know this also; getting them to admit it is not going to happen any time soon. After all, a CO2 doubling effect of less than 1 C makes the CO2 climate change problem moot. They collect far too much money off this to give it up now. Do I believe them dishonest and morally corrupt? Yes.
"Reading and guessing" and not "able to do the maths". Sounds like three excellent reasons for trusting the rest of your post and your conclusion about the climate sensitivity. I would ask what you have read; there are plenty of published values for the climate sensitivity in the literature, ranging from about 0 to 10 degrees. So how did you choose (or guess) which one to believe in?
Germino
"Gut instinct" is subconscious processing that cannot be expressed consciously.
He’s going with his gut instinct. Which is *not* inherently wrong.
Q = U A dT describes the heat flow from the warm inside of your house to the cold surroundings. It also describes the heat flow from the earth's surface to 32 km, where the molecules end. Above 32 km it's S-B radiation. First principles work just fine; no need for an initial state, it's a continuous process.
“Molecules end” at 32km????
Aphan,
I find a consensus that 99% of the atmosphere's mass is below 32 km, so yeah, that's where molecules effectively end. What does density by km look like? Pretty much gone by 32 km. Where does NOAA stop taking temperatures? 32 km. No molecules means nothing to measure above 32 km.
nickreality65,
The stratosphere extends to roughly 50 km, and it also contains the ozone layer, which extends from roughly 17 km to 50 km. Ozone is a molecule. The ozone layer is made of many molecules.
Even the exosphere (three layers above the stratosphere) contains molecules at 700 km-10,000 km:
“This layer is mainly composed of extremely low densities of hydrogen, helium and several heavier molecules including nitrogen, oxygen and carbon dioxide closer to the exobase. The atoms and molecules are so far apart that they can travel hundreds of kilometers without colliding with one another. Thus, the exosphere no longer behaves like a gas, and the particles constantly escape into space. These free-moving particles follow ballistic trajectories and may migrate in and out of the magnetosphere or the solar wind.”
Satellites take measurements of the temperature in the stratosphere too, regardless of what you think NOAA talks about.
https://www.bing.com/images/search?view=detailV2&ccid=D4LUBGKZ&id=D0B1EF276727F08F7E5CBB00CE32728096C7587D&thid=OIP.D4LUBGKZAbWnjRg52imKAAEsEL&q=atmospheric+density&simid=608029300531923941&selectedIndex=82&ajaxhist=0
http://eschooltoday.com/ozone-depletion/where-is-the-ozone-layer.html
The next layer, extending about 15-60 km, is called the stratosphere.
The ozone layer is mainly found in the lower portion of the stratosphere, from approximately 20 to 30 kilometres (12 to 19 mi) above earth, though the thickness varies seasonally and geographically.
So, above about 32 km there are no molecules of significance. 99.9% of mass is below 48 km.
Q = U A dT describes the heat flow from the warm inside of your house to the cold surroundings. It also describes the heat flow from the earth's surface to 32 km, where the molecules end. Above 32 km it's S-B radiation. First principles work just fine; no need for an initial state, it's a continuous process.
nickreality65
Technically, the equation as stated is correct. But "U" changes for each wall of the house, the roof, and the floor/ground. "U" also changes across the planet, and across the seasons and latitudes. But the one-size-fits-all-averages flat-earth model popularized by Trenberth and Hansen and Gore and Kerry and Mann (and the propaganda at large) ignores such trivial problems as actual values in the real world.
U (1/R) represents the combined thermal resistive effects of convection, conduction, latent and radiative heat processes, which together demand a temperature differential to motivate that energy flow, i.e. heat.
Just like the resistance of a complex electronic circuit requires a voltage differential powered by a battery.
Just like the resistance of a complex hydraulic circuit requires a pressure difference powered by a pump.
The thermal resistance of the complex atmospheric circuit requires a temperature difference powered by the sun.
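A minimal sketch of that resistance analogy (all values invented): series thermal resistances combine exactly like series electrical resistances, and the heat flow is the temperature difference divided by the total.

```python
# Heat flow through a hypothetical wall, treated as a series
# thermal-resistance circuit: Q = dT / R_total, Ohm's law with T for V.
resistances = {          # K/W, made-up layer values
    "inside film": 0.12,
    "insulation": 2.50,
    "brick": 0.45,
    "outside film": 0.06,
}

dT = 21.0 - (-5.0)       # indoors minus outdoors, K
R_total = sum(resistances.values())
Q = dT / R_total

print(Q)                 # ~8.3 W through this hypothetical wall
for name, R in resistances.items():
    print(name, Q * R)   # temperature drop across each layer, K
```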
The genesis of RGHE theory is the incorrect notion that the atmosphere warms the surface (and that is NOT the ground). Explaining the mechanism behind this erroneous notion demands some truly contorted physics, thermo and heat transfer, i.e. energy out of nowhere, cold to hot w/o work, perpetual motion.
Is space cold or hot? There are no molecules in space so our common definitions of hot/cold/heat/energy don’t apply.
The temperatures of objects in space, e.g. the Earth, Moon, space station, Mars, Venus, etc. are determined by the radiation flowing past them. In the case of the Earth, the solar irradiance of 1,368 W/m^2 has a Stefan Boltzmann black body equilibrium temperature of 394 K, 121 C, 250 F. That’s hot. Sort of.
But an object’s albedo reflects away some of that energy and reduces that temperature.
The Earth’s albedo reflects away about 30% of the Sun’s 1,368 W/m^2 energy leaving 70% or 958 W/m^2 to “warm” the surface (1.5 m above ground) and at an S-B BB equilibrium temperature of 361 K, 33 C cooler (394-361) than the earth with no atmosphere or albedo.
The Earth’s albedo/atmosphere doesn’t keep the Earth warm, it keeps the Earth cool.
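The arithmetic behind those figures is easy to reproduce (a sketch; it assumes emissivity 1 throughout, which the commenter's own numbers imply):

```python
# Stefan-Boltzmann equilibrium temperatures for the fluxes quoted above.
SIGMA = 5.670e-8   # W/(m^2 K^4)

def bb_temp(flux):
    """Blackbody temperature radiating the given flux, K."""
    return (flux / SIGMA) ** 0.25

print(bb_temp(1368.0))         # ~394 K: full irradiance, no albedo
print(bb_temp(0.7 * 1368.0))   # ~361 K: 30% reflected, as stated above
print(bb_temp(1368.0 / 4))     # ~279 K: the same energy averaged over
                               # the whole sphere instead
# The first two treat the surface as a flat plate facing the Sun; the
# third is the spherical average. The choice of geometry, not the
# physics, is what produces the different "equilibrium" temperatures.
```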
perhaps one should look at it as follows:
The atmosphere has thermal mass and thermal inertia. It means that it keeps the planet surface cool during the day, and warm during the night.
The thermal mass and inertia are very much less than those of the oceans, which also act in a similar manner so as to make a more temperate state.
Perhaps true, if you ignore that other energy source that contains about 1000 times more energy than the atmosphere (the oceans, of course).
We do not know about the equilibrium balance of the oceans. We do not know the residence time of disparate W/L insolation (just that it varies from nanoseconds to centuries), or the accumulation or loss of energy due to long-term solar flux.
We do not have adequate understanding of clouds, or disparate factors affecting the hydrological cycle, or of jet streams and cloud formation, etc…
Nicholas Schroeder September 16, 2017 at 4:02 pm:
You seem to be conflating the planet’s albedo with its atmosphere, Nicholas. Bodies like our Moon do not have an atmosphere, but they still have an albedo.
RP:
And without an atmosphere you’ll be baked to a crisp on the Moon.
Only on the bright side. You’d freeze to death on the dark side of the Moon.
Nicholas
Only 48% of the Sun's energy reaches the "surface" to warm it: 29% is reflected by albedo and 23% is absorbed by the atmosphere as incoming radiation, leaving 48% available for surface heating.
NASA
Data. Doctor, it would seem you use this word without knowing what it means.
Data is defined as: “facts and statistics collected together for reference or analysis.”
It would seem to me that if one could accurately measure the downward flux, all 13 studies would have arrived at the same number for that measurement, and not a range of "measurements" with a 16.8 W/m2 difference between its low end and its high end. One certainly wouldn't be able to claim it as accurate if that one number had an "estimated" error margin of ±10 W/m2 either.
The actual downward flux would be an actual, accurate measurement, not a calculation or an estimate, and that actual, accurate measurement would be a fact, or a data point. The upward flux would be an actual, accurate measurement, not a calculation or an estimate or an average, and that actual, real number would be a fact, or a data point.
You pointed out that in 13 studies, the “flux” ranged between two numbers that were 16.8 w/m2 different, and that even the “average” between them had a margin of error that was +/- 10 w/m2!!!
And then you claim to be able to arrive at a climate sensitivity of only 0.6 degrees????
I question your “measurements”. I doubt they are actual observations. If so….why are they not all the same?
Aphan, I agree. Also consider that even if we knew the average precisely, that is not saying a lot, as the earth is nowhere average and the processes are not linear.
“The emitted LW flux of the Earth surface is about 396 W/m2 corresponding to the average global temperature of 15.9 degrees.” This is displayed on the ubiquitous K-T model.
This is not possible outside of a theoretical model. The molecules at the surface, with their conduction, convection and latent heat processes, are "participating" media, so ideal S-B does not apply. Claiming both molecular processes and ideal S-B radiation effectively doubles the amount of energy in the system, violating conservation of energy.
240 W/m^2 and 255 K work at the ToA because there are no molecules.
1,368 W/m^2 and 394 K work at the solar photosphere at average earth orbit because there are no molecules.
Solar luminosity, spherical area, S-B, 5,778 K work at the surface of the sun because there are no molecules.
One popular RGHE theory power flux balance (“Atmospheric Moisture…. Trenberth et al 2011jcli24 Figure 10) has a spontaneous perpetual loop (333 W/m^2) flowing from cold to hot violating three fundamental thermodynamic laws. (1. Spontaneous energy out of nowhere, 2. perpetual loop w/o work, 3. cold to hot w/o work, 4. doesn’t matter because what’s in the system stays in the system)
“Claiming both molecular processes and ideal S-B radiation effectively doubles the amount of energy in the system”
How? The molecules in our atmosphere do not create/generate "energy" on their own; they are simply "media" that interact with the energy that comes into, and out of, Earth's system.
And you seem to think that radiation cannot heat atoms just as it does molecules. Do you actually think that if a given space does not have a lot of molecules in it, it doesn’t warm or cool or react to radiation?
What data? The original, unadulterated temperature readings, that have been purposely hidden or destroyed? Or the ‘adjusted’ data, that gets continually readjusted to suit their current needs of rectifying the past to prove their future models? Or are you talking about the ‘proprietary’ data that Mann, et al, refuse to let anybody else see?
The entire climate “science” debate is based on literally made up numbers, using models that predict only the outcomes that the people who pay the modelers want to see. It is the opposite of science. Science doesn’t have witch hunts. Science doesn’t call people who disagree rude names and call for them to be murdered. No, this is not science. This is either religious cult activity, or standard government action by self serving ‘elites’ to enhance their power, control, and social status among their fellows.
A more complete description for the climate models is: GI GC GO
Garbage In combined with Garbage Calculations produce Garbage Out
GA and GS could also be added. Garbage Assumptions and Garbage Simplifications
So we get Garbage In, along with Garbage Assumptions and Simplifications, run through Garbage Calculations to produce Garbage Out.
But better to Keep It Simple: Climate Models are Garbage
Are computers capable of performing this type of repeated calculation without iterative errors mounting up?
Are these calculations beyond the inherent design flaws of the architecture of the computer?
Theoretically, that depends on the modeling language, programming techniques, and available memory. Practically, the answer is a simple no.
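A concrete taste of the iterative-error worry, using the logistic map as a stand-in for any repeated time-stepping calculation (the analogy to a climate model is illustrative only):

```python
# Iterate the same chaotic map in single and double precision and watch
# rounding differences grow until the two answers share no digits.
import numpy as np

x32 = np.float32(0.4)
x64 = np.float64(0.4)
for _ in range(60):
    x32 = np.float32(3.9) * x32 * (np.float32(1.0) - x32)
    x64 = 3.9 * x64 * (1.0 - x64)

print(x32, x64)   # after ~60 steps the trajectories are unrelated;
                  # neither is "the" answer, and more steps only widen
                  # the gap -- precision sets how soon, not whether.
```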
These limitations on computer modelling explain why sticking your head out of the window and observing the actions of ants are still important adjuncts to weather forecasting.
“This GHE calculation makes a lot of assumptions, not the least of which is assuming the Earth has an emissivity of 1 and is a blackbody” << junk science, junk all day long.
The earth cannot be treated as a blackbody in any way shape or form, to do so is an admission you have no idea what you are doing
Earth's emissivity is a guess-fest, more overarching useless averages to allow correct calculations of useless fantasy values.
GHG forcing is nothing but a follow on from “all things equalism”, again complete guessery to allow for further calculations of meaningless values.
Why do scientists fill in guesses when they have no data… it is soooooooo annoying
Re GHG and allthingsbeingequalism
We truly have no frikkin idea what the temp would be without an atmosphere, nor what it is with one, relative to forcing or lack thereof.
This issue is not just climate science, and "scientists" as a group seem to accrue more scallywags than the normal demographic distribution would suggest. No one has more trouble admitting they are flat-out wrong than a scientist. #egos
Bring science, I did.
http://writerbeat.com/articles/14306-Greenhouse—We-don-t-need-no-stinkin-greenhouse-Warning-science-ahead-
http://writerbeat.com/articles/15582-To-be-33C-or-not-to-be-33C
http://writerbeat.com/articles/16255-Atmospheric-Layers-and-Thermodynamic-Ping-Pong
They hard-code in that more CO2 causes warming, so that is what the output of their simulations shows. They beg the question, making their work totally useless. The reality is that the radiant greenhouse effect has not been observed in a real greenhouse, on Earth, or anywhere else in the solar system. The radiant greenhouse effect is science fiction, and hence the AGW conjecture, which depends upon the existence of a radiant greenhouse effect, is also science fiction.
Convection will probably never become a significant and well-modelled part of the ‘climate models’ because it is intimately tied to the adiabatic lapse rate (ALR). The ALR is defined as the change in temperature (dT) of the air attendant to a change in altitude (dh) within the atmosphere. Physics and thermodynamics identify this quantity as depending on the acceleration due to gravity (g) and the heat capacity (Cp) of the air at constant pressure. The exact mathematical expression for the ALR is:
dT / dh = – g / Cp
It is important to note here that the Cp of carbon dioxide (0.844 kJ/kg-K) is comparable to those of nitrogen (1.040 kJ/kg-K) and oxygen (0.918 kJ/kg-K), and because of this a small change (0.0300% to 0.0400%) in the concentration of this trace gas has an insignificant effect on the overall Cp of the air.
Inclusion of the ALR in climate models would result in a model with CO2 concentrations having a negligible effect on temperature. In other words, the concept of CO2 as a ‘greenhouse gas’ is completely erroneous. Consider that the total solar irradiance that is NOT absorbed by CO2 …
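As a quick check on the Cp arithmetic, here is a minimal Python sketch that mixes standard textbook heat capacities by mass fraction and computes the resulting dry lapse rate g/Cp at 300 ppm and 400 ppm CO2. The composition figures and the simple mass-weighted mixing rule are my assumptions for an order-of-magnitude check, not the commenter’s:

# Does going from 300 to 400 ppm CO2 noticeably change the Cp of air,
# and hence the dry adiabatic lapse rate dT/dh = -g/Cp?
G = 9.81  # gravitational acceleration, m/s^2

def dry_air_cp(co2_ppmv):
    """Mass-weighted Cp of dry air (kJ/kg-K) at a given CO2 level."""
    x = co2_ppmv * 1e-6
    # (molar mass g/mol, Cp kJ/kg-K, volume fraction) -- textbook values
    gases = [(28.01, 1.040, 0.7808 * (1 - x)),   # N2
             (32.00, 0.918, 0.2095 * (1 - x)),   # O2
             (39.95, 0.520, 0.0093 * (1 - x)),   # Ar
             (44.01, 0.844, x)]                  # CO2
    mass = sum(m * f for m, _, f in gases)
    return sum(m * cp * f for m, cp, f in gases) / mass

for ppm in (300, 400):
    cp = dry_air_cp(ppm)
    # g in m/s^2 divided by Cp in kJ/kg-K gives K per km directly
    print(f"{ppm} ppm: Cp = {cp:.5f} kJ/kg-K, lapse rate = {G / cp:.2f} K/km")

Both cases give Cp ≈ 1.005 kJ/kg-K and a lapse rate of ≈ 9.76 K/km; the extra 100 ppm of CO2 shifts Cp only in the fifth decimal place.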
The myth of CO2 as a Greenhouse Gas was spawned by a misinterpretation of the physics of the atmosphere of Venus, which exhibits an average temperature of about 462° C at the surface, and contains 96.5% carbon dioxide. Someone incorrectly inferred that a ‘runaway greenhouse effect’ was operating here, when in fact the cause of the high temperatures of the atmosphere of Venus has more to do with the elevated pressure, which is 92.10 atmospheres at the surface.
The temperature of the atmosphere of Venus at an altitude (51 km) where the pressure compares to that of the atmosphere at earth’s surface (1 atm) is the valid comparison. We should then be comparing earth’s 15° C (288 K) with Venus’ 70° C (343 K), about what one would expect given Venus’ proximity to the sun.
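That “about what one would expect” remark can be tested with the simplest radiative scaling: equilibrium temperature varies as the inverse square root of distance from the Sun, since absorbed flux falls off as 1/d^2 and emission rises as T^4. A minimal sketch, assuming equal albedo and emissivity at the two compared levels (an assumption, not a measurement):

# Equilibrium temperature scales as d^(-1/2) with solar distance.
d_venus = 0.723   # Venus's mean distance from the Sun, AU (Earth = 1)
t_earth = 288.0   # Earth's mean surface temperature, K

t_venus = t_earth * d_venus ** -0.5
print(f"expected Venus temperature at the 1-atm level: {t_venus:.0f} K")
# ~339 K (66 C), in the neighbourhood of the ~343 K (70 C) observed
# near the 51 km, 1-atm level of the Venusian atmosphere.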
Tadchem, you completely ignore the spectroscopic properties of CO2 and other greenhouse gases, properties that are absent in oxygen, nitrogen and argon. Electromagnetic radiation interacts with electric dipoles, which are absent in 99% (it’s a dry sample) of the atmospheric gases. So oxygen, nitrogen and argon are effectively invisible to the incoming and outgoing electromagnetic radiation. The trace of CO2 does interact. Then again, you probably know all this and are just being awkward.
richard verney, September 15, 2017 at 12:41 am, responding to a comment by David Wells, September 14, 2017 at 11:24 am, on the “House defunds Obama-era methane rules” thread:
Not only is there overlap in these absorption bands among the various radiative gases; one important point often overlooked is that the LWIR radiant spectrum emitted at 288K (being the global mean temperature) contains very little IR radiant energy at wavelengths shorter than 5.0 microns.
It follows from this important fact that the 3.3 micron absorption band of methane, and the 4.3 micron (and shorter wavelength, e.g. the 2.5 micron) CO2 absorption bands, are all but completely inactive, such that these cannot add anything of significance to the DWLWIR.
One needs to get from the laboratory to the real world. The effectiveness of methane and CO2 is overstated because people frequently overlook the IR wavelengths at which the Earth is emitting. It is not simply that water vapour is the most abundant radiative gas; even in places where there is little overlap in the absorption bands, the planet is not emitting much IR radiative energy at those wavelengths.
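The “very little energy below 5.0 microns” point is easy to quantify by integrating Planck’s law for a 288 K emitter. A minimal sketch using numpy with standard physical constants; it puts roughly 1% of the total emission below 5 microns, which supports the comment’s claim:

import numpy as np

# Fraction of 288 K blackbody emission at wavelengths shorter than 5 microns.
H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann (SI)
SIGMA = 5.670e-8                          # Stefan-Boltzmann, W/(m^2 K^4)

def planck_emissive_power(lam, t):
    """Hemispherical spectral emissive power, W/m^2 per metre of wavelength."""
    return (2 * np.pi * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * t))

t = 288.0
lam = np.linspace(0.5e-6, 5.0e-6, 20000)  # 0.5 to 5 microns, in metres
short_wave = np.trapz(planck_emissive_power(lam, t), lam)
print(f"fraction below 5 microns: {short_wave / (SIGMA * t**4):.2%}")  # ~1%

In other words, about 99% of what a 288 K surface emits lies at wavelengths longer than 5 microns, which is why the 2.5 and 4.3 micron CO2 bands and the 3.3 micron methane band see so little of it.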
tadchem: The claim that convection will “probably never become a significant … part of climate models” is just flat out wrong. It is well known and fairly obvious that convection (and latent heat) are major sources of heating for the upper atmosphere. Given that hot air rises and the surface temperature is higher than the temperature at higher altitudes, the atmosphere is clearly unstable, with hot air constantly rising, cooling, and then falling again. All of this is in the climate models.
The actual physics of small-grid flux (thunderstorms, for instance) is not in the models, from what I have heard; only general assumptions based on physics. For instance, what percentage of DLWIR energy in each grid goes into accelerating the hydrological cycle vs heating the oceans vs heating the lower atmosphere? Are the existing grids of a size adequate to capture these nonlinear residence-time effects of energy flux?
Amen and amen. The composition of the atmosphere has zero impact on average global temperature. The mass of the atmosphere and the energy received at the top of the atmosphere are all that matter.
Mark Twain observed, “The trouble with most of us is that we know too much that ain’t so.”
Adding to the “Δ33C without an atmosphere” claim (see other article) that completely ain’t so is the example of Venus.
Venus, we are told, has an atmosphere that is almost pure carbon dioxide and an extremely high surface temperature, 750 K, and this is allegedly due to the radiative greenhouse effect, RGHE. But the only apparent defense is, “Well, WHAT else could it BE?!”
Well, what follows is the else it could be. (Q = U * A * ΔT)
Venus is about 72% of Earth’s distance from the sun, so its average solar constant/irradiance is nearly twice as intense as Earth’s: 2,615 W/m^2 as opposed to 1,368 W/m^2.
But the albedo of Venus is 0.77 compared to 0.31 for the Earth – or – Venus 601.5 W/m^2 net ASR (absorbed solar radiation) compared to Earth 943.9 W/m^2 net ASR.
The Venusian atmosphere is 250 km thick as opposed to Earth’s at 100 km. Picture how hot you would get stacking 1.5 more blankets on your bed. RGHE’s got jack to do with it, it’s all Q = U * A * ΔT.
The thermal conductivity of carbon dioxide is about 60% that of air, 0.0146 W/m-K as opposed to 0.0240 W/m-K, so it takes about 1.6 times the ΔT per metre to move the same kJ from surface to ToA.
Put the lower post-albedo irradiance (lower Q = lower ΔT), the thickness (greater thickness increases ΔT) and the conductivity (lower conductivity raises ΔT) all together: 601.5/943.9 * 250/100 * 0.0240/0.0146 ≈ 2.62.
So, Q = U * A * ΔT suggests that the Venusian ΔT would be about 2.62 times greater than that of Earth. If the surface of the Earth is 15C/288K and the ToA is effectively 0K, then Earth ΔT = 288K. Venus ΔT would be 2.62 * 288 K ≈ 754 K surface temperature, close to the observed 750 K.
All explained, no need for any S-B BB RGHE hocus pocus.
Simplest explanation for the observation.
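For readers who want to reproduce that chain of ratios, here is a minimal Python sketch. The inputs are the commenter’s own figures, and the one-dimensional conduction analogy (Q = U * A * ΔT applied to whole atmospheres) is the commenter’s model, not standard planetary heat-transfer practice:

# The comment's ratio argument: Venus DT / Earth DT scales directly with
# net absorbed solar radiation and atmosphere thickness, and inversely
# with thermal conductivity (Q = U * A * DT solved for DT).
asr_earth, asr_venus = 943.9, 601.5  # net ASR, W/m^2 (comment's figures)
h_earth, h_venus = 100.0, 250.0      # atmosphere thickness, km
k_air, k_co2 = 0.0240, 0.0146        # thermal conductivity, W/(m K)

ratio = (asr_venus / asr_earth) * (h_venus / h_earth) * (k_air / k_co2)
print(f"ratio = {ratio:.2f}, implied Venus surface T = {ratio * 288.0:.0f} K")
# ratio ~2.62, implying ~754 K against the observed ~750 K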
Actually the radiant greenhouse effect has not been observed anywhere in the solar system. The radiant greenhouse effect is science fiction. Hence the AGW conjecture is science fiction.
Have you run this calculation for Mars?
FTOP
The comparison needs atmospheres. At 0.02 kg/m^3, it’s a stretch to say that Mars even has an atmosphere.
Nicholas, they use parachutes to land rovers on Mars: https://mars.nasa.gov/mer/mission/spacecraft_edl_parachute.html
…
Did you know that without an atmosphere a parachute wouldn’t work?
Ron, the atmosphere warms above the tropopause because of the ozone absorption of UV.
“it is remarkable that the IPCC managed to fool the world and all the scientist affiliated with or supporting that agency and its work.”
The “world” is easy – they’re just the ignorant sheep.
The “scientists” part is twofold: it’s the grants, and the fact that climatey “scientists” never made the grade to be trainable in real hard-science areas. It’s all quite a worry all round, though. Many tend to think that, in a modern western world, people are far too well informed and educated to be vulnerable to large-scale propaganda any more. Manmade global warming shows just how wrong that assumption is.
GIGO = Garbage In, Gospel Out.
In 1972 the Australian Computer Society ran a “lapel badge” competition for its Sydney conference. This was the one that took out the prize.
For those who appreciated Johnny Mathis, the runner up was “On a Clear Disk, You Can Seek Forever.”