Reposted from Dr. Roy Spencer’s blog
September 13th, 2019 by Roy W. Spencer, Ph.D.
Have you ever wondered, “How can we predict global average temperature change when we don’t even know what the global average temperature is?”
Or maybe, “How can climate models produce any meaningful forecasts when they have such large errors in their component energy fluxes?” (This is the issue I’ve been debating with Dr. Pat Frank after publication of his Propagation of Error and the Reliability of Global Air Temperature Projections. )
I like using simple analogies to demonstrate basic concepts.
Pots of Water on the Stove
A pot of water warming on a gas stove is useful for demonstrating basic concepts of energy gain and energy loss, which together determine temperature of the water in the pot.
If we view the pot of water as a simple analogy to the climate system, with a stove flame (solar input) heating the pots, we can see that two identical pots can have the same temperature but different rates of energy gain and loss if (for example) we place a lid on one of the pots.

A lid reduces the warming water’s ability to cool, so the water temperature goes up (for the same rate of energy input) compared to if no lid was present. As a result, a lower flame is necessary to maintain the same water temperature as the pot without a lid. The lid is analogous to Earth’s greenhouse effect, which reduces the ability of the Earth’s surface to cool to outer space.
The two pots in the above cartoon are analogous to two climate models having different energy fluxes with known (and unknown) errors in them. The models can be adjusted so the various energy fluxes balance in the long term (over centuries) but still maintain a constant global average surface air temperature somewhere close to that observed. (The model behavior is also compared to many observed ocean and atmospheric variables. Surface air temperature is only one.)
Next, imagine that we had twenty pots with various amounts of coverage of the pots by the lids: from no coverage to complete coverage. This would be analogous to 20 climate models having various amounts of greenhouse effect (which depends mostly on high clouds [Frank’s longwave cloud forcing in his paper] and water vapor distributions). We can adjust the flame intensity until all pots read 150 deg. F. This is analogous to adjusting (say) low cloud amounts in the climate models, since low clouds have a strong cooling effect on the climate system by limiting solar heating of the surface.
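To make that tuning step concrete, here is a toy steady-state balance in Python; every coefficient in it (k_loss, lid_effect, t_room_f) is a made-up illustrative value, not a number from any pot experiment or climate model.

```python
# Illustrative sketch only: for each of twenty lid coverages, find the flame
# power that balances heat loss at a 150 F water temperature.

def flame_needed(target_temp_f, lid_coverage, k_loss=5.0, lid_effect=0.6, t_room_f=70.0):
    """Flame power (arbitrary units) that balances heat loss at the target
    temperature, for a fractional lid coverage (0 = no lid, 1 = full lid)."""
    # Heat loss assumed proportional to (T - T_room), reduced by the lid.
    effective_loss_coeff = k_loss * (1.0 - lid_effect * lid_coverage)
    return effective_loss_coeff * (target_temp_f - t_room_f)

# Twenty pots, lid coverage from 0 to 100 percent, all tuned to read 150 F.
for i in range(20):
    coverage = i / 19.0
    print(f"lid coverage {coverage:4.2f} -> flame setting {flame_needed(150.0, coverage):6.1f}")
```

Different flame settings (analogous to different tuned low-cloud amounts) all yield the same 150 deg. F reading.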
Numerically Modeling the Pot of Water on the Stove
Now, let's say we build a time-dependent computer model of the stove-pot-lid system. It has equations for the energy input from the flame and for the loss of energy from conduction, convection, radiation, and evaporation.
Clearly, we cannot model each component of the energy fluxes exactly, because (1) we can’t even measure them exactly, and (2) even if we could measure them exactly, we cannot exactly model the relevant physical processes. Modeling of real-world systems always involves approximations. We don’t know exactly how much energy is being transferred from the flame to the pot. We don’t know exactly how fast the pot is losing energy to its surroundings from conduction, radiation, and evaporation of water.
But we do know that if we can get a constant water temperature, that those rates of energy gain and energy loss are equal, even though we don’t know their values.
Thus, we can either make ad-hoc bias adjustments to the various energy fluxes to get as close to the desired water temperature as we want (this is what climate models used to do many years ago), or we can make more physically based adjustments, since every computation of a physical process that affects energy transfer has uncertainties, for example in the coefficient of turbulent heat loss from the pot to the air. This is what modern climate models do today for their adjustments.
If we then take the resulting “pot model” (ha-ha) that produces a water temperature of 150 deg. F as it is integrated over time, with all of its uncertain physical approximations or ad-hoc energy flux corrections, and run it with a little more coverage of the pot by the lid, we know the modeled water temperature will increase. That part of the physics is still in the model.
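A minimal time-dependent sketch of such a "pot model", assuming illustrative numbers throughout: the loss coefficient is given a fixed (in practice unknown) bias, the flame is tuned so the biased model still settles at 150 deg. F, and increasing the lid coverage then still warms the modeled water.

```python
# Toy "pot model": integrate dT/dt = (flame - loss) / heat_capacity.
# All numbers are illustrative placeholders.

def run_pot(flame, lid_coverage, k_loss, hours=200.0, dt=0.01, t_room=70.0, heat_capacity=50.0):
    t = t_room
    for _ in range(int(hours / dt)):
        loss = k_loss * (1.0 - 0.6 * lid_coverage) * (t - t_room)
        t += dt * (flame - loss) / heat_capacity
    return t

k_biased = 5.5   # the "true" coefficient might be 5.0; the model's value carries a fixed bias
flame_tuned = k_biased * (1.0 - 0.6 * 0.5) * (150.0 - 70.0)   # tuned so the biased model settles at 150 F with half a lid

print(run_pot(flame_tuned, 0.5, k_biased))   # ~150 F: the adjusted model is in balance
print(run_pot(flame_tuned, 0.6, k_biased))   # ~157 F: more lid coverage still warms the water
```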

This is why climate models can have uncertain energy fluxes, with substantial known (or even unknown) errors in their energy flux components, and still be run with increasing CO2 to produce warming, even though that CO2 effect might be small compared to the errors. The errors have been adjusted so they sum to zero in the long-term average.
This directly contradicts the succinctly-stated main conclusion of Frank’s paper:
“LWCF [longwave cloud forcing] calibration error is +/- 144 x larger than the annual average increase in GHG forcing. This fact alone makes any possible global effect of anthropogenic CO2 emissions invisible to present climate models.”
I’m not saying this is ideal, or even a defense of climate model projections. Climate models should ideally produce results entirely based upon physical first principles. For the same forcing scenario (e.g. a doubling of atmospheric CO2) twenty different models should all produce about the same amount of future surface warming. They don’t.
Instead, after 30 years and billions of dollars of research they still produce from 1.5 to 4.5 deg. C of warming in response to doubling of atmospheric CO2.
The Big Question
The big question is, “How much will the climate system warm in response to increasing CO2?” The answer depends not so much upon uncertainties in the component energy fluxes in the climate system, as Frank claims, but upon how those energy fluxes change as the temperature changes.
And that’s what determines “climate sensitivity”.
This is why people like myself and Lindzen emphasize so-called “feedbacks” (which determine climate sensitivity) as the main source of uncertainty in global warming projections.
Does CO2, like water vapor, reduce the vertical temperature gradient?


Roy,
Error propagation is a red herring when the models don't even have the complete or correct physics. One example is cloud seeding: models are not able to predict cloud cover. That's like saying we have a model for the stovetop pot without (crucially) knowing the flame size, in your analogy.

But if the models incorporate the "greenhouse effect" as conventionally defined, we have an even bigger problem. The latter is predicated on an application of the Stefan-Boltzmann equation which gives unphysical results in familiar situations. More specifically, "greenhouse effect" calculations involve adding radiant heat fluxes (IR intensities) together and then solving for temperature in J = s T^4, where s is a constant and J is the light intensity. For example, suppose we have two candles, each at temperature T. If we apply the SB equation as is conventionally done in the climate science milieu, i.e., adding light intensities and then solving for T, we will deduce a temperature for the candles together of 2^(1/4) T. Try it yourself: take the above equation and plug in 2J (one J for each candle), then solve for T (yes, two candles means twice the intensity if you capture all the energy). But, of course, adding two same-temperature sources does not give a higher-temperature result in the real world.

BTW, the term "radiative forcing" is misleading and should not be used. Radiation does not "force" temperature up unless the receiving medium is cooler than the radiating one. Insisting on using "forcing" subconsciously leads folks to assume that radiation in all circumstances results in heating. So that's one issue in climatology: inaccurate language leads to unphysical formulations, in this case of SB.
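As a quick numerical check of the candle arithmetic above (reproducing the calculation as the comment states it, without endorsing the claim about how climate models apply Stefan-Boltzmann), here is a short sketch; the candle temperature is an arbitrary illustrative value.

```python
# J = sigma * T**4; doubling J and inverting gives 2**(1/4) * T, as the comment states.

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

T = 1300.0                       # rough candle-flame temperature in K (illustrative only)
J = SIGMA * T**4                 # intensity from one candle
T_from_2J = (2.0 * J / SIGMA) ** 0.25

print(T_from_2J / T)             # 1.1892... = 2**0.25, independent of the chosen T
```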
The only way the atmosphere could possibly result in higher temperatures at the surface, all else being equal, is to change its composition such that it is a better insulator of radiant energy flux. Even if we assume that CO2 is a better IR insulator than whole air (though I have not seen experimental support for that), then in principle it could cause warming. If CO2 is indeed more IR-insulating than whole air, then the question is scale, especially relative to other phenomena. There is only one CO2 molecule per 2500 air molecules in the atmosphere today. If we took all of the CO2 and raised its temperature by dT degrees, its impact on the surrounding air would be to raise that air's temperature, at most, by dT / 2500 (because air has greater heat capacity than CO2, due to the presence of water vapor). That's the approximate scale of theoretical impact: 0.0004 dT on its surroundings. So, to effect a 0.1 C change in its surroundings, CO2 would have to be separately raised by 250 C somehow from the surface of the earth. If IPCC models are to be believed, that would mean CO2 would have to be heated up by 2,500 C to justify an increase of at least 1 C in the surrounding air. Shall I continue this reductio ad absurdum?
That is what I expected: someone thinks that climate modelers adjust a correction term on a yearly basis to get a desired temperature output. It is not so. That kind of model has no meaning, and GCMs do not behave like that.
It seems to be too difficult to accept this basic, simple property of GCMs: cloud forcing is not variable in those models but is a constant effect. The IPCC puts it this way: cloud feedback has not been applied. It means that cloud forcing does not change according to temperature variations or according to GH gas concentrations.
Antero Ollila
Even if cloud forcing is treated as a constant instead of a variable, it is a necessary parameter and has associated error for which only the upper and lower bounds are estimated. That error has to be taken into account for assigning uncertainty to the output of the chain of calculations. There are two ways in which the error can be handled. 1) The extreme upper-bound value is added to the nominal value, and the extreme lower-bound is subtracted; the calculations are then performed for both values. 2) The calculations are only performed for the nominal value of the parameter, and the propagation of error is performed separately. The latter approach is the easiest and quickest. However, all too frequently, the associated error is overlooked.
In summary, when determining a calculated output, using nominal values will give an estimate of the mean value from the chain of arithmetic operations. However, as Frank is demonstrating, the uncertainty can grow so rapidly that the mean value has almost no meaning because the possible range has become so large.
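A hedged sketch of the two error-handling approaches described above, for a generic chain of calculations; the function f and every number in it are hypothetical placeholders, not quantities from any climate model.

```python
# Two ways of handling an uncertain parameter x in a chain of calculations y = f(x).

def f(x, years=100):
    # hypothetical chain: a fixed yearly increment scaled by the uncertain parameter x
    return sum(0.035 * x for _ in range(years))

x_nominal, x_error = 1.0, 0.1

# 1) Run the whole calculation at the upper and lower bounds of the parameter.
y_upper = f(x_nominal + x_error)
y_lower = f(x_nominal - x_error)
print("bound method:", y_lower, "to", y_upper)

# 2) Run only the nominal case and propagate the error separately; one common
#    way is via the output's sensitivity to the parameter: u_y = |dy/dx| * u_x.
y_nominal = f(x_nominal)
dy_dx = (f(x_nominal + 1e-6) - f(x_nominal - 1e-6)) / 2e-6
print("nominal case:", y_nominal, "+/-", abs(dy_dx) * x_error)
```

For this linear toy chain the two approaches agree; the debate in the thread is about what happens when the propagation step treats per-year errors as independent.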
I will try one more example. The TCS of the IPCC is about 1.8 C, and there is only one effect besides CO2: positive water feedback. The IPCC says that water feedback about doubles the original CO2 warming effect. Do they state that there are any other effects, like albedo changes, cloud feedback changes or anything else? No, they do not.
This does not mean that I regard the IPCC's model as correct and as fully explaining the temperature increase since 1750. I do not think so, and my own reproduction of the CO2 radiative forcing study shows that the real CO2 forcing is about 41% of the 3.7 W/m2 per 560 ppm.
I do not think that Dr. Spencer would try once again.
I know this post probably will be lost ….. BUT ….
The more appropriate experiment would be to record the temperatures of the pot with lids of various porosity and create a record. Then get a computer model that uses an algorithm that inaccurately predicts the size of the lid going back in time ….. and then claim that the model is accurately predicting the temp of future lids.
That is what Frank's paper did.
You'll note it doesn't directly calculate the forcing from a particular lid; that is settled physics. The issue is predicting which lid will apply.
In all this discussion the term error has been used with two different meanings. One meaning is the difference between a value and the true value. This is a bias. Biases can add and subtract and thus can cancel in part. The other meaning is a statistical error, a measure of the certainty of a value. Statistical errors always add and cannot cancel out. We need to be much more careful in our use of these terms in this discussion.
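A small illustration of the distinction drawn above, with made-up numbers.

```python
import math

# Biases (signed systematic offsets) add algebraically and can cancel:
biases = [+2.0, -1.5, -0.5]           # W/m^2, hypothetical component biases
print("net bias:", sum(biases))        # 0.0 -- cancellation is possible

# Statistical (random, independent) uncertainties add in quadrature and never cancel:
uncertainties = [2.0, 1.5, 0.5]        # W/m^2, hypothetical 1-sigma values
print("combined uncertainty:", math.sqrt(sum(u**2 for u in uncertainties)))   # ~2.55
```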
I have to confess that the more I learn about climate models the less confidence I have in their usefulness. As Steven Mosher points out, it is the feedbacks that are important. Unfortunately the major feedbacks can be positive or negative depending on the circumstances and the really important ones like cloud cover cannot be modelled anyway for a number of reasons. So what use is the CMIP6 series? We know that it has the same predictive skills as the previous models (none).
The take away message is that policy is being based on model output but politicians have not got a clue what that means. They hear the tripe put out by ER and an adolescent who should be in school. Words like crisis, emergency and tipping point are being cynically used to panic the decision makers, the public and the children. Tales of extreme weather dominate our daily news but the data shows otherwise. But how many scientists point this out? Instead we see government scientists cherry picking temperature measurements, comparisons and dates in order to claim doubtful records at every opportunity, not to mention re-writing the data, editing the historic record and changing the gradients of the temperature charts.
Climate science has become a cesspit of misinformation. Where are the true scientists?
We see some here, and we should salute their honesty and integrity. But the majority keep their heads down while others ruthlessly exploit the scam that climate change has become.
I fully expect this comment to be removed but I hope it is allowed to remain. The current debate is about whether or not our ignorance of the model inputs renders the output meaningful or meaningless. It has more importance than simply academic interest. For that reason, I hope the current impasse can eventually be resolved.
Wow, I think this comment is very interesting (I haven't read all of the thread, so if this has been discussed I apologize in advance):
“This is why climate models can have uncertain energy fluxes, with substantial known (or even unknown) errors in their energy flux components, and still be run with increasing CO2 to produce warming, even though that CO2 effect might be small compared to the errors. The errors have been adjusted so they sum to zero in the long-term average.”
I think this is where Pat and Roy are at odds. Yes, the climate models have been "adjusted" to produce output that makes sense. By definition, when you have to adjust the errors for output to make sense you get the physics wrong. As I said in the last post by Roy, the way to think about Pat's paper is that it shows how much adjusting needs to be done in order for the climate models to work.
I really like analogies, but I don't think Roy's is a very good one. I would start with a pot without a lid and a pot with a very porous screen over it. I would next ask the question: what happens to the temperature if the screen mesh is made slightly smaller? My guess is that the convection forces increase slightly through the slightly smaller openings. I believe this is what Steven Wilde would argue is happening. A slight change in convective forces offsets any effect of higher CO2 levels.
Nelson
” By definition, when you have to adjust the errors for output to make sense you get the physics wrong.”
This is the kind of mistake we are all making (not me, of course) when we use the term error to discuss fixed biases and then in the next sentence use the term error to mean statistical error, as Pat is doing. In this case Nelson means bias when he uses the term error. He is talking about accuracy, not statistical error. In all these computations the statistical errors of ALL the parameters sum in quadrature (appropriately) through all the iterations. It is obvious to me that the statistical error will quickly reach unreasonable numbers and the calculated answer has no validity. In the 60's, when computers were still small (our huge computer had 4K of magnetic core memory), I dabbled in programs for combining statistics. That was eye-opening. 😉
John Andrews
When I got my first Atari computer I attempted to model the terminal velocity of a falling object, using the approach of System Dynamics modeling. Things were going great at first, with reasonable results. Then, as the object approached what I thought was a reasonable terminal velocity, the results started to oscillate wildly. The problem was round-off error and division by numbers approaching zero.
I’m of the opinion that people all too frequently plug numbers into equations without giving thought to whether the inputs are reasonable or what the associated uncertainties are. Hence the old GIGO.
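The comment attributes the oscillation to round-off and division by near-zero values; a closely related failure mode, easy to reproduce, is an explicit-Euler time step that is too large for the drag term. The sketch below uses illustrative parameters, not anything from the original Atari program.

```python
# Explicit Euler integration of dv/dt = g - c*v**2 (falling object with quadratic drag).
# With a small step the velocity settles at the terminal velocity sqrt(g/c);
# with too large a step the solution oscillates wildly instead of settling.

def fall(dt, steps, g=9.81, c=0.02):
    v, history = 0.0, []
    for _ in range(steps):
        v += dt * (g - c * v * v)
        history.append(v)
    return history

print(fall(dt=0.1, steps=1000)[-1])    # ~22.1 m/s, the terminal velocity sqrt(g/c)
print(fall(dt=3.0, steps=12))          # values jump around instead of settling
```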
“Modeling of real-world systems always involves approximations. We don’t know exactly how much energy is being transferred from the flame to the pot. We don’t know exactly how fast the pot is losing energy to its surroundings from conduction, radiation, and evaporation of water.
But we do know that if we can get a constant water temperature, that those rates of energy gain and energy loss are equal, even though we don’t know their values.”
That’s a gross misrepresentation of the real world climate system, where ENSO and the AMO act as negative feedbacks to net changes in climate forcing, controlling low cloud cover and lower troposphere water vapour also as negative feedbacks. The AMO is always warm during each centennial solar minimum. Playing the internal variability zero sum game completely obscures the negative feedbacks.
Oceans protect the Earth against drastic temperature changes.
Does CO2 have a greater impact than ENSO on the water vapor content of the atmosphere?

Oceans keep the globe warmer than it would otherwise be. It’s not the atmospheric greenhouse effect which stops their surfaces from cooling every night.
https://www.linkedin.com/pulse/heat-capacity-neglected-from-climate-models-ulric-lyons/
It seems to me a large part of the controversy is caused by people trying to imagine the physical meaning of an uncertainty measure. People do this because the uncertainty bounds Dr. Frank arrives at exceed the physical bounds people think are reasonable for the climate system.
The problem of uncertainty values getting larger than what is physically possible is always there when using a probability distribution that runs from -infinity to +infinity.
This "problem" sidetracking the discussion could be avoided if Dr. Frank used his emulator and its associated uncertainty propagation to answer the question: "How long can a climate simulation run until the uncertainty bounds start approaching the physical bounds?" From his paper it is clear the outcome would be much shorter than the 80 years left in this century, illustrating that the climate models are not fit for purpose.
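A minimal sketch of the suggested calculation, assuming a per-year uncertainty that accumulates in quadrature; both numbers below are placeholders, not figures from Dr. Frank's paper.

```python
# If a per-year uncertainty u1 accumulates in quadrature, the total after N years
# is u1 * sqrt(N). Setting that equal to an assumed physical bound B and solving
# gives N = (B / u1) ** 2.

def years_to_reach_bound(u1_per_year_K, bound_K):
    return (bound_K / u1_per_year_K) ** 2

print(years_to_reach_bound(u1_per_year_K=1.8, bound_K=10.0))   # ~31 years (placeholder inputs)
```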
kletsmajoor,
“It seems to me a large part of the controversy is caused by people trying to imagine the physical meaning of an uncertainty measure. ”
It may be a problem for some people, but that is not the main argument against Dr Frank’s approach. The problem is that (a) he is treating the LCF as though it is actually a forcing (It isn’t) and (b) he is arguing that any and all uncertainty in LCF will accumulate year-on-year in quadrature into temperature uncertainty (It doesn’t).
kribaez,
In his paper Dr. Frank says at page 2:
"To be kept in view throughout what follows is that the physics of climate is neither surveyed nor addressed; nor is the terrestrial climate itself in any way modeled. Rather, the focus is strictly on the behavior and reliability of climate models alone, and on physical error analysis."
To analyse uncertainty propagation it is not necessary to understand the physical system described by a set of equations. Only the mathematical properties of those equations matter.
In my view the question Dr. Frank has to answer is whether his emulator is good enough to represent the uncertainty propagation properties of CMIP5 climate models. I think he does that in his paper. As an engineer I'm no stranger to error analysis, and as far as I can see his math is correct.
“To analyse uncertainty propagation it is not necessary to understand the physical system described by a set of equations. Only the mathematical properties of those equations matter.”
I agree.
It is the mathematical properties of the system which force the net flux to zero over the spin-up period, and ensure that any change in LCF has a limited effect on temperature. Sampling from an LCF distribution, say U(-4, +4) returns a tightly constrained GSAT with an uncertainty distribution of ca U(-3, +3) for an average GCM, not a distribution with a range of 100K.
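A hedged sketch of that sampling argument: if an equilibrium flux perturbation dF maps to a temperature change dT = dF / lambda, an LCF offset drawn from U(-4, +4) W/m^2 gives a bounded temperature spread. The feedback parameter lambda below is an assumed typical value, not one taken from any specific GCM.

```python
import random

LAMBDA = 1.3   # W m^-2 K^-1, assumed net climate feedback parameter (illustrative)

# Draw LCF offsets from U(-4, +4) W/m^2 and map each to an equilibrium dT.
samples = [random.uniform(-4.0, 4.0) / LAMBDA for _ in range(100_000)]
print("min dT:", min(samples), "max dT:", max(samples))   # roughly -3 K to +3 K, not +/- 50 K
```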
I have decided that Dr Frank has perhaps made a serious statistical error by using SI10.2. It would explain why he believes that he can accumulate all uncertainty in LCF despite high autocorrelation, and why he is resistant to the idea of offset errors in net flux even though the system is mathematically obliged to reduce net flux to zero during the spin-up. In both instances, the application of SI10.2 would lead him to wrong conclusions.
"why he is resistant to the idea of offset errors in net flux even though the system is mathematically obliged to reduce net flux to zero during the spin-up"
You just identified one of the biggest uncertainties associated with the models. How do we know the net flux should reduce to zero during the spin-up? In fact, the Earth has been warming since the end of the last ice age, leading to the conclusion that the net flux is *not* zero and hasn’t been for thousands of years. When you force the model to output a stable system when we know the system is not stable then there is an in-built uncertainty from the very beginning!
Tim Gorman,
I don’t disagree. However, the aim of the spin-up period is not so much to “match history”, but to test the long-term characteristics of the AOGCM. If in reality, there is a post-1700 long-term upward drift or an unmatched oscillatory behaviour in actual global temperature then the subsequent projection of the GCM might still represent a valid estimate of the incremental temperature change caused by the input forcing series. It just becomes improper to compare that with observed temperature change without taking into account the underlying natural variation component.
But Dr Frank is not challenging the view that the radiative flux balance controls temperature gain. His uncertainty calculation is sourced from a component of that flux balance.
In practice, it is not necessary to assume that the net flux balance is exactly zero. It is sufficient to show that it is well-bounded in order to highlight the problem with Dr Frank’s approach. This can be done using either physics or a statistical argument. But Dr Frank won’t accept the statistical argument until (a) he recognises that an error in a component of the flux balance is not the same as an error in the net flux imbalance, and
(b) he accepts that his Equation S10.2 is just wrong when we are dealing with correlated data.
kribaez: “However, the aim of the spin-up period is not so much to “match history”, but to test the long-term characteristics of the AOGCM. If in reality, there is a post-1700 long-term upward drift or an unmatched oscillatory behaviour in actual global temperature then the subsequent projection of the GCM might still represent a valid estimate of the incremental temperature change caused by the input forcing series.”
I note carefully your inclusion of the word "might". That alone indicates that there is uncertainty in what the AOGCM outputs. Thus it follows that it is unknown whether the model gives a valid estimate or not. If that uncertainty is greater than the incremental temperature change the model outputs, then the model is useless for projecting anything. And *that* is what Frank's paper shows.
"In practice, it is not necessary to assume that the net flux balance is exactly zero. It is sufficient to show that it is well-bounded in order to highlight the problem with Dr Frank's approach."
Being well-bounded is not a sufficient criterion. If the limits of the boundaries are greater than what is being output then, again, the output is useless for projecting anything. Again, that is what Frank is showing in his paper.
You also raised the question of whether Dr Frank’s emulator is “good enough”.
One of its disadvantages is that it masks the relationship between net flux and temperature. Indeed, it converts that relationship into a relationship between forcing and temperature, which is a poor man's approximation. There are much more accurate ways of emulating an AOGCM, notably by convolution (superposition) of the AOGCM's step-forcing data. This has the advantage of a more accurate match to temperature but also offers a solution for net flux.
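For what it is worth, here is a minimal sketch of that convolution (superposition) idea, assuming a made-up single-exponential step response rather than real step-forcing data from a model lab.

```python
import math

def step_response(t_years, eq_sensitivity=0.8, tau=4.0):
    # assumed single-exponential response (K per W/m^2) to a unit step in forcing
    return eq_sensitivity * (1.0 - math.exp(-t_years / tau))

def emulate(forcing_series):
    # superposition: each year's forcing increment launches its own scaled step response
    n = len(forcing_series)
    increments = [forcing_series[0]] + [forcing_series[i] - forcing_series[i - 1] for i in range(1, n)]
    return [sum(increments[k] * step_response(t - k) for k in range(t + 1)) for t in range(n)]

# example: a linear forcing ramp reaching 3.7 W/m^2 after 70 years
forcing = [3.7 * (i + 1) / 70.0 for i in range(70)]
print(round(emulate(forcing)[-1], 2), "K at year 70")
```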
Having said that, if he could convince me that he really had found a source of massive accumulating uncertainty in net flux, then I would give Dr Frank’s emulator a “let”, since the resulting uncertainty is large enough on its own to declare the GCMs unreliable. However, I am not convinced he has found such a source.
Additionally, the errors and uncertainties which he describes as errors in the internal energy state of the system, and which are real if there is an error in LCF, will tend to change the sensitivity of the system, in simple terms the gradient of his relationship between temperature and cumulative forcing. However, with this formulation of the emulator, Dr Frank has no degree of freedom left, or indeed calculation basis, to assess the uncertainty in temperature introduced by this type of error in sensitivity.
Somewhere up above, I tried to explain why it is totally improper to treat a calibration error in LCF as though it translated into an uncertainty in forcing.
Perhaps, there is a very simple way to demonstrate why what Dr Frank is doing is silly.
Every GCM has at least 500 years of spin-up before any runs are carried out. This allows the GCM to reach a net flux balance. Except that this is, in fact, not perfectly true. There is at least one forcing series during the spin-up which assures a small variation around a zero net flux balance, and that is the TSI variation associated with the 11 year solar cycle. There are also small autocorrelated stochastic fluctuations in net flux and temperature. So, it is perfectly legitimate to say that the 500 year spin-up for initialisation is identical in character to the calculations which appear in the runs. If Dr Frank is correct that his calibration of LCF translates into a year-on-year uncertainty in forcing for the GCM projections, then it is equally valid to say that the same uncertainty calculation can be applied to the 500-year spin-up period, so let us do so.
If we apply the same propagation methodology, we find that after 500 years, the uncertainty of the absolute temperature of the planet in 1850 is (+/-) 50K. Those model labs who have run the spin-up for even longer periods are of course carrying even larger uncertainty estimates.
If, on the other hand, I adjust the LCF at the start of this run within an error-range of (+/-)4 W/m^2 the associated uncertainty in absolute temperature in 1850 is well-bounded at (+/-)3K irrespective of whether I run the spin-up for 500 years or 1000 years.
So perhaps Dr Frank can explain what interpretation I should put on the (+/-) 50K?
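For readers following the arithmetic, the scaling behind the (+/-) 50K figure is simply that a fixed per-year uncertainty u1 accumulated in quadrature grows as u1 * sqrt(N). The per-year value below is a placeholder chosen only so the 500-year figure lands near the quoted (+/-) 50K; it is not taken from the paper.

```python
import math

u1 = 2.2   # K per year, placeholder per-year uncertainty (not a figure from Frank's paper)
for n_years in (80, 500, 1000):
    print(n_years, "years ->", round(u1 * math.sqrt(n_years), 1), "K")   # ~20, ~49, ~70 K
```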
Thank you very much for your comments; it’s always heartening to see that not all the lucid thinkers have given up on this site. (I see “Jeff Id” is still lurking.)
I hasten to add that you could be misrepresenting Dr. Frank’s latest work for all I know. Having been unable to make sense of what he previously did on the subject, I haven’t yet been able to face slogging through this one.
If I do screw my courage to the sticking place, though, I’m sure your comments will make the ordeal easier. And I’ll admit I know what I expect to find.
You still haven't accepted the difference between error and uncertainty.
You keep harping on "precision", i.e., how much error is contained in a number output by a program. If you wished, you could program your model to output temps out to one ten-thousandth of a degree with an error of +/- 0.000001 degree. This does not affect the uncertainty of the result.
Yes, the uncertainty could be as large as +/- 50 degrees. What does this mean? It means that with the uncertainty of clouds used by Dr. Frank, you can not predict the temperature with any certainty regardless of how precise the output is.
Here is a question. Would you bet your life that the models give the correct output? If you have to think, even for a second, then you are uncertain. The precision of the prediction doesn't matter, does it?
Jim Gorman,
My first degree was in Maths and Statistics. Post-grad engineer for almost 40 years, with a very heavy slice of numerical modeling, error propagation, uncertainty analysis, importance analysis, integrating multisourced information calibrated over different scales, probabilistic characterisations and risked decision-making. I could throw in a bunch more, but you know what? I think I do understand the difference between error and uncertainty, precision and accuracy, and even the Bayes-frequentist paradox, but, there again, I may have just been fooling a lot of people for a long time.
“Would you bet your life that the models give the correct output?” No, I think they are useless for informing decision-making. I can give you a long list of the reasons. But the fact that the models are useless does not make Dr Frank’s analysis correct.
Here are three questions for you in return. Dr Frank sets out an equation in his SI, labeled S10.2.
Can you see anything wrong with that equation?
Would you use that equation to accumulate uncertainty if there was a strong year-to-year autocorrelation in the data?
What is the variance of A-B if A and B co-vary with a correlation of 1?
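For reference, the covariance identity behind the third question, with arbitrary numbers; the point, as the comment suggests, is that an uncertainty-accumulation formula which ignores the covariance term gives the wrong answer for strongly correlated data.

```python
# Var(A - B) = Var(A) + Var(B) - 2 * rho * sigma_A * sigma_B,
# which is zero when rho = 1 and the standard deviations are equal.

def var_of_difference(sigma_a, sigma_b, rho):
    return sigma_a**2 + sigma_b**2 - 2.0 * rho * sigma_a * sigma_b

print(var_of_difference(4.0, 4.0, rho=0.0))   # 32.0 : independent errors add
print(var_of_difference(4.0, 4.0, rho=1.0))   # 0.0  : perfectly correlated errors cancel
```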
kribaez
You remarked about “… that is the TSI variation associated with the 11 year solar cycle.” It is more than just the small variation in insolation. The spectral distribution of energy changes in what is probably a more significant way, with estimates of the increase in UV from 5-15% during high sunspot activity.
Clyde,
There is increasing independent evidence that the solar impact on climate goes beyond accounting for the relatively small TSI variation, I agree, and it certainly raises some questions about the completeness of the governing equations used in GCMs. However, it is not directly relevant to the question on the table, which is whether a calibration of one single input into the net flux is sufficient unto itself to justify Dr Frank’s methodology for the estimation of uncertainty, even if one accepts ad argumentum the validity of those governing equations.
I am a simple EE. Yet I understand the difference between error and uncertainty. I’ve designed a ton of multistage analog circuits. You don’t need to lecture me about error and uncertainty.
You sound young enough for me to expect you've never built a complicated analog computer, which is what climate modelers should be using. Numerical solutions of diff eq aren't always the best for identifying non-linearities and uncertainties.
Your bona fides don't really impress me, nor do your equations or programming skills. Have you ever worked with a machinist? My father was an outstanding one. Most of them can explain uncertainty to you, in concrete terms.
Jim Gorman,
I retired 9 years ago and have 3 grandchildren if that gives you a clue about my age, and my Dad was a toolmaker. I still have his measuring instruments.
No lecturing involved in my comment, I assure you. It was a response to your suggestion that I didn’t understand the difference between error and uncertainty.
My questions to you were not a test incidentally. I was drawing your attention to what I believe is a major conceptual error in Dr Frank’s paper. Did you look at S10.2?
I’m no scientist (which might soon become very obvious), but it seems to me that this analogy could be improved a little if a mirror (that reflected the heat) was placed under the pot. The heat source (representing the sun) should remain constant. If the analogy includes a lid to represent the warming effect of GHG, then shouldn’t it also include something to represent the shielding effect (by way of clouds)?
The mirror and the pot lid should move together. Don’t ask me what their relative speeds should be though. Does the mirror move in the same direction as the pot lid, or the opposite direction? Do they move at the same speeds? I have no idea.
Loydo September 13, 2019 at 9:19 pm
“I understand to be 3% of total CO2 in the atmosphere with 97 % contribution from natural sources”
This is incorrect but… human’s annual contribution might only be 3%, but it is cumulative”
______________________________________________________
Loydo, no matter whether "human's annual contribution [] is cumulative": the sum, i.e. the proportion of CO2 in the atmosphere, lags temperature differences in that atmosphere. It is predetermined, determined by the temperature of the atmosphere.