Guest essay by Mike Jonas
In this article, I take a look inside the workings of the climate computer models (“the models”), and explain how they are structured and how useful they are for prediction of future climate.
This article follows on from a previous article (here) which looked at the models from the outside. This article takes a look at their internal workings.
The Models’ Method
The models divide up the atmosphere and/or oceans and/or land surface into cells in a three-dimensional grid and assign initial conditions. They then calculate how each cell influences all of its neighbours over a very short time. This process is then repeated a very large number of times, so that the model predicts the state of the planet over future decades. The IPCC (Intergovernmental Panel on Climate Change) describes it here. The WMO (World Meteorological Organization) describes it here.
Seriously powerful computers are needed, because even on a relatively coarse grid, the number of calculations required to predict just a few years ahead is mind-bogglingly massive.
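To make the scale of that computation concrete, here is a minimal sketch in Python of the kind of cell-based time-stepping loop described above. The grid size, time step and placeholder "physics" are invented for illustration; this shows only the structure of the method, not any actual model's code.

```python
import numpy as np

# Hypothetical resolution: 360 x 180 columns with 30 vertical levels (invented
# numbers, not any real model's grid), one prognostic variable per cell.
NX, NY, NZ = 360, 180, 30
state = np.full((NX, NY, NZ), 288.0)   # assumed initial condition, e.g. temperature in K

DT = 600                               # assumed 10-minute time step, in seconds
YEARS = 50
steps = int(YEARS * 365.25 * 24 * 3600 / DT)

def step(s):
    """Placeholder 'physics': nudge each cell toward the mean of its six neighbours.
    A real model solves fluid-dynamics, radiation and moisture equations here."""
    neighbours = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
                  np.roll(s, 1, 1) + np.roll(s, -1, 1) +
                  np.roll(s, 1, 2) + np.roll(s, -1, 2)) / 6.0
    return s + 0.1 * (neighbours - s)

print(f"cells: {NX * NY * NZ:,}")
print(f"time steps for {YEARS} years: {steps:,}")
print(f"cell updates required: {NX * NY * NZ * steps:,}")
# The loop itself is left commented out; even this toy version would take a very
# long time on an ordinary machine, which is the point being made above.
# for _ in range(steps):
#     state = step(state)
```

Even with these invented, coarse numbers the sketch implies several trillion cell updates for a 50-year run, before any real physics is included.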
Internal and External
At first glance, the way the models work may appear to have everything covered; after all, every single relevant part of the planet is (or can be) covered by a model for all of the required period. But there are three major things that cannot be covered by the internal workings of the model:
1 – The initial state.
2 – Features that are too small to be represented by the cells.
3 – Factors that are man-made or external to the planet, or which are large scale and not understood well enough to be generated via the cell-based system.
The workings of the cell-based system will be referred to as the internals of the models, because all of the inter-cell influences within the models are the product of the models’ internal logic. Factors 1 to 3 above will be referred to as externals.
Internals
The internals have to use a massive number of iterations to cover any time period of climate significance. Every one of those iterations introduces a small error into the next iteration. Each subsequent iteration adds in its own error, but is also misdirected by the incoming error. In other words, the errors compound exponentially. Over any meaningful period, those errors become so large that the final result is meaningless.
NB. This assertion is just basic mathematics, but it is also directly supportable: Weather models operate on much finer data with much more sophisticated and accurate calculations. They are able to do this because unlike the climate models they are only required to predict local conditions a short time ahead, not regional and global conditions over many decades. Yet the weather models are still unable to accurately predict more than a few days ahead. The climate models’ internal calculations are less accurate and therefore exponentially less reliable over all periods. Note that each climate cell is local, so the models build up their global views from local conditions. On the way from ‘local’ to ‘global’, the models pass through ‘regional’, and the models are very poor at predicting regional climate [1].
At this point, it is worth clearing up a common misunderstanding. The idea that errors compound exponentially does not necessarily mean that the climate model will show a climate getting exponentially hotter, or colder, or whatever, and shooting “off the scale”. The model could do that, of course, but equally the model could still produce output that at first glance looks quite reasonable – yet either way the model simply has no relation to reality.
An analogy: a clock which runs at irregular speed will always show a valid time of day, but even if it is reset to the correct time it very quickly becomes useless.
Initial State
It is clearly impossible for a model’s initial state to be set completely accurately, so this is another source of error. As NASA says: “Weather is chaotic; imperceptible differences in the initial state of the atmosphere lead to radically different conditions in a week or so.” [2]
This NASA quote is about weather, not climate. But because the climate models’ internals are dealing with weather, i.e. local conditions over a short time, they suffer from the same problem. I will return to this idea later.
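As a toy illustration of both points, the compounding of per-step errors and the sensitivity to the initial state, the sketch below iterates the logistic map (a standard textbook chaotic system, not a climate calculation) from two starting values that differ by one part in a billion.

```python
# Two runs of the chaotic logistic map x -> r*x*(1-x), differing only in the
# ninth decimal place of the initial state.
r = 3.9
x_a, x_b = 0.600000000, 0.600000001

for step in range(1, 61):
    x_a = r * x_a * (1.0 - x_a)
    x_b = r * x_b * (1.0 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: run A = {x_a:.6f}  run B = {x_b:.6f}  "
              f"difference = {abs(x_a - x_b):.2e}")

# The difference grows roughly exponentially until, after a few dozen steps,
# the two runs bear no resemblance to each other.
```

The same qualitative behaviour, rapid divergence of nearby trajectories under repeated iteration, is what limits weather forecasts to a few days.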
Small Externals
External factor 2 concerns features that are too small to be represented in the models’ cell system. I call these the small externals. There are lots of them, and they include such things as storms, precipitation and clouds, or at least the initiation of them. These factors are dealt with by parameterisation. In other words, the models use special parameters to initiate the onset of rain, etc. On each use of these parameters, the exact situation is by definition not known because the cell involved is too large. The parameterisation therefore necessarily involves guesswork, which itself necessarily increases the amount of error in the model.
For example, suppose that the parameterisations (small externals) indicate the start of some rain in a particular cell at a particular time. The parameterisations and/or internals may then change the rate of rain over hours or days in that cell and/or its neighbours. The initial conditions of the cells were probably not well known, and if the model had progressed more than a few days the modelled conditions in those cells were certainly by then totally inaccurate. The modelled progress of the rain – how strong it gets, how long it lasts, where it goes – is therefore ridiculously unreliable. The entire rain event would be a work of fiction.
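As a purely hypothetical sketch of what such a parameterisation can look like (the humidity threshold and rain rate below are invented tuning parameters, not values from any real model):

```python
# Hypothetical single-cell rain parameterisation. The cell is far too large to
# resolve individual clouds, so rain is triggered by a tuned humidity threshold.
RH_TRIGGER = 0.85      # invented tuning parameter: relative humidity at which rain starts
RAIN_FRACTION = 0.002  # invented tuning parameter: fraction of column water rained out per step

def parameterised_rain(cell):
    """cell: dict with 'rel_humidity' (0-1) and 'column_water' (arbitrary units)."""
    if cell["rel_humidity"] > RH_TRIGGER:
        rained = RAIN_FRACTION * cell["column_water"]
        cell["column_water"] -= rained
        return rained   # handed back to the model, e.g. as latent-heat release
    return 0.0

cell = {"rel_humidity": 0.90, "column_water": 25.0}
print(parameterised_rain(cell), cell)
```

The threshold and rate in such a scheme are set by tuning rather than derived from the (unknown) sub-cell conditions, which is the guesswork referred to above.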
Large Externals
External factor 3 could include man-made factors such as CO2 emissions, pollution and land-use changes (including urban development), plus natural factors such as the sun, galactic cosmic rays (GCRs), Milankovitch cycles (variations in Earth’s orbit), ocean oscillations, ocean currents, volcanoes and, over extremely long periods, things like continental drift.
I covered some of these in my last article. One crucial problem is that while some of these factors are at least partially understood, none of them are understood well enough to predict their effect on future climate – with just one exception. Changes in solar activity, GCRs, ocean oscillations, ocean currents and volcanoes, for example, cannot be predicted at all accurately, and the effects of solar activity and Milankovitch cycles on climate are not at all well understood. The one exception is carbon dioxide (CO2) itself. It is generally accepted that a doubling of atmospheric CO2 would by itself, over many decades, increase the global temperature by about 1 degree C. It is also generally accepted that CO2 levels can be reasonably accurately predicted for given future levels of human activity. But the effect of CO2 is woefully inadequate to explain past climate change over any time scale, even when enhanced with spurious “feedbacks” (see here, here, here, here).
Another crucial problem is that all the external factors have to be processed through the models’ internal cell-based system in order to be incorporated in the final climate predictions. But each external factor can only have a noticeable climate influence on time-scales that are way beyond the period (a few days at most) for which the models’ internals are capable of retaining any meaningful degree of accuracy. The internal workings of the models therefore add absolutely no value at all to the externals. Even if the externals and their effect on climate were well understood, there would be a serious risk of them being corrupted by the models’ internal workings, thus rendering the models useless for prediction.
Maths trumps science
The harsh reality is that any science is wrong if its mathematics is wrong. The mathematics of the climate models’ internal workings is wrong.
From all of the above, it is clear that no matter how much more knowledge and effort is put into the climate models, and no matter how many more billions of dollars are poured into them, they can never be used for climate prediction while they retain the same basic structure and methodology.
The Solution
It should by now be clear that the models are upside down. The models try to construct climate using a bottom-up calculation starting with weather (local conditions over a short time). This is inevitably a futile exercise, as I have explained. Instead of bottom-up, the models need to be top-down. That is, the models need to work first and directly with climate, and then they might eventually be able to support more detailed calculations ‘down’ towards weather.
So what would an effective climate model look like? Well, for a start, all the current model internals must be put to one side. They are very inaccurate weather calculations that have no place inside a climate model. They could still be useful for exploring specific ideas on a small scale, but they would be a waste of space inside the climate model itself.
A climate model needs to work directly with the drivers of climate such as the large externals above. The work done by Wyatt and Curry [3] could be a good starting point, but there are others. Before such a climate model could be of any real use, however, much more research needs to be done into the various natural climate factors so that they and their effect on climate are understood.
Such a climate model is unlikely to need a super-computer and massive complex calculations. The most important pre-requisite would be research into the possible drivers of climate, to find out how they work, how relatively important they are, and how they have influenced climate in the past. Henrik Svensmark’s research into GCRs is an example of the kind of research that is needed. Parts of a climate model may well be developed alongside the research and assist with the research, but only when the science is reasonably well understood can the model deliver useful predictions. The research itself may well be very complex, but the model is likely to be relatively straightforward.
The first requirement is for a climate model to be able to reproduce past climate reasonably well over various time scales with an absolutely minimal number of parameters (John von Neumann’s elephant). The first real step forward will be when a climate model’s predictions are verified in the real world. Right now, the models are a long way away from even the first requirement, and are heading in the wrong direction.
###
Mike Jonas (MA Maths Oxford UK) retired some years ago after nearly 40 years in I.T.
References
[1] A Literature Debate … R. Pielke, October 28, 2011. [Note: this article is referenced instead of the original Demetris Koutsoyiannis paper because it shows the subsequent criticism and rebuttal. Even the paper’s critic admits that the models “have no predictive skill whatsoever on the chronology of events beyond the annual cycle. A climate projection is thus not a prediction of climate [..]“. A link to the original paper is given.]
[2] The Physics of Climate Modeling. NASA (By Gavin A. Schmidt), January 2007
[3] M.G. Wyatt and J.A. Curry, “Role for Eurasian Arctic shelf sea ice in a secularly varying hemispheric climate signal during the 20th century,” (Climate Dynamics, 2013). The best place to start is probably The Stadium Wave, by Judith Curry.
Abbreviations
CO2 – Carbon Dioxide
GCR – Galactic Cosmic Ray
IPCC – Intergovernmental Panel on Climate Change
NASA – National Aeronautics and Space Administration
WMO – World Meteorological Organization
Maybe ‘van Neumann’ is right- but Johnny did not have spell check worked out yet.
Maths trumps science. Models got even the basic physics wrong: the latent heat of water vaporization is off by almost 3% where it matters most. Remember that the Earth’s surface average temperature is 288 K, and a 1% error is 2.88 K, about 5 degrees F.
http://judithcurry.com/2013/06/28/open-thread-weekend-23/#comment-338257
Dr David Evans at the jonova site has some maths on this top-down climate modelling stuff. He seems to have a better handle on it than most, and especially better than the IPCC.
Maths does not trump science.
Science is about observation and experimentation.
Absolutely nothing that exists in any branch of mathematics, is observable anywhere in the real universe. It is ALL fictional or fictitious if you will. And I say that not in any derogatory sense. It is fictional in the sense that it was all made up out of whole cloth in the mind of some ancient or modern mathematician.
Mathematics is an art form, and it is full of much ingenuity. It is used to describe (essentially exactly) the operational behavior of our MODELS of what we think the real universe looks and behaves like.
For example, the simple differential equation:
d^2s/dt^2 = -ks exactly describes a model in which the acceleration (of something) is directly proportional to some displacement (s) and is opposite in direction to that displacement.
The steady state solutions to this equation are of the form:
s = Acos(omega.t) + Bsin(omega.t)
We call that simple harmonic motion. Absolutely nothing in the real universe moves according to simple harmonic motion. But lots of things in the real universe appear to behave in a very similar fashion; but never exactly matching the mathematical description of the model’s behavior.
The uncertainties in “Science” are in the adherence of the model description to the OBSERVED behavior of the real world system. There are no uncertainties in the mathematical description of the fictitious model that we made up to approximate what we think the real world observations are.
G.
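A minimal numerical sketch of that distinction (the spring constant, damping coefficient, time step and run length below are all arbitrary choices): the undamped equation is the exactly-solved fictitious model, while a "real" spring with even a tiny amount of damping drifts steadily away from it.

```python
import math

k = 1.0           # spring constant of the ideal model (arbitrary units)
c = 0.02          # small damping present in the "real" system but absent from the model
dt = 0.01         # integration time step

s_model, v_model = 1.0, 0.0   # ideal SHM, integrated numerically
s_real,  v_real  = 1.0, 0.0   # same start, but with a little damping

for n in range(1, 5001):
    # ideal model: d^2s/dt^2 = -k*s
    v_model += -k * s_model * dt
    s_model += v_model * dt
    # "real" system: d^2s/dt^2 = -k*s - c*ds/dt
    v_real += (-k * s_real - c * v_real) * dt
    s_real += v_real * dt
    if n % 1000 == 0:
        t = n * dt
        exact = math.cos(math.sqrt(k) * t)   # analytic solution s = cos(omega*t) for these initial conditions
        print(f"t = {t:4.0f}  exact = {exact:+.3f}  ideal model = {s_model:+.3f}  'real' spring = {s_real:+.3f}")
```

The mathematics of the ideal model stays exact forever; the imperfect "real" system departs from it more and more as time goes on.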
And for the record: I do not have an MA in maths from Oxford UK. I presume that is Master of Arts. Did I not say Maths is an art form?
But I do have a BSc from the U of NZ, now referred to as U of Auckland, and that degree was in three Physics majors (Physics, Radio-physics, and Mathematical Physics) plus two Mathematics majors (Pure and Applied Maths). I did part of the MSc course work and a thesis project (in neutron and other particle scintillation detection) but did not sit the exams or write up the thesis; but that was over 55 years ago, so not too relevant today.
One of these days, I’ll retire; or drop dead.
@george e. smith
How about a mass on a spring? Add a damping term and you’ve got it nailed.
Nope.
that “simple” damping term needs to change based on the air friction of the mass and the spring, which is proportional to the velocity of the mass (assuming completely still air whose density does not change over the lifetime of the experiment, whose density varies with the height of the mass and spring at each point, and whose resistance will change as the velocity of the spring loops changes: faster at the free end of the spring, slower at the top (anchor point) of the spring), and on the resistance of the anchor as it expands and contracts with the changing forces on its molecular structure.
Also will change based on temperature, pressure, humidity and swirling velocity of the air around every part of the spring and mass.
Minor effects include the changing temperature of the spring as it heats up due to molecular loading as it expands and contracts, and the spring itself (this loading) is itself damping the spring’s motion.
But, if your “perfect” experiment is in a “perfect” vacuum, the internal damping of the spring is a near-constant effect only proportional to the spring’s extension at each point in time.
george e. smith,
“Absolutely nothing that exists in any branch of mathematics, is observable anywhere in the real universe. It is ALL fictional or fictitious if you will.”
As an old logic/philosophy guy, this is music to my ears, so to speak. Many seem to have gotten the impression that math/stats are somehow realer than real itself, and that leads to things like this (from the article):
“As NASA says: ‘Weather is chaotic; imperceptible differences in the initial state of the atmosphere lead to radically different conditions in a week or so.'”
It’s simply not true that weather is chaotic in the literal sense of chaos. He means that the math he’s using to model the weather behaves “chaotically”, but that doesn’t really mean chaotically either; it means difficult to predict, essentially. It’s actually a form of complex order he’s referring to, and he calls it chaos because some math/stat heads were lazy about naming a type of mathematical discontinuity/shiftiness that can be generated, which is somewhat like the way something extremely complex like weather behaves, as far as I can tell.
From such sloppy language, I believe, comes things like; “We have just five hundred days to avoid climate chaos.”
I reject the naming of complex order as “chaos”. It’s utterly nonsensical, and an invitation to manipulation of the public’s perception of reality, to me. Pick a freaking word that does not turn discussion of the whole matter into gibberish, I say. Just do it, and quit going along with the laziness and “realer than real” BS that has been handed down to us.
(I suggest: Chaosh, after a bit of word searching and serious consideration, nobody that I am. It’s “available” it seems, and preserves the original intent)
Don’t try to trick me.
A mass on a spring presumably depends roughly on Hooke’s Law. No real physical material obeys Hooke’s law exactly over any significant range of strain.
Didn’t I say that many systems approximate what our models do exactly?
Simple harmonic motion has no beginning and no ending, it is continuous. If it is not continuous, then it has a transient start up or stop condition which is NOT SHM.
x^2 + y^2 + z^2 = r^2 describes a sphere.
Nowhere in that equation will you find an explanation for 8,000 metre high mountains on the earth, because the earth is NOT a sphere. And it’s not even close to a sphere.
So who was the first scientist to notice something growing in his garden, with ‘I’s and ‘V’s, and ‘X’s on it casting shadows, and he decided to use it for counting. So what the hell were the Romans up to that they decided they needed a symbol for ” C ” and another one for ” M “. Of what possible interest are C and M to Romans, unless they already have a decimal number system that they understand.
Somebody; excuse me that is somebody dumb as a box of rocks, invented the Roman system for counting, and left out zero, to boot. So what was he proposing to do for more things: MMMMMMMMMMMMMMMMMMMMMMMMIX ??
Well the dog (M$) ate my reply to D.J.Hawkins.
Simple harmonic motion is continuous and lasts for all time, so it has no start or stop. If it did, it would no longer have a single frequency, so it wouldn’t be SHM.
Also, Hooke’s Law is only an approximation, so no mass on a spring is going to move in SHM.
Don’t try to trick me.
Real systems only approximate what the maths describes as the exact behavior of the fictitious model of the real system.
g
Quoting D.J. Hawkins, November 9, 2015 at 2:39 pm:
“@george e. smith: ‘Absolutely nothing in the real universe moves according to simple harmonic motion.’ How about a mass on a spring? Add a damping term and you’ve got it nailed.”
You evidently didn’t read too closely.
I wrote a simple differential equation between two variables. One of them was (s) for Serendipity, and the other was (t) for Tolerance; i.e. ability to tolerate.
That said that the second derivative of serendipity with respect to tolerance, was directly proportional to serendipity by way of (k) and in the opposite sense (- k).
All of that is quite esoteric and is unrelated to any physical reality, and it is simply an act of faith that the claimed relationship is true. In any case, that equation does indeed have solutions in terms of a sin and cosine function, which themselves are simply mental creations of some fertile and artistic mind.
One might make an analog of that system in the form of a mass on a spring, where we assume that Hooke’s law is obeyed to some level of fidelity, and also that Newton’s F = m.a is approximately true.
So your analogous model can also be expected to have something approximating sine and cosine function motions; but if you want to add a ” damping ” term, whatever that is, to the differential equation, then you can not expect that new differential equation to have the same solutions.
Bessel functions are also solutions of a differential equation (Bessel’s equation), as are Legendre Polynomials or Tchebychev polynomials.
All can be studied in pure mathematics, without any reference to anything real or physical, but then we find; or experimental physicists do and other scientists, that the same solutions to those mathematical forms can approximate the behaviors of real physical systems.
But the discrepancies are in the models; not in the mathematics. The mathematical expressions are exact relations. But the model parameters may only approximate those exact relations; as in lack of Hooke’s Law fidelity, and a real system say with springs and masses, may have other less obvious discrepancies with the equations, such as for example having a real energy damping effect taking place.
So don’t blame the mathematics for failures due to improper model construction. And if the models don’t mimic the real world, then you have an even greater divergence.
g
I detect a little uncertainty here 🙂
IPCC isn’t looking for a climate model that predicts future climate and weather. IPCC just needs a climate model that conforms to the politically established UNFCCC and its CAGW? The underlying theme is not climate change but rather changing Western society?
Mike’s essay should make clear to all what one of the big problems of climate modeling is.
So they have this snapshot of the whole earth at grid points on a 1km x 1km grid, and their zigga-computer calculates what is at each of those grid nodes one second later. Then repeat until done.
An essential feature of this process is that the physical values at one instant of time determine where the energies will go next, to produce the node values at the next instant of time.
You cannot calculate the energy flows, unless you have simultaneous values for the variables at all of the surrounding nodes.
That means that a “snapshot” must give the node values of ALL variables at ALL nodes SIMULTANEOUSLY, or else you cannot apply the laws of physics and maths to calculate the next-instant values for all of those nodes.
It’s a finite element analysis problem; and you can’t do Jack **** with a set of values that were all gathered at totally arbitrary time epochs.
To get the GCMs to fit the real earth behavior, you have to have the actual real earth node values for the entire earth ALL at the exact same instant of time. And the spacing and timing of each of those samples must conform to the Nyquist criterion for sampled data systems.
Otherwise, the real laws of physics cannot operate to produce the next valid value for each of those measurement nodes.
Just creating a large array of nodes on a computer is totally worthless, if you do not have a set of simultaneous measured values at each of those nodes.
The GCMs are a cruel joke; they don’t mimic anything real.
Wake me up when the GCMs start rotating this planet.
g
Chaos does not study mankind, Chaos does chaos.
Mankind’s study of chaos is done as chaos changes chaos.
Mankind needs to get over itself and get back to taking care of itself first.
An example is the bad thing of man-made war. The war plan gets to the chaos state within seconds of the declaration documents and continues in that state down to the step of a private first class on his first night patrol, when he missteps by half a step and sets off a buried mine, and an ambush is sprung that changes the tide of a three-year invasion plan.
Much too much about fixing something that mankind cannot break or repair.
Forget the models or anything else. This;
http://technocracy.news/index.php/2015/10/30/former-president-of-greenpeace-scientifically-rips-climate-change-to-shreds/
is the definitive one. Towards the end it gets real interesting. Exxon, Gulf, BP, etc. saved the world.
Expat, thanks for the link to the Moore speech, everyone should read it. His path echoes my own.
Agree, terrific speech. All should not only read it, but forward the link to any believers in CAGW who are still capable of reason.
Expat:
Thanks for the link. Never realized just how coal seams were created and why they stopped being made, the amount of sequestration of CO2 in shale or how close the earth has come to CO2 levels dropping below 150ppm. Terrific article.
Strictly speaking, Moore should have used the term “carbonate” instead of “carbonaceous”, as the rocks to which he referred sequester CO2 in the form of CaCO3 (limestone – calcium carbonate). To my way of thinking, carbonaceous rocks are mainly shales with high levels of organic carbon.
Agree !!!!
And if, as I believe, CO2 is a product of temperature, it cannot be the cause of temperature change. Those that believe that CO2 doubling over time can create a 1 deg rise are simply wrong, and have confused the cause and effect. Prove me wrong and show any empirical evidence where it is clear that CO2 rise precedes temperature rise and you will stimulate me to reconsider this fraudulent science promoted by fools and frauds.
I suppose it depends on whether you believe the sun heats the atmosphere and everything beneath it or if the sun’s heat reradiates from the oceans/land and heats the atmosphere. Doesn’t the earth have an internal source of heat that is not well understood either?
Ah, what ‘drives climate’. I would start with the local burning object we call the ‘sun’.
ems
To ignore a near-million mile diameter nuclear reactor just a few minutes away by light would, indeed, be folly.
There are other drivers which we also need to consider.
And we haven’t more than an inkling of what their relative influence is.
The parrot-cry ‘The science is settled’ is, at best, that fabulous chimera, the rat-carp.
Auto
Try about 860,000 mile diameter. Well for some people that is close to 2 million.
g
Along with that, why do models make linear projections instead of cyclical predictions?
Instead of wasting trillions of dollars trying to stop climate change (impossible), the money would be better spent on ADJUSTING to climate change (easy)… Mr. Mann should like this idea… he’s good at ADJUSTING things!!
They’ve spent a lot of time and money adjusting the data to generate climate change.
They could easily (and more cheaply) solve the climate problem by readjusting the data to erase it.
Mike Mc
Why adjust to eliminate, when unadjusted data indicate serious problems with forecasting for IPCC models?
Even given the – uhhh – interesting selection of land stations contributing data.
Ocean data has been discussed.
Whilst mariners do ‘the best they can’, some of the data is probably not accurate to the nearest hundredth of a degree, shall we say. I know – I’ve been there and done that.
But probably better than no data at all.
Auto
>They’ve spent a lot of time and money adjusting the data to generate climate change.
>They could easily (and more cheaply) solve the climate problem by readjusting the data to erase it.
And what in the world makes you think they would want to spend LESS money, or that they would want to “solve” the problem?
Gadzooks! Can you imagine what that would mean for their careers? I mean seriously, there’s no glory or reward or accolades for putting yourself (and your countless thousands of colleagues) OUT of a job, or for being the cause of seeing their rationale for obtaining grant money (to study snails or whatever) cut back just because you’ve done something as foolish as to say “Oopsie, turns out there’s no problem at all.”
I mean you’d have to be “crazy” to even attempt something like that, right? (And yes, I do think that is one of the reasons that the “warmists” think and call out “deniers” as “crazy” — because from the perspectives of the warmists, anyone who does something that might END that proverbial gravy train HAS TO BE “crazy”.)
Not exactly.
M. Manniacal is incredible at ignoring things. He ignores inconvenient data, algorithms, data orientation, history…
The top-down approach has much to commend it, but it may not allow us to predict the climate accurately enough. The climate seems to have at least two quasi-stationary states (ice ages and interglacials). Would knowing all the top-down elements allow you to predict the transition between such metastable states? What about hysteresis and super-heating and super-cooling? The argument by the CAGW proponents is that there is another metastable (or runaway) state at higher temperatures. How do you rule that out? Also, quite generally, are there other metastable states besides glacial and interglacial?
You simply cannot predict CHAOS !!!
The climate the last 2 million years has been steady as a clock. Long Ice Ages with sudden, dramatic but very short Interglacials over and over and over again. Predicting something else would require a great deal of proof.
The theme is policy based chaos?
The only reason governments are trying to predict the climate is so they can scare the populace into giving them more power to tell people what to do.
Does anyone alive really care what the climate will be in 100yrs?
Ray you are correct. The whole “Climate Change” narrative is a narrative of fear (think WMD’s). Once you have people living in fear you can establish a saviour, and they will believe most anything this saviour tells them. More often than not this saviour will profit handsomely from the sheeple’s panic.
“there is another metastable (or run away) state at higher temperatures. How do you rule that out?”
They are arguing that there is a positive feedback mechanism that will cause an excursion from the metastable inter glacial we are in. The biggest problem with the argument is that if there were such a mechanism, it would have been triggered at some point in the past by some random fluctuation, such as the volcanic eruption that created the Deccan Traps. Absent a showing that the mechanism exists, CAGW theory cannot be maintained.
“The climate seems to have at least two quasi-stationary states (ice ages and inter glacial).”
+100
But you won’t get any climate modeller to admit that. Their grant money depends on them proving we are in a runaway vicious cycle.
Has this other “higher temperature” metastable state existed with the exact same earth orbital and axial tilt parameters as we currently have? Does not the earth’s axis precess with something like a 26,000 year period?
You cannot expect to have a climate regime identical to what we have today, with a totally different orbital state, or with totally different earth geology as far as continental locations go.
g
“The harsh reality is that any science is wrong if its mathematics is wrong.”
True enough. To paraphrase a mathematician I know, all you have to do is to be able to invert a matrix and you will get a result of some kind, whether it makes sense or not.
The mathematics describes the model. If the model does not emulate the scientific observations and measurements, as close to reality as we can make it, then no mathematical juggling can save it.
The discrepancies are in the models; not in the mathematics.
The earth does not appear to be an isothermal flat disc illuminated 24 hours a day at all points, by a sun that is directly overhead at a distance of 186 million miles.
That is what Kevin Trenberth’s model shows.
Our sun is only at 93 million miles, so it produces a TSI irradiance of 1362 W/m^2, not 342 W/m^2 like Kevin’s. And our planet is a roughly spherical body that rotates once in about 24 hours, and it is nowhere near isothermal like Kevin’s model.
g
The following tables and observations are based on a fao.org (the UN Food and Agriculture Organization) global CO2 balance Bing image. This diagram is typical of many variations. Variations imply non-consensus.
How much carbon is there? Carbon, not CO2!
Reservoir…………………………..Gt C………%
Surface ocean……………………1,020……..2.2%
Deep Ocean…………………….38,100……81.2%
Marine Biota……………………….….3……..0.0%
Dissolved Organic Carbon………..700………1.5%
Ocean Sediments………………….150……..0.3%
Soils…………………………….…1,580……..3.4%
Vegetation…………………………….610……..1.3%
Fossil Fuel & Cement……………..4,000……..8.5%
Atmosphere…………………………..750……..1.6%
Total………………………………….46,913
Carbon moves back and forth between and among these reservoirs, the great fluxes.
Atmospheric Fluxes, Gt C/y………Source………Sink
Soils…………………………………….60
Vegetation……………………………..60…………121.3
Fires……………………………………1.6
Ocean Surface………………………..90…………..92
Forests…………………….…………………………….0.5
Fossil Fuel & Cement………..………5.5
Total…………………………………..217.1………..213.8
Net………………………………………3.3
The net of 3.3 (seen this before?) is exactly 60% of FF & C. How convenient. How dry labbed.
Now this is all carbon. Carbon is not always CO2. Carbon can be soot from fires, tail pipes, volcanoes. Carbon can be carbonates in limestone and coral. But let’s just say all of this fluxing carbon converts to CO2 at 3.67, 44/12, units of CO2 per unit of carbon. How many Gt of CO2?
Atmospheric Fluxes, Gt CO2/y…….Source……..….Sink
Soils…………………………………..220.2
Vegetation……………………………220.2……….445.171
Fires……………………………………5.872
Ocean Surface………………………330.3……….337.64
Forests………………………………………………….1.835
Fossil Fuel & Cement…………………20.185
Total………………………………….796.757……784.646
Net……………………………………12.111
Now is it ppm volume or ppm gram mole? If one is to compare the number of molecules it must be CO2/atmosphere ppm gram mole, 44/28.96.
Atmospheric Fluxes, ppm/y………Source……..Sink
Soils……………………………………28.42
Vegetation…………………………….28.42…….57.45
Fires……………………………………..0.76
Ocean Surface……………………….42.63……43.57
Forests……………………………………………..0.24
Fossil Fuel & Cement……………………………..2.60
Total………………………….………102.83…..101.26
Net…………………………….……….1.56 (Let’s just blame this all on FF & C.)
Per IPCC AR5 Table 6.1 some of these fluxing sources and sinks have uncertainties of +/- 40 & 50%!! IMHO anybody who claims that out of these enormous reservoirs and fluxes they can with certainty, accuracy, specifically assign 1.56 ppm/y to FF & C is flat blowing smoke.
1. Mankind’s contribution to the earth’s CO2 balance is trivial.
2. At 2 W/m^2 ( a watt is power not energy) CO2’s contribution to the earth’s heat balance is trivial.
3. The GCMs are useless.
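For anyone who wants to check the unit conversions behind the tables above, here is a short sketch in Python. It assumes a total atmospheric mass of roughly 5.1 million Gt and molar masses of 44.01 g/mol for CO2 and 28.96 g/mol for dry air; small differences in those assumed values account for small differences from the figures in the tables.

```python
# Convert a flux in Gt C/year to Gt CO2/year and to ppm (mole fraction)/year.
M_CO2 = 44.01    # g/mol
M_AIR = 28.96    # g/mol, dry air
M_ATM = 5.1e6    # assumed total mass of the atmosphere, in Gt

def gtc_to_gtco2(gtc):
    return gtc * M_CO2 / 12.011        # the 3.67 (44/12) factor used in the tables

def gtco2_to_ppm(gtco2):
    # mass fraction of the atmosphere, converted to a mole fraction, in ppm
    return gtco2 / M_ATM * (M_AIR / M_CO2) * 1e6

for name, gtc in [("Soils", 60.0), ("Ocean Surface (source)", 90.0),
                  ("Fossil Fuel & Cement", 5.5)]:
    gtco2 = gtc_to_gtco2(gtc)
    print(f"{name:24s} {gtc:5.1f} Gt C = {gtco2:6.1f} Gt CO2 = {gtco2_to_ppm(gtco2):5.2f} ppm")
```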
Just wait till Ferdinand catches up with you. He uniquely and solely understands, in gory detail, all global (possibly galactic) carbon sources, sinks and fluxes. Natural components are in perfect harmonious balance, according to him, and anthropogenic emission accounts for all observed increase to atmospheric CO2. Just ask him!!
Here is the IPCC AR5 Table (Figure actually) 6.1:
Yirgach
Figure 6.1 is about the same as the fao.org graphic.
Look at Table 6.1 for the uncertainties.
@Nicholas
Thanks for pointing that out. BTW, here’s the description of Table 6.1:
Global anthropogenic CO2 budget, accumulated since the Industrial Revolution (onset in 1750) and averaged over the 1980s, 1990s, 2000s, as well as the last 10 years until 2011. By convention, a negative ocean or land to atmosphere CO2 flux is equivalent to a gain of carbon by these reservoirs. The table does not include natural exchanges (e.g., rivers, weathering) between reservoirs. The uncertainty range of 90% confidence interval presented here differs from how uncertainties were reported in AR4 (68%).
Note that they are claiming a 90% confidence interval as compared to 68% in AR4.
Is that increase in confidence due to a change in the level of uncertainty in the fluxing sources between AR4 and AR5?
Thanks Ron, just ended the 20th round of back and forth discussion with Bart on the same item, quite exhausting…
The natural variability btw is not more than +/- 1 ppmv around a trend of 110 ppmv…
Nicholas,
1. Amounts in each reservoir are of no interest, as long as these don’t move from one reservoir to another.
2. Fluxes between reservoirs are of no interest, as long as the fluxes in and out are equal.
3. Flux differences are the only points of interest and that is what is exactly known:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/dco2_em2.jpg
We know with reasonable accuracy how much CO2 humans have emitted in the past decades.
We know with high accuracy with how much the CO2 level in the atmosphere increased in the same period.
Therefore we know that humans are responsible for near all increase, at least over the past 57 years, with a small addition from warming oceans (~10 ppmv).
The rest of the natural cycle is not of the slightest interest for the mass balance. It is of academic interest to know the detailed fluxes, but that doesn’t change the overall balance one gram.
The current imbalance is about 2.15 ppmv increase/year. It doesn’t matter if any individual flux doubled or halved compared to the year before or that vegetation was a net source one year and a net sink the next. It doesn’t make any difference that the natural carbon cycle was 100, 500 or 1000 GtC/year, as the net overall difference at the end of the year is all what matters…
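A minimal sketch of that mass-balance bookkeeping (the two input numbers are illustrative round values roughly consistent with the figures quoted above, not measured data):

```python
# Illustrative round numbers for a single recent year, expressed in ppmv of CO2;
# they are assumptions for this sketch, not a dataset.
human_emissions = 4.3    # assumed annual human emissions, in ppmv-equivalent
observed_rise   = 2.15   # assumed observed annual rise in atmospheric CO2

# Mass balance: observed_rise = human_emissions + net_natural_flux
net_natural_flux = observed_rise - human_emissions
print(f"net natural flux = {net_natural_flux:+.2f} ppmv/year")

# A negative value means nature as a whole removed CO2 that year, regardless of
# how large the individual gross fluxes (oceans, soils, vegetation) were.
```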
Thus Ron:
1. Man’s contribution is the main cause of the CO2 increase in the atmosphere.
2. Man’s contribution to the earth’s heat balance may be trivial, but probably not zero.
3. The GCMs are useless.
Note: let’s stick to the item of interest here: the usefulness (or not) of GCM’s…
Ferdinand
“The rest of the natural cycle is not of the slightest interest for the mass balance.”
As I understand it CAGW theory posits that the globe’s natural CO2 balance is & has been steady state for multi-hundreds of years and any changes can be due to nothing but disruptive, unbalancing, external sources, i.e. mankind. Coincidence = cause. And the scientific evidence has been filtered and adjusted to support that assumption.
How does anybody know in these huge numbers and uncertainties how steady/unsteady state the atmosphere has been? Even minor changes in the natural cycle eclipse man’s puny contribution. I consider the notion of a perfectly balanced, steady state atmosphere totally bogus based on simple observations. Obviously ‘taint so and never has been.
Nicholas,
In the recent 57 years of accurate measurements the natural imbalance was not more than +/- 1 ppmv around the trend, all caused by temperature variability, and the trend itself was not from nature: nature was every year a net sink for CO2, not a source. Thus while even a small change in the main CO2 fluxes could dwarf human emissions, it didn’t.
Before Mauna Loa and South Pole measurements, we have ice core measurements with a resolution between 10 and 600 years and an accuracy of 1.2 ppmv (1 sigma), depending on the local snow accumulation rate. They show a global change of ~16 ppmv/°C, from the LIA-MWP transition to the eight glacial-interglacial transitions and back over the past 800,000 years. That is in the ball park of Henry’s law, which gives 4-17 ppmv/°C for a seawater-atmosphere steady state… The biosphere absorbs more CO2 at higher temperatures.
The only influence in the (far) past was temperature at a rather fixed, linear ratio, still visible today in the small influence of temperature on the variability around the trend.
“Razzle dazzle ’em.” Billy Flynn
Nicholas,
Vostok CO2-temperature ratio:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/Vostok_trends.gif
Most of the spread around the trend is from the difference in lag between the glacial-interglacial transition and reverse: 800 +/- 600 year during warming, several thousands of years during cooling.
Taking into account the about double change in temperature at the poles than global, that gives a near linear change of ~16 ppmv/°C, confirmed by more recent periods (like the MWP-LIA transition) and the much shorter seasonal (~5 ppmv/°C) and 1-3 year variability (4-5 ppmv/°C: Pinatubo, El Niño).
Ferdinand, it reads as though you’re saying: if human influence over the last 57 years was in fact zero, then the climate would not have changed one iota over the last 57 years. Have I got this right?
Does the climate change without human influence, and if so how much would it have changed in the last 57 years?
Ferdinand Engelbeen states:
What a preposterous claim!!! Please do clarify your “reasoning” on this one Ferdinand.
Ferdinand, you say “We know with reasonable accuracy how much CO2 humans have emitted in [the] past decades.” I am not disputing that humans have emitted and are emitting increasing amounts of CO2. I am not disputing the overall shape of that graph. I’m just wondering what “reasonable accuracy” is? Are the CO2 emission data any more accurate than world population figures or economic statistics? I’m particularly thinking of the recent report (http://www.theguardian.com/world/2015/nov/04/china-underreporting-coal-consumption-by-up-to-17-data-suggests) that China had been under-reporting its coal use by 17%. I do not suppose that China is the only large non-rich country to have difficulty gathering accurate statistics.
Please account for the CO2 emitted by volcanism. Or do you assume this is a constant? You imply that this source is irrelevant to atmospheric variability and only the increase from human contributions are what matter.
Now this is all carbon. Carbon is not always CO2. Carbon can be soot from fires, tail pipes, volcanoes. Carbon can be carbonates in limestone and coral. But let’s just say all of this fluxing carbon converts to CO2 at 3.67, 44/12, units of CO2 per unit of carbon. How many Gt of CO2?
Um, so where’s volcanoes on the list? I don’t see it.
Nicholas Schroeder:
Consider the carbonate sink on the sea floor. Somewhat less than seventy-five percent of the sea floor has carbonate shell detritus literally raining down. Both the total sea-floor carbonate layer and its yearly growth are minimized in IPCC charts and understanding.
And on top of that, the fact that if one model is correct, the rest that use different parameter values must be wrong.
I’d further submit that, because you are using ‘parameterizations’ of not-well-understood processes and initial conditions of unknown accuracy, even if a particular model correctly predicts the current (and/or parts of past) climate, you cannot rely on it to correctly predict the future state of the climate, i.e., it may be right for the wrong reasons and thus has no predictive value. Too many of Donald Rumsfeld’s famous ‘known unknowns’ and ‘unknown unknowns’ to rely on model output. Certainly you can’t refer to it as ‘data’, as I’ve seen the unscientific media do.
Does anyone know if (and how) the models take into account the cooling effect of the evaporation that produces the extra water vapor? The AGW theory is that CO2 will cause a small harmless increase in temperature (0.5-1 degree) and that will then increase the amount of water vapor (because the water vapor percentage is dependent on temperature), which causes a bigger increase in temperature, which then causes more evaporation and more water vapor, and the loop goes on. Most of the predicted warming comes from the extra H2O. The problem is that the extra H2O comes via evaporation, which is endothermic (cooling). And what I want to know is: do the models take this cooling effect into account (and how)? I would have thought that the amount of cooling produced by the evaporation of H2O MUST be equal to the amount of heating it receives (from CO2) – so there can be no feedback loop – otherwise you are creating energy from nothing?? Do I have this wrong?
jpr,
Models account for latent heat of evaporation by incorporating energy balance in the equations that they solve. Note that evaporation of water produces no net cooling for the planet as a whole, since all the water that evaporates must eventually condense and release the heat absorbed. So evaporation produces local cooling, but leads to local heating elsewhere. The resulting transfer of latent heat is the dominant means by which energy is redistributed in the climate system.
Changing atmospheric water vapor has three effects. More water vapor means more IR absorption, since water vapor is a greenhouse gas, that enhances warming. It also means more efficient latent heat transfer, that produces cooling by enhancing the transport of energy from the surface, where sunlight is absorbed, to the mid-troposphere, from whence most IR emission to space occurs. And finally, water vapor affects the formation of clouds. No one really knows if that produces increased or decreased warming, though the modellers all plunk for the former.
The above article by Mike Jonas is best ignored, since it does nothing to clarify the real problems with climate models. One of the biggest is that they can’t get cloud distributions right, so how can they correctly predict how those distributions will change? So they can’t get dependable results for cloud feedbacks and so can’t get climate sensitivities that can be believed.
“And finally, water vapor affects the formation of clouds. No one really knows if that produces increased or decreased warming, though the modellers all plunk for the former.”
IPCC AR5 credits clouds with a net -20 W/m^2. That’s cooling and lots of it.
Nicholas Schroeder wrote:
“IPCC AR5 credits clouds with a net -20 W/m^2. That’s cooling and lots of it.”
That is true. What puzzles me is that you wrote that in response to my statement: “And finally, water vapor affects the formation of clouds. No one really knows if that produces increased or decreased warming, though the modellers all plunk for the former.”
Which is also true. So what is your point? That you don’t understand the difference between the net effect of clouds and how that might change in response to a change in temperature?
I’d say that statement is disingenuous because “elsewhere” is generally a couple of kms up in the atmosphere where it is well on its way to leaving the planet.
TimTheToolMan wrote: “I’d say that statement is disingenuous because “elsewhere” is generally a couple of kms up in the atmosphere where it is well on its way to leaving the planet.”
And I’d say that TimTheToolMan is a silly twit since I wrote: “more efficient latent heat transfer, that produces cooling by enhancing the transport of energy from the surface, where sunlight is absorbed, to the mid-troposphere, from whence most IR emission to space occurs”.
I think the “real” problem with the GCMs is, as described, the errors inherent in the simplifications of physical processes, interactions between processes, and parameterisations of the components of the models. They will feed back, amplify and swamp any climate signal that might be present. Fixing the clouds won’t fix this. GCMs are doomed to failure on their current development course.
But why would anyone seriously try to address this from the modellers’ camp? It’s tantamount to saying we’re wasting our time doing what we’re doing…
Mike writes “since I wrote:”
Then why say “elsewhere” when you know fully well it is going to result in a net cooling to the planet?
Mike writes “Note that evaporation of water produces no net cooling for the planet as a whole”
Disingenuous
“Note that evaporation of water produces no net cooling for the planet as a whole, since all the water that evaporates must eventually condense and release the heat absorbed.”
Yes, but the cooling occurs at the surface. The energy is released in the upper troposphere where it has a chance to radiate into space.
“Does anyone know if (and how) the models take into account the cooling effect of evaporation from the extra water vapor?”
They don’t because mankind has no role in water vapor as IPCC AR5 admits in FAQ 8.1
jpr – I discuss the effect of water vapour in an earlier article. I think it’s the fourth ‘here’ in the set of four ‘here’s. Mike M is quite wrong – the evaporation condensation precipitation cycle does produce net cooling, directly by sending more energy into space, and indirectly by creating clouds. I address clouds too, in the earlier article.
Mike M – There is so much wrong in climate science that it is mind-boggling. The models being upside down is a very important factor so it is worth examining (as structured the models can never work). Yes there are huge problems in the science, such as clouds, but the useless model structure is worth addressing too.
Mike Jonas,
“Mike M is quite wrong – the evaporation condensation precipitation cycle does produce net cooling, directly by sending more energy into space, and indirectly by creating clouds.”
You say I am wrong, then you agree with what I said. Weird.
What I was calling wrong was your statement “Note that evaporation of water produces no net cooling for the planet as a whole, since all the water that evaporates must eventually condense and release the heat absorbed. So evaporation produces local cooling, but leads to local heating elsewhere. The resulting transfer of latent heat is the dominant means by which energy is redistributed in the climate system.“. You make it sound like all the heat remains in the climate system and thus that there is no net heat gain or loss from the whole cycle. That’s wrong, because with an increased water cycle there is a net heat loss to space – more heat is lost to space from the atmosphere than is gained from space (the sun) by the ocean.
Clouds are not in computer models
I believe they are, but they are one of the many ‘parameterizations’ == guesses in the models, since we don’t, even today, have a good idea of how clouds function relative to heating or cooling in the atmosphere/oceans. The parameterizations potentially introduce huge uncertainties and potential for bias in the models. Pretty much the parameters can be ‘tuned’ to produce output that reinforces the bias of the modeller.
TimTheToolMan,
“Then why say “elsewhere” when you know fully well it is going to result in a net cooling to the planet?”
Because jpr seemed to imply that he thought that the simple act of evaporation produced net cooling. It does not.
When I wrote “Note that evaporation of water produces no net cooling for the planet as a whole” I was 100% correct.
You seem unable to understand such fine, but important distinctions.
Silly twit.
Mike M. writes “net cooling”
Do you know what “net” means Mike M.?
Mike M. Writes “You seem unable to understand such fine, but important distinctions.”
But let’s not allow you to push the conversation away from the most important point. What do you have to say about the accumulation of errors in the models?
After all you wanted to simply dismiss the article based on insufficiently specified components (ie clouds) with an implication that it could get better. But the error issue is much deeper. Endemic and incurable to them all. Why not mention this in your dismissal?
Water vapor (evaporation) depends on WATER temperature, not on atmospheric Temperature, which only affects relative humidity.
CO2 (radiation capture) does not appreciably alter water Temperature.
Water vapor trapping of LWIR is perfectly capable of warming the atmosphere all by itself. It does not require any carbon dioxide “kindling wood” to get started evaporating.
So talk of water vapor amplification is nonsense.
The feedback from atmospheric warming is to the system input terminal, which is the solar irradiance. More atmospheric water, in any of its three phases, results in less solar irradiance at the earth’s surface, which is 70+% water. And that leads to surface cooling; which leads to less evaporation and atmospheric water; which leads to more solar irradiance of the surface. That is stabilizing negative feedback in anybody’s feedback manual.
g
The initial state.
==========
Why should it matter?
If you pick lots of different initial states, and after 30 years they converge, then the initial state doesn’t matter.
==========
The climate models’ internal calculations are less accurate and therefore exponentially less reliable over all periods.
==========
You’ve presented no justification that says that the error is exponential. I very much doubt that any of the published models are. If they were, they wouldn’t even get as far as being presented.
=========
So 2 major flaws with the article, and that’s coming from someone who thinks the climate scientists have got it wrong.
Soooooooooooooooooooooooooooooooooooooooooooo, Which climate models have gotten it correct?
…the one with the lowest climate sensitivity. Really. Ron C (sorry, no link to his site) has an excellent post on the model closest to observations, which BTW is still too warm compared with what the satellite record shows.
That’s a pretty big IF you’ve got there. The model runs do not converge, run to run. The models do not converge, model to model. The IPCC just averages everything and then assumes that the errors cancel out. This is exactly how CMIP5 is arrived at.
The mathematical basis for an exponential error series is good enough for me.
A little bit of ad-hoc code to keep the model output in bounds can do wonders to make things look good.
The author is not the only one.
Or maybe the models are not wrong. They are, as we used to say, Not Even Wrong. So far divorced from reality, they do not even matter.
TonyL:
You say
Yes. That is certainly true for all the models except at most one because they each model a different climate system but there is only one Earth.
For the edification of LB (and others who don’t know), I again explain this matter.
None of the models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.
http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes )
would make every climate model provide a mismatch of the global warming it hindcasts and the observed global warming for the twentieth century.
This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on
1.
the assumed degree of forcings resulting from human activity that produce warming
and
2.
the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.
Nearly two decades ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which were greater than observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.
And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
(ref. Kiehl JT, Twentieth century climate model response and climate sensitivity. GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
He says in his paper:
And, importantly, Kiehl’s paper says:
And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.
Kiehl’s Figure 2 can be seen here.
Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:
It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^2 to 2.02 W/m^2
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.
In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
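A toy numerical sketch of this compensation (all values below are invented for illustration and are not taken from Kiehl’s figure or from any actual model): a model that assumes a larger greenhouse forcing or a stronger response can only reproduce the same observed twentieth-century warming by assuming a stronger aerosol cooling.

```python
# Toy illustration of the aerosol "fiddle factor" compensation described above.
# All numbers are invented for illustration; they are not taken from Kiehl's
# figure or from any actual GCM.
OBSERVED_WARMING = 0.8   # K, assumed 20th-century warming every model must reproduce

# (assumed transient response in K per W/m^2, assumed greenhouse-gas forcing in W/m^2)
models = {
    "hypothetical model A (runs hottest)": (0.90, 2.4),
    "hypothetical model B":                (0.70, 2.2),
    "hypothetical model C (runs coolest)": (0.55, 2.0),
}

for name, (response, ghg_forcing) in models.items():
    needed_net = OBSERVED_WARMING / response      # net forcing that reproduces the record
    aerosol = needed_net - ghg_forcing            # aerosol cooling the model must assume
    print(f"{name:38s} aerosol forcing needed: {aerosol:+.2f} W/m^2")

# Each hypothetical model matches the same past warming, but only by assuming a
# different aerosol cooling; they cannot all describe the same Earth.
```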
So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
Which, of course, means that as you say
Richard
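To make the aerosol ‘fiddle factor’ trade-off concrete, here is a minimal numerical sketch. The sensitivities, forcing and observed-warming figures below are illustrative assumptions only, not values taken from Kiehl’s paper or from any GCM; the point is simply that models with quite different sensitivities can all be made to hindcast the same past warming by assigning each one whatever aerosol forcing fills the gap.

# Toy sketch: choose the aerosol forcing so that "models" with different
# sensitivities all reproduce the same observed 20th-century warming.
# All numbers are illustrative assumptions, not values from any GCM.

OBSERVED_WARMING = 0.7   # deg C over the 20th century (approximate)
GHG_FORCING = 2.4        # W/m^2 from well-mixed greenhouse gases (IPCC-era figure)

sensitivities = [0.3, 0.5, 0.8, 1.2]   # deg C of warming per W/m^2 of net forcing

for lam in sensitivities:
    required_net_forcing = OBSERVED_WARMING / lam          # forcing needed to hindcast the warming
    aerosol_forcing = required_net_forcing - GHG_FORCING   # the 'fiddle factor' fills the gap
    print(f"sensitivity {lam:.1f}: net forcing {required_net_forcing:+.2f} W/m^2, "
          f"implied aerosol forcing {aerosol_forcing:+.2f} W/m^2")

Run as written, the higher-sensitivity cases require the more strongly negative aerosol forcing – the same inverse relationship Kiehl reported, produced here with nothing more than arithmetic.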
TonyL
November 8, 2015 at 8:32 am
“That’s a pretty big IF you got there. The model runs do not converge, run to run. The models do not converge, model to model. The IPCC just averages everything and then assumes that the errors cancel out. This is exactly how CMIP5 is arrived at.”
———————————
Hello TonyL
What you state above could be right and correct, but let me express my understanding and my opinion on that particular point.
When I look at these model results, run to run and model to model, all of them (presumably you were referring strictly to GCMs in this case) do end up in the same “ball court” that the AGW-ACC “climate scientists” have been claiming for years now.
Regardless of the actual initial state or the degree of forcing (the anthropogenic CO2 scenario), all models, run to run or model to model, converge on that particular “ball court”.
The “ball court” of roughly 200 to 220 ppm up in CO2 concentration and roughly 2.5 to 3 C up in temperature.
The only main difference between the model runs, whether run to run or model to model, is the time it takes to get there.
In some cases it can take half a century, and in other cases maybe even half a millennium or more, in model terms.
cheers
Whiten:
Richard nailed it with;
From your perspective, the models may end up in the same ‘ball court’; but it is definite that the ‘ball court’ isn’t Earth. That ball court is illusory and completely useless for any practical, scientific or even modelling purpose.
A broken clock reads the correct time twice a day if it is a twelve hour clock. Whether the clock’s accuracy is measured for one day, ten days or a century doesn’t make the clock’s accuracy useful or predictive.
I don’t know if I’m just becoming more sensitive to it, paying more attention to it or if my sense is real but I do get the sense that the quality of WUWT guest posters is going down or at least their arguments are.
all he would need to do is look at a control run to know that the errors are not exponential
Actually you appear to be assuming that the errors must send the model spiralling out of control. Not so. The errors could easily be set to balance each other out in a control run (and indeed that’s what they must be set to do) as well as enabling the model to roughly reproduce past warming figures.
But the actual climate signal is certainly lost in the process through the accumulation of errors and instead, the GCM is behaving in a manner expected of it. By design if you like.
Has anyone else noticed that in the warming ‘literature’ the most frequently used words when referring to the future are ‘could’ and ‘might’ and ‘probably’ – so much for certainty.
Nearly everyone who is both paying attention, and has any objectivity whatsoever has noticed this.
Those words you mention are but a few of the “weasel words” which make most every warmista conclusion a smarmy bunch of lukewarm crap.
Climate is by definition a steady state. That steady state is modified by external influences and called climate change. Over a multi-decadal (or longer) period modelled by the climate change community, it is only the larger externals which have an impact – small scale events may only have a short term effect on climate equilibrium.
Rather than model climate change from an initial condition forward with all the potential errors outlined above, it may be better to hypothesise a future equilibrium state of the planet given changes in the larger “man made” externals – CO2, population, land use, urbanisation etc. Uncontrollable externals impact unavoidably – the question is timing, not magnitude.
For example – if CO2 increases to 800 ppm, what would the climate look like given the consequential changes to water vapour, cloud cover, plant growth, ice sheets, sea level, etc etc? The outcome should be a steady state. If the model is unable to resolve the different feedbacks, the outcome would be a runaway greenhouse or freezer – unattractive!
This may crystallise the lack of knowledge that exists in some of the externals as they would be key to resolving a steady state. It would also help identify which assumptions made on feedback loops were material and which had limited impact on the steady state achieved. The model could be progressively simplified with the key assumptions made more usefully challenged.
Adopting this approach may also influence mitigation action which may be taken (CO2 reduction may not be the best answer), is less likely to require massive computing power, and avoids accumulated errors in current models.
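For what it is worth, here is the sort of back-of-envelope ‘steady state’ calculation Terry describes, as a minimal sketch. It uses the commonly quoted simplified expression for CO2 forcing, 5.35 ln(C/C0) W/m^2, together with an assumed overall sensitivity; that sensitivity value is purely an assumption for illustration, and it is exactly the number the feedback arguments turn on.

import math

# Minimal 'steady state' sketch: pick a CO2 level, convert it to a forcing with
# the common simplified expression, and apply an assumed overall sensitivity.
# The sensitivity figure is an assumption, not something established here.

C0 = 280.0         # ppm, pre-industrial CO2
C = 800.0          # ppm, hypothetical future CO2
SENSITIVITY = 0.8  # deg C per W/m^2, assumed net (feedback-inclusive) value

forcing = 5.35 * math.log(C / C0)            # W/m^2
equilibrium_warming = SENSITIVITY * forcing  # deg C above pre-industrial

print(f"Forcing at {C:.0f} ppm: {forcing:.2f} W/m^2")
print(f"Equilibrium warming at assumed sensitivity {SENSITIVITY}: {equilibrium_warming:.1f} C")

If the feedbacks cannot be pinned down, the sensitivity (and hence the steady state) cannot be either – which is the point about the exercise exposing where the real uncertainties sit.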
Terry:
You assert:
Really?! Whose definition is that?
The IPCC definition is this
There is no mention of “a steady state” and if a steady state existed there would be no difference in climate between periods of time with different lengths.
Richard
So according to the IPCC, the earth has just one climate from pole to pole, and everywhere in between.
Their definition of climate permits no spatial variation, only temporal variation.
What unmitigated BS is that.
G
Steady state? Is this what you call a steady state?
Mike…you left out the obvious
” the number of calculations required”…..takes time
And by the time they’ve made a few runs……the past has been changed/adjusted/modified/F’ed with
Even if we understood it all…..models will never be right
http://realclimatescience.com/wp-content/uploads/2015/11/GISS1982_2002_2014_201522.gif
The models are trained on the past temperature to set the “parameterizations”. When the models are set to reflect a fictional past, the predictions will be for a fictional future.
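As a minimal sketch of that point (all numbers invented for illustration): calibrate a single free parameter against a past record, then project forward. If the calibration record has been adjusted to warm faster than the raw observations, the fitted parameter – and everything projected from it – is proportionally inflated.

# Toy sketch: one-parameter 'model' calibrated to a raw vs an adjusted past
# record, then used for projection. All numbers are illustrative assumptions.

FORCING_PAST = 1.6     # W/m^2 of forcing change over the calibration period (assumed)
FORCING_FUTURE = 3.0   # W/m^2 of further forcing over the projection period (assumed)

records = {
    "raw record": 0.6,       # deg C of warming over the calibration period
    "adjusted record": 0.9,  # deg C in a hypothetical adjusted version of the same record
}

for label, past_warming in records.items():
    sensitivity = past_warming / FORCING_PAST   # the fitted 'parameterization'
    projection = sensitivity * FORCING_FUTURE   # projected future warming
    print(f"Calibrated to the {label}: sensitivity {sensitivity:.2f} C/(W/m^2), "
          f"projection {projection:.2f} C")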
Another possibility is that the models are accurate for the planet they represent, just not Earth.
well exactly…..there’s even the chance that the models are right
But when they are tuned to a fictional past that shows faster temp rise than actually happened….
…you would expect to get the exact same results they are showing
Faster temp projections than what is actually happening
Latitude: Hammer meet nail! I agree and think this is why the models are currently failing.
…well I predict that eventually they will all match perfectly! (really)
Latitude’s post is excellent in that it demonstrates that not only are the models fudged (via particulate effects having a tremendous variance) but the observations are massaged to match the models as well. You will not see a Mosher taking on the totality of these adjustments.
However none of the models, even the best, are in the ball park of the satellite observations.
Just wait till Ferdinand catches up with you. He uniquely and solely understands, in gory detail, all global (possibly galactic) carbon sources, sinks and fluxes. Natural components are in perfect harmonious balance, according to him, and anthropogenic emission accounts for all observed increase to atmospheric CO2. Just ask him!!
Ferdy Englebert reminds me of Lord Kelvin: always correct on every detail and yet shown in the long term to be wrong on many of his firmly held beliefs. The inputs and outputs are not known accurately and are constantly changing.
“… equally the model could still produce output that at first glance looks quite reasonable – yet either way the model simply has no relation to reality.”
My view, for what it is worth, is that models may well bear a relation to reality but only by accident. A philosophy lecturer summed it up years ago when he said that anyone who claimed to know for certain the winner of the next Preakness Stakes was flat wrong, even if that horse won!
What he then went on to say, of course, was that trying to convince that man he was wrong was a nearly impossible task.
[Snip. Fake screen name for beckleybud, Edward Richardson, David Socrates, Tracton, pyromancer, and at least a dozen other sockpuppet names. ~mod.]
Does anyone know how the models handle clouds?
Do the models “create” clouds – as does the real world – and do the models “extinguish” clouds as does the real world?
Does anyone know how the models handle ocean currents, which all agree are very influential in affecting climate?
Do the models “create” rainfall and “end” rainfall, as we see in the real world? And does not rainfall cool the area on which it falls?
Any help will be appreciated.
JohnTyler:
You ask
I answer, yes, badly.
Please note that my answer is not facetious.
Ron Miller and Gavin Schmidt, both of NASA GISS, provided an evaluation of the leading US GCM. They are U.S. climate modelers who use the NASA GISS GCM and they strongly promote the AGW hypothesis. Their paper titled ‘Ocean & Climate Modeling: Evaluating the NASA GISS GCM’ was updated on 2005-01-10 and is not reported to have been updated since.
Its abstract says:
The abstract says; “the representation of cloud cover in the model has been brought into agreement with the satellite observations by using radiance measured at a particular wavelength instead of saturation” but this adjustment is a ‘fiddle factor’ because both the radiance and the saturation must be correct if the effect of the clouds is to be correct.
There is no reason to suppose that the adjustment will not induce the model to diverge from reality if other changes – e.g. alterations to GHG concentration in the atmosphere – are introduced into the model. Indeed, this problem of erroneous representation of low level clouds could be expected to induce the model to provide incorrect indication of effects of changes to atmospheric GHGs because changes to clouds have much greater effect on climate than changes to GHGs.
Richard
Do I understand this to mean that NASA [admits] that if the models underestimate cloud cover by as little as 2%, they think it would be enough to counteract any global warming due to increased CO2?
katherine009:
Thank you very much indeed for your post that asks me
No, your understanding results from a formatting error that I made and did not recognise until your question drew my attention to it.
The quotation from NASA should have ended immediately after the sentence that says
I put the ‘close quotation’ in the wrong place. Sorry.
And thank you for pointing out my error for me and others to see: I am genuinely grateful because it has enabled me to provide this corrigendum.
The data which indicates that only 2% cloud cover change would account for all observed warming derives from
Pinker, R. T., B. Zhang, and E. G. Dutton (2005), Do satellites detect trends in surface solar radiation?, Science, 308(5723), 850– 854
I again apologise for my formatting error which is so misleading, and I again thank you for drawing my attention to it.
Richard
[What’s the proper full paragraph (with formatting of the bold) for quote that needs to be corrected? .mod]
Mod:
You ask me
The following is – I hope – what I intended. Also, I add the reference concerning observation of cloud cover.
***********************
JohnTyler:
You ask
I answer, yes, badly.
Please note that my answer is not facetious.
Ron Miller and Gavin Schmidt, both of NASA GISS, provided an evaluation of the leading US GCM. They are U.S. climate modelers who use the NASA GISS GCM and they strongly promote the AGW hypothesis. Their paper titled ‘Ocean & Climate Modeling: Evaluating the NASA GISS GCM’ was updated on 2005-01-10 and is not reported to have been updated since.
Its abstract says:
This abstract was written by strong proponents of AGW but admits that the NASA GISS GCM has “problems representing variables in geographic areas of sea ice, thick vegetation, low clouds and high relief.” These are severe problems. For example, clouds reflect solar heat and a mere 2% increase to cloud cover would more than compensate for the maximum possible predicted warming due to a doubling of carbon dioxide in the air. Good records of cloud cover are very short because cloud cover is measured by satellites that were not launched until the mid 1980s. But it appears that cloudiness decreased markedly between the mid 1980s and late 1990s.
(ref. Pinker, R. T., B. Zhang, and E. G. Dutton (2005), Do satellites detect trends in surface solar radiation?, Science, 308(5723), 850– 854)
Over that period, the Earth’s reflectivity decreased to the extent that if there were a constant solar irradiance then the reduced cloudiness provided an extra surface warming of 5 to 10 Watts/sq metre. This is a lot of warming. It is between two and four times the entire warming estimated to have been caused by the build-up of human-caused greenhouse gases in the atmosphere since the industrial revolution. (The UN’s Intergovernmental Panel on Climate Change says that since the industrial revolution, the build-up of human-caused greenhouse gases in the atmosphere has had a warming effect of only 2.4 Watts/sq metre). So, the fact that the NASA GISS GCM has problems representing clouds must call into question the entire performance of the GCM.
The abstract says; “the representation of cloud cover in the model has been brought into agreement with the satellite observations by using radiance measured at a particular wavelength instead of saturation” but this adjustment is a ‘fiddle factor’ because both the radiance and the saturation must be correct if the effect of the clouds is to be correct.
There is no reason to suppose that the adjustment will not induce the model to diverge from reality if other changes – e.g. alterations to GHG concentration in the atmosphere – are introduced into the model. Indeed, this problem of erroneous representation of low level clouds could be expected to induce the model to provide incorrect indication of effects of changes to atmospheric GHGs because changes to clouds have much greater effect on climate than changes to GHGs.
Richard
IPCC credits clouds with a net -20 W/m^2 RF. That’s cooling and lots of it.
“Does anyone know how the models handle ocean currents, which all agree are very influential in affecting climate?”
I’ve shown a video below. It works.
Currents for weather, yes, currents for long term climate and ENSO cycles, not a chance. Currents for ocean overturning and very long term climate, nope.
It is my contention that there has not been an experimental verification of the theoretical CO2 atmospheric energy transfer mechanism. Neither photonic nor collision nor the combination.
It is my contention further that the generally accepted 1 deg C increase is fiction.
I am very simple minded and can think only in first order approximations. But with so few CO2 molecules amongst so many others in a rapidly changing soup the calculation assumptions are simplistic and incorrect.
George,
Have a look at Modtran:
http://climatemodels.uchicago.edu/modtran/
That is based on line by line absorption of IR by CO2 and other GHGs from accurate laboratory measurements at very different conditions of pressure and water vapor (per “standard 1976 atmosphere”) at different heights. The possibility of collisions with other molecules before a new IR photon is released is incorporated and decreases with height.
The energy loss from a CO2 doubling can be compensated by an increase of about 1°C at ground level…
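For readers who want the back-of-envelope version of that ~1°C figure, here is a minimal sketch. It is not MODTRAN, just the standard no-feedback estimate, with the effective emission temperature and the 3.7 W/m^2 doubling figure taken as assumptions.

# No-feedback sketch: the extra ~3.7 W/m^2 retained after a CO2 doubling must be
# re-radiated, which by the Stefan-Boltzmann law requires warming of roughly
# dT = dF / (4 * sigma * T^3) at the effective emission temperature.

SIGMA = 5.67e-8   # W/m^2/K^4, Stefan-Boltzmann constant
T_EMIT = 255.0    # K, effective emission temperature of the Earth (assumed)
DELTA_F = 3.7     # W/m^2, canonical forcing for a doubling of CO2

delta_t = DELTA_F / (4 * SIGMA * T_EMIT ** 3)
print(f"No-feedback warming for {DELTA_F} W/m^2: {delta_t:.2f} K")

This works out to about 1 K; everything beyond that depends on the feedbacks disputed throughout this thread.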
MODTRAN – MODerate resolution atmospheric TRANsmission – is technically an extension of the earlier LOWTRAN model.
‘Moderate’ means that it resolves the spectrum in bands a few wavenumbers (cm⁻¹) wide instead of LOWTRAN’s 20 cm⁻¹ bands. Perhaps HITRAN would give better resolution? Though it is still a model.
Model output used to verify what?
Modeled is modeled, period. Fantasy until proven by direct observations and independently replicated.
Not only is it fiction, but also a meaningless fiction. There is no general acceptance of how this “1 deg C increase” will be manifest in a world of widely varying, constantly changing, temperatures. Is it distributed uniformly by latitude or concentrated in the (nonexistent) tropical hot spot? Does it reside in daily or seasonal increased maxima, or minima, or both? And we’re just talking about temperature, miles away from discussing possible effects on “climate.”
skorrent1,
It is real for one and only one snapshot of a fixed atmosphere with fixed humidity, clouds and their distribution over the latitudes and altitudes. Look up the “standard 1976 atmosphere”…
Of course in the real world there are a host of positive (according to models) and negative (according to the real world) feedbacks and changing conditions, but the roughly 1°C figure is based on solid physics…
That I believe.
No it isn’t. It is based on an imperfect understanding of physics as it applies to the real world. A very imperfect understanding.
“The harsh reality is that any science is wrong if its mathematics is wrong. The mathematics of the climate models’ internal workings is wrong.” ~ from article
I could not agree more, but would add that if the basic science is wrong then the maths don’t really matter as you are dead wrong from the get-go.
For example, what does it matter what the math of the climate models say if they are programmed by delusional fools who have it wrong on what CO2 does? When I was trained in math in the 70s and then again in the 80s people kept telling me that if I put garbage into the computer I would most likely get garbage back out — but no matter what came back out it was not trustworthy if I put garbage in. Do professors still say such things? Do climate “scientists” not ever listen?
~ Mark
This process is then repeated a very large number of times so that the model then predicts the state of the planet over future decades.
And that is how these damn bottom-to-top models screw up. Get just one initial input wrong and it plays crack the whip with the end results, the result being meaningless mush. Then there’s “Fallacy of the parts”, wherein the parts add up to a greater sum than the whole (i.e., the observational evidence). Recurse you, Red Baron!
Top-down models are the only ones for which we have sufficient tools and evidence. This requires a meataxe approach, but that is the best method available to us considering our current state of knowledge and the inherent uncertainties. They may be wrong, of course, but they are far more likely to be in the ballpark.
Another approach is to take a bottom to-top-model and try to dicker with inputs and algorithms. But being bottom-to-top, they are spaghetti monsters, so it’s a problematic undertaking. For example, CMIP does not have a direct input for CO2 sensitivity. It is derived at great length. Other (top-down) models, OTOH, allow a direct finger on the primary keys. Top-down is flexible and adaptable. And as straightforward as a punch in the face.
I have seen amazingly complex Excel spreadsheets used for budgets, manpower estimates, project scheduling, etc. and some macro error in some minor cell in some distant tab can propagate through and wreck the entire sheet.
We’re back to: until Doctorate Engineers agree that a proposed model actually models what it is expected to model, it is not a model.
It’s called verification. Verification is dependent upon a model meeting expected specified tests, not loons calling their model results ‘robust’.
Then again, no one seriously believes an extremely complex chaotic system can be effectively modeled.
As Nicholas points out; even very complex financial spreadsheets fail as soon as one component changes.
Those who model financial systems understand their model limitations and are prepared:
A) to explain any differences of reality to their model, explicitly. Unlike those free spending climate fools, real world finances involve very unforgiving hard nosed B%$%@*s. Hand waving, sophistry and falsehoods mean that one is quickly unemployed. With prejudice.
B) to immediately accept new real world corrections into their model. i.e. They don’t sit on their backsides claiming that many bad model runs are equal to good data. Sooner or later those financial B%$%@*s will be asking where their climate research monies went.
Mike’s approach is not new:
“The general approach is currently to describe the climate system from ‘the bottom up’ by accumulating vast amounts of data, observing how the data has changed over time, attributing a weighting to each piece or class of data and extrapolating forward”
and
“We need a New Climate Model (from now on referred to as NCM) that is created from ‘the top down’ by looking at the climate phenomena that actually occur and using deductive reasoning to decide what mechanisms would be required for those phenomena to occur without offending the basic laws of physics.
We have to start with the broad concepts first and use the detailed data as a guide only”
from here:
http://www.newclimatemodel.com/new-climate-model/
Yes. I have been loudly promoting the top-down modeling approach for years, now. Go basic and drill down. Anything else is a road going forward to nowhere.
“Mike’s approach is not new“. True. I didn’t claim it to be new, but I should have stated that explicitly, as I did in my earlier series. Hopefully what I did do was to provide a helpful explanation.
Mike Jonas,
You wrote:
“Each subsequent iteration adds in its own error, but is also misdirected by the incoming error. In other words, the errors compound exponentially. Over any meaningful period, those errors become so large that the final result is meaningless.
NB. This assertion is just basic mathematics, but it is also directly supportable: Weather models operate on much finer data with much more sophisticated and accurate calculations.”
A little knowledge is a dangerous thing. If you knew more than basic mathematics, you would realize that your conclusion is wrong. And if you knew anything about the relation of weather models to climate models you would spot your error.
First the basic mathematics. Whether or not errors compound depends on the properties of the system being computed. If it is a stable system, the numerical errors can self-correct, allowing the calculation to proceed for an arbitrary amount of time. In an unstable or chaotic system, trajectories will diverge and the specific state of the system cannot be calculated. Which occurs is a property of the mathematical system, not the numerical calculation, provided that one is using a suitable numerical algorithm.
The climate system is chaotic, so exact evolution of states can not be calculated. Every climate modeller knows as much and ADMITS IT, so for you to claim that you have found some fundamental flaw in the models is both ignorant and arrogant; a very nasty combination.
Lorenz noted that there are two types of predictions that can be made in chaotic systems. He called them predictions of the first kind and predictions of the second kind. The former requires predicting the specific sequence of states through which the system will evolve. That is what is done by weather models. Such predictions are highly dependent on initial conditions and can only extend for a limited time (days) into the future.
However, even if predictions of the first kind are impossible, it can still be possible to make predictions of the second kind. Those only predict the statistical properties of the system, such as average temperature. Climate models are designed to make such predictions. I am not sure of the exact mathematical requirements for such predictions to be possible, but the basic idea is that the system remain bounded; i.e., that it does not run off in some unrestrained manner (hence the oft-repeated phrase “weather is an initial value problem, but climate is a boundary value problem”). The climate models most certainly meet the required conditions for predictions of the second kind and the evidence is strong that the actual climate system meets those conditions.
When predictions of the second kind are possible, initial conditions do not matter, except for how long it takes to arrive at a steady state.
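Lorenz’s distinction is easy to see numerically. The sketch below is not a climate model – it is the classic Lorenz-63 system with an arbitrary time step and run length – but it shows two runs whose initial conditions differ in the ninth decimal place losing all pointwise agreement (no prediction of the first kind) while their long-run statistics stay close (a prediction of the second kind).

# Lorenz-63 with forward Euler; classic parameters sigma=10, rho=28, beta=8/3.
# The time step and run length are arbitrary choices for illustration.

def run(x0, steps=200_000, dt=0.005):
    """Integrate Lorenz-63 from (x0, 1.0, 1.05); return the x and z histories."""
    x, y, z = x0, 1.0, 1.05
    xs, zs = [], []
    for _ in range(steps):
        dx = 10.0 * (y - x)
        dy = x * (28.0 - z) - y
        dz = x * y - (8.0 / 3.0) * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs.append(x)
        zs.append(z)
    return xs, zs

def mean(values):
    return sum(values) / len(values)

xa, za = run(1.0)
xb, zb = run(1.0 + 1e-9)   # initial condition perturbed in the ninth decimal place

# Prediction of the first kind (the specific trajectory) is soon lost:
print("x difference after  5 time units:", abs(xa[1000] - xb[1000]))
print("x difference after 50 time units:", abs(xa[10_000] - xb[10_000]))

# Prediction of the second kind (long-run statistics) survives, near enough:
print("long-run mean of z:", round(mean(za), 2), "vs", round(mean(zb), 2))

Whether the real climate system satisfies the conditions for useful predictions of the second kind is, of course, the point in dispute in this thread.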
There is much to criticize in climate models. But one should make valid criticisms. You fail to do so.
Mike M. – You could have added that the modellers state clearly that they do not predict climate, so there is no point in my saying that the models cannot predict climate. Well, I think it is worth saying, again and again if necessary, because it is important and not at all well understood. You say “ the system remain bounded; i.e., that it does not run off in some unrestrained manner“. Yes, I address that by reference to the irregular clock : if you force a model to stay within certain bounds, then it will always be within those bounds, but you might as well generate random numbers within those bounds for all the good it will do you. This part of the models – the cell-based system – adds no value and risks being destructive. Scrap it. Start again with a top-down system. It won’t be very accurate till we know more about climate’s drivers, but it would still be far better than our current random-number generators.
Mike Jonas,
What the models attempt is, in effect, determining the bounds and how those bounds change as a result of exogenous inputs, like changing CO2. That is adding something of very considerable value IF they can actually do it (I think they can’t).
You want them to scrap the approach they have been using. OK, but that is armchair quarterbacking unless you have something constructive to offer in its place. “Start again with a top-down system” says nothing constructive. Maybe you plan a future article in which you will offer something?
My suspicion is that the real problem with the models is not so much the approach as the fact that climate scientists have been insufficiently ruthless (understatement) in trying to invalidate them. Instead they are content to use multi-model means, as if that can somehow correct models that have not been validated.
Mike Jonas,
Your argument would be much more compelling if there weren’t abundant examples of successful bottom-up models of chaotic systems. The criticism about initial states that you apply to the GCMs would apply equally to many of the techniques of Quantum Chemistry, Statistical Mechanics and Computational Fluid Dynamics (CFD) used to design modern wonders from Microchips to Airplanes. Yet, we have millions of passenger miles and Trillions of Gigaflops from the cell phone in everyone’s pocket and the computers across the globe, testifying that these models work splendidly.
In fact, the GCMs are largely based on the same CFD principles used to design aircraft, cars and engines. The turbulent flows of these systems are just as chaotic and unpredictable as the weather, but modeling has led to new designs with significant improvements in efficiency and performance. The modeling approach is sound; it’s the implementation and computational capabilities that are lacking.
You state that “any science is wrong if its mathematics is wrong”, but that’s not really true! A better statement is that a model is only as good as its worst approximations. We use Newtonian mechanics all the time and for a great many problems it is good enough. The trick is knowing when the simplifying approximations work and when they break down and relativistic or quantum mechanical corrections must be applied.
Were you a climate modeling expert, any commentary on the weakest model approximations would have been very illuminating. When are these approximations valid, when do they break down? You do touch on parametrizations, but do you really understand how they are implemented? Aren’t there different approaches? What are the strengths and weaknesses of each. How are they determined or evaluated and tested? How can they be improved, tested and verified? Now that would make for an interesting read.
Finally, regarding what you call “Large Externals”, I would argue that a detailed understanding of all such factors is unnecessary. For example, while Milankovitch cycles are important on geologic timescales, their impact over centuries or a few millennia is likely to be small. A lack of detailed understanding of these externals would not necessarily prevent understanding how CO2 impacts global temperatures in the near term. It is true that in some cases they do need to be accounted for to “calibrate” the model based on historic data, but ideally a good model will have few or even no such calibration factors that can’t be determined independently of the model itself.
SemiChemE,
An excellent comment. In particular, you really cut to the heart of the issue when you wrote: “any commentary on the weakest model approximations would have been very illuminating. When are these approximations valid, when do they break down? You do touch on parametrizations, but do you really understand how they are implemented? Aren’t there different approaches? What are the strengths and weaknesses of each. How are they determined or evaluated and tested? How can they be improved, tested and verified? Now that would make for an interesting read.”
Mike M.
Bounds? Predetermined bounds? And they still get it wrong!? Is that the tail or the trunk wiggling there?
SemiChemE:
Every one of those used for real products is intensely tested and fully verified before committing serious dollars or any lives.
Mistakes using those models in the real world are backed by serious repercussions upon failure.
Perhaps computer modelers should be sued every time they publish or utilize a bad model run? Sooner or later, the world will come to that same realization. Listing ‘climate modeler’ in a resume will be a career killer.
What you say is true. However, the main problem modelling climate with partial differential equations is that you have no idea whether the solution has converged.
In finance the Black–Scholes option pricing formula can be modeled as a diffusion equation. For certain cases these equations have a closed form solution, so we can test whether the numerical algorithm converges by applying it to a case we know. In General Relativity, certain special cases have a closed form solution. In reservoir simulation, the Buckley–Leverett equations have a closed form solution. However, in computational fluid dynamics, particularly where there is turbulence, we do not even know whether a unique solution exists. Anyone who tells you otherwise is talking BS. They would claim the Clay Institute prize if they were able to solve this problem. CFD models are very useful for modeling the flow over aircraft wings, but only if they are tested in a wind tunnel under quasi-real operating conditions.
We all know the current climate models are useless. Unfortunately, there is no quick fix to this problem round the corner. We can not even model turbulent flow in a pipe, or the behavior of a shower curtain, let alone the entire atmosphere.
Walt D. wrote: “However, in computational fluid dynamics, particularly where there is turbulence, we do not even know whether a unique solution exists. Anyone who tells you otherwise is talking BS. They would claim the Clay Institute prize if they were able to solve this problem.”
Very interesting. There are several Clay Institute prizes, I assume Walt D. means this one: https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes_existence_and_smoothness
But that Wikipedia article notes that: “Jean Leray in 1934 proved the existence of so-called weak solutions to the Navier–Stokes equations, satisfying the equations in mean value, not pointwise.”
Such weak solutions might be all that are needed for climate models since they do not claim to predict specific future states (weather), only the statistical properties of such states (climate). I wonder if someone here (Nick Stokes?) actually knows if this is the case.
“However, in computational fluid dynamics, particularly where there is turbulence, we do not even know whether a unique solution exists.”
You can’t prove in advance whether a unique solution exists. But when you have one, you can test it simply by seeing whether it satisfies the equations. Go back and substitute. More usually, you test whether the conserved quantities that you are transporting are actually conserved.
“We can not even model turbulent flow in a pipe”
Nonsense. Verification and validation of a pipe flow solution is a student exercise
“Such weak solutions might be all that are needed for climate models”
As said above, the question of whether you can prove in advance whether a solution exists and is non-singular rarely impinges on practical CFD. That provides algorithms which do generate candidate solutions. You can check afterwards whether they do satisfy the discretised equations, and do not have singularities. There is an issue there related to stability. As I mentioned, higher order systems do have (without boundary conditions) multiple solutions. You want the ones that satisfy the boundary conditions, but in explicit forms the boundaries can be a long way away, and so this uniqueness can drift in time. IOW spurious solutions can remain inadequately damped, even if they don’t grow rapidly causing blowup. You can analyse for all this; the solution is generally to be conservative with stability criteria, and use implicitness where necessary.
Having satisfied the discretised equations, there is then the question of whether the discretised equations adequately approximate the continuum de’s. That is the issue of mesh independence; ideally you test on a succession of mesh refinement. A possible criticism of climate GCM’s is that as a practicality you can only go so far with this. On the other hand, there is the experience with numerical forecasting, which works well, and does use finer meshes for shorter periods.
Remember, we can’t prove the solar system is stable either. But life goes on.
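Both checks Nick describes – substituting the solution back into the discretised equations, and testing for mesh independence – can be seen on a deliberately tiny problem. The sketch below is nothing like a CFD code; it solves -u'' = pi^2 sin(pi x) on (0,1) with u(0)=u(1)=0 (exact solution u = sin(pi x)), verifies that the discrete equations are satisfied to round-off, and then refines the grid to confirm the error against the exact solution falls roughly as h^2.

import math

def solve(n):
    """Solve -u'' = pi^2 sin(pi x), u(0)=u(1)=0, with central differences on n interior points."""
    h = 1.0 / (n + 1)
    a = [-1.0] * n                                   # sub-diagonal
    b = [2.0] * n                                    # main diagonal
    c = [-1.0] * n                                   # super-diagonal
    d = [h * h * math.pi ** 2 * math.sin(math.pi * (i + 1) * h) for i in range(n)]
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return h, u

def residual(h, u):
    """'Go back and substitute': worst violation of the discretised equations."""
    n = len(u)
    worst = 0.0
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        f = math.pi ** 2 * math.sin(math.pi * (i + 1) * h)
        worst = max(worst, abs((-left + 2.0 * u[i] - right) / (h * h) - f))
    return worst

def error(h, u):
    """Worst difference from the exact solution u(x) = sin(pi x)."""
    return max(abs(ui - math.sin(math.pi * (i + 1) * h)) for i, ui in enumerate(u))

for n in (10, 20, 40, 80):
    h, u = solve(n)
    print(f"n={n:3d}  residual={residual(h, u):.1e}  error vs exact={error(h, u):.2e}")

The residual stays at round-off level on every grid (the discrete equations are satisfied), while the error against the exact solution drops by roughly a factor of four per refinement – the two separate questions Nick distinguishes.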
Nick: Good post. Thanks for the reference to the student exercise. Do they have a reference to the actual equations that are being solved?
Walt D,
They are using Ansys Fluent, a major engineering CFD package. It solves the Navier-Stokes equations, with various options for turbulence. The sort of thing people here say is impossible. The theory manual is here. The basic equations are in Sec 1.2.
Navier-Stokes equations “can almost be” solved for the very small volumes, very limited runs needed for engineering approximations of limited reactions occurring in very, very limited regions of fluid flow.
But! NONE of the models are “original” design-truths. They ALL are based on YEARS of measured data from real physical designs and real evidence (tuft plots, photographs, turbulence wind tunnels, sight, and pitot tube measurements at each point near the fuselage or intake duct or piston or carburetor or fuel injection nozzle). And, even then, the “pretty photos” of the flow or heat exchange fluid are RECOGNIZED as approximations of limited volumes under very specific conditions.
Engineering 3D FEA and fluid flow model results are NOT “spread across the globe” extrapolating every billion-per-second approximation from every cell for the next two hundred years!
Interesting exchange. When Nick Stokes writes ” It solves the Navier-Stokes equations, with various options for turbulence. The sort of thing people here say is impossible”, presumably he means finds a stable numerical solution to a discrete dynamical system approximating NS. As NS experts point out, how far these have anything to do with solutions of the actual NS equation except in the very short term is (also) an open question.
Wikipedia’s version of Leray’s weak solutions to NS – not really ‘mean value’. More pointwise over a class of test functions. Worth mentioning because there are actual statistical solutions to NS which are apparently being considered by some (European) climate modellers. One obvious problem is they need an effectively subjective spatial probability.
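For readers wondering what “pointwise over a class of test functions” means in practice, the standard Leray–Hopf weak formulation (sketched from the usual textbook statement, with the precise function spaces omitted) asks for a divergence-free velocity field u, with finite kinetic energy and finite cumulative dissipation, such that for every smooth, compactly supported, divergence-free test field \(\varphi\) vanishing at the final time:

\[
\int_0^T\!\!\int_\Omega \Big( -\,u\cdot\partial_t\varphi \;-\; (u\otimes u):\nabla\varphi \;+\; \nu\,\nabla u:\nabla\varphi \Big)\,dx\,dt
\;=\; \int_\Omega u_0\cdot\varphi(\cdot,0)\,dx \;+\; \int_0^T\!\!\int_\Omega f\cdot\varphi\,dx\,dt .
\]

All the derivatives have been moved onto the smooth test function \(\varphi\), so u itself is only required to satisfy the equations “tested” against such functions, not pointwise.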
At its very heart, a GCM doing a climate prediction is predicting energy accumulation by the planet. It is NOT simply playing a lot of weather that “evolves” and might be self correcting. It has a real “direction” which will certainly be swamped by the accumulation of those errors so pretty much everything you say is irrelevant because it comes from a fundamentally mistaken position.
The models are all wrong in the same direction – too warm (no bias there). Their purpose is propaganda, not accuracy, and to be a club for alarmists to scaremonger the public and to “inform” policy makers.
This could be a good place to start with the top-down analysis
http://static5.drsircus.com/wp-content/uploads/2015/10/global5.png
Over the history of the earth the global average temperature seems to be bounded within a range of 10 deg C. Not sure what you’d do next.
son of mulder,
“Not sure what you’d do next.”
There’s the rub. I think that climate modelling started with top-down, but they quickly ran into dead-ends. For instance: Increased temperature will produce increased atmospheric water vapor, but how much? You either make something up or you try to calculate it from basic principles. For the latter you need a bottom-up model.
Thanks for using the words “seems to be”. One of my pet peeves in this whole mess is the degree of accuracy that seems to be inferred from things like tree rings, ice cores, and the like. I mean, we are just now getting to the point where we can take detailed measurements of the temperature of the Earth. What, exactly are we comparing it to? Tree rings? I think not. If tree rings are so accurate then why did we spend zillions of dollars on satellites? Why didn’t we just continue with the tree rings? /SARC
I think I can answer that one. Using the metrics they use, tree rings would indicate cooling from 1960?
son of mulder,
“Not sure what you’d do next.”
We should note that CO2 rise does not lead to a rise in temperature but that it is the other way around — and that in the past CO2 in the atmosphere has been much higher than at present. This CO2 causes warming delusion is the biggest mistake science has made in regards to climate.
1) Create computer model forecasting doom that is impossible to refute.
2) Alarm.
3) Profit.
If anything, that chart says CO2 and temperature are unrelated.
Bingo!
Well, if the chart can be believed, it shows that present day CO2 levels are in (roughly) the lowest decile going back about 1 billion years.
It also shows that temperature and CO2 levels have a very poor correlation.
The chart provides ZERO information as to whether CO2 levels change as a RESULT of temperature changes (i.e., CO2 response lags temperature changes) or vice versa.
Seems the good chap NY Attorney General should be investigating the IPCC and WMO models whose “predictions, projections or forecasts” cannot be validated with real measurements, now, or on any time scale imaginable over past and future years. Now that IS a clear case of fraud, at many levels and by many perpetrators.
After all, during Prohibition days Al Capone’s syndicate in Chicago kept two sets of accounting books, the real one showing Capone’s earnings and the one the syndicate gave to the Treasury Department showing Capone with no income.
So the IPCC’s and WMO’s fabulous model predictions, projections and forecasts depend on thousands of humans (salary, annual leave, sick leave, 401K, retirement benefits, etc.) writing and fiddling with millions of lines of code, crunched inside hundreds of massive mainframes in buildings (server farms) ingesting billions of Euros (or whatever currency), and their output cannot be validated. That is an accounting problem, i.e. two books: the real book showing who is benefiting, by income, and the fake book showing no income, which is shared with the various government treasury departments. So if the various government treasury departments were to get their hands on the real book showing the top officials benefiting royally from the billions of Euros of income that was supposed to pay for running the models in the server farms, serviced by thousands of human workers, but did not, then somebody, like Capone, will have to go to jail.
Ha ha