# Inside the Climate Computer Models

Guest essay by Mike Jonas

In this article, I take a look inside the workings of the climate computer models (“the models”), explain how they are structured, and assess how useful they are for predicting future climate.

This article follows on from a previous article (here) which looked at the models from the outside. This article takes a look at their internal workings.

## The Models’ Method

The models divide up the atmosphere and/or oceans and/or land surface into cells in a three-dimensional grid and assign initial conditions. They then calculate how each cell influences all its neighbours over a very short time. This process is then repeated a very large number of times so that the model then predicts the state of the planet over future decades. The IPCC (Intergovernmental Panel on Climate Change) describes it here. The WMO (World Meteorological Organization) describes it here.
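The stepping scheme just described can be caricatured in a few lines of code. This is purely my own illustrative sketch (a one-dimensional diffusion-style update on eight cells), not code from any actual model:

```python
# Toy illustration of the grid-cell time-stepping idea (NOT real GCM code):
# each cell's next value is nudged toward the mean of its neighbours.
def step(grid):
    n = len(grid)
    new = grid[:]
    for i in range(n):
        left = grid[i - 1]                # wraps around at the edges
        right = grid[(i + 1) % n]
        new[i] = grid[i] + 0.1 * ((left + right) / 2 - grid[i])
    return new

# Initial state: one warm cell in an otherwise cold row of cells.
state = [0.0] * 8
state[3] = 10.0
for _ in range(1000):                     # repeated a very large number of times
    state = step(state)
# After many iterations the initial anomaly has spread evenly over all cells.
```

A real model does this in three dimensions, with many coupled variables per cell, which is where the mind-boggling computational cost comes from.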


Seriously powerful computers are needed, because even on a relatively coarse grid, the number of calculations required to predict just a few years ahead is mind-bogglingly massive.

## Internal and External

At first glance, the way the models work may appear to have everything covered; after all, every single relevant part of the planet is (or can be) covered by a model for all of the required period. But there are three major things that cannot be covered by the internal workings of the model:

1 – The initial state.

2 – Features that are too small to be represented by the cells.

3 – Factors that are man-made or external to the planet, or which are large scale and not understood well enough to be generated via the cell-based system.

The workings of the cell-based system will be referred to as the internals of the models, because all of the inter-cell influences within the models are the product of the models’ internal logic. Factors 1 to 3 above will be referred to as externals.

## Internals

The internals have to use a massive number of iterations to cover any time period of climate significance. Every one of those iterations introduces a small error into the next iteration. Each subsequent iteration adds in its own error, but is also misdirected by the incoming error. In other words, the errors compound exponentially. Over any meaningful period, those errors become so large that the final result is meaningless.

NB. This assertion is just basic mathematics, but it is also directly supportable: Weather models operate on much finer data with much more sophisticated and accurate calculations. They are able to do this because unlike the climate models they are only required to predict local conditions a short time ahead, not regional and global conditions over many decades. Yet the weather models are still unable to accurately predict more than a few days ahead. The climate models’ internal calculations are less accurate and therefore exponentially less reliable over all periods. Note that each climate cell is local, so the models build up their global views from local conditions. On the way from ‘local’ to ‘global’, the models pass through ‘regional’, and the models are very poor at predicting regional climate [1].

At this point, it is worth clearing up a common misunderstanding. The idea that errors compound exponentially does not necessarily mean that the climate model will show a climate getting exponentially hotter, or colder, or whatever, and shooting “off the scale”. The model could do that, of course, but equally the model could still produce output that at first glance looks quite reasonable – yet either way the model simply has no relation to reality.

An analogy: a clock that runs at an irregular speed always shows a valid time of day, but even if it is reset to the correct time it very quickly becomes useless.
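The compounding of iteration errors is easy to demonstrate with a toy chaotic iteration. This is my own illustration using the logistic map, not a climate calculation; the point is only that a tiny error injected at every step swamps the result after a few dozen iterations:

```python
# Toy demonstration (my construction) of per-iteration errors compounding:
# two runs of the chaotic logistic map, one with a tiny error (1e-12)
# injected at every step, quickly disagree completely.
def logistic(x):
    return 4.0 * x * (1.0 - x)

exact, noisy = 0.3, 0.3
for _ in range(60):
    exact = logistic(exact)
    noisy = logistic(noisy) + 1e-12   # tiny error added at each iteration
gap = abs(exact - noisy)
# After only ~60 iterations the gap is of order 1, not of order 1e-12:
# each run still produces reasonable-looking values between 0 and 1,
# but they bear no relation to each other.
```

Note that both trajectories stay within the valid range, which matches the point above: a corrupted run can still look plausible.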

## Initial State

It is clearly impossible for a model’s initial state to be set completely accurately, so this is another source of error. As NASA says: “Weather is chaotic; imperceptible differences in the initial state of the atmosphere lead to radically different conditions in a week or so.” [2]

This NASA quote is about weather, not climate. But because the climate models’ internals are dealing with weather, i.e. local conditions over a short time, they suffer from the same problem. I will return to this idea later.
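The sensitivity to the initial state can likewise be illustrated with the classic Lorenz (1963) system, the textbook example of chaotic dynamics. Again this is only an illustrative sketch (crude Euler integration, standard parameter values), not a weather model:

```python
# Toy demonstration (my construction, using the Lorenz 1963 equations):
# two runs whose starting points differ by one part in a billion
# soon disagree wildly.
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One crude Euler time step of the Lorenz equations.
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)        # "imperceptibly" different initial state
max_gap = 0.0
for _ in range(5000):             # 50 model time units
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    max_gap = max(max_gap, abs(a[0] - b[0]))
# max_gap grows from 1e-9 to order 10: the runs completely decorrelate.
```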

## Small Externals

External factor 2 concerns features that are too small to be represented in the models’ cell system. I call these the small externals. There are lots of them, and they include such things as storms, precipitation and clouds, or at least the initiation of them. These factors are dealt with by parameterisation. In other words, the models use special parameters to initiate the onset of rain, etc. On each use of these parameters, the exact situation is by definition not known because the cell involved is too large. The parameterisation therefore necessarily involves guesswork, which itself necessarily increases the amount of error in the model.

For example, suppose that the parameterisations (small externals) indicate the start of some rain in a particular cell at a particular time. The parameterisations and/or internals may then change the rate of rain over hours or days in that cell and/or its neighbours. The initial conditions of the cells were probably not well known, and if the model has progressed more than a few days, the modelled conditions in those cells are by then certainly inaccurate. The modelled progress of the rain – how strong it gets, how long it lasts, where it goes – is therefore ridiculously unreliable. The entire rain event would be a work of fiction.
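A caricature of how such a parameterisation works, with entirely hypothetical numbers and thresholds of my own choosing (real schemes are far more elaborate, but share the same structure of tuned parameters standing in for unresolved detail):

```python
# Caricature of a rain parameterisation (hypothetical numbers, not from
# any model): the cell is too coarse to resolve individual clouds, so
# rain is triggered by a tuned threshold on the cell-average humidity.
RAIN_THRESHOLD = 0.8   # tunable parameter, not a measured quantity
RAIN_RATE = 0.1        # fraction of excess moisture rained out per step

def parameterised_rain(humidity):
    """Return (new_humidity, rainfall) for one cell over one time step."""
    if humidity > RAIN_THRESHOLD:
        rainfall = RAIN_RATE * (humidity - RAIN_THRESHOLD)
        return humidity - rainfall, rainfall
    return humidity, 0.0

h, total_rain = 0.95, 0.0
for _ in range(10):
    h, r = parameterised_rain(h)
    total_rain += r
# The timing, strength and duration of the "rain event" are entirely
# products of the chosen parameter values.
```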

## Large Externals

External factor 3 could include man-made factors such as CO2 emissions, pollution and land-use changes (including urban development), plus natural factors such as the sun, galactic cosmic rays (GCRs), Milankovitch cycles (variations in Earth’s orbit), ocean oscillations, ocean currents, volcanoes and, over extremely long periods, things like continental drift.

I covered some of these in my last article. One crucial problem is that while some of these factors are at least partially understood, none of them is understood well enough to predict its effect on future climate – with just one exception. Changes in solar activity, GCRs, ocean oscillations, ocean currents and volcanoes, for example, cannot be predicted at all accurately, and the effects of solar activity and Milankovitch cycles on climate are not at all well understood. The one exception is carbon dioxide (CO2) itself. It is generally accepted that a doubling of atmospheric CO2 would by itself, over many decades, increase the global temperature by about 1 degree C. It is also generally accepted that CO2 levels can be reasonably accurately predicted for given future levels of human activity. But the effect of CO2 is woefully inadequate to explain past climate change over any time scale, even when enhanced with spurious “feedbacks” (see here, here, here, here).
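The “about 1 degree C” figure can be sanity-checked with the standard simplified forcing formula ΔF = 5.35·ln(C/C0) W/m² (Myhre et al.) and a no-feedback Planck response of roughly 0.3 K per W/m². Both are conventional approximations, not anything derived in this article:

```python
import math

# Back-of-envelope check (standard approximations, not from the article):
# radiative forcing from a CO2 doubling, converted to warming with a
# no-feedback (Planck) response of ~0.3 K per W/m^2.
def forcing(c, c0):
    return 5.35 * math.log(c / c0)    # W/m^2 (Myhre et al. approximation)

delta_f = forcing(560.0, 280.0)       # doubling from 280 ppm
delta_t = 0.3 * delta_f               # no-feedback warming, K
# delta_f is ~3.7 W/m^2 and delta_t ~1.1 K, consistent with the
# "about 1 degree C per doubling" figure quoted above.
```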

Another crucial problem is that all the external factors have to be processed through the models’ internal cell-based system in order to be incorporated in the final climate predictions. But each external factor can only have a noticeable climate influence on time-scales that are way beyond the period (a few days at most) for which the models’ internals are capable of retaining any meaningful degree of accuracy. The internal workings of the models therefore add absolutely no value at all to the externals. Even if the externals and their effect on climate were well understood, there would be a serious risk of them being corrupted by the models’ internal workings, thus rendering the models useless for prediction.

## Maths trumps science

The harsh reality is that any science is wrong if its mathematics is wrong. The mathematics of the climate models’ internal workings is wrong.

From all of the above, it is clear that no matter how much more knowledge and effort is put into the climate models, and no matter how many more billions of dollars are poured into them, they can never be used for climate prediction while they retain the same basic structure and methodology.

## The Solution

It should by now be clear that the models are upside down. The models try to construct climate using a bottom-up calculation starting with weather (local conditions over a short time). This is inevitably a futile exercise, as I have explained. Instead of bottom-up, the models need to be top-down. That is, the models need to work first and directly with climate, and then they might eventually be able to support more detailed calculations ‘down’ towards weather.

So what would an effective climate model look like? Well, for a start, all the current model internals must be put to one side. They are very inaccurate weather calculations that have no place inside a climate model. They could still be useful for exploring specific ideas on a small scale, but they would be a waste of space inside the climate model itself.

A climate model needs to work directly with the drivers of climate such as the large externals above. The work done by Wyatt and Curry [3] could be a good starting point, but there are others. Before such a climate model could be of any real use, however, much more research needs to be done into the various natural climate factors so that they and their effect on climate are understood.

Such a climate model is unlikely to need a super-computer and massive complex calculations. The most important pre-requisite would be research into the possible drivers of climate, to find out how they work, how relatively important they are, and how they have influenced climate in the past. Henrik Svensmark’s research into GCRs is an example of the kind of research that is needed. Parts of a climate model may well be developed alongside the research and assist with the research, but only when the science is reasonably well understood can the model deliver useful predictions. The research itself may well be very complex, but the model is likely to be relatively straightforward.

The first requirement is for a climate model to be able to reproduce past climate reasonably well over various time scales with an absolutely minimal number of parameters (John von Neumann’s elephant). The first real step forward will be when a climate model’s predictions are verified in the real world. Right now, the models are a long way from meeting even the first requirement, and are heading in the wrong direction.

###

Mike Jonas (MA Maths Oxford UK) retired some years ago after nearly 40 years in I.T.

## References

[1] A Literature Debate … R Pielke, October 28, 2011. [Note: this article has been referenced instead of the original Demetris Koutsoyiannis paper because it shows the subsequent criticism and rebuttal. Even the paper’s critic admits that the models “have no predictive skill whatsoever on the chronology of events beyond the annual cycle. A climate projection is thus not a prediction of climate [..]”. A link to the original paper is given.]

[2] The Physics of Climate Modeling. NASA (By Gavin A. Schmidt), January 2007

[3] M.G. Wyatt and J.A. Curry, “Role for Eurasian Arctic shelf sea ice in a secularly varying hemispheric climate signal during the 20th century,” (Climate Dynamics, 2013). The best place to start is probably The Stadium Wave, by Judith Curry.

## Abbreviations

CO2 – Carbon Dioxide

GCR – Galactic Cosmic Ray

IPCC – Intergovernmental Panel on Climate Change

NASA – National Aeronautics and Space Administration

WMO – World Meteorological Organization

## 256 thoughts on “Inside the Climate Computer Models”

1. Heisenburg says:

Maybe ‘van Neumann’ is right- but Johnny did not have spell check worked out yet.

• macha says:

Dr David Evans at the jonova site has some maths on this top-down climate modelling stuff. He seems to have a better handle on it than most, especially the IPCC.

• george e. smith says:

Maths does not trump science.

Science is about observation and experimentation.

Absolutely nothing that exists in any branch of mathematics, is observable anywhere in the real universe. It is ALL fictional or fictitious if you will. And I say that not in any derogatory sense. It is fictional in the sense that it was all made up out of whole cloth in the mind of some ancient or modern mathematician.

Mathematics is an art form, and it is full of much ingenuity. It is used to describe (essentially exactly) the operational behavior of our MODELS of what we think the real universe looks and behaves like.

For example, the simple differential equation:

d^2s/dt^2 = -ks exactly describes a model in which the acceleration (of something) is directly proportional to some displacement (s) and is opposite in direction to that displacement.

The steady state solutions to this equation are of the form:

s = A·cos(omega·t) + B·sin(omega·t)

We call that simple harmonic motion. Absolutely nothing in the real universe moves according to simple harmonic motion. But lots of things in the real universe appear to behave in a very similar fashion; but never exactly matching the mathematical description of the model’s behavior.
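As a quick numerical check (a sketch, taking omega = sqrt(k)), that solution does satisfy the differential equation exactly, up to floating-point error:

```python
import math

# Numerical check that s(t) = A*cos(w*t) + B*sin(w*t) satisfies
# d^2s/dt^2 = -k*s when w = sqrt(k), using a finite-difference
# second derivative at a few arbitrary times.
k, A, B = 4.0, 1.5, -0.7
w = math.sqrt(k)

def s(t):
    return A * math.cos(w * t) + B * math.sin(w * t)

h = 1e-4
for t in (0.0, 0.5, 1.3, 2.9):
    d2s = (s(t + h) - 2 * s(t) + s(t - h)) / h**2
    assert abs(d2s + k * s(t)) < 1e-4   # d^2s/dt^2 = -k*s holds numerically
```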

The uncertainties in “Science” are in the adherence of the model description to the OBSERVED behavior of the real world system. There are no uncertainties in the mathematical description of the fictitious model that we made up to approximate what we think the real world observations are.

G.

And for the record: I do not have an MA in maths from Oxford UK. I presume that is Master of Arts. Did I not say maths is an art form?

But I do have a BSc from the U of NZ, now referred to as U of Auckland, and that degree was in three Physics majors; Physics, Radio-physics, and Mathematical Physics; plus two Mathematics majors (Pure and Applied Maths). I did part of the MSc course work and a thesis project (in Neutron and other particle scintillation detection) but did not sit exams or write up thesis; but that was over 55 years ago, so not too relevant today.
One of these days, I’ll retire; or drop dead.

• D.J. Hawkins says:

@george e. smith

Absolutely nothing in the real universe moves according to simple harmonic motion.

How about a mass on a spring? Add a damping term and you’ve got it nailed.

• How about a mass on a spring? Add a damping term and you’ve got it nailed.

Nope.

That “simple” damping term needs to change based on the air friction of the mass and the spring, which is proportional to the velocity of the mass (assuming completely still air whose density does not change over the lifetime of the experiment, whose density varies with the height of the mass and spring at each point, and whose resistance changes as the velocity of the spring loops changes: faster at the free end of the spring, slower at the anchored top), and on the resistance of the anchor as it expands and contracts with the changing forces on its molecular structure.
Also will change based on temperature, pressure, humidity and swirling velocity of the air around every part of the spring and mass.
Minor effects include the changing temperature of the spring as it heats up due to molecular loading as it expands and contracts, and the spring itself (this loading) is itself damping the spring’s motion.
But, if your “perfect” experiment is in a “perfect” vacuum, the internal damping of the spring is a near-constant effect proportional only to the spring’s extension at each point in time.

• JohnKnight says:

george e. smith,

“Absolutely nothing that exists in any branch of mathematics, is observable anywhere in the real universe. It is ALL fictional or fictitious if you will.”

As an old logic/philosophy guy, this is music to my ears, so to speak. Many seem to have gotten the impression that math/stats are somehow realer than real itself . . and that leads to things like this (from the article);

“As NASA says : “Weather is chaotic; imperceptible differences in the initial state of the atmosphere lead to radically different conditions in a week or so.“.”

It’s simply not true that weather is chaotic in the literal sense of chaos. He means that the math he’s using to model the weather behaves “chaotically”, but that doesn’t really mean chaotically either; it means difficult to predict, essentially. It’s actually a form of complex order he’s referring to, called chaos because some math/stat heads were lazy about naming a type of mathematical discontinuity/shiftiness that can be generated, which is somewhat like the way something extremely complex like weather behaves, as far as I can tell.

From such sloppy language, I believe, comes things like; “We have just five hundred days to avoid climate chaos.”

I reject the naming of complex order: Chaos. It’s utterly nonsensical, and an invitation to manipulation of the public’s perception of reality, to me. Pick a freaking word that does not turn discussion of the whole matter into gibberish, I say. Just do it, and quit going along with the laziness and “realer than real” BS that has been handed down to us.

(I suggest: Chaosh, after a bit of word searching and serious consideration, nobody that I am. It’s “available” it seems, and preserves the original intent)

• george e. smith says:

Don’t try to trick me.

A mass on a spring presumably depends roughly on Hooke’s Law. No real physical material obeys Hooke’s law exactly over any significant range of strain.

Didn’t I say that many real systems approximate what our models describe exactly?

Simple harmonic motion has no beginning and no ending, it is continuous. If it is not continuous, then it has a transient start up or stop condition which is NOT SHM.

x^2 + y^2 + z^2 = r^2 describes a sphere.

Nowhere in that equation will you find an explanation for 8,000 metre high mountains on the earth, because the earth is NOT a sphere, and it’s not even close to one.

So who was the first scientist to notice something growing in his garden, with ‘I’s and ‘V’s, and ‘X’s on it casting shadows, and he decided to use it for counting. So what the hell were the Romans up to that they decided they needed a symbol for ” C ” and another one for ” M “. Of what possible interest are C and M to Romans, unless they already have a decimal number system that they understand.

Somebody (excuse me, that is, somebody dumb as a box of rocks) invented the Roman system for counting, and left out zero to boot. So what was he proposing to do for more things: MMMMMMMMMMMMMMMMMMMMMMMMIX ??

• george e. smith says:

Well the dog (M\$) ate my reply to D.J.Hawkins.

Simple harmonic motion is continuous and lasts for all time, so it has no start or stop. If it did, it would no longer have a single frequency, so it wouldn’t be SHM.

Also, Hooke’s Law is only an approximation, so no mass on a spring is going to move in SHM.

Don’t try to trick me.

Real systems only approximate what the maths describes as the exact behavior of the fictitious model of the real system.

g

• george e. smith says:

“””””…..D.J. Hawkins

November 9, 2015 at 2:39 pm

@george e. smith

Absolutely nothing in the real universe moves according to simple harmonic motion.

How about a mass on a spring? Add a damping term and you’ve got it nailed…..”””””

You evidently didn’t read too closely.

I wrote a simple differential equation between two variables. One of them was (s) for Serendipity, and the other was (t) for Tolerance; i.e. ability to tolerate.

That said that the second derivative of serendipity with respect to tolerance, was directly proportional to serendipity by way of (k) and in the opposite sense (- k).

All of that is quite esoteric and is unrelated to any physical reality, and it is simply an act of faith that the claimed relationship is true. In any case, that equation does indeed have solutions in terms of a sin and cosine function, which themselves are simply mental creations of some fertile and artistic mind.

One might make an analog of that system in the form of a mass on a spring, where we assume that Hooke’s law is obeyed to some level of fidelity, and that Newton’s F = m.a is also approximately true.

So your analogous model can also be expected to have something approximating sine and cosine function motions; but if you want to add a ” damping ” term, whatever that is, to the differential equation, then you can not expect that new differential equation to have the same solutions.

Bessel functions are also solutions of a differential equation (Bessel’s equation), as are Legendre Polynomials or Tchebychev polynomials.

All can be studied in pure mathematics, without any reference to anything real or physical, but then we find; or experimental physicists do and other scientists, that the same solutions to those mathematical forms can approximate the behaviors of real physical systems.

But the discrepancies are in the models; not in the mathematics. The mathematical expressions are exact relations. But the model parameters may only approximate those exact relations; as in lack of Hooke’s Law fidelity, and a real system say with springs and masses, may have other less obvious discrepancies with the equations, such as for example having a real energy damping effect taking place.

So don’t blame the mathematics for failures due to improper model construction. And if the models don’t mimic the real world, then you have an even greater divergence.

g

• robert_g says:

I detect a little uncertainty here :)

IPCC isn’t looking for a climate model that predicts future climate and weather. IPCC just needs a climate model that conforms to the politically established UNFCCC and its CAGW? The underlying theme is not climate change but rather changing western society?

• george e. smith says:

Mike’s essay, should make clear to all, what one of the big problems of climate modeling is.

So they have this snapshot of the whole earth at grid points on a 1km x 1km grid, and their zigga-computer calculates what is at each of those grid nodes one second later. Then repeat until done.

An essential feature of this process is that the physical values at one instant of time determine where the energies will go next, to produce the node values at the next instant of time.

You cannot calculate the energy flows, unless you have simultaneous values for the variables at all of the surrounding nodes.

That means that a ” snapshot ” must give the node values of ALL variables at ALL nodes SIMULTANEOUSLY, or else you cannot apply the laws of physics and maths to calculate the next-instant values for all of those nodes.

It’s a finite element analysis problem; and you can’t do Jack **** with a set of values that were all gathered at totally arbitrary time epochs.

To get the GCMs to fit the real earth behavior, you have to have the actual real earth node values for the entire earth ALL at the exact same instant of time. And the spacing and timing of each of those samples must conform to the Nyquist criterion for sampled data systems.

Otherwise, the real laws of physics cannot operate to produce the next valid value for each of those measurement nodes.

Just creating a large array of nodes on a computer is totally worthless, if you do not have a set of simultaneous measured values at each of those nodes.

The GCMs are a cruel joke; they don’t mimic anything real.

Wake me up when the GCMs start rotating this planet.

g

2. Chaos does not study mankind, Chaos does chaos.
Mankind’s study of chaos is done as chaos changes chaos.

Mankind needs to get over itself and get back to taking care of itself first.

An example is the bad thing of man-made war. The war plan gets to the chaos state within seconds of the declaration documents, and continues in that state down to the step of a private first class on his first night patrol when he missteps a half step and sets off a buried mine, and an ambush is sprung that changes the tide of a three-year invasion plan.

Much too much about fixing something that mankind cannot break or repair.

• Ted Getzel says:

Expat, thanks for the link to the Moore speech, everyone should read it. His path echoes my own.

• Ed says:

Agree, terrific speech. All should not only read it, but forward the link to any believers in CAGW who are still capable of reason.

• BFL says:

Expat:
Thanks for the link. Never realized just how coal seams were created and why they stopped being made, the amount of sequestration of CO2 in shale or how close the earth has come to CO2 levels dropping below 150ppm. Terrific article.

• Jan Christoffersen says:

Strictly speaking, Moore should have used the term “carbonate” instead of “carbonaceous”, as the rocks to which he referred sequester CO2 in the form of CaCO3 (limestone – calcium carbonate). To my way of thinking, carbonaceous rocks are mainly shales with high levels of organic carbon.

• Gary M says:

Agree !!!!

• latecommer2014 says:

And if, as I believe, CO2 is a product of temperature, it cannot be the cause of temperature change. Those who believe that CO2 doubling over time can create a 1 deg rise are simply wrong, and have confused cause and effect. Prove me wrong: show any empirical evidence where it is clear that CO2 rise precedes temperature rise, and you will stimulate me to reconsider this fraudulent science promoted by fools and frauds.

• Pete J. says:

I suppose it depends on whether you believe the sun heats the atmosphere and everything beneath it or if the sun’s heat reradiates from the oceans/land and heats the atmosphere. Doesn’t the earth have an internal source of heat that is not well understood either?

3. emsnews says:

Ah, what ‘drives climate’. I would start with the local burning object we call the ‘sun’.

• Auto says:

ems
To ignore a near-million mile diameter nuclear reactor just a few minutes away by light would, indeed, be folly.
There are other drivers which we also need to consider.
And we haven’t more than an inkling of what their relative influence is.
The parrot-cry – ‘The science is settled’ – is at best that fabulous chimera, rat-carp.

Auto

• george e. smith says:

Try about 860,000 mile diameter. Well for some people that is close to 2 million.

g

• Dawtgtomis says:

Along with that, why do models make linear projections instead of cyclical predictions?

4. Marcus says:

Instead of wasting trillions of dollars trying to stop climate change ( impossible ), the money would be better spent on ADJUSTING to climate change ( easy )…Mr. Mann should like this idea…he good at ADJUSTING things !!

• Mike McMillan says:

They’ve spent a lot of time and money adjusting the data to generate climate change.

They could easily (and more cheaply) solve the climate problem by readjusting the data to erase it.

• Auto says:

Mike Mc
Why adjust to eliminate, when unadjusted data indicates serious problems with forecasting for IPCC models.

Even given the – uhhh – interesting selection of land stations contributing data.

Ocean data has been discussed.
Whilst mariners do ‘the best they can’, some of the data is probably not accurate to the nearest hundredth of a degree, shall we say. I know – I’ve been there and done that.
### but probably better than no data at all.

Auto

• JB Harshaw says:

>They’ve spent a lot of time and money adjusting the data to generate climate change.

>They could easily (and more cheaply) solve the climate problem by readjusting the data to erase it.

And what in the world makes you think they would want to spend LESS money, or that they would want to “solve” the problem.

Gadzooks! Can you imagine what that would mean for their careers? I mean seriously, there’s no glory or reward or accolades for putting yourself (and your countless thousands of colleagues) OUT of a job, or for being the cause of seeing their rationale for obtaining grant money (to study snails or whatever) cut back just because you’ve done something as foolish as to say “Oopsie, turns out there’s no problem at all.”

I mean you’d have to be “crazy” to even attempt something like that, right? (And yes, I do think that is one of the reasons that the “warmists” think and call out “deniers” as “crazy” — because from the perspectives of the warmists, anyone who does something that might END that proverbial gravy train HAS TO BE “crazy”.)

• Not exactly.

M. Manniacal is incredible at ignoring things. He ignores inconvenient data, algorithms, data orientation, history…

5. jhrose says:

The top down approach has much to commend it — but it may not allow us to predict the climate accurately enough. The climate seems to have at least two quasi-stationary states (ice ages and inter glacial). Would knowing all the top down elements allow you to predict the transition between such meta stable states? What about hysteresis and super-heating and super-cooling? The argument by the CAGW proponents is that there is another metastable (or run away) state at higher temperatures. How do you rule that out? Also, quite generally, are there other metastable states besides glacial and inter glacial?

• Marcus says:

You simply cannot predict CHAOS !!!

• emsnews says:

The climate the last 2 million years has been steady as a clock. Long Ice Ages with sudden, dramatic but very short Interglacials over and over and over again. Predicting something else would require a great deal of proof.

• The theme is policy based chaos?

• Ray Boorman says:

The only reason governments are trying to predict the climate is so they can scare the populace into giving them more power to tell people what to do.

Does anyone alive really care what the climate will be in 100yrs?

• Sun Spot says:

Ray you are correct. The whole “Climate Change” narrative is a narrative of fear (think WMD’s). Once you have people living in fear you can establish a saviour, and they will believe most anything this saviour tells them. More often than not this saviour will profit handsomely from the sheeple’s panic.

• Walter Sobchak says:

“there is another metastable (or run away) state at higher temperatures. How do you rule that out?”

They are arguing that there is a positive feedback mechanism that will cause an excursion from the metastable inter glacial we are in. The biggest problem with the argument is that if there were such a mechanism, it would have been triggered at some point in the past by some random fluctuation, such as the volcanic eruption that created the Deccan Traps. Absent a showing that the mechanism exists, CAGW theory cannot be maintained.

• Hivemind says:

“The climate seems to have at least two quasi-stationary states (ice ages and inter glacial).”

+100

But you won’t get any climate modeller to admit that. Their grant money depends on them proving we are in a runaway vicious cycle.

• george e. smith says:

Has this other “higher temperature” metastable state existed with the exact same earth orbital and axis tilt parameters as we currently have. Does not the earth axis, precess with something like a 65,000 year period ?

You cannot expect to have a climate regime identical to what we have today, with a totally different orbital state, or with totally different earth geology as far as continental locations go.

g

6. GP Hanner says:

“The harsh reality is that any science is wrong if its mathematics is wrong.”
True enough. To paraphrase a mathematician I know, all you have to do is to be able to invert a matrix and you will get a result of some kind, whether it makes sense or not.

• george e. smith says:

The mathematics describes the model. If the model does not emulate the scientific observations and measurements, as close to reality as we can make it, then no mathematical juggling can save it.

The discrepancies are in the models; not in the mathematics.

The earth does not appear to be an isothermal flat disc illuminated 24 hours a day at all points, by a sun that is directly overhead at a distance of 186 million miles.

That is what Kevin Trenberth’s model shows.

Our sun is only at 93 million miles so it produces a TSI irradiance of 1362 W/m^2; not 342 W/m^2 like Kevin’s. And our planet is a roughly spherical body, that rotates once in about 24 hours, and it is nowhere near isothermal like Kevin’s model.

g

7. The following tables and observations are based on a fao.org (the UN’s Food and Agriculture Organization) global CO2 balance Bing image. This diagram is typical of many variations. Variations imply non-consensus.

How much carbon is there? Carbon, not CO2!

Reservoir…………………………..Gt C………%
Surface ocean……………………1,020……..2.2%
Deep Ocean…………………….38,100……81.2%
Marine Biota……………………….….3……..0.0%
Dissolved Organic Carbon………..700………1.5%
Ocean Sediments………………….150……..0.3%
Soils…………………………….…1,580……..3.4%
Vegetation…………………………….610……..1.3%
Fossil Fuel & Cement……………..4,000……..8.5%
Atmosphere…………………………..750……..1.6%
Total………………………………….46,913

Carbon moves back and forth between and among these reservoirs, the great fluxes.

Atmospheric Fluxes, Gt C/y………Source………Sink
Soils…………………………………….60
Vegetation……………………………..60…………121.3
Fires……………………………………1.6
Ocean Surface………………………..90…………..92
Forests…………………….…………………………….0.5
Fossil Fuel & Cement………..………5.5
Total…………………………………..217.1………..213.8
Net………………………………………3.3

The net of 3.3 (seen this before?) is exactly 60% of FF & C. How convenient. How dry labbed.

Now this is all carbon. Carbon is not always CO2. Carbon can be soot from fires, tail pipes, volcanoes. Carbon can be carbonates in limestone and coral. But let’s just say all of this fluxing carbon converts to CO2 at 3.67, 44/12, units of CO2 per unit of carbon. How many Gt of CO2?

Atmospheric Fluxes, Gt CO2/y…….Source……..….Sink
Soils…………………………………..220.2
Vegetation……………………………220.2……….445.171
Fires……………………………………5.872
Ocean Surface………………………330.3……….337.64
Forests………………………………………………….1.835
Fossil Fuel & Cement…………………20.185
Total………………………………….796.757……784.646
Net……………………………………12.111

Now, is it ppm by mass or ppm by mole? Ppm by volume is the same as ppm by mole, so if one is to compare the number of molecules it must be converted using the CO2/atmosphere molar-mass ratio, 44/28.96.

Atmospheric Fluxes, ppm/y………Source……..Sink
Soils……………………………………28.42
Vegetation…………………………….28.42…….57.45
Fires……………………………………..0.76
Ocean Surface……………………….42.63……43.57
Forests……………………………………………..0.24
Fossil Fuel & Cement…………………..2.60
Total………………………….………102.83…..101.26
Net…………………………….……….1.56 (Let’s just blame this all on FF & C.)

Per IPCC AR5 Table 6.1, some of these fluxing sources and sinks have uncertainties of +/- 40 & 50%!! IMHO anybody who claims that out of these enormous reservoirs and fluxes they can, with certainty and accuracy, specifically assign 1.56 ppm/y to FF & C is flat blowing smoke.

1. Mankind’s contribution to the earth’s CO2 balance is trivial.
2. At 2 W/m^2 ( a watt is power not energy) CO2’s contribution to the earth’s heat balance is trivial.
3. The GCMs are useless.
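The unit conversions running through the three tables above can be sketched in a few lines of Python. The molar masses are standard; the ~5.1e6 Gt atmosphere mass is an assumed round figure I supply here, so treat this as a sketch of the arithmetic, not an authoritative budget:

```python
# GtC -> GtCO2 via the molar-mass ratio 44/12, then GtCO2 -> ppm by mole
# via the air/CO2 molar-mass ratio and an assumed total atmosphere mass.

M_CO2 = 44.0          # g/mol, CO2
M_C = 12.0            # g/mol, carbon
M_AIR = 28.96         # g/mol, mean molar mass of dry air
ATM_MASS_GT = 5.1e6   # Gt, approximate mass of the atmosphere (assumption)

def gtc_to_gtco2(gtc):
    """Convert gigatonnes of carbon to gigatonnes of CO2."""
    return gtc * M_CO2 / M_C                 # 44/12 ~= 3.67

def gtco2_to_ppmv(gtco2):
    """Convert gigatonnes of CO2 to ppm by volume (= ppm by mole)."""
    mass_fraction_ppm = gtco2 / ATM_MASS_GT * 1e6
    return mass_fraction_ppm * M_AIR / M_CO2  # mass ppm -> mole ppm

ffc_gtc = 5.5                                 # FF & Cement source, GtC/y
print(gtc_to_gtco2(ffc_gtc))                  # ~20.2 GtCO2/y
print(gtco2_to_ppmv(gtc_to_gtco2(ffc_gtc)))   # ~2.6 ppm/y
```

With those assumptions the FF & C line reproduces the ~20.2 GtCO2/y and ~2.60 ppm/y figures in the tables.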

• Just wait till Ferdinand catches up with you. He uniquely and solely understands, in gory detail, all global (possibly galactic) carbon sources, sinks and fluxes. Natural components are in perfect harmonious balance, according to him, and anthropogenic emission accounts for all observed increase to atmospheric CO2. Just ask him!!

• Yirgach says:

Here is the IPCC AR5 Table (Figure actually) 6.1:

• Yirgach

Figure 6.1 is about the same as the fao.org graphic.

Look at Table 6.1 for the uncertainties.

• Yirgach says:

@Nicholas

Thanks for pointing that out. BTW, here’s the description of Table 6.1:
Global anthropogenic CO2 budget, accumulated since the Industrial Revolution (onset in 1750) and averaged over the 1980s, 1990s, 2000s, as well as the last 10 years until 2011. By convention, a negative ocean or land to atmosphere CO2 flux is equivalent to a gain of carbon by these reservoirs. The table does not include natural exchanges (e.g., rivers, weathering) between reservoirs. The uncertainty range of 90% confidence interval presented here differs from how uncertainties were reported in AR4 (68%).

Note that they are claiming a 90% confidence interval as compared to 68% in AR4.
Is that increase in confidence due to a change in the level of uncertainty in the fluxing sources between AR4 and AR5?

Thanks Ron, just ended the 20th round of back-and-forth discussion with Bart on the same item, quite exhausting…
The natural variability btw is not more than +/- 1 ppmv around a trend of 110 ppmv…

Nicholas,

1. Amounts in each reservoir are of no interest, as long as these don’t move from one reservoir to another.
2. Fluxes between reservoirs are of no interest, as long as the fluxes in and out are equal.
3. Flux differences are the only points of interest and that is what is exactly known:

We know with reasonable accuracy how much CO2 humans have emitted in the past decades.
We know with high accuracy by how much the CO2 level in the atmosphere increased in the same period.
Therefore we know that humans are responsible for nearly all of the increase, at least over the past 57 years, with a small addition from warming oceans (~10 ppmv).

The rest of the natural cycle is not of the slightest interest for the mass balance. It is of academic interest to know the detailed fluxes, but that doesn’t change the overall balance one gram.

The current imbalance is about 2.15 ppmv increase/year. It doesn’t matter if any individual flux doubled or halved compared to the year before, or that vegetation was a net source one year and a net sink the next. It doesn’t make any difference whether the natural carbon cycle was 100, 500 or 1000 GtC/year, as the net overall difference at the end of the year is all that matters…

Thus Ron:

1. Man’s contribution is the main cause of the CO2 increase in the atmosphere.
2. Man’s contribution to the earth’s heat balance may be trivial, but probably not zero.
3. The GCMs are useless.

Note: let’s stick to the item of interest here: the usefulness (or not) of GCMs…
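Ferdinand’s mass-balance argument reduces to one subtraction. A minimal sketch, with illustrative round numbers (the 2.13 GtC/ppmv conversion is standard; the emission figure is an assumed approximation, not an inventory value):

```python
# The mass-balance argument as arithmetic: if the atmosphere rose by less
# carbon than humans emitted, nature as a whole must have been a net sink,
# regardless of how large the individual gross fluxes are.

GTC_PER_PPMV = 2.13          # ~2.13 GtC of atmospheric carbon per ppmv CO2

human_emissions_gtc = 9.5    # assumed annual FF & cement emissions, GtC/y
observed_rise_ppmv = 2.15    # observed annual increase (Ferdinand's figure)

observed_rise_gtc = observed_rise_ppmv * GTC_PER_PPMV
natural_net_gtc = observed_rise_gtc - human_emissions_gtc

print(natural_net_gtc)  # negative -> nature was a net sink that year
```

The sign of that last number is the whole argument; the detailed fluxes cancel out of it.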

• Ferdinand
“The rest of the natural cycle is not of the slightest interest for the mass balance.”

As I understand it CAGW theory posits that the globe’s natural CO2 balance is & has been steady state for multi-hundreds of years and any changes can be due to nothing but disruptive, unbalancing, external sources, i.e. mankind. Coincidence = cause. And the scientific evidence has been filtered and adjusted to support that assumption.

How does anybody know in these huge numbers and uncertainties how steady/unsteady state the atmosphere has been? Even minor changes in the natural cycle eclipse man’s puny contribution. I consider the notion of a perfectly balanced, steady state atmosphere totally bogus based on simple observations. Obviously ‘taint so and never has been.

• Nicholas,

In the recent 57 years of accurate measurements the natural imbalance was not more than +/- 1 ppmv around the trend, all caused by temperature variability, and the trend itself was not from nature: nature was every year a net sink for CO2, not a source. Thus while even a small change in the main CO2 fluxes could dwarf human emissions, it didn’t.

Before Mauna Loa and South Pole measurements, we have ice core measurements with a resolution between 10 and 600 years and an accuracy of 1.2 ppmv (1 sigma), depending on the local snow accumulation rates. That shows a global change of ~16 ppmv/°C from the MWP-LIA transition and from eight glacial-interglacial transitions and back over the past 800,000 years. That is in the ball park of Henry’s law, which shows 4-17 ppmv/°C for a seawater-atmosphere steady state… The biosphere absorbs more CO2 at higher temperatures.

The only influence in the (far) past was temperature at a rather fixed, linear ratio, still visible today in the small influence of temperature on the variability around the trend.

• Nicholas,

Vostok CO2-temperature ratio:

Most of the spread around the trend is from the difference in lag between the glacial-interglacial transition and its reverse: 800 +/- 600 years during warming, several thousands of years during cooling.

Taking into account that the temperature change at the poles is about double the global change, that gives a near-linear change of ~16 ppmv/°C, confirmed by more recent periods (like the MWP-LIA transition) and the much shorter seasonal (~5 ppmv/°C) and 1-3 year variability (4-5 ppmv/°C: Pinatubo, El Niño).
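The ~16 ppmv/°C figure can be checked with back-of-envelope arithmetic. A sketch with round illustrative numbers (actual Vostok amplitudes vary between terminations, so these inputs are assumptions):

```python
# Glacial-interglacial CO2 change divided by the implied *global*
# temperature change, taking polar change as about double the global one.

delta_co2_ppmv = 100.0     # typical glacial-interglacial CO2 change, ~180 -> ~280 ppmv
delta_t_polar = 12.0       # assumed polar temperature change, degC
polar_amplification = 2.0  # polar change taken as ~double the global change

delta_t_global = delta_t_polar / polar_amplification
ratio = delta_co2_ppmv / delta_t_global
print(ratio)  # ~16.7 ppmv/degC, in line with the figure quoted above
```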

• Greg Cavanagh says:

Ferdinand, it reads as though you’re saying: if human influence over the last 57 years was in fact zero, then the climate would not have changed one iota over the last 57 years. Have I got this right?

Does the climate change without human influence, and if so how much would it have changed in the last 57 years?

• Dave A says:

Ferdinand Engelbeen states:

We know with reasonable accuracy how much CO2 humans have emitted in the past decades.
We know with high accuracy by how much the CO2 level in the atmosphere increased in the same period.
Therefore we know that humans are responsible for nearly all of the increase, at least over the past 57 years, with a small addition from warming oceans (~10 ppmv).

What a preposterous claim!!! Please do clarify your “reasoning” on this one Ferdinand.

• Ferdinand, you say “We know with reasonable accuracy how much CO2 humans have emitted in [the] past decades.” I am not disputing that humans have emitted and are emitting increasing amounts of CO2. I am not disputing the overall shape of that graph. I’m just wondering what “reasonable accuracy” is? Are the CO2 emission data any more accurate than world population figures or economic statistics? I’m particularly thinking of the recent report (http://www.theguardian.com/world/2015/nov/04/china-underreporting-coal-consumption-by-up-to-17-data-suggests) that China had been under-reporting its coal use by 17%. I do not suppose that China is the only large non-rich country to have difficulty gathering accurate statistics.

• Barry says:

Please account for the CO2 emitted by volcanism. Or do you assume this is a constant? You imply that this source is irrelevant to atmospheric variability and only the increase from human contributions are what matter.

• Peter Sable says:

Now this is all carbon. Carbon is not always CO2. Carbon can be soot from fires, tail pipes, volcanoes. Carbon can be carbonates in limestone and coral. But let’s just say all of this fluxing carbon converts to CO2 at 3.67, 44/12, units of CO2 per unit of carbon. How many Gt of CO2?

Um, so where’s volcanoes on the list? I don’t see it.

• Nicholas Schroeder:
Consider the carbonate sink on the sea floor. Somewhat less than seventy-five percent of the sea floor has carbonate shell detritus literally raining down. Both the total sea floor carbonate layer and its yearly growth are minimized in IPCC charts and understanding.

8. urederra says:

And on top of that, the fact that if one model is correct, the rest that use different parameter values must be wrong.

• dccowboy says:

I’d further submit that, because you are using ‘parameterizations’ of not-well-understood processes and initial conditions of unknown accuracy, even if a particular model correctly predicts the current (and/or parts of past) climate, you cannot rely on it to correctly predict the future state of the climate, i.e., it may be right for the wrong reasons and thus has no predictive value. Too many of Donald Rumsfeld’s famous ‘known unknowns’ and ‘unknown unknowns’ to rely on model output. Certainly you can’t refer to it as ‘data’, as I’ve seen the unscientific media do.

9. jpr says:

Does anyone know if (and how) the models take into account the cooling effect of evaporation from the extra water vapor? The AGW theory is that CO2 will cause a small harmless increase in temperature (0.5-1 degree) and that will then increase the amount of water vapor (because water vapor percentage is dependent on temperature), which causes a bigger increase in temperature, which then causes more evaporation and more water vapor, and the loop goes on. Most of the predicted warming comes from the extra H2O. The problem is that the extra H2O comes via evaporation, which is endothermic (cooling). And what I want to know is: do the models take this cooling effect into account (and how)? I would have thought that the amount of cooling produced by the evaporation of H2O MUST be equal to the amount of heating it receives (from CO2) – so there can be no feedback loop – otherwise you are creating energy from nothing?? Do I have this wrong?

• Mike M. (period) says:

jpr,

Models account for latent heat of evaporation by incorporating energy balance in the equations that they solve. Note that evaporation of water produces no net cooling for the planet as a whole, since all the water that evaporates must eventually condense and release the heat absorbed. So evaporation produces local cooling, but leads to local heating elsewhere. The resulting transfer of latent heat is the dominant means by which energy is redistributed in the climate system.

Changing atmospheric water vapor has three effects. More water vapor means more IR absorption, since water vapor is a greenhouse gas, that enhances warming. It also means more efficient latent heat transfer, that produces cooling by enhancing the transport of energy from the surface, where sunlight is absorbed, to the mid-troposphere, from whence most IR emission to space occurs. And finally, water vapor affects the formation of clouds. No one really knows if that produces increased or decreased warming, though the modellers all plunk for the former.

The above article by Mike Jonas is best ignored, since it does nothing to clarify the real problems with climate models. One of the biggest is that they can’t get cloud distributions right, so how can they correctly predict how those distributions will change? So they can’t get dependable results for cloud feedbacks and so can’t get climate sensitivities that can be believed.
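jpr’s worry about “creating energy from nothing” can be answered with the geometric series that a constant-gain feedback loop sums to: it converges whenever the gain is below 1 in magnitude. A minimal sketch with purely illustrative gain values (no connection to any actual model’s feedback strength):

```python
# A constant-gain feedback loop sums to a geometric series:
#   total = initial * (1 + f + f^2 + ...) = initial / (1 - f)   for |f| < 1.
# It converges, so nothing is created from nothing; the loop only
# amplifies (f > 0) or damps (f < 0) the initial perturbation.

def total_warming(initial_dt, f, n_iterations=1000):
    """Sum the feedback series term by term."""
    total, term = 0.0, initial_dt
    for _ in range(n_iterations):
        total += term
        term *= f  # each round feeds back a fraction f of the previous term
    return total

dt0 = 1.0  # assumed direct CO2 warming, degC (illustrative)
print(total_warming(dt0, 0.5))   # ~2.0  = dt0 / (1 - 0.5): amplification
print(total_warming(dt0, -0.5))  # ~0.67 = dt0 / (1 + 0.5): damping
```

The sign and size of the real-world gain are exactly what the cloud argument above is about; the series only shows that a sub-unity feedback is not a perpetual-motion machine.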

• “And finally, water vapor affects the formation of clouds. No one really knows if that produces increased or decreased warming, though the modellers all plunk for the former.”

IPCC AR5 credits clouds with a net -20 W/m^2. That’s cooling and lots of it.

• Mike M. (period) says:

Nicholas Schroeder wrote:

“IPCC AR5 credits clouds with a net -20 W/m^2. That’s cooling and lots of it.”

That is true. What puzzles me is that you wrote that in response to my statement: “And finally, water vapor affects the formation of clouds. No one really knows if that produces increased or decreased warming, though the modellers all plunk for the former.”

Which is also true. So what is your point? That you don’t understand the difference between the net effect of clouds and how that might change in response to a change in temperature?

• TimTheToolMan says:

Note that evaporation of water produces no net cooling for the planet as a whole, since all the water that evaporates must eventually condense and release the heat absorbed. So evaporation produces local cooling, but leads to local heating elsewhere.

I’d say that statement is disingenuous because “elsewhere” is generally a couple of kms up in the atmosphere where it is well on its way to leaving the planet.

• Mike M. (period) says:

TimTheToolMan wrote: “I’d say that statement is disingenuous because “elsewhere” is generally a couple of kms up in the atmosphere where it is well on its way to leaving the planet.”

And I’d say that TimTheToolMan is a silly twit since I wrote: “more efficient latent heat transfer, that produces cooling by enhancing the transport of energy from the surface, where sunlight is absorbed, to the mid-troposphere, from whence most IR emission to space occurs”.

• TimTheToolMan says:

The above article by Mike Jonas is best ignored, since it does nothing to clarify the real problems with climate models.

I think the “real” problem with the GCMs is, as described, the errors inherent in the simplifications of physical processes, interactions between processes and parameterisations of the components of the models. They will feed back, amplify and swamp any climate signal that might be present. Fixing the clouds won’t fix this. GCMs are doomed to failure on their current development course.

But why would anyone seriously try to address this from the modellers’ camp? It’s tantamount to saying we’re wasting our time doing what we’re doing…

• TimTheToolMan says:

Mike writes “since I wrote:”

Then why say “elsewhere” when you know fully well it is going to result in a net cooling to the planet?

Mike writes “Note that evaporation of water produces no net cooling for the planet as a whole”

Disingenuous

• Walter Sobchak says:

“Note that evaporation of water produces no net cooling for the planet as a whole, since all the water that evaporates must eventually condense and release the heat absorbed.”

Yes, but the cooling occurs at the surface. The energy is released in the upper troposphere where it has a chance to radiate into space.

• “Does anyone know if (and how) the models take into account the cooling effect of evaporation from the extra water vapor?”

They don’t because mankind has no role in water vapor as IPCC AR5 admits in FAQ 8.1

• Mike Jonas says:

jpr – I discuss the effect of water vapour in an earlier article. I think it’s the fourth ‘here’ in the set of four ‘here’s. Mike M is quite wrong – the evaporation condensation precipitation cycle does produce net cooling, directly by sending more energy into space, and indirectly by creating clouds. I address clouds too, in the earlier article.

Mike M – There is so much wrong in climate science that it is mind-boggling. The models being upside down is a very important factor so it is worth examining (as structured the models can never work). Yes there are huge problems in the science, such as clouds, but the useless model structure is worth addressing too.

• Mike M. (period) says:

Mike Jonas,

“Mike M is quite wrong – the evaporation condensation precipitation cycle does produce net cooling, directly by sending more energy into space, and indirectly by creating clouds.”

You say I am wrong, then you agree with what I said. Weird.

• Mike Jonas says:

What I was calling wrong was your statement “Note that evaporation of water produces no net cooling for the planet as a whole, since all the water that evaporates must eventually condense and release the heat absorbed. So evaporation produces local cooling, but leads to local heating elsewhere. The resulting transfer of latent heat is the dominant means by which energy is redistributed in the climate system.“. You make it sound like all the heat remains in the climate system and thus that there is no net heat gain or loss from the whole cycle. That’s wrong, because with an increased water cycle there is a net heat loss to space – more heat is lost to space from the atmosphere than is gained from space (the sun) by the ocean.

• dccowboy says:

I believe they are, but they are one of the many ‘parameterizations’ == guesses in the models since we don’t, even today have a good idea of how clouds function relative to heating or cooling in the atmosphere/oceans. The parameterizations potentially introduce huge uncertainties and potential for bias in the models. Pretty much the parameters can be ‘tuned’ to produce output that reinforces the bias of the modeller.

• Mike M. (period) says:

TimTheToolMan,

“Then why say “elsewhere” when you know fully well it is going to result in a net cooling to the planet?”

Because jpr seemed to imply that he thought that the simple act of evaporation produced net cooling. It does not.

When I wrote “Note that evaporation of water produces no net cooling for the planet as a whole” I was 100% correct.

You seem unable to understand such fine, but important distinctions.

Silly twit.

• TimTheToolMan says:

Mike M. writes “net cooling”

Do you know what “net” means Mike M.?

• TimTheToolMan says:

Mike M. Writes “You seem unable to understand such fine, but important distinctions.”

But let’s not allow you to push the conversation away from the most important point. What do you have to say about the accumulation of errors in the models?

After all, you wanted to simply dismiss the article based on insufficiently specified components (i.e. clouds), with an implication that it could get better. But the error issue is much deeper: endemic and incurable in them all. Why not mention this in your dismissal?

• george e. smith says:

Water vapor (evaporation) depends on WATER temperature, not on atmospheric Temperature, which only affects relative humidity.

CO2 (radiation capture) does not appreciably alter water Temperature.

Water vapor trapping of LWIR is perfectly capable of warming the atmosphere all by itself. It does not require any carbon dioxide “kindling wood” to get started evaporating.

So talk of water vapor amplification is nonsense.

The feedback from atmospheric warming is to the system input terminal, which is the solar irradiance. More atmospheric water, in any of its three phases, results in less solar irradiance at the earth’s surface, which is 70+% water. And that leads to surface cooling; which leads to less evaporation and atmospheric water; which leads to more solar irradiance of the surface. That is stabilizing negative feedback in anybody’s feedback manual.

g

10. LB says:

The initial state.
==========

Why should it matter?

If you pick lots of different initial states, and after 30 years they converge, then the initial state doesn’t matter.

==========
The climate models’ internal calculations are less accurate and therefore exponentially less reliable over all periods.
==========

You’ve presented no justification that says that the error is exponential. I very much doubt that any of the published models are. If they were, they wouldn’t even get as far as being presented.

=========

So 2 major flaws with the article, and that’s coming from someone who thinks the climate scientists have got it wrong.

• Soooooooooooooooooooooooooooooooooooooooooooo, Which climate models have gotten it correct?

• David A says:

…the one with the lowest climate sensitivity. Really, Ron C (sorry, no link to his site) has an excellent post on the model closest to observations, which BTW is still too warm for what the satellite record shows.

• TonyL says:

If you pick lots of different initial states, and after 30 years they converge, then the initial state doesn’t matter.

That’s a pretty big IF you got there. The model runs do not converge, run to run. The models do not converge, model to model. The IPCC just averages everything and then assumes that the errors cancel out. This is exactly how CMIP5 is arrived at.

You’ve presented no justification that says that the error is exponential.

The mathematical basis for an exponential error series is good enough for me.

the wouldn’t even get as far as being presented

A little bit of ad-hoc code to keep the model output in bounds can do wonders to make things look good.

someone who thinks the climate scientists have got it wrong

The author is not the only one.

Or maybe the models are not wrong. They are, as we used to say, Not Even Wrong. So far divorced from reality, they do not even matter.
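The error-growth question being argued here has a classic toy illustration: the Lorenz-63 system, a 3-variable nonlinear model (emphatically not a GCM; the step size, time span and initial state below are arbitrary choices of mine):

```python
# Two runs whose initial states differ by one part in a billion end up
# completely decorrelated: exponential divergence of nearby states is a
# real property of simple nonlinear systems.

def lorenz_step(x, y, z, dt=0.005, s=10.0, r=28.0, b=8.0 / 3.0):
    """One explicit-Euler step of the Lorenz-63 equations."""
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

def dist(p, q):
    """Euclidean distance between two states."""
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5

run_a = (1.0, 1.0, 20.0)
run_b = (1.0 + 1e-9, 1.0, 20.0)  # perturbed by one part in a billion

max_sep = 0.0
for _ in range(10000):           # integrate to t = 50
    run_a = lorenz_step(*run_a)
    run_b = lorenz_step(*run_b)
    max_sep = max(max_sep, dist(run_a, run_b))

print(max_sep)  # grows from 1e-9 to the size of the attractor itself
```

Whether GCM errors behave this way in practice is exactly what is disputed above; this only shows that the exponential-divergence mechanism exists in even the simplest nonlinear systems.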

• richardscourtney says:

TonyL:

You say

Or maybe the models are not wrong. They are, as we used to say, Not Even Wrong. So far divorced from reality, they do not even matter.

Yes. That is certainly true for all the models except at most one because they each model a different climate system but there is only one Earth.

For the edification of LB (and others who don’t know), I again explain this matter.

None of the models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.
http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes )
would make every climate model provide a mismatch of the global warming it hindcasts and the observed global warming for the twentieth century.

This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on
1.
the assumed degree of forcings resulting from human activity that produce warming
and
2.
the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.

Nearly two decades ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.

The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which were greater than observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.

And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).

More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
(ref. Kiehl JT, Twentieth century climate model response and climate sensitivity. GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007).

Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.

He says in his paper:

One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.

The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy?

Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at http://www.nature.com/reports/climatechange, 2007) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.

And, importantly, Kiehl’s paper says:

These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.

And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.

Kiehl’s Figure 2 can be seen here.

Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:

Figure 2. Total anthropogenic forcing (W/m^2) versus aerosol forcing (W/m^2) from nine fully coupled climate models and two energy balance models used to simulate the 20th century.

It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^2 to 2.02 W/m^2
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.

In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.

So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.

Which, of course, means that as you say

Or maybe the models are not wrong. They are, as we used to say, Not Even Wrong. So far divorced from reality, they do not even matter.

Richard
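Richard’s compensation point can be put as arithmetic: if every model must reproduce the same observed warming with a different sensitivity, the aerosol term has to take up the slack. The sensitivities and greenhouse forcing below are hypothetical round numbers of my choosing, not values from any named model:

```python
# For each assumed sensitivity, solve dT = sensitivity * (F_ghg + F_aerosol)
# for the aerosol forcing needed to hit the same observed warming.

observed_dt = 0.8  # degC, approximate 20th-century warming
f_ghg = 2.6        # W/m^2, assumed greenhouse-gas forcing (illustrative)

aerosols = []
for sensitivity in (0.4, 0.55, 0.67):     # degC per W/m^2, spanning ~1.7x
    f_total_needed = observed_dt / sensitivity
    f_aerosol = f_total_needed - f_ghg     # the aerosol 'fiddle factor'
    aerosols.append(f_aerosol)
    print(sensitivity, round(f_aerosol, 2))
```

With these particular assumed inputs the required aerosol forcings span roughly -1.4 to -0.6 W/m^2: the more sensitive the model, the more negative the aerosol forcing it needs, which is the compensation described above.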

• whiten says:

TonyL
November 8, 2015 at 8:32 am

“That’s a pretty big IF you got there. The model runs do not converge, run to run. The models do not converge, model to model. The IPCC just averages everything and then assumes that the errors cancel out. This is exactly how CMIP5 is arrived at.”
———————————
Hello TonyL

What you state as per above could be right and correct…..but let me express my understanding and my opinion in that particular point.

When I look at these model results, run to run and model to model, all of them (presumably you were referring strictly to GCMs in this case) do get to the same “ball court” that the AGW-ACC “climate scientists” have been claiming for years now.
Regardless of the actual initial state or of the degree of forcing (the anthropogenic CO2 scenario), all models, run to run or model to model, do converge at that particular “ball court”.
The “ball court” of ~200 to 220 ppm up in concentration and ~2.5 to 3 °C up in temperature.
The only main difference in the model runs, whether run to run or model to model, is the time it takes to get there.
In some cases it can take half a century, and in some other cases maybe even half a millennium or more, in model terms.

cheers

• Whiten:
Richard nailed it with;

“…None of the models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.
http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes )
would make every climate model provide a mismatch of the global warming it hindcasts and the observed global warming for the twentieth century.

This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on
1. the assumed degree of forcings resulting from human activity that produce warming
and
2. the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature…”

To your perspective, the models may end up in the same ‘ball court’; but it is definitive that the ‘ball court’ isn’t Earth. That ball court is illusory and completely useless from a practical, scientific, or even modelling standpoint.

A broken clock reads the correct time twice a day if it is a twelve hour clock. Whether the clock’s accuracy is measured for one day, ten days or a century doesn’t make the clock’s accuracy useful or predictive.

• Dinostratus says:

I don’t know if I’m just becoming more sensitive to it, paying more attention to it or if my sense is real but I do get the sense that the quality of WUWT guest posters is going down or at least their arguments are.

• all he would need to do is look at a control run to know that the errors are not exponential

• TimTheToolMan says:

all he would need to do is look at a control run to know that the errors are not exponential

Actually you appear to be appealing to the fact that the errors must involve a spiralling out of control for the model. Not so. The errors could easily be set to balance each other out in a control run (and indeed that’s what they must be set to do) as well as enabling the model to roughly produce past warming figures.

But the actual climate signal is certainly lost in the process through the accumulation of errors and instead, the GCM is behaving in a manner expected of it. By design if you like.
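Whether errors “spiral out of control” and whether a run stays bounded are separate questions, and a toy chaotic map shows how both claims above can be true at once: a tiny perturbation grows until the trajectory-level signal is lost, yet every value stays inside fixed bounds, so a control run can look perfectly stable. A minimal sketch, using the logistic map purely as a stand-in (it is not taken from any actual GCM):

```python
# Logistic map in its chaotic regime: x_{n+1} = r * x_n * (1 - x_n)
R = 3.9

def trajectory(x0, steps, r=R):
    """Iterate the map, returning the full sequence of states."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.4, 100)             # a "control" run
b = trajectory(0.4 + 1e-10, 100)     # the same run with a tiny initial error

initial_gap = abs(a[1] - b[1])       # still around 1e-10 after one step
late_gap = max(abs(x - y) for x, y in zip(a[50:], b[50:]))  # grows to order 1

bounded = all(0.0 <= x <= 1.0 for x in a + b)   # yet both runs stay in [0, 1]
```

The gap saturates at roughly the size of the attractor while both runs remain bounded, so run-level output can look sensible even though the trajectory-level information is gone, which is the distinction being drawn in the comment above.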

11. Philihippos says:

Has anyone else noticed that in the warming ‘literature’ the most frequently used words when referring to the future are ‘could’ and ‘might’ and ‘probably’ – so much for certainty.

• Menicholas says:

Nearly everyone who is both paying attention, and has any objectivity whatsoever has noticed this.
Those words you mention are but a few of the “weasel words” which make most every warmista conclusion a smarmy bunch of lukewarm crap.

12. Terry says:

Climate is by definition a steady state. That steady state is modified by external influences and called climate change. Over a multi decadal (or more) period modelled by the climate change community, it is only the larger externals which impact – small scale events may only have a short term impact on climate equilibrium.

Rather than model climate change from an initial condition forward with all the potential errors outlined above, it may be better to hypothesise a future equilibrium state of the planet given changes in the larger “man made” externals – CO2, population, land use, urbanisation etc. Uncontrollable externals impact unavoidably – the question is timing, not magnitude.

For example – if CO2 increases to 800 ppm, what would the climate look like given the consequential changes to water vapour, cloud cover, plant growth, ice sheets, sea level, etc etc. The outcome should be a steady state. If the model is unable to resolve the different feedbacks, the outcome would be a runaway greenhouse or freezer – unattractive!

This may crystallise the lack of knowledge that exists in some of the externals as they would be key to resolving a steady state. It would also help identify which assumptions made on feedback loops were material and which had limited impact on the steady state achieved. The model could be progressively simplified with the key assumptions made more usefully challenged.

Adopting this approach may also influence mitigation action which may be taken (CO2 reduction may not be the best answer), is less likely to require massive computing power, and avoids accumulated errors in current models.
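Terry’s proposal – solve for the equilibrium state directly rather than integrate forward – can be sketched with the simplest possible zero-dimensional energy balance. The numbers below (solar constant, albedo, a nominal 3.7 W/m² forcing) are textbook illustration values, not from the article or the comment, and the toy omits every feedback Terry lists:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
S0 = 1361.0        # solar constant, W/m^2 (textbook value)
ALBEDO = 0.30      # planetary albedo (textbook value)

def equilibrium_T(extra_forcing=0.0):
    """Solve (1 - albedo) * S0 / 4 + extra_forcing = sigma * T^4 for T."""
    absorbed = (1.0 - ALBEDO) * S0 / 4.0 + extra_forcing
    return (absorbed / SIGMA) ** 0.25

t_base = equilibrium_T()        # ~255 K effective temperature
t_forced = equilibrium_T(3.7)   # with a nominal 3.7 W/m^2 forcing added
delta_t = t_forced - t_base     # ~1 K, no feedbacks
```

This yields the ~255 K effective temperature and a no-feedback response of about 1 K; everything interesting (water vapour, clouds, ice) lives in the feedbacks such a toy omits, which is exactly Terry’s point about where the key assumptions lie.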

• richardscourtney says:

Terry:

You assert:

Climate is by definition a steady state.

Really?! Whose definition is that?

The IPCC definition is this

Climate
Climate in a narrow sense is usually defined as the average weather, or more rigorously, as the statistical description in terms of the mean and variability of relevant quantities over a period of time ranging from months to thousands or millions of years. The classical period for averaging these variables is 30 years, as defined by the World Meteorological Organization. The relevant quantities are most often surface variables such as temperature, precipitation and wind. Climate in a wider sense is the state,
including a statistical description, of the climate system. {WGI, II, III}

There is no mention of “a steady state” and if a steady state existed there would be no difference in climate between periods of time with different lengths.

Richard

• george e. smith says:

So according to the IPCC, the earth has just one climate from pole to pole, and everywhere in between.

Their definition (definitive) of climate permits no spatial variation, only temporal variations.

What unmitigated BS is that.

G

13. Latitude says:

Mike…you left out the obvious

” the number of calculations required”…..takes time

And by the time they’ve made a few runs……the past has been changed/adjusted/modified/F’ed with

Even if we understood it all…..models will never be right

• TonyL says:

The models are trained on the past temperature to set the “parameterizations”. When the models are set to reflect a fictional past, the predictions will be for a fictional future.
Another possibility is that the models are accurate for the planet they represent, just not Earth.

• Latitude says:

well exactly…..there’s even the chance that the models are right
But when they are tuned to a fictional past that shows faster temp rise than actually happened….
…you would expect to get the exact same results they are showing

Faster temp projections than what is actually happening

• K-Bob says:

Latitude: Hammer meet nail! I agree and think this is why the models are currently failing.

• David A says:

…well I predict that eventually they will all match perfectly! (really)

Latitude’s post is excellent in that it demonstrates that not only are the models fudged (via particulate effects having a tremendous variance) but the observations are massaged to match the models as well. You will not see a Mosher taking on the totality of these adjustments.

However none of the models, even the best, are in the ball park of the satellite observations.

14. Just wait till Ferdinand catches up with you. He uniquely and solely understands, in gory detail, all global (possibly galactic) carbon sources, sinks and fluxes. Natural components are in perfect harmonious balance, according to him, and anthropogenic emission accounts for all observed increase to atmospheric CO2. Just ask him!!

• Frank Karvv says:

Ferdy Englebert reminds me of Lord Kelvin: always correct on every detail, and yet shown in the long term to be wrong on many of his well-held beliefs. The inputs and outputs are not known accurately and are in constant change.

15. newminster says:

“… equally the model could still produce output that at first glance looks quite reasonable – yet either way the model simply has no relation to reality.”

My view, for what it is worth, is that models may well bear a relation to reality but only by accident. A philosophy lecturer summed it up years ago when he said that anyone who claimed to know for certain the winner of the next Preakness Stakes was flat wrong, even if that horse won!
What he then went on to say, of course, was that trying to convince that man he was wrong was a nearly impossible task.

16. Gordon Ford says:

[Snip. Fake screen name for beckleybud, Edward Richardson, David Socrates, Tracton, pyromancer, and at least a dozen other sockpuppet names. ~mod.]

17. JohnTyler says:

Does anyone know how the models handle clouds?
Do the models “create” clouds – as does the real world – and do the models “extinguish” clouds as does the real world?
Does anyone know how do the models handle ocean currents, which all agree are very influential in affecting climate?
Do the models “create” rainfall and “end” rainfall, as we see in the real world? And does not rainfall cool the area on which it falls?
Any help will be appreciated.

• richardscourtney says:

JohnTyler:

Does anyone know how the models handle clouds?

Ron Miller and Gavin Schmidt, both of NASA GISS, provided an evaluation of the leading US GCM. They are U.S. climate modelers who use the NASA GISS GCM and they strongly promote the AGW hypothesis. Their paper titled ‘Ocean & Climate Modeling: Evaluating the NASA GISS GCM’ was updated on 2005-01-10 and is not reported to have been updated since.

Its abstract says:

This preliminary investigation evaluated the performance of three versions of the NASA Goddard Institute for Space Studies’ recently updated General Circulation Model E (GCM). This effort became necessary when certain Fortran code was rewritten to speed up processing and to better represent some of the interactions (feedbacks) of climate variables in the model. For example, the representation of clouds in the model was made to agree more with the satellite observational data thus affecting the albedo feedback mechanism. The versions of the GCM studied vary in their treatments of the ocean. In the first version, the Fixed-SST, the sea surface temperatures are prescribed from the obsevered seasonal cycle and the atmospheric response is calculated by the model. The second, the Q-Flux model, computes the SST and its response to atmospheric changes, but assumes the transport of heat by ocean currents is constant. The third treatment, called a coupled GCM (CGCM) is a version where an ocean model is used to simulate the entire ocean state including SST and ocean currents, and their interaction with the atmosphere. Various datasets were obtained from satellite, ground-based and sea observations. Observed and simulated climatologies of surface air temperature sea level pressure (SLP) total cloud cover (TCC), precipitation (mm/day), and others were produced. These were analyzed for general global patterns and for regional discrepancies when compared to each other. In addition, difference maps of observed climatologies compared to simulated climatologies (model minus observed) and for different versions of the model (model version minus other model version) were prepared to better focus on discrepant areas and regions. T-tests were utilized to reveal significant differences found between the different treatments of the model. It was found that the model represented global patterns well (e.g. ITCZ, mid-latitude storm tracks, and seasonal monsoons). 
Divergence in the model from observations increased with the introduction of more feedbacks (fewer prescribed variables) progressing from the Fixed–SST, to the coupled model. The model had problems representing variables in geographic areas of sea ice, thick vegetation, low clouds and high relief. It was hypothesized that these problems arose from the way the model calculates the effects of vegetation, sea ice and cloud cover. The problem with relief stems from the model’s coarse resolution. These results have implications for modeling climate change based on global warming scenarios. The model will lead to better understanding of climate change and the further development of predictive capability. As a direct result of this research, the representation of cloud cover in the model has been brought into agreement with the satellite observations by using radiance measured at a particular wavelength instead of saturation.
This abstract was written by strong proponents of AGW but admits that the NASA GISS GCM has “problems representing variables in geographic areas of sea ice, thick vegetation, low clouds and high relief.” These are severe problems. For example, clouds reflect solar heat and a mere 2% increase to cloud cover would more than compensate for the maximum possible predicted warming due to a doubling of carbon dioxide in the air. Good records of cloud cover are very short because cloud cover is measured by satellites that were not launched until the mid 1980s. But it appears that cloudiness decreased markedly between the mid 1980s and late 1990s. Over that period, the Earth’s reflectivity decreased to the extent that if there were a constant solar irradiance then the reduced cloudiness provided an extra surface warming of 5 to 10 Watts/sq metre. This is a lot of warming. It is between two and four times the entire warming estimated to have been caused by the build-up of human-caused greenhouse gases in the atmosphere since the industrial revolution. (The UN’s Intergovernmental Panel on Climate Change says that since the industrial revolution, the build-up of human-caused greenhouse gases in the atmosphere has had a warming effect of only 2.4 W/sq metre). So, the fact that the NASA GISS GCM has problems representing clouds must call into question the entire performance of the GCM.

The abstract says: “the representation of cloud cover in the model has been brought into agreement with the satellite observations by using radiance measured at a particular wavelength instead of saturation” but this adjustment is a ‘fiddle factor’ because both the radiance and the saturation must be correct if the effect of the clouds is to be correct.

There is no reason to suppose that the adjustment will not induce the model to diverge from reality if other changes – e.g. alterations to GHG concentration in the atmosphere – are introduced into the model. Indeed, this problem of erroneous representation of low level clouds could be expected to induce the model to provide incorrect indication of effects of changes to atmospheric GHGs because changes to clouds have much greater effect on climate than changes to GHGs.

Richard

• katherine009 says:

Do I understand this to mean that NASA [admits] that if the models underestimate cloud cover by as little as 2%, they think it would be enough to counteract any global warming due to increased CO2?

• richardscourtney says:

katherine009:

Do I understand this to mean that NASA admits that if the models underestimate cloud cover by as little as 2%, they think it would be enough to counteract any global warming due to increased CO2?

No, your understanding is provided by a formatting error that I made and did recognise until your question drew my attention to it.

The quotation from NASA should have ended immediately after the sentence that says

As a direct result of this research, the representation of cloud cover in the model has been brought into agreement with the satellite observations by using radiance measured at a particular wavelength instead of saturation.

I put the ‘close quotation’ in the wrong place. Sorry.

And thank you for pointing out my error for me and others to see: I am genuinely grateful because it has enabled me to provide this corrigendum.

The data which indicates that only 2% cloud cover change would account for all observed warming derives from
Pinker, R. T., B. Zhang, and E. G. Dutton (2005), Do satellites detect trends in surface solar radiation?, Science, 308(5723), 850– 854

I again apologise for my formatting error which is so misleading, and I again thank you for drawing my attention to it.

Richard

[What’s the proper full paragraph (with formatting of the bold) for quote that needs to be corrected? .mod]

• richardscourtney says:

aaargh! “did NOT recognise”

Sorry

Richard

• richardscourtney says:

Mod:

[What’s the proper full paragraph (with formatting of the bold) for quote that needs to be corrected? .mod]

The following is – I hope – what I intended. Also, I add the reference concerning observation of cloud cover.

***********************

JohnTyler:

Does anyone know how the models handle clouds?

Ron Miller and Gavin Schmidt, both of NASA GISS, provided an evaluation of the leading US GCM. They are U.S. climate modelers who use the NASA GISS GCM and they strongly promote the AGW hypothesis. Their paper titled ‘Ocean & Climate Modeling: Evaluating the NASA GISS GCM’ was updated on 2005-01-10 and is not reported to have been updated since.

Its abstract says:

This preliminary investigation evaluated the performance of three versions of the NASA Goddard Institute for Space Studies’ recently updated General Circulation Model E (GCM). This effort became necessary when certain Fortran code was rewritten to speed up processing and to better represent some of the interactions (feedbacks) of climate variables in the model. For example, the representation of clouds in the model was made to agree more with the satellite observational data thus affecting the albedo feedback mechanism. The versions of the GCM studied vary in their treatments of the ocean. In the first version, the Fixed-SST, the sea surface temperatures are prescribed from the observed seasonal cycle and the atmospheric response is calculated by the model. The second, the Q-Flux model, computes the SST and its response to atmospheric changes, but assumes the transport of heat by ocean currents is constant. The third treatment, called a coupled GCM (CGCM) is a version where an ocean model is used to simulate the entire ocean state including SST and ocean currents, and their interaction with the atmosphere. Various datasets were obtained from satellite, ground-based and sea observations. Observed and simulated climatologies of surface air temperature sea level pressure (SLP) total cloud cover (TCC), precipitation (mm/day), and others were produced. These were analyzed for general global patterns and for regional discrepancies when compared to each other. In addition, difference maps of observed climatologies compared to simulated climatologies (model minus observed) and for different versions of the model (model version minus other model version) were prepared to better focus on discrepant areas and regions. T-tests were utilized to reveal significant differences found between the different treatments of the model. It was found that the model represented global patterns well (e.g. ITCZ, mid-latitude storm tracks, and seasonal monsoons). 
Divergence in the model from observations increased with the introduction of more feedbacks (fewer prescribed variables) progressing from the Fixed–SST, to the coupled model. The model had problems representing variables in geographic areas of sea ice, thick vegetation, low clouds and high relief. It was hypothesized that these problems arose from the way the model calculates the effects of vegetation, sea ice and cloud cover. The problem with relief stems from the model’s coarse resolution. These results have implications for modeling climate change based on global warming scenarios. The model will lead to better understanding of climate change and the further development of predictive capability. As a direct result of this research, the representation of cloud cover in the model has been brought into agreement with the satellite observations by using radiance measured at a particular wavelength instead of saturation.

This abstract was written by strong proponents of AGW but admits that the NASA GISS GCM has “problems representing variables in geographic areas of sea ice, thick vegetation, low clouds and high relief.” These are severe problems. For example, clouds reflect solar heat and a mere 2% increase to cloud cover would more than compensate for the maximum possible predicted warming due to a doubling of carbon dioxide in the air. Good records of cloud cover are very short because cloud cover is measured by satellites that were not launched until the mid 1980s. But it appears that cloudiness decreased markedly between the mid 1980s and late 1990s.
(ref. Pinker, R. T., B. Zhang, and E. G. Dutton (2005), Do satellites detect trends in surface solar radiation?, Science, 308(5723), 850– 854)

Over that period, the Earth’s reflectivity decreased to the extent that if there were a constant solar irradiance then the reduced cloudiness provided an extra surface warming of 5 to 10 Watts/sq metre. This is a lot of warming. It is between two and four times the entire warming estimated to have been caused by the build-up of human-caused greenhouse gases in the atmosphere since the industrial revolution. (The UN’s Intergovernmental Panel on Climate Change says that since the industrial revolution, the build-up of human-caused greenhouse gases in the atmosphere has had a warming effect of only 2.4 Watts/sq metre). So, the fact that the NASA GISS GCM has problems representing clouds must call into question the entire performance of the GCM.

The abstract says: “the representation of cloud cover in the model has been brought into agreement with the satellite observations by using radiance measured at a particular wavelength instead of saturation” but this adjustment is a ‘fiddle factor’ because both the radiance and the saturation must be correct if the effect of the clouds is to be correct.

There is no reason to suppose that the adjustment will not induce the model to diverge from reality if other changes – e.g. alterations to GHG concentration in the atmosphere – are introduced into the model. Indeed, this problem of erroneous representation of low level clouds could be expected to induce the model to provide incorrect indication of effects of changes to atmospheric GHGs because changes to clouds have much greater effect on climate than changes to GHGs.

Richard
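The “between two and four times” claim in the comment above is just the ratio of the two forcings it quotes (both numbers are taken from the comment, not computed independently here), and is easy to check:

```python
ghg_forcing = 2.4        # W/m^2, the IPCC figure quoted in the comment
cloud_low, cloud_high = 5.0, 10.0   # W/m^2, the quoted range from reduced cloudiness

ratio_low = cloud_low / ghg_forcing     # roughly 2.1
ratio_high = cloud_high / ghg_forcing   # roughly 4.2
```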

• IPCC credits clouds with a net -20 W/m^2 RF. That’s cooling and lots of it.

• “Does anyone know how do the models handle ocean currents, which all agree are very influential in affecting climate?”
I’ve shown a video below. It works.

• David A says:

Currents for weather, yes, currents for long term climate and ENSO cycles, not a chance. Currents for ocean overturning and very long term climate, nope.

18. George Steiner says:

It is my contention that there has not been an experimental verification of the theoretical CO2 atmospheric energy transfer mechanism. Neither photonic nor collision nor the combination.
It is my contention further that the generally accepted 1 deg C increase is fiction.
I am very simple minded and can think only in first order approximations. But with so few CO2 molecules amongst so many others in a rapidly changing soup the calculation assumptions are simplistic and incorrect.

• George,

Have a look at Modtran:
http://climatemodels.uchicago.edu/modtran/

That is based on line by line absorption of IR by CO2 and other GHGs from accurate laboratory measurements at very different conditions of pressure and water vapor (per “standard 1976 atmosphere”) at different heights. The possibility of collisions with other molecules before a new IR photon is released is incorporated and decreases with height.

The energy loss from a CO2 doubling can be compensated by an increase of about 1°C at ground level…
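The “about 1°C” figure can be reproduced without running Modtran at all, using the widely quoted simplified fit dF ≈ 5.35·ln(C/C0) (Myhre et al., 1998) together with a linearised no-feedback Planck response at the ~255 K effective emission temperature. Both approximations are assumptions made here for illustration, not taken from Ferdinand’s comment:

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
T_EMIT = 255.0     # effective emission temperature, K (standard value)

# Simplified CO2 forcing fit: dF = 5.35 * ln(C / C0)  [W/m^2]
delta_f = 5.35 * math.log(2.0)       # a doubling: ~3.7 W/m^2

# Linearised Planck response: dF = 4 * sigma * T^3 * dT
planck = 4.0 * SIGMA * T_EMIT ** 3   # ~3.8 W/m^2 per K
delta_t = delta_f / planck           # ~1 K, no feedbacks
```

The ~1 K result is the no-feedback response only; the disagreement in this thread is entirely about the feedbacks layered on top of it.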

ModTran – modeled transmission; technically the official name is ‘MODerate resolution atmospheric TRANsmission’, an extension of the earlier LowTran model. ‘Moderate’ means the model uses 5 cm⁻¹ blocks of the spectrum instead of LowTran’s 20 cm⁻¹. Perhaps HiTran would give better resolution? Though it is still a model.

Model output used to verify what?
Modeled is modeled, period. Fantasy until proven by direct observations and independently replicated.

• skorrent1 says:

Not only is it fiction, but also a meaningless fiction. There is no general acceptance of how this “1 deg C increase” will be manifest in a world of widely varying, constantly changing, temperatures. Is it distributed uniformly by latitude or concentrated in the (nonexistent) tropical hot belt? Does it reside in daily or seasonal increased maxima, or minima, or both? And we’re just talking about temperature, miles away from discussing possible effects on “climate.”

• skorrent1,

It is real for one and only one snapshot of a fixed atmosphere with fixed humidity, clouds and their distribution over the latitudes and altitudes. Look up the “standard 1976 atmosphere”…
Of course in the real world there are a host of positive (according to models) and negative (according to the real world) feedbacks and changing conditions, but the about 1°C is based on solid physics…

• “…It is real for one and only one snapshot of a fixed atmosphere with fixed humidity, clouds and their distribution over the latitudes and altitudes…”

That I believe.

“…but the about 1°C is based on solid physics.”

No it isn’t. It is based on an imperfect understanding of physics as it applies to the real world. A very imperfect understanding.

19. “The harsh reality is that any science is wrong if its mathematics is wrong. The mathematics of the climate models’ internal workings is wrong.” ~ from article

I could not agree more, but would add that if the basic science is wrong then the maths don’t really matter as you are dead wrong from the get-go.

For example, what does it matter what the math of the climate models say if they are programmed by delusional fools who have it wrong on what CO2 does? When I was trained in math in the 70s and then again in the 80s people kept telling me that if I put garbage into the computer I would most likely get garbage back out — but no matter what came back out it was not trustworthy if I put garbage in. Do professors still say such things? Do climate “scientists” not ever listen?

~ Mark

20. evanmjones says:

This process is then repeated a very large number of times so that the model then predicts the state of the planet over future decades.

And that is how these damn bottom-to-top models screw up. Get just one initial input wrong and it plays crack-the-whip with the end results, leaving meaningless mush. Then there’s the “fallacy of the parts”, wherein the parts add up to a greater sum than the whole (i.e., the observational evidence). Recurse you, Red Baron!

Top-down models are the only ones for which we have sufficient tools and evidence. This requires a meataxe approach, but that is the best method available to us considering our current state of knowledge and the inherent uncertainties. They may be wrong, of course, but they are far more likely to be in the ballpark.

Another approach is to take a bottom-to-top model and try to dicker with inputs and algorithms. But being bottom-to-top, they are spaghetti monsters, so it’s a problematic undertaking. For example, CMIP does not have a direct input for CO2 sensitivity. It is derived at great length. Other (top-down) models, OTOH, allow a direct finger on the primary keys. Top-down is flexible and adaptable. And as straightforward as a punch in the face.

• I have seen amazingly complex Excel spreadsheets used for budgets, man power estimates, project scheduling, etc. and some macro error in some minor cell in some distant tab can propagate through and wreck the entire sheet.

• We’re back to when Doctorate Engineers agree that a proposed model actually models what it is expected to model, it is not a model.

It’s called verification. Verification is dependent upon a model meeting expected specified tests, not loons calling their model results ‘robust’.

Then again, no one seriously believes an extremely complex chaotic system can be effectively modeled.

As Nicholas points out; even very complex financial spreadsheets fail as soon as one component changes.

Those who model financial systems understand their model limitations and are prepared:
A) to explain any differences of reality to their model, explicitly. Unlike those free spending climate fools, real world finances involve very unforgiving hard nosed B%\$%@*s. Hand waving, sophistry and falsehoods mean that one is quickly unemployed. With prejudice.
B) to immediately accept new real world corrections into their model. i.e. They don’t sit on their backsides claiming that many bad model runs are equal to good data. Sooner or late those financial B%\$%@*s will be asking where their climate research monies went.

• I missed the until; “We’re back to until when…”. Incorrect English, I know; caught out during an edit while listening to my Lady.

21. Mike’s approach is not new:

“The general approach is currently to describe the climate system from ‘the bottom up’ by accumulating vast amounts of data, observing how the data has changed over time, attributing a weighting to each piece or class of data and extrapolating forward”

and

“We need a New Climate Model (from now on referred to as NCM) that is created from ‘the top down’ by looking at the climate phenomena that actually occur and using deductive reasoning to decide what mechanisms would be required for those phenomena to occur without offending the basic laws of physics.
We have to start with the broad concepts first and use the detailed data as a guide only”

from here:

http://www.newclimatemodel.com/new-climate-model/

• evanmjones says:

Yes. I have been loudly promoting the top-down modeling approach for years, now. Go basic and drill down. Anything else is a road going forward to nowhere.

• Mike Jonas says:

Mike’s approach is not new“. True. I didn’t claim it was new, but I should have stated that explicitly, as I did in my earlier series. Hopefully what I did do was to provide a helpful explanation.

22. Mike M. (period) says:

Mike Jonas,

You wrote:

“Each subsequent iteration adds in its own error, but is also misdirected by the incoming error. In other words, the errors compound exponentially. Over any meaningful period, those errors become so large that the final result is meaningless.

NB. This assertion is just basic mathematics, but it is also directly supportable: Weather models operate on much finer data with much more sophisticated and accurate calculations.”

A little knowledge is a dangerous thing. If you knew more than basic mathematics, you would realize that your conclusion is wrong. And if you knew anything about the relation of weather models to climate models you would spot your error.

First the basic mathematics. Whether or not errors compound depends on the properties of the system being computed. If it is a stable system, the numerical errors can self-correct, allowing the calculation to proceed for an arbitrary amount of time. In an unstable or chaotic system, trajectories will diverge and the specific state of the system will be uncalculable. Which occurs is a property of the mathematical system, not the numerical calculation, provided that one is using a suitable numerical algorithm.

The climate system is chaotic, so exact evolution of states cannot be calculated. Every climate modeller knows as much and ADMITS IT, so for you to claim that you have found some fundamental flaw in the models is both ignorant and arrogant; a very nasty combination.

Lorenz noted that there are two types of predictions that can be made in chaotic systems. He called them predictions of the first kind and predictions of the second kind. The former requires predicting the specific sequence of states through which the system will evolve. That is what is done by weather models. Such predictions are highly dependent on initial conditions and can only extend for a limited time (days) into the future.

However, even if predictions of the first kind are impossible, it can still be possible to make predictions of the second kind. Those only predict the statistical properties of the system, such as average temperature. Climate models are designed to make such predictions. I am not sure of the exact mathematical requirements for such predictions to be possible, but the basic idea is that the system remain bounded; i.e., that it does not run off in some unrestrained manner (i.e., the oft repeated phrase “weather is an initial value problem, but climate is a boundary value problem”). The climate models most certainly meet the required conditions for predictions of the second kind and the evidence is strong that the actual climate system meets those conditions.

When predictions of the second kind are possible, initial conditions do not matter, except for how long it takes to arrive at a steady state.

There is much to criticize in climate models. But one should make valid criticisms. You fail to do so.
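Lorenz’s two kinds of prediction can be demonstrated with any chaotic toy system. Below, the logistic map stands in (an assumption chosen for illustration, not a claim about GCM internals): two runs from different initial states disagree completely step by step, so predictions of the first kind fail, yet their long-run means agree closely, so predictions of the second kind succeed:

```python
def run(x0, steps, r=3.9):
    """Iterate the chaotic logistic map x -> r * x * (1 - x)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

N = 50_000
a = run(0.3, N)   # two runs with very different
b = run(0.6, N)   # initial conditions

# First kind: the state sequences bear no resemblance to each other.
pointwise_gap = max(abs(x - y) for x, y in zip(a[100:1000], b[100:1000]))

# Second kind: the statistics (here, the long-run mean) agree closely.
mean_gap = abs(sum(a) / len(a) - sum(b) / len(b))
```

Climate models bank on the second kind; the question the thread keeps circling is whether the real climate’s statistics are as well-behaved as this toy’s.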

• Mike Jonas says:

Mike M. – You could have added that the modellers state clearly that they do not predict climate, so there is no point in my saying that the models cannot predict climate. Well, I think it is worth saying, again and again if necessary, because it is important and not at all well understood. You say “the system remain bounded; i.e., that it does not run off in some unrestrained manner”. Yes, I address that by reference to the irregular clock: if you force a model to stay within certain bounds, then it will always be within those bounds, but you might as well generate random numbers within those bounds for all the good it will do you. This part of the models – the cell-based system – adds no value and risks being destructive. Scrap it. Start again with a top-down system. It won’t be very accurate till we know more about climate’s drivers, but it would still be far better than our current random-number generators.

• Mike M. (period) says:

Mike Jonas,

What the models attempt is, in effect, determining the bounds and how those bounds change as a result of exogenous inputs, like changing CO2. That is adding something of very considerable value IF they can actually do it (I think they can’t).

You want them to scrap the approach they have been using. OK, but that is armchair quarterbacking unless you have something constructive to offer in its place. “Start again with a top-down system” says nothing constructive. Maybe you plan a future article in which you will offer something?

My suspicion is that the real problem with the models is not so much the approach as the fact that climate scientists have been insufficiently ruthless (understatement) in trying to invalidate them. Instead they are content to use multi-model means, as if that can somehow correct models that have not been validated.

• SemiChemE says:

Mike Jonas,

Your argument would be much more compelling if there weren’t abundant examples of successful bottom-up models of chaotic systems. The criticism about initial states that you apply to the GCMs would apply equally to many of the techniques of Quantum Chemistry, Statistical Mechanics and Computational Fluid Dynamics (CFD) used to design modern wonders from microchips to airplanes. Yet we have millions of passenger miles, and trillions of gigaflops from the cell phone in everyone’s pocket and the computers across the globe, testifying that these models work splendidly.

In fact, the GCMs are largely based on the same CFD principles used to design aircraft, cars and engines. The turbulent flows of these systems are just as chaotic and unpredictable as the weather, but modeling has led to new designs with significant improvements in efficiency and performance. The modeling approach is sound; it’s the implementation and computational capabilities that are lacking.

You state that “any science is wrong if its mathematics is wrong”, but that’s not really true! A better statement is that a model is only as good as its worst approximations. We use Newtonian mechanics all the time and for a great many problems it is good enough. The trick is knowing when the simplifying approximations work and when they break down and relativistic or quantum mechanical corrections must be applied.

Were you a climate modeling expert, any commentary on the weakest model approximations would have been very illuminating. When are these approximations valid, when do they break down? You do touch on parametrizations, but do you really understand how they are implemented? Aren’t there different approaches? What are the strengths and weaknesses of each? How are they determined or evaluated and tested? How can they be improved, tested and verified? Now that would make for an interesting read.

Finally, regarding what you call “Large Externals”, I would argue that a detailed understanding of all such factors is unnecessary. For example, while Milankovitch cycles are important on geologic timescales, their impact over centuries or a few millennia is likely to be small. A lack of a detailed understanding of these externals would not necessarily prevent understanding how CO2 impacts global temperatures in the near term. It is true that in some cases they do need to be accounted for to “calibrate” the model based on historic data, but ideally a good model will have few or even no such calibration factors that can’t be determined independently of the model itself.

• Mike M. (period) says:

SemiChemE,

An excellent comment. In particular, you really cut to the heart of the issue when you wrote: “any commentary on the weakest model approximations would have been very illuminating. When are these approximations valid, when do they break down? You do touch on parametrizations, but do you really understand how they are implemented? Aren’t there different approaches? What are the strengths and weaknesses of each? How are they determined or evaluated and tested? How can they be improved, tested and verified? Now that would make for an interesting read.”

• Mike M.
Bounds? Predetermined bounds? And they still get it wrong!? Is that the tail or the trunk wiggling there?

SemiChemE:

“…there weren’t abundant examples of successful bottom-up models of chaotic systems…”

Every one of those used for real products is intensely tested and fully verified before committing serious dollars or any lives.
Mistakes using those models in the real world are backed by serious repercussions upon failure.

Perhaps computer modelers should be sued every time they publish or utilize a bad model run? Sooner or later, the world will come to that same realization. Listing ‘climate modeler’ on a resume will be a career killer.

• Walt D. says:

What you say is true. However, the main problem with modelling climate with partial differential equations is that you have no idea whether the solution has converged.
In finance, the Black-Scholes option pricing formula can be modeled as a diffusion equation. For certain cases these equations have a closed-form solution, so we can test whether the numerical algorithm converges by applying it to a case we know. In General Relativity, certain special cases have a closed-form solution. In reservoir simulation, the Buckley-Leverett equations have a closed-form solution. However, in computational fluid dynamics, particularly where there is turbulence, we do not even know whether a unique solution exists. Anyone who tells you otherwise is talking BS. They would claim the Clay Institute prize if they were able to solve this problem. CFD models are very useful for modeling the flow over aircraft wings, but only if they are tested in a wind tunnel under quasi-real operating conditions.
We all know the current climate models are useless. Unfortunately, there is no quick fix to this problem round the corner. We cannot even model turbulent flow in a pipe, or the behavior of a shower curtain, let alone the entire atmosphere.
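The convergence test described here (run the numerical algorithm on a case with a closed-form solution) can be sketched for the simplest diffusion equation. This is my own generic illustration, not Black-Scholes and not from any of the packages mentioned: u_t = u_xx on [0, 1] has the exact solution exp(-pi^2 t) sin(pi x), so we can measure the error directly and check that refining the grid shrinks it:

```python
import math

def heat_ftcs(nx, t_end):
    """Explicit (FTCS) finite-difference solve of u_t = u_xx on [0, 1]
    with u(0, t) = u(1, t) = 0 and u(x, 0) = sin(pi x)."""
    dx = 1.0 / (nx - 1)
    dt = 0.4 * dx * dx                 # inside the stability limit dt <= dx^2 / 2
    n_steps = int(round(t_end / dt))
    u = [math.sin(math.pi * i * dx) for i in range(nx)]
    for _ in range(n_steps):
        u = ([0.0] +
             [u[i] + dt / dx ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1])
              for i in range(1, nx - 1)] +
             [0.0])
    return u, dx, n_steps * dt

def max_error(nx, t_end=0.1):
    """Worst-case difference from the closed-form solution
    u(x, t) = exp(-pi^2 t) * sin(pi x)."""
    u, dx, t = heat_ftcs(nx, t_end)
    return max(abs(ui - math.exp(-math.pi ** 2 * t) * math.sin(math.pi * i * dx))
               for i, ui in enumerate(u))

coarse = max_error(11)   # 11-point grid
fine = max_error(21)     # 21-point grid; halving dx should cut the error ~4x
```

Because the exact answer is known, the roughly fourfold error reduction on the finer grid confirms the scheme converges. The point of the comment stands: for turbulent flow no such closed-form yardstick exists.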

• Mike M. (period) says:

Walt D. wrote: “However, in computational fluid dynamics, particularly where there is turbulence, we do not even know whether a unique solution exists. Anyone who tells you otherwise is talking BS. They would claim the Clay Institute prize if they were able to solve this problem.”

Very interesting. There are several Clay Institute prizes, I assume Walt D. means this one: https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes_existence_and_smoothness

But that Wikipedia article notes that: “Jean Leray in 1934 proved the existence of so-called weak solutions to the Navier–Stokes equations, satisfying the equations in mean value, not pointwise.”

Such weak solutions might be all that are needed for climate models since they do not claim to predict specific future states (weather), only the statistical properties of such states (climate). I wonder if someone here (Nick Stokes?) actually knows if this is the case.

• “However, in computational fluid dynamics, particularly where there is turbulence, we do not even know whether a unique solution exists.”
You can’t prove in advance whether a unique solution exists. But when you have one, you can test it simply by seeing whether it satisfies the equations. Go back and substitute. More usually, you test whether the conserved quantities that you are transporting are actually conserved.

“We can not even model turbulent flow in a pipe”
Nonsense. Verification and validation of a pipe flow solution is a student exercise.
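The conserved-quantity check described above can be shown in miniature. The scheme and numbers here are my own generic sketch (a flux-form upwind step for 1-D advection on a periodic domain), not code from any CFD package: because each cell gains exactly what its neighbour loses through a shared flux, the total transported quantity should be preserved to round-off, and that is directly testable:

```python
def upwind_step(u, c, dt, dx):
    """One conservative (flux-form) upwind step of u_t + c u_x = 0
    on a periodic domain, assuming c > 0."""
    n = len(u)
    # flux[i] is the flux through the left face of cell i
    flux = [c * u[(i - 1) % n] for i in range(n)]
    return [u[i] - dt / dx * (flux[(i + 1) % n] - flux[i]) for i in range(n)]

n_cells, c, dx = 100, 1.0, 0.01
dt = 0.5 * dx / c                                   # CFL number 0.5
# Initial condition: a square pulse of unit height between x = 0.3 and 0.5.
u = [1.0 if 0.3 <= i * dx <= 0.5 else 0.0 for i in range(n_cells)]

total_before = sum(u) * dx
for _ in range(400):
    u = upwind_step(u, c, dt, dx)
total_after = sum(u) * dx
```

The pulse smears out (upwind differencing is diffusive), but `total_after` matches `total_before` to round-off: the conserved quantity really is conserved, which is exactly the kind of after-the-fact check being described.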

• “Such weak solutions might be all that are needed for climate models”

As said above, the question of whether you can prove in advance that a solution exists and is non-singular rarely impinges on practical CFD, which provides algorithms that generate candidate solutions. You can check afterwards whether they do satisfy the discretised equations, and do not have singularities. There is an issue there related to stability. As I mentioned, higher order systems do have (without boundary conditions) multiple solutions. You want the ones that satisfy the boundary conditions, but in explicit forms the boundaries can be a long way away, and so this uniqueness can drift in time. IOW spurious solutions can remain inadequately damped, even if they don’t grow rapidly causing blowup. You can analyse for all this; the solution is generally to be conservative with stability criteria, and use implicitness where necessary.

Having satisfied the discretised equations, there is then the question of whether the discretised equations adequately approximate the continuum DEs. That is the issue of mesh independence; ideally you test on a succession of mesh refinements. A possible criticism of climate GCMs is that as a practicality you can only go so far with this. On the other hand, there is the experience with numerical forecasting, which works well, and does use finer meshes for shorter periods.

Remember, we can’t prove the solar system is stable either. But life goes on.

• Walt D. says:

Nick: Good post. Thanks for the reference to the student exercise. Do they have a reference to the actual equations that are being solved?

• Walt D,
They are using Ansys Fluent, a major engineering CFD package. It solves the Navier-Stokes equations, with various options for turbulence. The sort of thing people here say is impossible. The theory manual is here. The basic equations are in Sec 1.2.

• Navier-Stokes equations “can almost be” solved for the very small volumes, very limited runs needed for engineering approximations of limited reactions occurring in very, very limited regions of fluid flow.

But! NONE of the models are “original” design-truths. They ALL are based on YEARS of measured data from real physical designs and real evidence (tuft plots, photographs, turbulence wind tunnels, sight, and pitot tube measurements at each point near the fuselage or intake duct or piston or carburetor or fuel injection nozzle). And, even then, the “pretty photos” of the flow or heat exchange fluid are RECOGNIZED as approximations of limited volumes under very specific conditions.

Engineering 3D FEA and fluid flow model results are NOT “spread across the globe” extrapolating every billion-per-second approximation from every cell for the next two hundred years!

• basicstats says:

Interesting exchange. When Nick Stokes writes “It solves the Navier-Stokes equations, with various options for turbulence. The sort of thing people here say is impossible”, presumably he means that it finds a stable numerical solution to a discrete dynamical system approximating NS. As NS experts point out, how far these have anything to do with solutions of the actual NS equations except in the very short term is (also) an open question.

Wikipedia’s version of Leray’s weak solutions to NS – not really ‘mean value’. More pointwise over a class of test functions. Worth mentioning because there are actual statistical solutions to NS which are apparently being considered by some (European) climate modellers. One obvious problem is they need an effectively subjective spatial probability.

• TimTheToolMan says:

“If it is a stable system, the numerical errors can self-correct, allowing the calculation to proceed for an arbitrary amount of time.”

At its very heart, a GCM doing a climate prediction is predicting energy accumulation by the planet. It is NOT simply playing a lot of weather that “evolves” and might be self correcting. It has a real “direction” which will certainly be swamped by the accumulation of those errors so pretty much everything you say is irrelevant because it comes from a fundamentally mistaken position.
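The disagreement here (errors self-correcting versus errors swamping the signal) turns on how errors accumulate. A toy illustration with made-up numbers that stand in for no particular model: unbiased random per-step errors accumulate like the square root of the number of steps, while any systematic per-step bias accumulates linearly, the same way a genuine trend does:

```python
import random

random.seed(0)

n_steps = 10_000
trend = 0.002      # hypothetical systematic "signal" per step (made up)
noise = 0.10       # hypothetical random error per step (made up)
bias = 0.005       # hypothetical systematic error per step (made up)

signal = trend * n_steps                       # grows like N -> 20.0
random_error = sum(random.gauss(0.0, noise)
                   for _ in range(n_steps))    # random walk, scale ~ noise * sqrt(N)
bias_error = bias * n_steps                    # grows like N -> 50.0

rw_scale = noise * n_steps ** 0.5              # expected random-walk scale ~ 10.0
```

On these made-up numbers, the unbiased errors stay around sqrt(N) times the per-step noise, while even a small per-step bias overtakes the signal entirely. Which regime a given model sits in is precisely what is being argued about above.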

23. RD says:

The models are all wrong in the same direction – too warm (no bias there). Their purpose is propaganda, not accuracy, and to be a club for alarmists to scaremonger the public and to “inform” policy makers.

24. son of mulder says:

This could be a good place to start with the top-down analysis

Over the history of the earth the global average temperature seems to be bounded within a range of 10 deg C. Not sure what you’d do next.

• Mike M. says:

son of mulder,

“Not sure what you’d do next.”

There’s the rub. I think that climate modelling started with top-down, but they quickly ran into dead-ends. For instance: Increased temperature will produce increased atmospheric water vapor, but how much? You either make something up or you try to calculate it from basic principles. For the latter you need a bottom-up model.

• Proud Skeptic says:

Thanks for using the words “seems to be”. One of my pet peeves in this whole mess is the degree of accuracy that seems to be inferred from things like tree rings, ice cores, and the like. I mean, we are just now getting to the point where we can take detailed measurements of the temperature of the Earth. What, exactly are we comparing it to? Tree rings? I think not. If tree rings are so accurate then why did we spend zillions of dollars on satellites? Why didn’t we just continue with the tree rings? /SARC

• evanmjones says:

I think I can answer that one. Using the metrics they use, tree rings would indicate cooling from 1960?

• son of mulder,

“Not sure what you’d do next.”

We should note that CO2 rise does not lead to a rise in temperature but that it is the other way around – and that in the past CO2 in the atmosphere has been much higher than at present. This “CO2 causes warming” delusion is the biggest mistake science has made in regards to climate.

• Peterg says:

1) Create computer model forecasting doom that is impossible to refute.
2) Alarm.
3) Profit.

• Greg Cavanagh says:

If anything, that chart says CO2 and temperature are unrelated.

• Menicholas says:

Bingo!

• JohnTyler says:

Well, if the chart can be believed, it shows that present-day CO2 levels are in the (roughly) lowest decile going back about 1 billion years.
It also shows that temperature and CO2 levels have a very poor correlation.
The chart provides ZERO information as to whether CO2 levels change as a RESULT of temperature changes (i.e., CO2 response lags temperature changes) or vice versa.

25. 601nan says:

Seems the good chap NY Attorney General should be investigating the IPCC and WMO models, whose “predictions, projections or forecasts” cannot be validated with real measurements, now, or on any time scale imaginable over past and future years. Now that IS a clear case of fraud, at many levels and with many perpetrators.

After all, during Prohibition days Al Capone’s syndicate in Chicago kept two sets of accounting books, the real one showing Capone’s earnings and the one the syndicate gave to the Treasury Department showing Capone with no income.

So the IPCC’s and WMO’s fabulous model predictions, projections and forecasts depend on thousands of humans (salary, annual leave, sick leave, 401K, retirement benefits, etc.) writing and fiddling with millions of lines of code crunching inside hundreds of massive mainframes in buildings (server farms), ingesting billions of Euros (or whatever money denomination), whose output is not validatable. That is an accounting problem, i.e. two books: the real book showing who is benefiting by income, and the fake book showing no income that is shared with the various government treasury departments. So if the various government treasury departments were to get their hands on the real book, showing the top officials benefiting royally from the billions of Euros of income that was supposed to pay for the running of the models in the server farms, somebody, like Capone, will have to go to jail.

Ha ha

26. ScienceABC123 says:

I think the modelers are trying to do too much with too many unknowns of a dynamic system using fixed geographic cells. I would argue that since climate can be viewed as “long term weather” and weather is carried by clouds and winds, the priority should be to accurately model clouds and winds over the globe. Thoughts?

• TonyL says:

It is noted above, this time by richardscourtney, that a 2% increase in cloud cover will compensate for the warming caused by a doubling of CO2.
Some people take the approach that a model must include all variables, all at once, to be valid. OK, that is a valid point of view.
I take the opposite view, start with the most important variable, get a first approximation. Add a second variable, get a second approximation. And so on. At least you start getting results.
From this point of view, cloud cover would be a major climate driver, and CO2 would be a bit player at the bottom of the list. {I know, this begs the question, what drives cloud cover, but I doubt it is CO2, and you must start somewhere}
But this fixation on CO2 cannot help but mislead, when there seem to be more important factors, even according to the models.

• Marcus says:

Recent studies are suggesting Cosmic Rays and the strength of the Solar Winds affect the aerosols in the atmosphere to create more or less clouds !!!!

• richardscourtney says:

TonyL:

You say

From this point of view, cloud cover would be a major climate driver, and CO2 would be a bit player at the bottom of the list. {I know, this begs the question, what drives cloud cover, but I doubt it is CO2, and you must start somewhere}

Yes, cloud cover is a “major climate driver” but nobody really knows what changes cloud cover.

Good records of cloud cover are very short because cloud cover is measured by satellites that were not launched until the mid-1980s. But it appears that cloudiness decreased markedly between the mid-1980s and late-1990s
(ref. Pinker, R. T., B. Zhang, and E. G. Dutton (2005), Do satellites detect trends in surface solar radiation?, Science, 308(5723), 850– 854.)

Over that period, the Earth’s reflectivity decreased to the extent that if there were a constant solar irradiance then the reduced cloudiness provided an extra surface warming of 5 to 10 Watts/sq metre. This is a lot of warming. It is between two and four times the entire warming estimated to have been caused by the build-up of human-caused greenhouse gases in the atmosphere since the industrial revolution. (The UN’s Intergovernmental Panel on Climate Change says that since the industrial revolution, the build-up of human-caused greenhouse gases in the atmosphere has had a warming effect of only 2.4 Watts/sq metre).
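A back-of-envelope check of these figures is straightforward. The numbers below are my illustrative assumptions (a solar constant of ~1368 W/m² spread over the sphere, and albedo drops of 0.015 to 0.03), not values taken from Pinker et al. or the IPCC:

```python
# Global-mean top-of-atmosphere insolation: solar constant / 4
# (the factor 4 is the ratio of the sphere's area to its cross-section).
S_MEAN = 1368.0 / 4.0        # ~342 W/m^2

def extra_absorption(delta_albedo):
    """Extra absorbed flux (W/m^2) from a drop in planetary albedo."""
    return S_MEAN * delta_albedo

low = extra_absorption(0.015)     # roughly 5 W/m^2
high = extra_absorption(0.030)    # roughly 10 W/m^2
ipcc_ghg = 2.4                    # W/m^2, the IPCC figure quoted in this comment
```

On these assumptions, an albedo drop of 0.015 to 0.03 yields roughly 5 to 10 W/m² of extra absorbed sunlight, which is indeed two to four times the quoted 2.4 W/m² greenhouse-gas figure.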

So, changes to cloud cover alone could be responsible for all the temperature rise from the Little Ice Age but nobody knows the factors that control changes to cloud cover.

Richard

• son of mulder says:

Has anyone ever tried to correlate cloud cover with CO2? Remember, correlation does not imply causation but causation does imply correlation; i.e. no correlation implies no causation.

• richardscourtney says:

son of mulder:

Has anyone ever tried to correlate cloud cover with CO2? Remember, correlation does not imply causation but causation does imply correlation; i.e. no correlation implies no causation.

Firstly, a correction.
Lack of correlation indicates lack of direct causation but correlation does NOT imply causation.

There is insufficient data to determine if there is correlation of atmospheric CO2 with cloud cover.

Clouds are a local effect so if atmospheric CO2 variations affect variations in cloud formation and cessation then it is the local CO2 which induces the alteration. Cloud cover is monitored but little recent evidence of local CO2 exists. And the OCO-2 satellite data of atmospheric CO2 concentration lacks sufficient spatial resolution.

One could adopt unjustifiable assumptions to make estimates of the putative correlation. There is good precedent for such unscientific practice in studies of atmospheric CO2; e.g. there are people who claim changes in atmospheric CO2 are entirely caused by anthropogenic emissions because if one assumes nothing changes without the anthropogenic emissions then a pseudo mass balance indicates the anthropogenic emissions are causing the change!

Richard

27. DDP says:

“The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.”

https://www.ipcc.ch/ipccreports/tar/wg1/501.htm

And yet confidence is high. But not as high as anyone who claims to be confident in modelling the impossible.

28. Kevin O'Neill says:

GCMs use solar forcing that is based on the Earth’s orbital eccentricity, obliquity (axial tilt), and precession – i.e., the exact factors that create Milankovitch cycles. We can calculate these very precisely. The author appears unaware of this fact.

• TonyL says:

The models also use the Solar Flux Factor. Yes, that is as bad as it sounds. The Solar Flux Factor is typically set at around 70% of actual; the models run way too hot otherwise. Very precise; so much for accuracy.

Calculation of the Solar Flux Factor is a routine step in determining insolation for a planet. To obtain the insolation at any given time of year, this flux factor is multiplied by the solar constant at the time of perihelion.

Your claim that “The Solar Flux Factor is typically set at around 70% of actual…” in GCMs is incorrect.
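For what it is worth, the routine inverse-square calculation being described can be sketched in a few lines. The values and names below are mine and purely illustrative (modern total solar irradiance ~1361 W/m², present eccentricity 0.0167); this is the geometric relation, not code from any GCM:

```python
S0 = 1361.0      # W/m^2, approximate modern total solar irradiance at 1 AU
ECC = 0.0167     # present-day orbital eccentricity

def toa_flux(r_over_a):
    """Top-of-atmosphere flux at an Earth-Sun distance of r_over_a AU."""
    return S0 / r_over_a ** 2

perihelion = toa_flux(1 - ECC)   # closest approach
aphelion = toa_flux(1 + ECC)     # farthest point
swing = perihelion / aphelion    # ((1 + e) / (1 - e))^2, about 1.069
```

For the present eccentricity this gives roughly a 6.9% annual swing in top-of-atmosphere flux; over Milankovitch timescales the eccentricity itself changes, altering both this swing and its seasonal timing.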

• Mike Jonas says:

Kevin O’Neill – I address Milankovitch cycles in my previous article (linked above). In part: “The most important-looking cycles don’t show up in the climate, and for the one that does seem to show up in the climate (orbital inclination) we just don’t know how or even whether it affects climate.” So you see, knowing everything about Earth’s orbit doesn’t help, because we don’t know how it affects climate.

• Kevin O'Neill says:

Mike Jonas – you have said in your previous article and in this one that Milankovitch cycles are not included in GCMs. This is untrue. Milankovitch cycles *are* included through the orbital forcings.

How Milankovitch cycles affect climate is obvious. One only needs to compare them to the glacial-interglacials to see how they affect climate. Changes in these orbital forcings will have noticeable effects over longer timescales with little noticeable effect on short timescales.

• Mike – there is a long history of analyzing orbital forcings (Milankovitch Cycles) in GCMs on past climate. This is in contrast to your view that GCMs do not incorporate Milankovitch Cycles. I’ll quote from just one paper, Northern Hemisphere forcing of Southern Hemisphere climate during the last deglaciation, He, Shakun, Clark, Carlson, Liu, Otto-Bliesner & Kutzbach, Nature (2013):

…our transient simulation with only orbital changes supports the Milankovitch theory in showing that the last deglaciation was initiated by rising insolation during spring and summer in the mid-latitude to high-latitude Northern Hemisphere and by terrestrial snow–albedo feedback.

• Mike Jonas says:

Kevin O’Neill, you say “One only needs to compare them to the glacial-interglacials to see how they affect climate.” But we have no mechanism, so the “seeing how” cannot be coded into the models. In any case, we don’t “see how”, because the past climate hasn’t matched up with the orbital cycles, other than the orbital inclination. That’s the “100,000 year” cycle that matches the glacial-interglacial cycle, and it has people scratching their heads because it is the only cycle that doesn’t change Earth’s position wrt the sun. A lot of attention has been paid to the 65N idea without success, and various other ideas have been tried without success. The only workable hypothesis I have seen involves dust in space, and there is no direct evidence that such dust exists.

Mike Jonas writes: “A lot of attention has been paid to the 65N idea without success, and various other ideas have been tried without success…”

As the paper above quoted shows, you are unfamiliar with the scientific literature.

• Mike Jonas says:

oneillinwisconsin – the He et al paper you cite looks at only the first half of the last deglaciation, which represents one small part of one type of Milankovitch cycle. It suffers from the problem that there were other such Milankovitch cycles that were not associated with deglaciation. Their model, which is based on insolation forcing, would certainly have predicted deglaciation for each such cycle. So we still don’t know. My suspicion – just a suspicion, don’t take it too seriously – is that virtually everyone is looking in the wrong place, namely the northern hemisphere, while it is the southern hemisphere that principally drives climate on those timescales.

• Mike – you said that GCMs do *not* include Milankovitch Cycles. I have shown that they do. The history of using GCMs to examine orbital forcings (Milankovitch Cycles) goes back to at least the 1980s. So any claim that GCMs do not include Milankovitch cycles (as you have made) shows a complete lack of familiarity with the actual science behind GCMs.

Now you are morphing your argument without admitting your initial claim was wrong. This is par for the course on the internet, but needs to be pointed out since you are unwilling to admit the error.

Now you write: “… it has people scratching their heads because it is the only cycle that doesn’t change Earth’s position wrt the sun.” This displays a misconception on your part with regards to these orbital forcings and their effect. The Milankovitch cycles’ most important changes are to the latitudinal distribution of solar forcing – not necessarily the total amount. This is why the insolation at 65N becomes important. It does not have scientists scratching their heads – just people who don’t understand the science.

• Mike Jonas says:

oneillinwisconsin – you make it sound like the effect of Milankovitch cycles on climate is well understood. I don’t think it is. NOAA says “What does The Milankovitch Theory say about future climate change?
Orbital changes occur over thousands of years, and the climate system may also take thousands of years to respond to orbital forcing. Theory suggests that the primary driver of ice ages is the total summer radiation received in northern latitude zones where major ice sheets have formed in the past, near 65 degrees north. Past ice ages correlate well to 65N summer insolation (Imbrie 1982). Astronomical calculations show that 65N summer insolation should increase gradually over the next 25,000 years, and that no 65N summer insolation declines sufficient to cause an ice age are expected in the next 50,000 – 100,000 years.
” (http://www.ncdc.noaa.gov/paleo/milankovitch.html). [my bold]
But Hays, Imbrie and Shackleton say: “A model of future climate based on the observed orbital-climate relationships, but ignoring anthropogenic effects, predicts that the long-term trend over the next seven thousand years is toward extensive Northern Hemisphere glaciation.” (http://www.sciencemag.org/content/194/4270/1121) [my bold] [I assume that’s the same Imbrie, but I haven’t checked].

So what is it to be, warmer or cooler?

Hays et al also say “an explanation of the correlation between climate and eccentricity probably requires an assumption of nonlinearity”. In other words, they have no mechanism; they have some correlation but they can’t quite get it to fit, yet they do think they are on the right track. They may well be, but trying to put any of that into bottom-up climate models is an exercise in futility.

Mike – Quote-mining is rarely an intellectually honest endeavor. The sentence immediately preceding the quote you selected from Hays et al says: “It is concluded that changes in the earth’s orbital geometry are the fundamental cause of the succession of Quaternary ice ages.” Now, does that sound familiar? It’s the argument *I* made, that you dismissed.

As to your question: “So what is it to be, warmer or cooler?” You need to read harder. First, Hays et al say, “… the results indicate that the longterm trend over the next 20,000 years is toward extensive Northern Hemisphere glaciation and cooler climate .”

If we track down the references from the NOAA page you quoted we find “No soon Ice Age, says Astronomy,” by Jan Hollan. The figure from Hollan for future insolation also shows numerous warming/cooling periods over the next 20 thousand years. Per Hollan, summer insolation is both increasing *and* decreasing over the next 20ky – the peaks are higher than today and the troughs are lower.

Now, Hays et al was written in 1976 and Hollan’s webpage says December 18, 2000. We can also note they were using different astronomical data sources. I don’t believe either would be used today as definitive of the state of scientific knowledge vis-à-vis orbital forcings. Still, it is only a very uncharitable reader who would claim they contradict each other.

• Mike – both sources you quote believe that orbital changes are the basis for past glaciations. Where they differ is in their prediction for the next glaciation. Hays et al, working in the mid-70s, did not have access to the same orbital calculations that became available though the work of Andre Berger. Here are just three quotes from Climate change: from the geological past to the uncertain future – a symposium honouring Andre Berger:

“Investigators interested in the problem of ice ages, such as L. Pilgrim (1904), M. Milankovitch (1941) and, later, Sharaf et al. (1967) and A. Vernekar (1972) computed timeseries of elements expressed relative to the vernal equinox and the ecliptic plane, hence more relevant for climatic theories. However, the characteristic periods of these elements were not known, and there was no extensive assessment of their accuracy. In particular, there was nothing like a “double” precession peak in the orbital frequencies known at the time.” Note that Hays et al were using astronomical data from Vernekar.

“… his [Berger’s] finding of an analytical expression for the climatic astronomical elements, under the form a sum of sines and cosines (Berger 1978b). In this way, the periods characterising them could be listed in tables for the first time, and insolation could be computed at very little numerical cost. This very patient and obstinate work yielded the demonstration that the spectrum of climatic precession is dominated by three periods of 19, 22 and 24 kyr, that of obliquity, by a period 41 kyr, and that of eccentricity has periods of 400 kyr, 125 kyr and 96 kyr (the Berger periods). The origin of the 19 and 23 kyr peaks in the Hays et al. spectra was thus elucidated (Berger, 1977b). According to John Imbrie, this result constituted the most delicate proof that orbital elements have a sizable effect on the slow evolution of climate.”

Hays et al was a seminal paper in that it found these orbital periods in proxy data. Berger provided subsequent researchers even better orbital data through precise calculations.

“Berger was also the first to introduce and compute the longterm variations of daily, monthly and seasonal insolation and demonstrated the large amplitude of daily insolation changes (Berger, 1976a), which is now used in all climate models. His prediction in the early 1990s about the shortening of astronomical periods back in remote times (Berger et al., 1989, 1992) has been confirmed by palaeoclimate data of Cretaceous ages, and constitutes thus an important validation and development of the Milankovitch theory for Pre-Quaternary climates.”

• ” I’ll quote from just one paper, Northern Hemisphere forcing of Southern Hemisphere climate during the last deglaciation, ”

Sorry Kevin, this paper is just model hoopla that studiously avoids plotting their output against the ice core data.

The only reasonable inference is that the AMPLITUDE of 65N insolation VARIABILITY correlates reasonably well, with high amplitude variability corresponding to stadials.

This is not surprising since when you zoom out from the Pleistocene time scale to the Phanerozoic time scale and its three glacial periods (which don’t correspond to any Milankovitch scale oscillation), it is clear that the transition to the Pleistocene is associated with extreme variability as well as general cooling.

It’s like a stabilizing feedback gets dampened or delayed. The variability could cause the cooling, result from the cooling, or both could be forced by something else. We just don’t know.

• Whoops, got it upside down. 65N variability corresponds with interstadials, which IS sort of surprising as it is contrary to the trend into the Pleistocene.

Too long since looking at that graphic. 65N is a composite of M components, but 65N variability seems also associated with extremes of eccentricity, which is not at all surprising.

• Mike Jonas says:

Boiling it all down : there are major changes in climate which correlate to some extent with Milankovitch cycles, but there is still no known mechanism. Lots of work has been done on changes in insolation (and yes these are in the models, see below) but they don’t explain the changes. Particular attention has been given to 65N because it looks most hopeful but it still doesn’t cut the mustard. Without a mechanism they can’t code anything into the models’ internal workings.

There’s a parallel here with solar activity on shorter time scales. There are very small variations in solar activity, which are coded into the models, but which have trivial effect on climate. Yet if you look back at past climate there is abundant evidence that the sun had more influence on climate. But there is no mechanism, so it can’t be coded into the models’ internal workings.

Worse than that, in both cases they don’t code anything else into the models either. They could put ideas into the large externals, but they don’t. Yet they do put untested ideas in there when it suits them (e.g. the water vapour and cloud feedbacks). Put simply, they ignore Milankovitch cycles in the same way that they ignore solar activity – it looks like the models are handling them, but no attempt has been made to emulate them.

And this brings me back to my article above: OK, so they can’t code solar activity and Milankovitch cycles into the models’ internal workings because there isn’t a mechanism. But they could put them into the large externals along with the various other real or imaginary things that are there. (They might then have to protect them from the vagaries of the internal workings, but that is something they are doing already.)

So what would you have then? You would have a large complex cell-based computer model driven almost entirely by factors fed into it from outside that cell-based system – a top-down system. While you are about it, chuck in a few other things that you don’t have a mechanism for. Given that in this system the internal cell-based logic contributes absolutely nothing of value, and at times has to be prevented from mucking things up, you might as well just ditch it – and suddenly you are working with a cheap, fast, flexible model that can easily be understood by others when you explain your results to them. The Occam’s Razor of climate models. Now that would be a real advance.

29. From above:

The Solution

It should by now be clear that the models are upside down. The models try to construct climate using a bottom-up calculation starting with weather (local conditions over a short time). This is inevitably a futile exercise, as I have explained. Instead of bottom-up, the models need to be top-down. That is, the models need to work first and directly with climate, and then they might eventually be able to support more detailed calculations ‘down’ towards weather.

So what would an effective climate model look like? Well, for a start, all the current model internals must be put to one side. They are very inaccurate weather calculations that have no place inside a climate model. They could still be useful for exploring specific ideas on a small scale, but they would be a waste of space inside the climate model itself.

Well, think again, but invert that series of thoughts.

The “global climate models” really are neither “global” nor “climate” models. Rather, they started as “general circulation models” (also GCM, by the convenient initials), but as LOCAL models of dust and aerosols in a single local area (particularly the LA Basin, and the dust and nickel-refining “acid rain” plumes from the Toronto-area refineries for the Adirondacks acid rain studies).

THEN – only after the acid rain legislation and ozone hole legislation hysteria showed that the enviro movement could indeed get international laws passed in a general hysterical propaganda cloud of “immediate action is needed immediately” against a nebulous far-reaching problem that could be endlessly studied by government-paid “scientists” in government-paid glamorous campuses holding tens of billions of government-paid bureaucrats and programmers – only then did the CO2 global warming scenario begin its power play.

But the original local models were fairly good. It is why the GCM (general circulation models) look at the aerosol sizes and particle shapes and particle densities and the individual characteristics of every different size of particle so carefully. Entire papers are dedicated to particle estimates and models as a subset of the modeling.

Now, admittedly, the LOCAL wind and temperature conditions over short ranges of time began to be modelled with modest accuracy on these elaborate NCAR and Boulder campus computers – and the rest of academia began demanding “their share” of the money. And the attention. And the papers. And the programmers. So they simply expanded the model boundaries to the world’s sphere.

30. Thanks, Mike Jonas. Very good article, an excellent continuation for your previous article.

31. “The mathematics of the climate models’ internal workings is wrong.”

For an “inside” article, this is remarkably superficial. You have not shown any mathematics is wrong. You have not shown any mathematics. You have said nothing about the equations being solved. It’s just argument from incredulity. “Gosh, it’s hard – no-one could solve that”.

“In other words, the errors compound exponentially.”
No, that is wrong. They can – that is the mode of failure. It’s very obvious – they “explode”. In CFD (computational fluid dynamics), people put a lot of effort into finding stable algorithms. They succeed – CFD works. Planes fly.

That’s the problem with these sweeping superficial dismissals – they aren’t specific to climate. This is basically an assertion that differential equations can’t be solved. But they have been, for many years.

“Initial state”
Yes, the models do not solve differential equations as initial value problems. That is very well understood. Neither does most CFD. They produce a working system that “models” the atmosphere.

“Small Externals”
Actually internals. But sub-grid issues are unavoidable when equations are discretised. Turbulence in CFD is the classic. And the treatment by Reynolds averaging goes back to 1895.

“Large Externals”
Yes, some future influences can’t be predicted from within the model. That’s why we have scenarios.

Here is an example of what AOGCMs do produce, just from an ocean with bottom topography, solar energy flux etc. It isn’t just the exponential accumulation of error.

• Mike Jonas says:

Nick Stokes – You say I have “not shown any mathematics”. You clearly do not understand what mathematics is. Mathematics at its most basic is simply the study of patterns (e.g. http://www.amazon.com/Mathematics-Science-Patterns-Search-Universe/dp/0805073442 : “To most people, mathematics means working with numbers. But [..] this definition has been out of date for nearly 2,500 years. Mathematicians now see their work as the study of patterns—real or imagined, visual or mental, arising from the natural world or from within the human mind.”). I do explain the pattern that leads to exponential error in the models.

You go on with “That’s the problem with these sweeping superficial dismissals – they aren’t specific to climate.” But that is why I supported the argument with a climate-specific example. See: “NB. This assertion is just basic mathematics, but it is also directly supportable […].”

The problems may well have been resolved in fluid dynamics, as you say, but they have not been resolved in climate. With respect to temperature, the models’ output consists of a trend plus wiggles. The trend basically fits the formula
δTcy = k * Rcy
(see https://wattsupwiththat.com/2015/07/25/the-mathematics-of-carbon-dioxide-part-1/)
and is in effect protected from the models’ internal cell-based calculations while the wiggles are generated. The wiggles are meaningless, and no-one, not even the modellers, expects them to actually match future climate. So we can profitably throw away the mechanism that provides the wiggles, and concentrate on the things that give us trends. Currently, the models have one trend and one trend only – from CO2. It is clearly inadequate because modelled temperature deviates substantially from real temperature both now and in the past. All I am arguing is that the expensive part of the models is totally useless and can be thrown away while we work on the major drivers of climate. Those are all that we need in order to model climate. After we have nailed the climate, we can maybe – maybe – start to look at future weather.
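To make that trend-only relation concrete, here is a minimal sketch. It uses the standard simplified CO2 forcing expression R = 5.35·ln(C/C0) W/m² (Myhre et al.), while the sensitivity k below is a purely hypothetical placeholder, not a value taken from the linked article:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing (W/m^2): R = 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def trend_delta_t(c_ppm, k=0.3):
    """Trend-only temperature change, delta-T = k * R.

    k (degC per W/m^2) is a hypothetical placeholder, for illustration only.
    """
    return k * co2_forcing(c_ppm)

# A doubling of CO2 gives R = 5.35 * ln 2, about 3.7 W/m^2.
print(round(co2_forcing(560.0), 2))    # 3.71
print(round(trend_delta_t(560.0), 2))  # 1.11
```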

• “I do explain the pattern that leads to exponential error in the models.”

No, you don’t. There is a pattern which leads to exponential growth, but you explain none of it. Differential equations have a number of solutions, and there is just one that you want. Error will appear as some combination of the others. The task of stable solution is to ensure that, when discretised, the other solutions decay relative to the one you want. People who take the trouble to find out understand how to ensure that. Stability conditions like Courant are part of it. You explain none of this, and I see no evidence that you have tried to find out.
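To make the stability point concrete, here is a minimal sketch (an illustration, not code from any climate model): the first-order upwind scheme for 1D advection is stable only while the Courant number c·Δt/Δx stays at or below 1, and genuinely explodes beyond it.

```python
import numpy as np

def advect_upwind(courant, steps=200, n=100):
    """March 1D advection with the first-order upwind scheme (periodic grid).

    The scheme is stable only for Courant number c*dt/dx <= 1; above
    that, the discretisation error grows exponentially - it "explodes".
    """
    u = np.zeros(n)
    u[40:60] = 1.0  # square-pulse initial condition
    for _ in range(steps):
        u = u - courant * (u - np.roll(u, 1))  # upwind update
    return float(np.max(np.abs(u)))

print(advect_upwind(0.9))  # stable: never exceeds the initial max of 1
print(advect_upwind(1.1))  # unstable: astronomically large after 200 steps
```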

“The problems may well have been resolved in fluid dynamics, as you say, but they have not been resolved in climate”
Climate models are regular CFD. The fact that you claim to have identified a pattern in the output in no way says they are unstable; in fact, with instability there would be no such pattern.

“So we can profitably throw away the mechanism that provides the wiggles, and concentrate on the things that give us trends.”
Actually, you can’t. That is the lesson of the Reynolds averaging I cited earlier. There you can average to get rid of the wiggles, but a residual remains, which has the very important effect of increasing the momentum transport, or viscosity. And so in climate. The eddies in that video I showed are not predictions. They won’t happen at that time and place. But they are a vital part of the behaviour of the sea. The purpose of modelling them is to account for their net effect.
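The averaging point can be demonstrated in a few lines (a synthetic sketch with invented, correlated fluctuations): averaging removes the wiggles, but the mean of the product retains a residual term, which is precisely the Reynolds stress that sub-grid closures exist to model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mean flow plus fluctuations: u = U + u', v = V + v', with u' and v'
# partially correlated, as in a turbulent shear flow.
n = 100_000
U, V = 10.0, 2.0
uprime = rng.normal(0.0, 1.0, n)
vprime = 0.5 * uprime + rng.normal(0.0, 1.0, n)  # correlated with u'
u, v = U + uprime, V + vprime

# Averaging the product is not the product of the averages:
# mean(u*v) = U*V + mean(u'v').  The leftover mean(u'v') is the
# Reynolds-stress term that sub-grid models have to account for.
residual = float(np.mean(u * v) - np.mean(u) * np.mean(v))
print(residual)  # about 0.5, not 0
```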

• Mike Jonas says:

“They won’t happen at that time and place.” Says it all, really.

I don’t mean that the eddies you mention, for example, are unimportant. But you can’t model them together with everything else over an extended period using the models’ cell-based structure. It simply cannot work. In the article, I said “The research itself may well be very complex, but the model is likely to be relatively straightforward.” Using your example, eddy research could identify how the eddies work in detail, and then work out how they influence climate in general. The general part could then be included in the climate models. There’s no point in trying to be more accurate than that, given all the other uncertainties. It’s the same as the relationship between pure mathematics and applied mathematics – the research is the pure, and what should go into the models is the applied.

You argue much about stability. I explain that the exponential errors in the models need not create instability. But they do still create errors that render the models useless. Work on eddies in an eddy model until you really understand them, then put the results into a climate model. Don’t put mythical eddies, whose time and place are, as you say, unknown, into a climate model; it just adds to cost while doing nothing for results.

• Speaks for itself:
«When initialized with states close to the observations, models ‘drift’ towards their imperfect climatology (an estimate of the mean climate), leading to biases in the simulations that depend on the forecast time. The time scale of the drift in the atmosphere and upper ocean is, in most cases, a few years. Biases can be largely removed using empirical techniques a posteriori. … The bias adjustment itself is another important source of uncertainty in climate predictions. There may be nonlinear relationships between the mean state and the anomalies, that are neglected in linear bias adjustment techniques. … »
(Ref: Contribution from Working Group I to the fifth assessment report by IPCC; 11.2.3 Prediction Quality; 11.2.3.1 Decadal Prediction Experiments )

• This is Decadal Prediction – quite different. It is an attempt to actually predict from initial conditions on a decadal scale. It’s generally considered to have not succeeded yet, and they are describing why. Climate modelling does not rely on initial conditions.

• richardscourtney says:

Nick Stokes:

You say

Climate modelling does not rely on initial conditions.

That is an outrageous falsehood!

Modelling the evolution of climate consists of a series of model runs where the output of each run is used as the initial conditions of the next run, with other change(s) – e.g. increased atmospheric GHG concentration – also input to the next run.

Each individual run may be claimed to “not rely on initial conditions” but the series of model runs does. This is because each run can be expected to amplify errors resulting from uncertainty of the data used as input for the first run in the series.

Richard

• “Climate modelling does not rely on initial conditions.”

So climate is not a chaotic system? This is very big news. You ought to do a paper.

• TimTheToolMan says:

No, that is wrong. They can – that is the mode of failure, It’s very obvious – they “explode”. In CFD (computational fluid dynamics), people put a lot of effort into finding stable algorithms. They succeed – CFD works. Planes fly.

You’re missing the point, Nick. The issue isn’t whether there is an acceptable solution at any time step in the model run; the problem is that at every time step an error is generated which itself biases the projected temperature up or down. And it won’t average to zero over time either. Why should it?
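That accumulation is easy to demonstrate with a toy sketch (invented error sizes, purely illustrative): a per-step error with any systematic bias grows linearly with the number of steps, swamping the square-root growth of a purely random, zero-mean error.

```python
import numpy as np

rng = np.random.default_rng(42)

def accumulated_drift(bias, steps=100_000, scale=0.01):
    """Sum of per-step errors drawn with mean `bias` and spread `scale`.

    Zero-mean errors accumulate only like sqrt(steps) * scale, but any
    systematic per-step bias accumulates like steps * bias.
    """
    return float(rng.normal(bias, scale, steps).sum())

print(abs(accumulated_drift(0.0)))    # random walk: a few units
print(abs(accumulated_drift(0.001)))  # biased: near 100_000 * 0.001 = 100
```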

32. The Truth About SuperComputer Models

SuperComputer models used by environmentalists have such complex programs that no person can understand them.
.
So it’s impossible for any scientist to double-check SuperComputer projections with a pencil, paper and slide rule — which is mandatory for good science.
.
I am a fan of all models: Plastic ships assembled with glue, tall women who strut on runways, and computer models too.
.
During my intensive research on SuperComputer models, I discovered something so amazing, so exciting, that I fell off my bar stool.
.
It seems that no matter what process was being modeled — DDT, acid rain, the hole in the ozone layer, alar, silicone breast implants, CO2 levels, etc. — the SuperComputer output is the same — a brief prediction typed on a small piece of paper — and that piece of paper always says the same thing: “Life on Earth will end as we know it”.
.
That’s exactly what the general public has been told, again and again, for various environmental “crises”, starting with scary predictions about DDT in the 1960s.
.
.
I also have an alternate SuperComputer theory, developed while I worked in a different laboratory (sitting on a bar stool in a different bar):

— Scientists INPUT their desired conclusion to the SuperComputer: “Life on Earth will end as we know it”, followed by a description of the subject being studied, such as “CO2”.

—- Then the SuperComputer internally develops whatever physics model is required to reach the desired conclusion.
.
My alternative theory best explains how the current climate physics model was created.

• By Jove, I think you have hit the nail on the head!

Now can we discus the tall, strutting women more?

• Menicholas says:

I am glad I was not sitting on a bar stool when I read Richard Greene’s comment, or I would have fallen off it from laughing.
Because it is true.
Who can take seriously the gloomy, monotonous, and never-correct-yet pronouncements of scaremongering end–of-the-world alarmists?
Such predictions are as old as language, it would seem.
The difference between “then” and now is that it did not used to be a hugely profitable business model to predict the end of the world.
Or did it?

• Old Church: Do as we say or you will go to hell !

New Church: Do as we say or life on Earth will become hell (from the global warming catastrophe) !

None of this matters, because we will soon all be dead from DDT, acid rain, hole in the ozone layer, alar, or popping silicone breast implants. The latter would be a true catastrophe. I’m not sure which threat will do us in first. All these threats were said to end life on Earth as we know it, according to leftists, but they were not sure of the year.

Who’d want to live, anyway, if we all had to take rowboats, canoes and gondolas to get to work each morning, because of all the urban flooding from global warming. And think of the huge taxes required to replace subway cars with submarines, because the subway tunnels would be filled with water.

I suppose this would make a good science fiction movie … but it’s painful to hear this nonsense from people who actually believe it.

33. charplum says:

This is not the first article that I have read on the problems with the climate models. This one does it with brevity and is understandable.

All we have to do to better understand things with regard to models is to look back only about a year, to the failure of the more precise regional models to get the measure of the coming blizzard in NYC. They shut the city down, including the subways. The models proved to be wrong as close as, I think, 12 hours out.

The present climate models lack fidelity with current measurements. How is it then possible that they can be used to project temperatures out to the end of the century? This can only be called folly.

I spent a career on pump design. Given the same circumstances, I would never have stepped in front of my managers, used a program such as this, and said you can believe its results. Yet folks are doing that. What standards are they using?

If it had been a model I was developing, management would never have known about it. Upon reviewing the results and seeing how poorly it matched measured data, I would have concluded that I had missed something fundamental, and it would be back to the drawing board for further consideration and evaluation. I think it is that simple.

In my work on pump designs I worked with a very knowledgeable hydraulics designer. Years ago we started to use Computational Fluid Dynamics (CFD) in evaluating impeller designs. That initial effort was not successful. My experienced hydraulics designer said CFD stands for “Colorful Fluid Deception”. In the meantime, less sophisticated design tools worked quite well.

I can report that issue has been resolved and that improved CFD tools became available.

I came to this work as a “gear head” but became a “sparky” part time. My manager was an experienced designer of induction motors. We developed a model using TKsolver that married the hydraulic design with the motor. Over time it became more sophisticated. I became familiar with induction motor design and the equivalent circuit. It was not difficult for me to get the hydraulic design into the model using pump affinity laws.

When finished, it matched measured results within 2 to 3 percent. In a manner of minutes, I could resize an impeller, change the operating fluid temperature, change the motor winding and get accurate predictions including the full head flow curve.

This was all done without resorting to FEA. Later I learned how to use 2D magnetic FEA for the motor design. That was truly only necessary when we wanted to look at esoteric aspects of the design.

This is running a little longer than I wanted it to but I wanted to present a complete picture.

I also became quite familiar with the benefits of FFT in solving vibration issues from instrumented pump measurements.

It is that aspect that I will now delve into.

I kept reading in articles on this site about a 60-year cycle and I had seen other information about solar cycles so using a canned program that I had available I tried to fit these cycles to the measured Hadcrut4 data. In the figures that follow you will see a fit to the yearly H4 data that only employs seven cycles. I included a contribution from CO2 as well. In the end I was able to use natural cycles and accommodate a contribution from CO2 that fits the yearly data quite nicely.

Note the low value of ECS that was required from the table. The green line in the figure is the contribution of CO2 all by itself that is already part of the red fit line in the figure. The 60-year cycle became 66. The 350-year cycle and the 85-year cycle are from known solar cycles. Others are too.

Some, I am sure, will want to call it curve fitting. That does not bother me; I did many things just like this for 35 years and solved a lot of problems. Don’t try to tell me the climate models could do this.

I started a quest to add more cycles and improve fidelity to the measured data. In these cases, I went to the monthly data and used additional datasets.

I was having problems with finding additional frequencies to add to the analysis. That is when I came upon the Optimized Fourier Transform developed by Dr. Evans. That changed things.

The OFT is contained in Dr. Evans spreadsheet and you can locate it here.

http://sciencespeak.com/climate-nd-solar.html

In my analysis I used the results of the OFT as input guesses into my program, which tries to minimize the sum of the squares error in fitting the data. Here are the results of my evaluations on various datasets using the most recently added data points. The RSS evaluations use my approximation for the Mauna Loa CO2 measurements that is included in the first link.

https://onedrive.live.com/redir?resid=A14244340288E543!12224&authkey=!ALVk9XRS9vmK6p8&ithint=folder%2c

https://onedrive.live.com/redir?resid=A14244340288E543!12226&authkey=!ALBywJBN8U-nY3c&ithint=folder%2c

https://onedrive.live.com/redir?resid=A14244340288E543!12227&authkey=!AMZTz5ITX7Yunic&ithint=folder%2c

https://onedrive.live.com/redir?resid=A14244340288E543!12229&authkey=!AJfJzy5TqCI0dqk&ithint=folder%2c

I will add a bonus evaluation that was completed in a similar manner.
.
NINO 3.0

https://onedrive.live.com/redir?resid=A14244340288E543!12231&authkey=!AHwCAjgenmKflao&ithint=folder%2c

The number of natural cycles included in the above is approximately 90 sinusoids. With the exception of the NINO 3.0 evaluation, it should be noted that all the evaluations have a low value of ECS. That was required to accommodate CO2 alongside the combined cycles and still match the measured data.
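For illustration, the kind of fixed-period fit described above can be sketched very compactly: once the periods are held fixed, the model is linear in the sine and cosine coefficients, so ordinary least squares recovers amplitudes and phases. (Synthetic data and made-up amplitudes; this is not the commenter’s program.)

```python
import numpy as np

def fit_sinusoids(t, y, periods):
    """Least-squares fit of y(t) by a constant plus sinusoids with the
    given fixed periods. With the frequencies fixed, the model is
    linear in the sine/cosine coefficients, so np.linalg.lstsq solves it.
    """
    cols = [np.ones_like(t)]
    for p in periods:
        w = 2.0 * np.pi / p
        cols.append(np.sin(w * t))
        cols.append(np.cos(w * t))
    design = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coeffs, design @ coeffs

# Synthetic monthly series: a 66-year and an 85-year cycle plus noise.
t = np.arange(0.0, 165.0, 1.0 / 12.0)
y = 0.2 * np.sin(2 * np.pi * t / 66.0) + 0.1 * np.cos(2 * np.pi * t / 85.0)
y = y + np.random.default_rng(1).normal(0.0, 0.05, t.size)

coeffs, fitted = fit_sinusoids(t, y, [66.0, 85.0])
rms_residual = float(np.sqrt(np.mean((y - fitted) ** 2)))
print(round(rms_residual, 3))  # close to the 0.05 noise level
```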

I think these evaluations fully support a much lower value of ECS than is currently being contemplated in the climate models. I would then ask which method is doing a better job.

Only recently have reasons emerged to support a much lower value of ECS. They can be found here.

http://joannenova.com.au/2015/11/new-science-18-finally-climate-sensitivity-calculated-at-just-one-tenth-of-official-estimates/#more-44879

Therein, Dr. Evans concludes the following, based upon the architectural changes he has proposed for the climate model:

Conclusions
There is no strong basis in the data for favoring any scenario in particular, but the A4, A5, A6, and B4 scenarios are the ones that best reflect the input data over longer periods. Hence we conclude that:
• The ECS might be almost zero, is likely less than 0.25 °C, and most likely less than 0.5 °C.
• The fraction of global warming caused by increasing CO2 in recent decades, μ, is likely less than 20%.
• The CO2 sensitivity, λC, is likely less than 0.15 °C W⁻¹ m².
Given a descending WVEL, it is difficult to construct a scenario consistent with the observed data in which the influence of CO2 is greater than this.
BTW, I have projected these results forward in time. It is a little problematic with the short number of years in the RSS dataset. If I don’t go too far, I think the projections will be satisfactory. That is a story for another time.

34. Forget the models! I don’t know how reliable they are, or how a computer can take into account all the possible variables…. Instead, let’s take a look at the past, at the way in which climate has evolved and what the influences were. Here are some ideas on that: http://oceansgovernclimate.com/.

35. Most models are not any better than the simplest model possible: take the main factors involved (CO2, human and natural aerosols, and the solar “constant”) and calculate their influence in a spreadsheet. That is called an “Energy Balance Model” or EBM – a top-down approach.

The result is as good as, or better than, that from the multi-million-dollar “state of the art” GCMs. See the report of Robert Kaufmann and David Stern at:
http://www.economics.rpi.edu/workingpapers/rpi0411.pdf

No wonder that report couldn’t find a publisher…

• FE – and does the simple model have any spatial detail? Does it tell us anything about the hydrological cycle? How many other outputs does a GCM have besides surface temperature? Perhaps you should learn what a GCM is actually used *for* before you make such pronouncements. As the authors of the paper you cite say:

“It does appear that there is a trade-off between the greater spatial detail and number of variables provided by the GCMs and more accurate predictions generated by simple time series models.”

How well does the simple energy balance model do on a five-day forecast for thousands of cities scattered across 7 continents?

• richardscourtney says:

oneillsinwisconsin:

FE – and does the simple model have any spatial detail? Does it tell us anything about the hydrological cycle? How many other outputs does a GCM have besides surface temperature? Perhaps you should learn what a GCM is actually used *for* before you make such pronouncements.

A simple model does better than GCMs on spatial detail and the hydrological cycle, and cannot be misused to misrepresent such information, because
(a) a simple model does not provide those indications
but
(b) GCMs provide wrong indications of them.
No information is preferable to wrong information.

And you would know these facts if you were to “learn what a GCM is actually used *for*”.

The GCMs represent temperature reasonably well but fail to indicate precipitation accurately. This failure probably results from the imperfect representation of cloud effects in the models. And it leads to different models indicating different local climate changes.

For example, the 2008 US Climate Change Science Program (CCSP) report provided indications of precipitation over the continental U.S. ‘projected’ by a Canadian GCM and a British GCM. Where one GCM ‘projected’ greatly increased precipitation (with probable increase to flooding) the other GCM ‘projected’ greatly reduced precipitation (with probable increase to droughts), and vice versa. It is difficult to see how provision of such different ‘projections’ “can help inform the public and private decision making at all levels” (which the CCSP report said was its “goal”).

Richard

36. indefatigablefrog says:

It seems quite impressive that climate modellers have created climate models that give an output that APPEARS to some to be meaningful or even predictive.
But, I suspect that that is all that they have so far achieved.
If I was building a model of climate then I suspect that most of my time would be spent trying to stop my model from lurching from one extreme to the other.
Or trying to stop my model from doing something which is clearly ludicrous or counter to what we witness in reality.
If I was really good at making computer models of climate, then maybe my model could behave itself reasonably well for much of the time. And possibly even do a reasonable impression of reality.
In which case I could then go on to present the output as meaningful and predictive.
I could even suggest that people come and “study” various things by experimenting with the conditions inside my model.
In looking at this topic, I notice that modellers often reveal that this is exactly what they are doing. For example in this quote:

“By extending the convective timescale to understand how quickly storm clouds impact the air around them, the model produced too little rain,” said Dr. William I. Gustafson, Jr., lead researcher and atmospheric scientist at PNNL. “When we used a shorter timescale it improved the rain amount but caused rain to occur too early in the day.”

Keep tweaking Willy. And at some point you’ll maybe have a set of tweaks which do all the right things.
Though not necessarily for any of the right reasons…
Quote from: http://phys.org/news/2015-04-higher-resolution-climate-daytime-precipitation.html#jCp

• emsnews says:

That works only for a steady state system. We are definitely living on a planet that yo-yos between hot and cold – mainly cold.

37. Dinostratus says:

“This assertion is just basic mathematics, but it is also directly supportable”

I believe you are correct, at least in an analogous sense. Unfortunately, though, it is just a belief of mine. The “direct support” you provide is a rhetorical argument and not one of “basic mathematics”. This article would have been better if the equations were provided. No rhetoric should be required if it “is just basic mathematics.”

38. emsnews says:

Any computer model that ignores the obvious periodic, regular Ice Age cycles is useless. We can only guess at what causes these regular, long, cold events. We don’t know the causation absolutely, but we do know that they happen – and happen suddenly, not slowly.

Fine-tuned models that ignore this fact are useless for predictions, since they all assume the ‘normal’ is the warm part of the periodic, short interglacials. This presumption is a huge problem.

39. TonyN says:

Dear Mike,

To chase down your mathematical assertion, it would be interesting to examine how the supercomputer programmers treat the rounding problem. E.g. a rounding to the nearest billionth of a number, in a programme that does a billion arithmetic operations per second with that number, could show an error in the most significant digit.
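That concern is easy to illustrate with a toy example (not the supercomputers’ actual arithmetic): 0.1 has no exact binary representation, and repeatedly adding it in single precision lets the per-step rounding errors accumulate into the leading digits.

```python
import numpy as np

# 0.1 has no exact binary representation, so every addition rounds to
# the nearest representable float32 value - and once the running total
# is large, that rounding step is a sizeable fraction of 0.1 itself.
total = np.float32(0.0)
tenth = np.float32(0.1)
for _ in range(1_000_000):
    total = total + tenth

print(float(total))     # hundreds away from the exact answer, 100000
print(0.1 * 1_000_000)  # double precision stays very close to 100000
```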


• North of 43 and south of 44 says:

While everyone is piling on, don’t forget truncation due to format conversion and radix differences. One can get really off-the-wall results if those are ignored.
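A two-line sketch of the conversion point (illustrative value only): merely converting a double-precision number to single precision discards information before any arithmetic is done at all.

```python
import numpy as np

x = 0.1234567890123456  # a double-precision (binary64) value
x32 = np.float32(x)     # format conversion down to binary32
print(float(x32))       # only about 7 significant decimal digits survive
print(float(x32) == x)  # False: the conversion alone discarded bits
```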

40. Gunga Din says:

Mr. Layman here.
I said this some time ago regarding long range “climate models”. It seems to fit.

https://wattsupwiththat.com/2012/05/12/tisdale-an-unsent-memo-to-james-hansen/#comment-985181

Gunga Din says:
May 14, 2012 at 1:21 pm
joeldshore says:
May 13, 2012 at 6:10 pm
Gunga Din: The point is that there is a very specific reason involving the type of mathematical problem it is as to why weather forecasts diverge from reality. And, the same does not apply to predicting the future climate in response to changes in forcings. It does not mean such predictions are easy or not without significant uncertainties, but the uncertainties are of a different and less severe type than you face in the weather case.
As for me, I would rather hedge my bets on the idea that most of the scientists are right than make a bet that most of the scientists are wrong and a very few scientists plus lots of the ideologues at Heartland and other think-tanks are right…But, then, that is because I trust the scientific process more than I trust right-wing ideological extremism to provide the best scientific information.
=========================================================
What will the price of tea in China be each year for the next 100 years? If Chinese farmers plant less tea, will the replacement crop use more or less CO2? What values would represent those variables? Does salt water sequester or release more or less CO2 than freshwater? If the icecaps melt and increase the volume of saltwater, what effect will that have year by year on CO2? If nations build more dams for drinking water and hydropower, how will that impact CO2? What about the loss of dry land? What values do you give to those variables? If a tree falls in the woods allowing more growth on the forest floor, do the ground plants have a greater or lesser impact on CO2? How many trees will fall in the next 100 years? Values, please. Will the UK continue to pour milk down the drain? How much milk do other countries pour down the drain? What if they pour it on the ground instead? Does it make a difference if we’re talking cow milk or goat milk? Does putting scraps of cheese down the garbage disposal have a greater or lesser impact than putting in the trash or composting it? Will Iran try to nuke Israel? Pakistan India? India Pakistan? North Korea South Korea? In the next 100 years what other nations might obtain nukes and launch? Your formula will need values. How many volcanoes will erupt? How large will those eruptions be? How many new ones will develop and erupt? Undersea vents? What effect will they all have year by year? We need numbers for all these things. Will the predicted “extreme weather” events kill many people? What impact will the erasure of those carbon footprints have year by year? Of course there’s this little thing called the Sun and its variability. Year by year numbers, please. If a butterfly flaps its wings in China, will forcings cause a tornado in Kansas? Of course, the formula all these numbers are plugged into will have to accurately reflect each ones impact on all of the other values and numbers mentioned so far plus lots, lots more. 
That amounts to lots and lots and lots of circular references. (And of course the single most important question: will Gilligan get off the island before the next Super Moon? Sorry. 8-) )
There have been many short range and long range climate predictions made over the years. Some of them are 10, 20 and 30 years down range now from when the trigger was pulled. How many have been on target? How many are way off target?
Bet your own money on them if you want, not mine or my kids' or their kids' or their kids', etc.

41. Paul Westhaver says:

Confession of a computer modelling grad student:

As part of my MASc programme I wrote a huge simulation platform for a personal computer. The engine of the simulator used finite difference methods, since time and memory were limited. The engine calculated the potential flow field of a non-Newtonian fluid flowing around mobile solid elements, boundarylessly. I wanted to simulate a resin injection into a gas-filled cavity partially filled with fiber reinforcements. I wanted to simulate the initial stages and the final result, to maximize mixing and minimize gas entrapment. It was the hardest thing I ever had to do. It took me months to write the code, circa 1990. Subsequently, I have used significant numerical analysis tools for any number of problems.

My rule:

Garbage in = Gorgeous garbage out that can be used to convince anyone of anything.

I do not believe in the veracity of the gorgeous simulations. Though climate simulators may look convincing and real, the output has to be compared with historical before-and-after data. That has NEVER happened. This programme can't, with any kind of certainty, predict what will happen anywhere 24 hrs into the future.

So… computer models and fancy renderings routinely conceal the lack of precision and accuracy of the models by dithering the post-analysis results to a pretty, and fake, higher precision. That is called marketing… not proper science. I know, and I did both.

Ask Willis E. He knows. Ask him how many pixels represent a "cell" in his ditherings, and how much atmosphere and land area is represented by an image pixel compared with the actual thermometer that existed in the cell providing the initial conditions. He is not alone. All computer wonks do this.

42. Latitude says:

Which temperature data set are these computer games using?

The ones that had 1880 at one temp?….or the ones that had it 2 degrees colder?
..the one that showed the pause?…or the one that erased the pause?
The ones that had the 1930’s hotter?…or the ones that erased it?
…and on and on

What a waste of perfectly good brain cells, debating whether a computer game is right or not. No one will ever know.

• The odds on it being right differ infinitesimally from zero.

• Latitude says:

I know if I was running thousands of computer games….after all that time, effort, and money…and someone told me they changed my numbers

I would blow the roof off!!

43. Louis says:

“Every one of those iterations introduces a small error into the next iteration. Each subsequent iteration adds in its own error, but is also misdirected by the incoming error. In other words, the errors compound exponentially. Over any meaningful period, those errors become so large that the final result is meaningless.”

Someone should explain this to Steve Mosher. He recently made the claim that while climate-model projections may not be accurate in the short term, they will prove accurate at about 50 years out. If I’m remembering that correctly, I’d like to know the reasoning behind his claim. He must believe that all those compounding errors will magically cancel each other out over time. I sure wish that would happen with all the errors I have made over the years.
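The compounding the article describes can be put in toy numerical form. This is an illustrative sketch only (the half-hour step size and the per-step relative error of one part per million are made-up numbers, not taken from any actual GCM), showing how even a tiny per-iteration error grows multiplicatively:

```python
# Toy illustration (not a climate model): a quantity is stepped forward
# many times, and each step multiplies in a tiny relative error.
def compounded_error(steps, per_step_error=1e-6):
    """Relative error after `steps` iterations, each off by `per_step_error`."""
    return (1 + per_step_error) ** steps - 1

# A model taking 30-minute steps makes about 17,500 steps per simulated year.
steps_per_year = 365 * 48
for years in (1, 10, 100):
    err = compounded_error(years * steps_per_year)
    print(f"{years:4d} years: relative error ~ {err:.3f}")
```

Even at one part per million per step, the error is no longer small after a century of simulated time; with larger (more realistic) per-step errors the blow-up is correspondingly faster.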

• Latitude says:

to do that….within the next 10 years global temperatures will have to jump up 1 degree almost overnight

• Curious George says:

Easy – Steve hopes not to be around in 50 years. That’s also why IPCC concentrates on year 2100, not on year 2020.

• Menicholas says:

Assuming those models will average out to a correct value seems to me like standing on a golf course and not being able to see where the hole is, but assuming that after one hits a few hundred balls down towards where one supposes it might be, the hole will be at the mathematical average position of all of the balls that were hit.
Such belief assumes a lot, including that one is even hitting the ball in the proper direction.
The hole may be behind you.
Temperatures may fall.

• I’d suggest it’s a random walk and nothing to do with fluid dynamics.

44. 601nan says:

The lack of transparency of organizations like the UN (UNFCCC, IPCC, WMO), of university government-contractor groups on the dole (NCAR and UEA et al.), and of government groups like ECMWF, NOAA et al. invites suspicion of chicanery when such groups give out "information" about "here is how we do it", as the IPCC and WMO do for their models, when those models produce unvalidatable results, as their own reports make plain. I tend to think the "here is how we do it" from such groups is just disinformation designed to deceive.

Not long ago Christopher Monckton of Brenchley and other authors showed in their paper a very simple climate model which can be benchmarked and had good results.

So perhaps Monckton of Brenchley and others should start a modeling initiative, an online "center for validatable climate modeling" built on their simple climate model and funded through KickStarter, for instance, to stand as an alternative to modeling groups like the IPCC, WMO, UEA et al. whose honesty is in doubt.

:-)

• Greg Cavanagh says:

I guess simple models that give good results (and show the world is doing just fine) aren’t giving value to those who want different answers.

Whereas a complex model with lots of parameters that show the world will soon end, has value to those who can use these answers for their own ends.

It’s all about whether the model is useful; in this case: can be marketed to make money or control.

45. indefatigablefrog says:

Here is the explanation for the failure of climate models:
“making a solid prediction of what’s going to happen in the next few decades is much tougher than modeling what’s going to happen to the whole globe a century from now,”
William Collins, IPCC lead author and climate modeller.
So – there you have it. Climate model projections will all converge upon reality. But only long AFTER the retirement and death of all currently existing climate modellers.
I know this may sound like skepticism – BUT – is this not just a feeble attempt to explain away the failure of computer-assisted guessing to make any useful predictions whatsoever over the last two decades?
The predictions will all come to pass, we promise – but only after everyone currently alive – is dead.
From: http://newscenter.lbl.gov/2008/09/17/impacts-on-the-threshold-of-abrupt-climate-changes/

• Seems there’s a simple solution to this problem that could save the day for Climate Science? We’ll just need to ask the current batch of climate modelers to “take one for the team” and jump of the nearest tall building?

• indefatigablefrog says:

First we could show them our projections. Before we project them from the parapet.
Assuming that a climate modeller is spherical. And that gravity g is invariant with height. And ignoring air resistance. And providing an arbitrarily selected mass for an average climate scientist taken from WHO tables of mass for the average sedentary person in highly paid academic employment.
Then we could commence with obtaining empirical validation.
Let the fun begin…

46. John of Cloverdale, WA, Australia. says:

So how much money and how many resources have been thrown at climate modelling? All for nothing, if the methodology is wrong. It must have been the biggest waste ever in the history of science. Think of all the worthwhile engineering projects that could have been completed to help the third world if the same effort and money had been applied there.

47. So, after all this herein above – all the prior graphs, data, points of order, yelling, attacking, back-stabbing, lies, fraud, misplaced honor, fudging – it seems to some that we have reached the point we see in many of the Senate, House, and U.N. meet-ups and assorted other hearings:

In a back room somewhere there is a fly keeper with a billion flies and a guy with 100 lbs of finely ground black pepper; they have mixed the fly dung and black pepper on and in the papers to be reviewed, and now everyone is hotly debating:

Fly dung, 2 ,Pepper,,,6,,,, no!!!,,, no!!!!!! Pepper three, Fly dung 6,,,,,,

48. Berényi Péter says:

First of all, one needs a basic understanding of non-reproducible non-equilibrium thermodynamic systems, which is lacking. For the climate system, due to the chaotic nature of its dynamics, is a member of this class.

note: A system is said to be reproducible if microstates belonging to the same macrostate can't develop into different macrostates in a short time. The climate system is not like this; see the butterfly effect.

For systems with non-reproducible dynamics, not even their Jaynes entropy can be defined, which makes any progress difficult.

No wonder the climate system as a whole does not lend itself to the principle of maximum entropy production, in spite of the fact that all reproducible non-equilibrium quasi-steady-state thermodynamic systems are proven to belong to this class.

However, most of the entropy production in the climate system happens when incoming short-wave radiation gets absorbed and thermalized, so it could easily be increased by making Earth darker. But Earth's albedo has a specific (rather high) value, which is the same for the two hemispheres, in spite of the huge difference between their clear-sky albedos.

This latter property is not replicated by any computational climate model, so lack of reproducibility is an essential property, with far reaching, but ill understood consequences.

By the way, unlike the climate system, some non reproducible non equilibrium thermodynamic systems can be replicated under lab conditions, so the missing physics could be studied and perhaps nailed down.

I have no idea why this line of investigation is neglected. It’s nothing else but the traditional approach to unsolved problems in physics.

49. otsar says:

I have noticed that no one has addressed the time step size and damping factors in these models.

50. David Young says:

There is something that is overlooked, I think, in this article. The argument that GCMs are accurate for climate is not based on time-discretization error at all. It is based on the attractor, which at least in principle would be the basis for the climate being computable. I personally have my doubts about this argument, but correctly stating it would be helpful.

• Curious George says:

That’s what Dame Slingo of the MetOffice peddles. She assumes that there is only one attractor, independent of whatever value you choose for the latent heat of water vaporization.

51. David M. Lallatin says:

This thread has been a marvelous presentation of Finagle’s Law.

52. tex says:

There is much more the models cannot handle, such as the interfaces between atmosphere and ocean, atmosphere and land, and ocean and land, all of which are important. And there is much more besides.

53. Tom S. says:

Wow, this is great. I'm a programmer with climate science as more of a hobby. I always wondered if they took a "ray-trace" approach. Seems like a variation on this method could be good for GPU (highly parallel) processing. Shoot a ray from the sun; as soon as it hits a cell, calculate attenuation of the light, energy transfer, etc.
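A minimal sketch of that idea, assuming a made-up per-cell optical-depth profile (this is simple Beer-Lambert attenuation down a single column; it is not how any actual GCM treats radiation). Each ray is independent of every other ray, which is what would make the scheme embarrassingly parallel on a GPU:

```python
import math

# March one ray of sunlight down through a column of grid cells and
# record the energy absorbed in each cell (Beer-Lambert attenuation).
def march_ray(optical_depths, incoming=1361.0):
    """Return (energy absorbed per cell, flux reaching the surface).

    optical_depths -- per-cell optical depth, top to bottom (assumed values)
    incoming       -- top-of-atmosphere flux in W/m^2
    """
    absorbed = []
    flux = incoming
    for tau in optical_depths:
        transmitted = flux * math.exp(-tau)  # fraction passing through the cell
        absorbed.append(flux - transmitted)  # energy deposited in this cell
        flux = transmitted
    return absorbed, flux

layers = [0.01, 0.02, 0.05, 0.10]  # hypothetical profile, not measured data
absorbed, surface_flux = march_ray(layers)
print(f"surface flux: {surface_flux:.1f} W/m^2; absorbed per cell: "
      + ", ".join(f"{a:.1f}" for a in absorbed))
```

By construction the energy deposited in the cells plus the flux reaching the surface equals the incoming flux, so the scheme conserves energy along each ray.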

54. LarryFine says:

From what I’ve seen, these people always predict the temperatures increasing on a 45 degree angle, even while measurements remain flat. If engineers built cars like that, we’d all be dead.

I’d like to see them plug in the initial conditions from 50 years ago and “predict” the climate of 40, 30, 20, 10 years ago and today accurately before any policy makers use their model results. In fact, let’s see them do that AND predict the climate 10 or 20 years from now with accuracy before we make any economic decisions based on their work.

55. Why include ocean oscillations and ocean currents in the "Externals"?
Aren't they dependent on the other Externals, like winds and precipitation, albeit on a much longer time scale?

56. Alx says:

Does not matter how seriously powerful computers are or how many calculations they can perform per nano-second if they are calculating 2+2=4.371

• North of 43 and south of 44 says:

+1 makes it 1.326. Group says so.

• What number base are you using?

57. Man Bearpig says:

<sarc>
I have a computer model that picks the winner of horse races. If you all pay me lots of money, I will constantly reprogram it every year and will tell you the winner of the derby in 2100. It works perfectly using hindcasting, but has been making a loss for the last 18 or 19 years – a hiatus or a pause, if you will – but it must come good in the end, mustn't it?
</sarc>

58. LarryFine says:

Some of the unknowable inputs to their models include the amounts of various anthropogenic gasses released, which depend on future economic activity and technologies.

Nobody knows where the DOW will be in 25 or 50 years, or even whether there will be a DOW. The way things are going, a UN dictator will have banned the DOW and other remnants of “evil” Capitalism.

What will future technologies and manufacturing processes be? Nobody knows, but their emissions will affect model variables greatly.

When you think about it, they don't really know the values of many variables used in their models and don't know the proper algorithms to use. Yet they're pretending to solve for the variable temperature, and they demand that governments all over the world change their economic systems based on model results. That's nuts.

59. Solomon Green says:

Nick Stokes:

” ‘However, in computational fluid dynamics, particularly where there is turbulence, we do not even know whether a unique solution exists.’
You can’t prove in advance whether a unique solution exists. But when you have one, you can test it simply by seeing whether it satisfies the equations. Go back and substitute. More usually, you test whether the conserved quantities that you are transporting are actually conserved.”

Actually it may satisfy the equations and be a solution but still not be a unique solution.

60. Greg Cavanagh – 11/8/15
“Does the climate change without human influence, and if so how much would it have changed in the last 57 years?”
If you work the numbers on IPCC AR5 Figure 6.1 you will discover that anthro CO2 is partitioned 57/43 between natural sequestration and atmospheric retention. (555 – 240 = 315 PgC) World Bank 4C (AR4) said 50/50, IGSS said 55/45. So much for consensus. This arbitrary partition was “assumed” in order to “prove” (i.e. make the numbers work) that anthro CO2 was solely responsible for the 112 ppmv increase between 1750 – 2011.
This implies that without FF CO2 (3.3 ppmv/y) the natural cycle is a net sink of about 4.0 ppmv/y. Drawing a simplistic straight-line extrapolation (see, nothing to this "climate science"), in 69.5 years (278/4) – i.e., around year 1680 running back from 1750 – atmospheric CO2 would be 0, zero, nadda, nowhere to be found.
Oh, what a tangled web we weave!
FE considers my points off topic for this thread, which is about the worth of the models. How much CO2 is in stores and fluxes, and the 2 W/m^2 RF climate sensitivity (which is too high; see Climate Change in 12 Minutes), are critical to the initial conditions (garbage in), and if the models have those incorrect the internal mechanisms are irrelevant (garbage out).
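The back-of-envelope extrapolation above can be checked directly. These are the commenter's own figures run through their own straight-line logic, not a real carbon-cycle calculation:

```python
# The commenter's numbers: if the natural cycle were a constant net sink
# of ~4 ppmv/yr (implied by the 57/43 partition), a pre-industrial CO2
# level of 278 ppmv would be drawn down to zero in 278 / 4 years.
preindustrial_co2 = 278.0   # ppmv, circa 1750
net_sink_rate = 4.0         # ppmv/yr, the implied natural net sink
years_to_zero = preindustrial_co2 / net_sink_rate
print(f"{years_to_zero} years -> CO2 hits zero around year {1750 - years_to_zero:.0f}")
```

The 69.5-year figure and the "year 1680" in the comment both drop straight out of this one division, which is the commenter's point about how simplistic the partition assumption is.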

61. Mod:

When I post a comment and it appears to disappear I post again & get a pop up that says I already said that.

So is the site suffering from the Mondays or have I achieved double secret probation?

62. Not Chicken Little says:

Just as an interested layman, it seems to me the models must:

1. Know all the factors that affect the climate
2. Accurately model all the factors and their interactions
3. Predict results that match reality, at least most of the time

But all the models are all over the map (literally, too). They fail all three points (and I’m sure, many more).

How is “climate science” even considered science? I know they’re trying, but it seems there’s also way too much hubris involved, that’s not warranted in the least…

63. MarkW says:

Even if you could accurately model the climate, you aren’t even half way home yet.
For example, changes in the atmosphere are going to create changes in the biosphere, which will in turn create changes in the atmosphere.
You have to accurately model how the biosphere, cryosphere, lithosphere and hydrosphere interact with the atmosphere as well as interact with each other.
Until you can manage all of those interactions, your models will remain useless.

64. Robert Clemenzi says:

As the very first comment (by Heisenburg) suggested – please fix the spelling error !

John van Neumann should be John von Neumann

• Mike Jonas says:

My mistake. Apologies.

65. Mike, I got on with the intention of asking how the models account for the actual use of longwave radiation by the "climate system" being modeled. I did go ahead and read parts 1 through 4 of your mathematics of CO2, and it confirmed my initial impression that they really don't. Later on I found a heated debate (pun intended) on the subject in the comments section.

This seems like a pretty glaring oversight/omission and if it really is intentional as Nicholas Schroeder suggests (see IPCC FAQ on the water cycle), it supports a conclusion that the current lot of CMIP-5 models are poor by design.

The first thing I noticed after reading your “Mathematics of…” series was there was no obvious recognition of work being performed with all that radiant energy, nor did I see an easily detected pathway back to space in the model. It seemed like an elephant in the living room. How did that happen? Obviously, if you’re trying to figure out where all that heat’s going, up the chimney seems like a more intuitive answer than Davy Jones Locker. Water absorbs IR, water rises in the atmosphere, water condenses releasing IR in the upper troposphere (and yes, even in the stratosphere), water falls as rain. Rinse and repeat. Not rocket science.

I’d think they’d do some work on the low hanging fruit like this before really breaking out the big guns and starting over from scratch… Just a thought.

By the way, thanks for writing this. I understand it’s just your interpretation and there seems to be a fair amount of debate on the subject, but I found it useful.

• Mike Jonas says:

I think you are on the right track (one of the right tracks). But heat going up the chimney gave them problems, because then they didn’t have enough heat left to give the observed temperature rise. So they blocked the chimney a bit by ramping down the water cycle. An ocean oscillation, for example, could have given them some of the heat, but they appear not to have looked there.

66. Robert Clemenzi says:

A few years back, I worked with the NASA AR4 climate model. I found that they were using the wrong equation to compute the orbit of the Earth! While the current (AR5) NASA model has fixed that problem, I wonder how many other errors their software has.

For instance, their equation to compute relative humidity is non-standard – apparently used by no one else on the planet.

67. I did a search for "grav" on this page, and when I typed the "v" it bonged "not found". So the discussion of atmospheric temperature profile must be incomplete and therefore incorrect.

I live my life in array programming languages, in which physics can be expressed in computable notation as or more succinctly than the notation in any physics text. A 3D finite element model such as discussed here is just the application of functions over the outer product of a surface map and a radial map. See my friend Morten Kromberg's talk at Google, https://youtu.be/PlM9BXfu7UY , for an overview of the nature of mainstream APL.

There can be no explanation of atmospheric temperature profile without consideration of the easily derived effect of gravity as HockeySchtick has computed , eg : http://hockeyschtick.blogspot.com/2014/12/how-gravity-continuously-does-work-on.html .

I have not yet worked thru his derivation being busy getting a clean-as-possible copy of my own APL evolute uploaded to http://cosy.com/CoSy/4th.CoSy.html so people can explore it before my presentation at the Forth Day Hangout at Stanford on the 21st .

But gravity is clearly the next parameter to be added to the handful of expressions implementing the computation of the mean temperature of a uniformly colored ball in our orbit, which I presented at ICCC9: http://cosy.com/Science/HeartlandBasicBasics.html .

Gravity is not optional. In fact, as anyone who groks classical physics should realize upon reflection, gravity is the only basic force which can, and therefore must, balance the requirement of the Divergence Theorem that mean energy density inside a ball must match that calculated over its surface, as given in my Heartland presentation.

Alan Guth points out that the unique asymmetric centripetal force gravity computes as negative energy . Thus it is easy to understand the equations which must hold so that total energy , `thermal + gravitational` satisfies the Divergence Theorem on the boundary which satisfies the spectral radiative balance equations .

Equations of electroMagnetic radiation , on the other hand , are symmetric . They cannot “trap” energy . “Green House Gas” theory reduces to the claim that one can construct an adiabatic “tube” with a cap of some particular absorption=emission , `ae` , spectrum at one end , a filter of some given ( `ae` ; transmission ) spectrum in the middle , and a radiant source of some given power spectrum at the other , such that the energy density on the cap side of the filter will be greater than that on the source side . If you can do that , you can construct a perpetual heat engine .

That is why we have never seen, and never will see, physical equations in SI units (to ensure they are computable) for the asserted phenomenon, nor an experiment demonstrating the phenomenon – which in the case of Venus is claimed to create an energy density at its surface 25 times that supplied by the Sun in its orbit.

Something I will never understand is why there has been no effective push on either side , even by great physicists , to quantitatively understand and test the enabling classical physics over what is now decades .

The temperature "sensitivity" to CO2 or CH4, or whatever, is due only to their minuscule effect on our planet's spectrum as seen from the outside. (Which is not to say they do not affect temperature variance, both diurnal and equi-polar.)

I think this is truly an example of a Kuhnian paradigm , a box within which all thought is constrained .

But given that the physics is classic , this is the paradigm which never should have been .

68. Ferdinand Engelbeen wrote “Taking into account the about double change in temperature at the poles than global, that gives a near linear change of ~16 ppmv/°C, confirmed by more recent periods (like the MWP-LIA transition) and the much shorter seasonal (~5 ppmv/°C) and 1-3 year variability (4-5 ppmv/°C: Pinatubo, El Niño).”

This means, since we have raised CO2 by hundreds of ppm, that we should expect tens of degrees of temperature change. So something has gone wrong, because we see maybe 0.5C for 100 ppm or less.

1) CO2 and temperature are related in that temperature affects CO2 but not the other way around.
2) The effects take MUCH longer to occur than is guessed.

I subscribe to the first explanation. It is very clear that CO2 is not the CAUSE of the ice ages or other phenomena, so it must be a secondary effect. The major cause of these changes overwhelms CO2, both on the upside and downside of the curves, and drives CO2. There could be some complementary effect of CO2, but logic would dictate that whatever CO2's effect is, it is minimal compared to the other effects which dominate it. Recently we've seen, for instance, that El Niños and the AMO/PDO can completely overwhelm 100 ppm of CO2, at least in the short term.

Your graph proves the point. If CO2 were such a potent effect, responsible for any substantial portion of the warming you point to, then we should have seen much more. I have to say that the climate alarmists back in the 1980s were on track in warning that something big could happen, but they should have been much more circumspect about the certainty of their possibilities. Whatever the history of all that, in the end it is apparent that after 75 years of pouring CO2 into the atmosphere we should have gotten a lot more response if the effect were as you and others have described. It clearly isn't, so the hunt is on to discover what caused, and how, the past temperature changes now being ascribed to CO2 – and therefore what we can expect from CO2 increases now is definitely a LOT LESS than what was suggested decades ago.

69. Regarding reliability of models here is a Response from Gavin Schmidt to a comment by Mark over at realclimate.org:
http://www.realclimate.org/index.php/archives/2015/11/unforced-variations-nov-2015/#sthash.eIH9lMBG.dpuf

“Mark says:
3 Nov 2015 at 6:41 PM
Apparently Roy Spencer’s CMIP5 models vs observations graph has gotten some “uninformed and lame” criticisms from “global warming activist bloggers,” but no criticism from any “actual climate scientists.” Would any actual climate scientists, perhaps one with expertise in climate models, care to comment?
http://www.drroyspencer.com/2015/11/models-vs-observations-plotting-a-conspiracy/

[Response: Happy to! The use of single year (1979) or four year (1979-1983) baselines is wrong and misleading. The use of the ensemble means as the sole comparison to the satellite data is wrong and misleading. The absence of a proper acknowledgement of the structural uncertainty in the satellite data is wrong and misleading. The absence of NOAA STAR or the Po-Chedley et al reprocessing of satellite data is… curious. The averaging of the different balloon datasets, again without showing the structural uncertainty is wrong and misleading. The refusal to acknowledge that the model simulations are affected by the (partially overestimated) forcing in CMIP5 as well as model responses is a telling omission. The pretence that they are just interested in trends when they don’t show the actual trend histogram and the uncertainties is also curious, don’t you think? Just a few of the reasons that their figures never seem to make their way into an actual peer-reviewed publication perhaps… – gavin] ”

——

I wonder if anybody has informed the 40,000 naive believers in United Nations climate theory – the 40,000 who will meet in Paris to hail the United Nations plan to increase energy costs and redistribute wealth – that: "The refusal to acknowledge that the model simulations are affected by the (partially overestimated) forcing in CMIP5 as well as model responses is a telling omission."

70. Bob Armstrong writes: “Alan Guth points out that the unique asymmetric centripetal force gravity computes as negative energy”

Dare I be the first to suggest that, in my opinion, this makes Alan a true "Master of the Obvious" (and I don't mean that in any way to be sarcastic). It defies imagination that anyone with a basic understanding of physics could fail to recognize gravity as negative energy, and in fact negative entropy. Gravity organizes. Entropy normalizes.

71. The model I like is the following:

Pi = Po + dE/dt

If Pi = Psun*(1-albedo), Po = Psurf*Emissivity and E is the energy stored by the planet, then this is exact for all t. E is a combination of many factors, some of which do work that does not contribute to the surface temperature, T, which is linear in those components of E that do.

If you define an arbitrary amount of time, tau, such that all of E can be emitted at the rate Po, rewrite as,

Psun(1-albedo) = E/tau + dE/dt

This is the form of the LTI system that describes an RC circuit, whose solutions are well known and whose steady state is defined when dE/dt is zero. Note that the dE/dt term is forcing, per the IPCC definition. They assert that E is linear in T to support a high sensitivity (see fig1 below), which would be true if E weren't also being radiated away by the emissions consequential to T^4; in the steady state those emissions are equal to the energy arriving. The small dots are 3 decades of monthly averages of surface temp vs. planet emissions, for constant slices of latitude (from ISCCP at GISS).
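The RC-circuit analogy above can be integrated numerically. This is a minimal sketch with invented values for Pin and tau (illustrative numbers, not measured quantities), showing the stored energy relaxing toward the steady state E = Pin*tau, where emission balances absorption and dE/dt = 0:

```python
# Forward-Euler sketch of the commenter's LTI model: dE/dt = p_in - E/tau.
# Like a charging RC circuit, E relaxes exponentially toward the fixed
# point E = p_in * tau, at which dE/dt = 0 (output power balances input).
def relax(p_in, tau, e0=0.0, dt=0.01, t_end=10.0):
    """Integrate dE/dt = p_in - E/tau from E(0) = e0 up to t = t_end."""
    e, t = e0, 0.0
    while t < t_end:
        e += dt * (p_in - e / tau)
        t += dt
    return e

p_in, tau = 240.0, 1.0        # hypothetical values for illustration only
steady = p_in * tau           # analytic steady state of the LTI system
final = relax(p_in, tau)      # ten time constants of integration
print(f"E after 10 time constants: {final:.2f} (analytic steady state {steady:.2f})")
```

After ten time constants the numerical solution sits essentially on the analytic steady state, and starting the integration at E = Pin*tau leaves E unchanged, which is the defining property of the fixed point.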