Inside the Climate Computer Models

Guest essay by Mike Jonas

In this article, I take a look inside the workings of the climate computer models (“the models”) and explain how they are structured and how useful they are for predicting future climate.

It follows on from a previous article (here), which looked at the models from the outside; this article examines their internal workings.

The Models’ Method

The models divide the atmosphere and/or oceans and/or land surface into cells in a three-dimensional grid and assign initial conditions. They then calculate how each cell influences all its neighbours over a very short time step. This process is repeated a very large number of times, so that the model ends up predicting the state of the planet over future decades. The IPCC (Intergovernmental Panel on Climate Change) describes the process here. The WMO (World Meteorological Organization) describes it here.
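To make the structure concrete, here is a deliberately tiny sketch of the cell-and-time-step idea (a sketch only: the grid size, the single state variable and the simple neighbour-averaging rule are my own illustrative assumptions; real models solve the equations of fluid flow, radiation and much else on vastly larger grids):

```python
import numpy as np

# Illustrative only: a tiny 3-D grid of "cells", each holding one state
# variable (call it temperature), stepped forward by letting each cell
# drift toward the mean of its six neighbours.
NX, NY, NZ = 10, 10, 5        # grid dimensions (real models use vastly more cells)
DT_HOURS = 1.0                # length of one time step
EXCHANGE = 0.05               # fraction exchanged with neighbours per step (assumed)

state = 288.0 + np.random.randn(NX, NY, NZ)   # initial conditions, necessarily approximate

def step(t):
    """Advance the grid one time step: nudge each cell toward the mean of
    its neighbours, standing in for the inter-cell influences."""
    neighbour_mean = (
        np.roll(t, 1, 0) + np.roll(t, -1, 0) +
        np.roll(t, 1, 1) + np.roll(t, -1, 1) +
        np.roll(t, 1, 2) + np.roll(t, -1, 2)
    ) / 6.0
    return t + EXCHANGE * (neighbour_mean - t)

for _ in range(1000):         # only 1,000 steps here, to keep the sketch cheap
    state = step(state)

# Predicting decades ahead means repeating the step an enormous number of times.
steps_per_century = int(100 * 365.25 * 24 / DT_HOURS)
print(f"steps needed for one century at this step length: {steps_per_century:,}")
```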

Seriously powerful computers are needed, because even on a relatively coarse grid, the number of calculations required to predict just a few years ahead is mind-bogglingly massive.

Internal and External

At first glance, the way the models work may appear to have everything covered; after all, every relevant part of the planet is (or can be) covered by the model for all of the required period. But there are three major things that cannot be handled by the internal workings of the model:

1 – The initial state.

2 – Features that are too small to be represented by the cells.

3 – Factors that are man-made or external to the planet, or which are large scale and not understood well enough to be generated via the cell-based system.

The workings of the cell-based system will be referred to as the internals of the models, because all of the inter-cell influences within the models are the product of the models’ internal logic. Factors 1 to 3 above will be referred to as externals.

Internals

The internals have to use a massive number of iterations to cover any time period of climate significance. Every one of those iterations introduces a small error into the next iteration. Each subsequent iteration adds in its own error, but is also misdirected by the incoming error. In other words, the errors compound exponentially. Over any meaningful period, those errors become so large that the final result is meaningless.
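A toy demonstration of the point (my own illustration, using a simple nonlinear recurrence rather than anything from a real model): the only difference between the two runs below is the tiny rounding error introduced at every step, yet within a few dozen iterations the runs bear no relation to each other.

```python
import numpy as np

# The same iteration carried out in 64-bit and in 32-bit arithmetic.
r = 3.9
x64 = np.float64(0.5)
x32 = np.float32(0.5)

for step in range(1, 61):
    x64 = r * x64 * (1.0 - x64)
    x32 = np.float32(r) * x32 * (np.float32(1.0) - x32)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(float(x64) - float(x32)):.3e}")

# Typical output: a per-step rounding error of order 1e-7 is amplified until
# the difference is of order 1, i.e. as large as the quantity being computed.
```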

NB. This assertion is just basic mathematics, but it is also directly supportable: weather models operate on much finer data with much more sophisticated and accurate calculations. They are able to do this because, unlike the climate models, they are only required to predict local conditions a short time ahead, not regional and global conditions over many decades. Yet the weather models are still unable to predict accurately more than a few days ahead. The climate models’ internal calculations are less accurate and therefore exponentially less reliable over all periods. Note that each climate cell is local, so the models build up their global views from local conditions. On the way from ‘local’ to ‘global’, the models pass through ‘regional’, and the models are very poor at predicting regional climate [1].

At this point, it is worth clearing up a common misunderstanding. The idea that errors compound exponentially does not necessarily mean that the climate model will show a climate getting exponentially hotter, or colder, or whatever, and shooting “off the scale”. The model could do that, of course, but equally the model could still produce output that at first glance looks quite reasonable – yet either way the model simply has no relation to reality.

An analogy: a clock which runs at an irregular speed will always show a valid time of day, but even if it is reset to the correct time it very quickly becomes useless.

Initial State

It is clearly impossible for a model’s initial state to be set completely accurately, so this is another source of error. As NASA says: “Weather is chaotic; imperceptible differences in the initial state of the atmosphere lead to radically different conditions in a week or so.” [2]

This NASA quote is about weather, not climate. But because the climate models’ internals are dealing with weather, i.e. local conditions over a short time, they suffer from the same problem. I will return to this idea later.
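The point can be illustrated with the textbook Lorenz (1963) system, the classic toy model of atmospheric chaos (a sketch only; the step size and starting values are arbitrary, and nothing here comes from an actual climate model):

```python
import numpy as np

# Two runs of Lorenz's convection model whose initial states differ by
# one part in a million end up in completely different states.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])   # crude Euler step, for illustration

a = np.array([1.0, 1.0, 20.0])
b = a + np.array([1e-6, 0.0, 0.0])               # an "imperceptible" initial difference

for i in range(1, 3001):
    a = lorenz_step(a)
    b = lorenz_step(b)
    if i % 1000 == 0:
        print(f"t = {i * 0.01:5.1f}: separation = {np.linalg.norm(a - b):.3e}")

# The separation grows from 1e-6 to the size of the attractor itself,
# which is the behaviour the NASA quotation describes for real weather.
```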

Small Externals

External factor 2 concerns features that are too small to be represented in the models’ cell system. I call these the small externals. There are lots of them, and they include such things as storms, precipitation and clouds, or at least the initiation of them. These factors are dealt with by parameterisation. In other words, the models use special parameters to initiate the onset of rain, etc. On each use of these parameters, the exact situation is by definition not known because the cell involved is too large. The parameterisation therefore necessarily involves guesswork, which itself necessarily increases the amount of error in the model.

For example, suppose that the parameterisations (small externals) indicate the start of some rain in a particular cell at a particular time. The parameterisations and/or internals may then change the rate of rain over hours or days in that cell and/or its neighbours. The initial conditions of the cells were probably not well known, and if the model has progressed more than a few days, the modelled conditions in those cells are by then certainly inaccurate. The modelled progress of the rain – how strong it gets, how long it lasts, where it goes – is therefore ridiculously unreliable. The entire rain event would be a work of fiction.
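To show the flavour of what a parameterisation is, here is a caricature of a rain trigger (the threshold, the rate and the functional form are invented for this sketch; real schemes are far more elaborate, but every number in them is likewise a tuned parameter rather than an observation):

```python
def parameterise_rain(cell_humidity, cell_temperature_c,
                      humidity_threshold=0.85, base_rate_mm_per_hr=2.0):
    """Sub-grid rain cannot be resolved by a large cell, so a rule of thumb
    stands in for it: if the cell-average humidity crosses a tuned threshold,
    declare that rain has started and assign it a tuned rate."""
    if cell_humidity >= humidity_threshold and cell_temperature_c > 0.0:
        # The real distribution of showers inside the cell is unknown;
        # a single cell-wide rate is guessed in its place.
        return base_rate_mm_per_hr * (cell_humidity - humidity_threshold) \
               / (1.0 - humidity_threshold)
    return 0.0

# Example: a cell at 90% average humidity and 15 C is assigned ~0.67 mm/hr of
# rain everywhere in the cell, however the moisture is really distributed.
print(parameterise_rain(0.90, 15.0))
```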

Large Externals

External factor 3 could include man-made factors such as CO2 emissions, pollution and land-use changes (including urban development), plus natural factors such as the sun, galactic cosmic rays (GCRs), Milankovitch cycles (variations in Earth’s orbit), ocean oscillations, ocean currents, volcanoes and, over extremely long periods, things like continental drift.

I covered some of these in my last article. One crucial problem is that while some of these factors are at least partially understood, none of them are understood well enough to predict their effect on future climate – with just one exception. Changes in solar activity, GCRs, ocean oscillations, ocean currents and volcanoes, for example, cannot be predicted at all accurately, and the effects of solar activity and Milankovitch cycles on climate are not at all well understood. The one exception is carbon dioxide (CO2) itself. It is generally accepted that a doubling of atmospheric CO2 would by itself, over many decades, increase the global temperature by about 1 degree C. It is also generally accepted that CO2 levels can be reasonably accurately predicted for given future levels of human activity. But the effect of CO2 is woefully inadequate to explain past climate change over any time scale, even when enhanced with spurious “feedbacks” (see here, here, here, here).
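For reference, the usual back-of-envelope arithmetic behind that “about 1 degree C” figure runs as follows (the simplified forcing expression and the Planck-response value below are commonly quoted approximations, not results from this article):

```python
import math

# Forcing from a doubling of CO2 (simplified logarithmic expression), divided
# by the no-feedback (Planck) response, gives the oft-quoted ~1 degree C.
delta_F_doubling = 5.35 * math.log(2.0)   # ~3.7 W/m^2
planck_response = 3.2                     # W/m^2 per degree C (approximate)
delta_T_no_feedback = delta_F_doubling / planck_response
print(f"{delta_T_no_feedback:.1f} C per doubling, before any feedbacks")
```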

Another crucial problem is that all the external factors have to be processed through the models’ internal cell-based system in order to be incorporated in the final climate predictions. But each external factor can only have a noticeable climate influence on time-scales that are way beyond the period (a few days at most) for which the models’ internals are capable of retaining any meaningful degree of accuracy. The internal workings of the models therefore add absolutely no value at all to the externals. Even if the externals and their effect on climate were well understood, there would be a serious risk of them being corrupted by the models’ internal workings, thus rendering the models useless for prediction.

Maths trumps science

The harsh reality is that any science is wrong if its mathematics is wrong. The mathematics of the climate models’ internal workings is wrong.

From all of the above, it is clear that no matter how much more knowledge and effort is put into the climate models, and no matter how many more billions of dollars are poured into them, they can never be used for climate prediction while they retain the same basic structure and methodology.

The Solution

It should by now be clear that the models are upside down. The models try to construct climate using a bottom-up calculation starting with weather (local conditions over a short time). This is inevitably a futile exercise, as I have explained. Instead of bottom-up, the models need to be top-down. That is, the models need to work first and directly with climate, and then they might eventually be able to support more detailed calculations ‘down’ towards weather.

So what would an effective climate model look like? Well, for a start, all the current model internals must be put to one side. They are very inaccurate weather calculations that have no place inside a climate model. They could still be useful for exploring specific ideas on a small scale, but they would be a waste of space inside the climate model itself.

A climate model needs to work directly with the drivers of climate such as the large externals above. The work done by Wyatt and Curry [3] could be a good starting point, but there are others. Before such a climate model could be of any real use, however, much more research needs to be done into the various natural climate factors so that they and their effect on climate are understood.

Such a climate model is unlikely to need a super-computer and massive complex calculations. The most important prerequisite would be research into the possible drivers of climate, to find out how they work, how relatively important they are, and how they have influenced climate in the past. Henrik Svensmark’s research into GCRs is an example of the kind of research that is needed. Parts of a climate model may well be developed alongside the research and assist with the research, but only when the science is reasonably well understood can the model deliver useful predictions. The research itself may well be very complex, but the model is likely to be relatively straightforward.

The first requirement is for a climate model to be able to reproduce past climate reasonably well over various time scales with an absolutely minimal number of parameters (John von Neumann’s elephant). The first real step forward will be when a climate model’s predictions are verified in the real world. Right now, the models are a long way away from even the first requirement, and are heading in the wrong direction.
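Von Neumann’s warning can be made concrete with a toy curve fit (my own illustration, unrelated to any climate dataset): enough free parameters will reproduce any record, which is why a close hindcast obtained with many tuned parameters is not evidence of predictive skill.

```python
import numpy as np

# Fit the same 20 noisy points with 3 parameters and with 10 parameters.
# The 10-parameter fit matches the record more closely, yet it is the one
# with less predictive value: it has fitted the noise (the "elephant").
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = 1.0 + 0.5 * x + 0.1 * rng.standard_normal(20)   # a simple truth plus noise

for n_params in (3, 10):
    coeffs = np.polyfit(x, y, deg=n_params - 1)
    residual = y - np.polyval(coeffs, x)
    rms = np.sqrt(np.mean(residual ** 2))
    print(f"{n_params:2d} parameters: rms error on the fitted data = {rms:.4f}")

# The many-parameter fit always shows the smaller in-sample error; only
# performance on data the model has never seen (real-world verification) counts.
```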

###

Mike Jonas (MA Maths Oxford UK) retired some years ago after nearly 40 years in I.T.


References

[1] R. Pielke, “A Literature Debate …”, October 28, 2011. [Note: This article has been referenced instead of the original Demetris Koutsoyiannis paper because it shows the subsequent criticism and rebuttal. Even the paper’s critic admits that the models “have no predictive skill whatsoever on the chronology of events beyond the annual cycle. A climate projection is thus not a prediction of climate [..]”. A link to the original paper is given.]

[2] Gavin A. Schmidt, “The Physics of Climate Modeling”, NASA, January 2007.

[3] M.G. Wyatt and J.A. Curry, “Role for Eurasian Arctic shelf sea ice in a secularly varying hemispheric climate signal during the 20th century”, Climate Dynamics, 2013. The best place to start is probably The Stadium Wave, by Judith Curry.

Abbreviations

CO2 – Carbon Dioxide

GCR – Galactic Cosmic Ray

IPCC – Intergovernmental Panel on Climate Change

NASA – National Aeronautics and Space Administration

WMO – World Meteorological Organization

Comments
Berényi Péter
November 8, 2015 3:52 pm

First of all, one needs a basic understanding of non reproducible non equilibrium thermodynamic systems, which is lacking. For the climate system, due to the chaotic nature of its dynamics, is a member of this class.
note: A system is said to be reproducible if microstates belonging to the same macrostate can’t develop into different macrostates in a short time. The climate system is not like this, see butterfly effect.
For systems with non reproducible dynamics, not even their Jaynes entropy can be defined, which makes any progress difficult.
No wonder the climate system as a whole does not lend itself to the principle of maximum entropy production, in spite of the fact all reproducible non equilibrium quasi steady state thermodynamic systems are proven to belong to this class.
However, most of the entropy production in the climate system happens when incoming short wave radiation gets absorbed and thermalized, therefore it could be increased easily by making Earth darker. But its albedo has a specific (rather high) value, which is the same for the two hemispheres, in spite of the huge difference between their clear sky albedos.
This latter property is not replicated by any computational climate model, so lack of reproducibility is an essential property, with far reaching, but ill understood consequences.
By the way, unlike the climate system, some non reproducible non equilibrium thermodynamic systems can be replicated under lab conditions, so the missing physics could be studied and perhaps nailed down.
I have no idea why this line of investigation is neglected. It’s nothing else but the traditional approach to unsolved problems in physics.

Curious George
Reply to  Berényi Péter
November 8, 2015 6:00 pm

Péter – I did now that the principle of maximum entropy production was proven. Could you please provide some links?

Curious George
Reply to  Curious George
November 8, 2015 6:00 pm

Grrrh .. I did NOT know.

Berényi Péter
Reply to  Curious George
November 8, 2015 10:35 pm

See:
Journal of Physics A: Mathematical and General Volume 36 Number 3
2003 J. Phys. A: Math. Gen. 36 631
doi:10.1088/0305-4470/36/3/303
Information theory explanation of the fluctuation theorem, maximum entropy production and self-organized criticality in non-equilibrium stationary states
Roderick Dewar
Please note the role reproducibility plays in that discussion, so it is clearly not valid for the climate system. Still, it is a step forward.

otsar
November 8, 2015 4:53 pm

I have noticed that no one has addressed the time step size and damping factors in these models.

David Young
November 8, 2015 5:23 pm

There is something that is overlooked, I think, in this article. The argument that GCMs are accurate for climate is not based on time discretization error at all. It is based on the attractor which, at least in principle, would be the basis for the climate being computable. I personally have my doubts about this argument, but correctly stating it would be helpful.

Curious George
Reply to  David Young
November 8, 2015 5:50 pm

That’s what Dame Slingo of the MetOffice peddles. She assumes that there is only one attractor, independent of whatever value you choose for the latent heat of water vaporization.

David M. Lallatin
November 8, 2015 7:16 pm

This thread has been a marvelous presentation of Finagle’s Law.

tex
November 8, 2015 7:27 pm

There is much more the models cannot handle, such as the interfaces between atmosphere and ocean, atmosphere and land, and ocean and land, all of which are important. There is much more as well.

Tom S.
November 8, 2015 8:26 pm

Wow, this is great. I’m a programmer with climate science as more of a hobby. I always wondered if they took a “ray-trace” approach. Seems like a variation on this method could be good for GPU (highly parallel) processing. Shoot a ray from the sun; as soon as it hits a cell, calculate attenuation of the light, energy transfer, etc.
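For what it is worth, the idea in the comment above can be sketched in a few lines (a toy column with made-up absorption fractions; real radiative-transfer schemes are far more involved):

```python
import numpy as np

# A "ray" of solar energy marched down through one column of cells: each
# cell absorbs a fraction of whatever reaches it and passes the rest on.
# Columns are independent of one another, which is why this style of
# calculation maps well onto highly parallel (GPU-like) hardware.
absorption_per_layer = np.array([0.02, 0.03, 0.05, 0.10, 0.15])  # assumed values
flux = 340.0   # W/m^2 arriving at the top of the column (illustrative)

for layer, a in enumerate(absorption_per_layer):
    absorbed = flux * a          # energy deposited in this cell
    flux -= absorbed             # remainder continues downward
    print(f"layer {layer}: absorbed {absorbed:5.1f} W/m^2, passing on {flux:5.1f} W/m^2")
```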

LarryFine
November 8, 2015 10:28 pm

From what I’ve seen, these people always predict the temperatures increasing on a 45 degree angle, even while measurements remain flat. If engineers built cars like that, we’d all be dead.
I’d like to see them plug in the initial conditions from 50 years ago and “predict” the climate of 40, 30, 20, 10 years ago and today accurately before any policy makers use their model results. In fact, let’s see them do that AND predict the climate 10 or 20 years from now with accuracy before we make any economic decisions based on their work.

John Peter
November 9, 2015 12:10 am

When you use climate models this is what you get
http://www.telegraph.co.uk/news/worldnews/11983115/How-our-megacities-will-slip-under-the-waves-with-two-degree-rise-in-temperatures.html
“Large swathes of Shanghai, Mumbai, New York and other cities will slip under the waves even if an upcoming climate summit limits global warming to two degrees Celsius, scientists reported on Sunday.”
Not even peer reviewed. Would put publication after Paris. Amazing.

Alx
Reply to  John Peter
November 9, 2015 3:25 am

The MSM regularly trot out photo-shopped images of cities underwater as well as articles with headlines like, “Climate change could create 100 million poor, over half a billion homeless” Yeah and new technology could make me as handsome as George Clooney.
http://www.cnn.com/2015/11/09/world/climate-change-create-poor-homeless/index.html
http://i2.cdn.turner.com/cnnnext/dam/assets/151109133957-climate-change-4-degrees-10-exlarge-169.jpg

November 9, 2015 1:00 am

Why include ocean oscillations and ocean currents in the “Externals”?
Aren’t they dependent on the other Externals, like winds and precipitation, albeit over a much longer time period?

Alx
November 9, 2015 3:12 am

Does not matter how seriously powerful computers are or how many calculations they can perform per nano-second if they are calculating 2+2=4.371

North of 43 and south of 44
Reply to  Alx
November 9, 2015 3:32 pm

+1 makes it 1.326. Group says so.

Reply to  Alx
November 9, 2015 8:44 pm

What number base are you using?

Man Bearpig
November 9, 2015 5:00 am

sarc >
I have a computer model that gets the winner of horse races, if you all pay me lots of money I will constantly reprogram it every year and will tell you the winner of the derby in 2100, it works perfectly using hindcasting, but has been making a loss for the last 18 or 19 years – a hiatus or a pause if you will, but it must come good at the end, mustn’t it?
<sarc

LarryFine
November 9, 2015 5:00 am

Some of the unknowable inputs to their models include the amounts of various anthropogenic gasses released, which depend on future economic activity and technologies.
Nobody knows where the DOW will be in 25 or 50 years, or even whether there will be a DOW. The way things are going, a UN dictator will have banned the DOW and other remnants of “evil” Capitalism.
What will future technologies and manufacturing processes be? Nobody knows, but their emissions will affect model variables greatly.
When you think about it, they don’t really know the values of many variables used in their models and don’t know the proper algorithms to use. Yet they’re pretending to solve for the variable temperature, and they demand that governments all over the world change their economic systems based on model results. That’s nuts.

Solomon Green
November 9, 2015 5:28 am

Nick Stokes:
” ‘However, in computational fluid dynamics, particularly where there is turbulence, we do not even know whether a unique solution exists.’
You can’t prove in advance whether a unique solution exists. But when you have one, you can test it simply by seeing whether it satisfies the equations. Go back and substitute. More usually, you test whether the conserved quantities that you are transporting are actually conserved.”
Actually it may satisfy the equations and be a solution but still not be a unique solution.

November 9, 2015 6:56 am

Greg Cavanagh – 11/8/15
“Does the climate change without human influence, and if so how much would it have changed in the last 57 years?”
If you work the numbers on IPCC AR5 Figure 6.1 you will discover that anthro CO2 is partitioned 57/43 between natural sequestration and atmospheric retention. (555 – 240 = 315 PgC) World Bank 4C (AR4) said 50/50, IGSS said 55/45. So much for consensus. This arbitrary partition was “assumed” in order to “prove” (i.e. make the numbers work) that anthro CO2 was solely responsible for the 112 ppmv increase between 1750 – 2011.
This implies that without FF CO2 (3.3 ppmv/y) the natural cycle is a net sink of about 4.0 ppmv/y. Drawing a simplistic straight line extrapolation (see, nothing to this “climate science”) in 69.5 years (278/4) or year 1680 atmos CO2 would be 0, zero, nadda, nowhere to be found.
Oh, what a tangled web we weave!
FE considers my points off topic for this thread, which is about the worth of the models. How much CO2 is in stores and fluxes, and the 2 W/m^2 RF climate sensitivity (which is too high; see Climate change in 12 minutes), are critical to the initial conditions (garbage in), and if the models have those incorrect the internal mechanisms are irrelevant (garbage out).

November 9, 2015 7:38 am

Mod:
When I post a comment and it appears to disappear I post again & get a pop up that says I already said that.
So is the site suffering from the Mondays or have I achieved double secret probation?

Not Chicken Little
November 9, 2015 7:45 am

Just as an interested layman, it seems to me the models must:
1. Know all the factors that affect the climate
2. Accurately model all the factors and their interactions
3. Predict results that match reality, at least most of the time
But all the models are all over the map (literally, too). They fail all three points (and I’m sure, many more).
How is “climate science” even considered science? I know they’re trying, but it seems there’s also way too much hubris involved, that’s not warranted in the least…

MarkW
November 9, 2015 10:15 am

Even if you could accurately model the climate, you aren’t even half way home yet.
For example, changes in the atmosphere are going to create changes in the biosphere, which will in turn create changes in the atmosphere.
You have to accurately model how the biosphere, cryosphere, lithosphere and hydrosphere interact with the atmosphere as well as interact with each other.
Until you can manage all of those interactions, your models will remain useless.

Robert Clemenzi
November 9, 2015 10:33 am

As the very first comment (by Heisenburg) suggested – please fix the spelling error !
John van Neumann should be John von Neumann

November 9, 2015 10:57 am

Mike, I got on with the intention of asking how the models account for the actual use of longwave radiation by the “climate system” being modeled. I did go ahead and read parts 1 through 4 of your mathematics of CO2 and it confirmed my initial impression that they really don’t. Later on I found a heated debate (pun intended) on the subject in the comments section.
This seems like a pretty glaring oversight/omission and if it really is intentional as Nicholas Schroeder suggests (see IPCC FAQ on the water cycle), it supports a conclusion that the current lot of CMIP-5 models are poor by design.
The first thing I noticed after reading your “Mathematics of…” series was there was no obvious recognition of work being performed with all that radiant energy, nor did I see an easily detected pathway back to space in the model. It seemed like an elephant in the living room. How did that happen? Obviously, if you’re trying to figure out where all that heat’s going, up the chimney seems like a more intuitive answer than Davy Jones Locker. Water absorbs IR, water rises in the atmosphere, water condenses releasing IR in the upper troposphere (and yes, even in the stratosphere), water falls as rain. Rinse and repeat. Not rocket science.
I’d think they’d do some work on the low hanging fruit like this before really breaking out the big guns and starting over from scratch… Just a thought.
By the way, thanks for writing this. I understand it’s just your interpretation and there seems to be a fair amount of debate on the subject, but I found it useful.

Robert Clemenzi
November 9, 2015 11:00 am

A few years back, I worked with the NASA AR4 climate model. I found that they were using the wrong equation to compute the orbit of the Earth! While the current (AR5) NASA model has fixed that problem, I wonder how many other errors their software has?
For instance, their equation to compute relative humidity is non-standard – apparently used by no one else on the planet.
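For reference, a standard textbook formulation of relative humidity against which such code is often checked (a Magnus-type approximation; the coefficients differ slightly between sources, so treat this as illustrative rather than definitive):

```python
import math

def saturation_vapour_pressure_hpa(t_celsius):
    """Magnus-type approximation for saturation vapour pressure over water."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def relative_humidity_percent(vapour_pressure_hpa, t_celsius):
    """Relative humidity: actual vapour pressure as a percentage of saturation."""
    return 100.0 * vapour_pressure_hpa / saturation_vapour_pressure_hpa(t_celsius)

# Example: air at 20 C holding 14 hPa of water vapour is at roughly 60% RH.
print(f"{relative_humidity_percent(14.0, 20.0):.0f}%")
```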

November 9, 2015 11:26 am

I did a search for “grav” on this page and when I typed the “v” , it bonged not found . So , the discussion of atmospheric temperature profile must be incomplete and therefore incorrect .
I live my life in Array Programming Languages in which physics can be expressed in computable notation as or more succinctly as the notation in any physics text . A 3D finite element model such as discussed here is just the application of functions over the outer product of a surface map and a radial map . See my friend Morten Kromberg’s Talk at Google , https://youtu.be/PlM9BXfu7UY , for an overview of the nature of mainstream APL .
There can be no explanation of atmospheric temperature profile without consideration of the easily derived effect of gravity as HockeySchtick has computed , eg : http://hockeyschtick.blogspot.com/2014/12/how-gravity-continuously-does-work-on.html .
I have not yet worked thru his derivation being busy getting a clean-as-possible copy of my own APL evolute uploaded to http://cosy.com/CoSy/4th.CoSy.html so people can explore it before my presentation at the Forth Day Hangout at Stanford on the 21st .
But gravity is clearly the next parameter to be added to the handful of expressions implementing the computation of the mean temperature of a uniformly colored ball in our orbit I presented at ICCC9 , http://cosy.com/Science/HeartlandBasicBasics.html .
gravity is not optional . In fact , as anyone who groks classical physics should realize upon reflection , gravity is the only basic force which can , and therefore must , balance the requirement of the Divergence Theorem that mean energy density inside of a ball must match that calculated over its surface as given in my Heartland presentation .
Alan Guth points out that the unique asymmetric centripetal force gravity computes as negative energy . Thus it is easy to understand the equations which must hold so that total energy , thermal + gravitational satisfies the Divergence Theorem on the boundary which satisfies the spectral radiative balance equations .
Equations of electroMagnetic radiation , on the other hand , are symmetric . They cannot “trap” energy . “Green House Gas” theory reduces to the claim that one can construct an adiabatic “tube” with a cap of some particular absorption=emission , ae , spectrum at one end , a filter of some given ( ae ; transmission ) spectrum in the middle , and a radiant source of some given power spectrum at the other , such that the energy density on the cap side of the filter will be greater than that on the source side . If you can do that , you can construct a perpetual heat engine .
That is why we have never seen , and never will see , physical equations in SI units ( to ensure they are computable ) of the asserted phenomenon , nor an experiment demonstrating the phenomenon — which in the case of Venus is claimed to create an energy density at its surface 25 times that supplied by the Sun in its orbit .
Something I will never understand is why there has been no effective push on either side , even by great physicists , to quantitatively understand and test the enabling classical physics over what is now decades .
The temperature “sensitivity” to CO2 or CH4 , or whatever , is due only to their minuscule effect on our planet’s spectrum as seen from the outside . ( Which is not to say they do not affect temperature variance , both diurnal and equi-polar . )
I think this is truly an example of a Kuhnian paradigm , a box within which all thought is constrained .
But given that the physics is classic , this is the paradigm which never should have been .

November 10, 2015 8:15 am

Ferdinand Engelbeen wrote “Taking into account the about double change in temperature at the poles than global, that gives a near linear change of ~16 ppmv/°C, confirmed by more recent periods (like the MWP-LIA transition) and the much shorter seasonal (~5 ppmv/°C) and 1-3 year variability (4-5 ppmv/°C: Pinatubo, El Niño).”
This means that, since we have raised CO2 by hundreds of ppm, we should expect tens of degrees of temperature change. So something has gone wrong, because we see maybe 0.5C for 100ppm or less. Two explanations come to mind:
1) CO2 and temperature are related in that temperature affects CO2 but not the other way around.
2) The effects take MUCH longer to occur than is guessed.
I subscribe to the first explanation. It is very clear that CO2 is not the CAUSE of the ice ages or other phenomena, so it must be a secondary effect. The major cause of these changes overwhelms CO2 both on the upside and downside of the curves and drives CO2. There could be some complementary effect of CO2, but logic would dictate that whatever CO2’s effect is, it is minimal compared to other effects, which dominate it. Recently we’ve seen, for instance, that El Ninos and AMO/PDO can completely overwhelm 100ppm of CO2, at least in the short term.
Your graph proves the point. If CO2 were such a potent effect, causing any substantial portion of the effect you point out, then we should have seen much more. I have to say that the climate alarmists back in the 1980s were on track in warning that something big could happen, but they should have been much more circumspect about the certainty of their possibilities. Whatever the history of all that, in the end it is apparent that after 75 years of pouring CO2 into the atmosphere we should have gotten a lot more response if it were as potent as you and others have pointed out. It clearly isn’t, so the hunt is on to discover what and how these temperature changes happened that are being ascribed to CO2 in the past, and therefore what we can expect from CO2 increases now is definitely a LOT LESS than what was suggested decades ago.

Science or Fiction
November 10, 2015 1:31 pm

Regarding reliability of models here is a Response from Gavin Schmidt to a comment by Mark over at realclimate.org:
http://www.realclimate.org/index.php/archives/2015/11/unforced-variations-nov-2015/#sthash.eIH9lMBG.dpuf
“Mark says:
3 Nov 2015 at 6:41 PM
Apparently Roy Spencer’s CMIP5 models vs observations graph has gotten some “uninformed and lame” criticisms from “global warming activist bloggers,” but no criticism from any “actual climate scientists.” Would any actual climate scientists, perhaps one with expertise in climate models, care to comment?
http://www.drroyspencer.com/2015/11/models-vs-observations-plotting-a-conspiracy/
[Response: Happy to! The use of single year (1979) or four year (1979-1983) baselines is wrong and misleading. The use of the ensemble means as the sole comparison to the satellite data is wrong and misleading. The absence of a proper acknowledgement of the structural uncertainty in the satellite data is wrong and misleading. The absence of NOAA STAR or the Po-Chedley et al reprocessing of satellite data is… curious. The averaging of the different balloon datasets, again without showing the structural uncertainty is wrong and misleading. The refusal to acknowledge that the model simulations are affected by the (partially overestimated) forcing in CMIP5 as well as model responses is a telling omission. The pretence that they are just interested in trends when they don’t show the actual trend histogram and the uncertainties is also curious, don’t you think? Just a few of the reasons that their figures never seem to make their way into an actual peer-reviewed publication perhaps… – gavin] ”
——
I wonder if anybody has informed the 40 000 naive believers in United Nations climate theory – the 40 000 who will meet in Paris to hail the United Nations plan to increase energy costs and redistribute wealth – that: “The refusal to acknowledge that the model simulations are affected by the (partially overestimated) forcing in CMIP5 as well as model responses is a telling omission.”

November 12, 2015 1:08 am

Bob Armstrong writes: “Alan Guth points out that the unique asymmetric centripetal force gravity computes as negative energy”
Dare I be the first to suggest that, in my opinion, this makes Alan a true “Master of the Obvious” (and I don’t mean that in any way to be sarcastic). It defies imagination that anyone with a basic understanding of physics could not recognize gravity as negative energy, and in fact negative entropy. Gravity organizes. Entropy normalizes.

November 19, 2015 3:41 pm

The model I like is the following:
Pi = Po + dE/dt
If Pi = Psun*(1-albedo), Po = Psurf*Emissivity and E is the energy stored by the planet, then this is exact for all t. E is a combination of many factors, some of which do work that does not contribute to the surface temperature, T, which is linear to those components of E that do.
If you define an arbitrary amount of time, tau, such that all of E can be emitted at the rate Po, rewrite as,
Psun(1-albedo) = E/tau + dE/dt
This is the form of the LTI that describes an RC circuit, whose solutions are well known and whose steady state is defined when dE/dt is zero. Note that the dE/ dt term is forcing, per the IPCC definition. They assert that E is linear to T to support a high sensitivity (see fig1 below) which would be true if E wasn’t also being radiated away by the emissions consequential to T^4 and in the steady state those emissions are equal to the energy arriving. The small dots are 3 decades of monthly averages of surface temp vs. planet emissions, for constant slices of latitude (from ISCCP at GISS).
http://www.palisad.com/co2/tp/fig1.png
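For readers who want the “well known” solution spelled out: assuming the input term is held constant (a simplifying assumption made here purely for illustration), the standard first-order solution of the commenter’s equation is

$$
\frac{dE}{dt} + \frac{E}{\tau} = P_{sun}(1-\text{albedo})
\quad\Longrightarrow\quad
E(t) = \tau P_{sun}(1-\text{albedo}) + \left[E(0) - \tau P_{sun}(1-\text{albedo})\right] e^{-t/\tau}
$$

so in the steady state (dE/dt = 0) the stored energy settles at E = tau * Psun*(1-albedo), approached on the time scale tau, which is the sense in which tau sets the system’s response time.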