How reliable are the climate models?

Guest essay by Mike Jonas


There are dozens of climate models. They have been run many times. The great majority of model runs, from the high-profile UK Met Office’s Barbecue Summer to Roy Spencer’s Epic Fail analysis of the tropical troposphere, have produced global temperature forecasts that later turned out to be too high. Why?

The answer is, mathematically speaking, very simple.

The fourth IPCC report [para 9.1.3] says : “Results from forward calculations are used for formal detection and attribution analyses. In such studies, a climate model is used to calculate response patterns (‘fingerprints’) for individual forcings or sets of forcings, which are then combined linearly to provide the best fit to the observations.”

To a mathematician that is a massive warning bell. You simply cannot do that. [To be more precise, because obviously they did actually do it, you cannot do that and retain any credibility]. Let me explain :

The process was basically as follows:

(1) All known (ie. well-understood) factors were built into the climate models, and estimates were included for the unknowns (The IPCC calls them parametrizations – in UK English : parameterisations).

(2) Model results were then compared with actual observations and were found to produce only about a third of the observed warming in the 20th century.

(3) Parameters controlling the unknowns in the models were then fiddled with (as in the above IPCC report quote) until they got a match.

(4) So necessarily, about two-thirds of the models’ predicted future warming comes from factors that are not understood.

Now you can see why I said “You simply cannot do that”: When you get a discrepancy between a model and reality, you obviously can’t change the model’s known factors – they are what they are known to be. If you want to fiddle the model to match reality then you have to fiddle the unknowns. If your model started off a long way from reality then inevitably the end result is that a large part of your model’s findings come from unknowns, ie, from factors that are not understood. To put it simply, you are guessing, and therefore your model is unreliable.
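
What “combined linearly to provide the best fit to the observations” amounts to is ordinary least-squares fitting. A toy sketch in Python (NumPy) makes the point; every series below is invented for illustration (not real model output), and the point is only that freely scalable unknowns can always be tuned to match a target record:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2001)

# Hypothetical "fingerprint" response patterns (illustrative only):
known_co2 = 0.004 * (years - 1900)            # the well-understood factor
unknown_a = np.sin((years - 1900) / 12.0)     # poorly-understood factor 1
unknown_b = rng.standard_normal(years.size)   # poorly-understood factor 2

# A made-up "observed" temperature record: trend plus noise.
observed = 0.012 * (years - 1900) + 0.1 * rng.standard_normal(years.size)

# "Combined linearly to provide the best fit to the observations":
X = np.column_stack([known_co2, unknown_a, unknown_b])
coeffs, *_ = np.linalg.lstsq(X, observed, rcond=None)
fitted = X @ coeffs

print(coeffs)                                 # scaling applied to each fingerprint
print(np.corrcoef(fitted, observed)[0, 1])    # in-sample fit is always good
```

The fit will always look good in-sample; nothing about the procedure validates what the scaled-up unknowns will do in the future.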

OK, that’s the general theory. Now let’s look at the climate models and see how it works in a bit more detail.

The Major Climate Factors


The climate models predict, on average, global warming of 0.2 deg C per decade for the indefinite future.

What are the components of climate that contribute to this predicted future warming, and how well do we understand them?

ENSO (El Nino Southern Oscillation) : We’ll start with El Nino, because it’s in the news with a major El Nino forecast for later this year. It is expected to take global temperature to a new high. The regrettable fact is that we do not understand El Nino at all well, or at least, not in the sense that we can predict it years ahead. Here we are, only a month or so before it is due to cut in, and we still aren’t absolutely sure that it will happen, we don’t know how strong it will be, and we don’t know how long it will last. Only a few months ago we had no idea at all whether there would be one this year. Last year an El Nino was predicted and didn’t happen. In summary : Do we understand ENSO (in the sense that we can predict El Ninos and La Ninas years ahead)? No. How much does ENSO contribute, on average, to the climate models’ predicted future warming? 0%.

El Nino and La Nina are relatively short-term phenomena, so a 0% contribution could well be correct but we just don’t actually know. There are suggestions that an El Nino has a step function component, ie. that when it is over it actually leaves the climate warmer than when it started. But we don’t know.

Ocean Oscillations : What about the larger and longer ocean effects like the AMO (Atlantic Multidecadal Oscillation), PDO (Pacific Decadal Oscillation), IOD (Indian Ocean Dipole), etc. Understood? No. Contribution in the models : 0%.

Ocean Currents : Are the major ocean currents, such as the THC (Thermohaline Circulation), understood? Well we do know a lot about them – we know where they go and how big they are, and what is in them (including heat), and we know much about how they affect climate – but we know very little about what changes them and by how much or over what time scale. In summary – Understood? No. Contribution in the models : 0%.

Volcanoes : Understood? No. Contribution in the models : 0%.

Wind : Understood? No. Contribution in the models : 0%.

Water cycle (ocean evaporation, precipitation) : Understood? Partly. Contribution in the models : actually slightly negative, but it is built into a larger total which I address later.

The Sun : Understood? No. Contribution in the models : 0%. Now this may come as a surprise to some people, because the Sun has been studied for centuries, we know that it is the source of virtually all the surface and atmospheric heat on Earth, and we do know quite a lot about it. Details of the 11(ish) year sunspot cycle, for example, have been recorded for centuries. But we don’t know what causes sunspots and we can’t predict even one sunspot cycle ahead. Various longer cycles in solar activity have been proposed, but we don’t even know for sure what those longer cycles are or have been, we don’t know what causes them, and we can’t predict them. On top of that, we don’t know what the sun’s effect on climate is – yes we can see big climate changes in the past and we are pretty sure that the sun played a major role (if it wasn’t the sun then what on Earth was it?) but we don’t know how the sun did it and in any case we don’t know what the sun will do next. So the assessment for the sun in climate models is : Understood? No. Contribution in the models : 0%. [Reminder : this is the contribution to predicted future warming]

Galactic Cosmic Rays (GCRs) : GCRs come mainly from supernovae remnants (SNRs). We know from laboratory experiment and real-world observation (eg. of Forbush decreases) that GCRs create aerosols that play a role in cloud formation. We know that solar activity affects the level of GCRs. But we can’t predict solar activity (and of course we can’t predict supernova activity either), so no matter how much more we learn about the effect of GCRs on climate, we can’t predict them and therefore we can’t predict their effect on climate. And by the way, we can’t predict aerosols from other causes either. In summary for GCRs : Understood? No. Contribution in the models : 0%.

Milankovich Cycles : Milankovich cycles are all to do with variations in Earth’s orbit around the sun, and can be quite accurately predicted. But we just don’t know how they affect climate. The most important-looking cycles don’t show up in the climate, and for the one that does seem to show up in the climate (orbital inclination) we just don’t know how or even whether it affects climate. In any case, its time-scale (tens of thousands of years) is too long for the climate models so it is ignored. In summary for Milankovich cycles : Understood? No. Contribution in the models : 0%. (Reminder : “Understood” is used in the context of predicting climate).

Carbon Dioxide (CO2) : At last we come to something which is quite well understood. The ability of CO2 to absorb and re-emit a specific part of the light spectrum is well understood and well quantified, supported by a multitude of laboratory experiments. [NB. I do not claim that we have perfect understanding, only that we have good understanding]. In summary – Understood? Yes. Contribution in the models : about 37%.

Water vapour : we know that water vapour is a powerful greenhouse gas, and that in total it has more effect than CO2 on global temperature. We know something about what causes it to change, for example the Clausius-Clapeyron equation is well accepted and states that water vapour increases by about 7% for each 1 deg C increase in atmospheric temperature. But we don’t know how it affects clouds (looked at next) and while we have reasonable evidence that the water cycle changes in line with water vapour, the climate models only allow for about a third to a quarter of that amount. Since the water cycle has a cooling effect, this gives the climate models a warming bias. In summary for water vapour – Understood? Partly. Contribution in the models : 22%, but suspect because of the missing water cycle.
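
The “about 7% for each 1 deg C” figure can be checked directly from the Clausius-Clapeyron relation, d(ln es)/dT = L/(Rv·T²). A sketch with round-number constants (not a model calculation):

```python
# Fractional change in saturation vapour pressure per kelvin, from the
# Clausius-Clapeyron relation: d(ln e_s)/dT = L / (Rv * T^2)
L_vap = 2.5e6      # latent heat of vaporization, J/kg (near-surface value)
Rv = 461.5         # specific gas constant for water vapour, J/(kg K)
T = 288.0          # typical surface temperature, K (about 15 deg C)

rate = L_vap / (Rv * T**2)
print(f"{100 * rate:.1f}% per deg C")   # roughly 7%, as quoted
```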

Clouds : We don’t know what causes Earth’s cloud cover to change. Some kinds of cloud have a net warming effect and some have a net cooling effect, but we don’t know what the cloud mix will be in future years. Overall, we do know with some confidence that clouds at present have a net cooling effect, but because we don’t know what causes them to change we can’t know how they will affect climate in future. In particular, we don’t know whether clouds would cool or warm in reaction to an atmospheric temperature increase. In summary, for clouds : Understood? No. Contribution in the models : 41%, all of which is highly suspect.


The following table summarises all of the above:

Factor                                    Understood?   Contribution to models’ predicted future warming
ENSO                                      No            0%
Ocean Oscillations                        No            0%
Ocean Currents                            No            0%
Volcanoes                                 No            0%
Wind                                      No            0%
Water Cycle                               Partly        (built into Water Vapour, below)
The Sun                                   No            0%
Galactic Cosmic Rays (and aerosols)       No            0%
Milankovich cycles                        No            0%
Carbon Dioxide                            Yes           37%
Water Vapour                              Partly        22% but suspect
Clouds                                    No            41%, all highly suspect
Other (in case I have missed anything)    -             0%

The not-understood factors (water vapour, clouds) that were chosen to fiddle the models to match 20th-century temperatures were both portrayed as being in reaction to rising temperature – the IPCC calls them “feedbacks” – and the only known factor in the models that caused a future temperature increase was CO2. So those not-understood factors could be and were portrayed as being caused by CO2.

And that is how the models have come to predict a high level of future warming, and how they claim that it is all caused by CO2. The reality of course is that two-thirds of the predicted future warming is from guesswork and they don’t even know if the sign of the guesswork is correct. ie, they don’t even know whether the guessed factors actually warm the planet at all. They might even cool it (see Footnote 3).

One thing, though, is absolutely certain. The climate models’ predictions are very unreliable.


Mike Jonas

September 2015

Mike Jonas (MA Maths Oxford UK) retired some years ago after nearly 40 years in I.T.



1. If you still doubt that the climate models are unreliable, consider this : The models typically work on a grid system, where the planet’s surface and atmosphere are divided up into not-very-small chunks. The interactions between the chunks are then calculated over a small time period, and the whole process is then repeated a mammoth number of times in order to project forward over a long time period (that’s why they need such large computers). The process is similar to the process used for weather prediction but much less accurate. That’s because climate models run over much longer periods so they have to use larger chunks or they run out of computer power. The weather models become too inaccurate to predict local or regional weather in just a few days. The climate models are less accurate.
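
The scale of the computation described above can be sketched on the back of an envelope. The grid spacing, layer count and timestep below are illustrative round numbers, not any specific model’s configuration:

```python
# Back-of-envelope cost of a grid-based climate run (illustrative numbers).
earth_surface_km2 = 510e6          # Earth's surface area
cell_km = 100                      # horizontal grid spacing ("chunk" size)
levels = 30                        # vertical layers
timestep_s = 1800                  # 30-minute step
run_years = 100

cells = (earth_surface_km2 / cell_km**2) * levels
steps = run_years * 365.25 * 24 * 3600 / timestep_s
print(f"{cells:.0f} cells x {steps:.0f} steps = {cells * steps:.2e} cell-updates")
```

Halving the chunk size quadruples the cell count (and usually forces a shorter timestep too), which is why long climate runs are pushed toward coarse grids.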

2. If you still doubt that the climate models are unreliable, then perhaps the IPCC themselves can convince you. Their Working Group 1 (WG1) assesses the physical scientific aspects of the climate system and climate change. In 2007, WG1 said “we should recognise that we are dealing with a coupled nonlinear chaotic system, and therefore that the long-term prediction of future climate states is not possible”.

3. The models correctly (as per the Clausius-Clapeyron equation) show increased atmospheric water vapour from increased temperature. Water vapour is a greenhouse gas so there is some warming from that. In the real world, along with the increased water vapour there is more precipitation. Precipitation comes from clouds, so logically there will be more clouds. But this is where the models’ parameterisations go screwy. In the real world, the water cycle has a cooling effect, and clouds are net cooling overall, so both an increased water cycle and increased cloud cover will cool the planet. But, as it says in the IPCC report, they had to find a way to increase temperature in the models enough to match the observed 20th century temperature increase. To get the required result, the parameter settings that were selected (ie, the ones that gave them the “best fit to the observations”) were the ones that minimised precipitation and sent clouds in the wrong direction. Particularly in the case of clouds, where there are no known ‘rules’, they can get away with it because, necessarily, they aren’t breaking any ‘rules’ (ie, no-one can prove absolutely that their settings are wrong). And that’s how, in the models, cloud “feedback” ends up making the largest contribution to predicted future warming, larger even than CO2 itself.

4. Some natural factors, such as ENSO, ocean oscillations, clouds (behaving naturally), etc, may well have caused most of the temperature increase of the 20th century. But the modellers chose not to use them to obtain the required “best fit”. If those natural factors did in fact cause most of the temperature increase of the 20th century then the models are barking up the wrong tree. Model results – consistent overestimation of temperature – suggest that this is the case.

5. To get their “best fit”, the chosen fiddle factors (that’s the correct mathematical term, aka fudge factors) were “combined linearly”. But as the IPCC themselves said, “we are dealing with a coupled nonlinear chaotic system”. Hmmmm ….
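
The objection can be shown in a few lines: for a nonlinear system, the response to combined forcings is not the sum of the responses to each forcing separately. A toy illustration (the logistic-style update rule is arbitrary, chosen only for its nonlinearity):

```python
# Linear superposition fails for a nonlinear system. "response" is an
# arbitrary nonlinear toy, not a climate quantity.
def response(forcing, steps=10, x0=0.1):
    x = x0
    for _ in range(steps):
        x = x + forcing * x * (1.0 - x)   # nonlinear (logistic-style) update
    return x

a, b = 0.3, 0.5
separate = (response(a) - response(0)) + (response(b) - response(0))
combined = response(a + b) - response(0)
print(separate, combined)   # the two differ: superposition does not hold
```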

Quinn the Eskimo
September 17, 2015 5:04 pm

Mike, you might be on Trenberth’s 10 most-wanted list for a RICO prosecution.

Reply to  Quinn the Eskimo
September 17, 2015 6:23 pm

Racketeering? Just how does Mike Jonas qualify for Racketeering charges?
Collusion? No.
Conspiring with others to injure or harm others or damage their careers? No.
Gain benefit through collusion? No.
Now try those questions against many of the ‘climate team’.


Reply to  Quinn the Eskimo
September 17, 2015 9:23 pm

Oh, they shouldn’t start invoking RICO in the climate field. There’s so much opportunity for that to gloriously backfire on them.

Reply to  AnonyMoose
September 18, 2015 6:31 pm

What a delicious thought!

Steve Zell
Reply to  Quinn the Eskimo
September 25, 2015 8:44 am

Thank you for a great article on many reasons why climate models over-predict future temperatures.
You mentioned the Clausius-Clapeyron equation for concentrations of water vapor in air. But it actually only predicts the vapor pressure of water as a function of temperature, which is proportional to the concentration of water in saturated air. Climate modelers tend to arbitrarily assume that if the air gets warmer, the relative humidity stays constant, meaning that the water vapor concentration (saturation concentration * relative humidity) increases.
But this additional water vapor has to come from somewhere–evaporation of water from a body of water (ocean or lake) in contact with air, and this evaporation requires heat input either from the air or water. Since absorption of IR radiation by CO2 only affects air temperature, if the temperature of the air in contact with the ocean increases, the heat to evaporate the water needed to maintain constant relative humidity must be provided by the air. Depending on temperature and the assumed relative humidity, this can require 50 to 80% of the heat required to raise the air temperature, meaning that a “constant relative humidity” assumption imposes a negative feedback of -0.5 to -0.8 on the energy increase due to increasing CO2.
By assuming constant relative humidity, climate modelers like to take “credit” for the IR radiation absorbed by the additional water vapor in the air, but do not account for the heat loss required to vaporize the water, so the models tend to exaggerate the overall warming effects.
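
This energy accounting can be sketched numerically. The Magnus formula and the conditions assumed below (10 deg C air, 60% relative humidity, sea-level pressure) are illustrative choices; with them the latent/sensible ratio lands inside the 50 to 80% range quoted above:

```python
import math

# Energy to evaporate the extra water needed to keep relative humidity
# constant during a 1 K warming, vs the energy that warmed the air itself.
T = 283.15          # air temperature, K (10 deg C)
RH = 0.60           # assumed constant relative humidity
p = 1013.25         # sea-level pressure, hPa
cp = 1005.0         # specific heat of air, J/(kg K)
L_vap = 2.5e6       # latent heat of vaporization, J/kg
Rv = 461.5          # gas constant for water vapour, J/(kg K)

# Magnus approximation for saturation vapour pressure (hPa), T in deg C:
t_c = T - 273.15
e_sat = 6.1094 * math.exp(17.625 * t_c / (t_c + 243.04))
q = 0.622 * RH * e_sat / p                 # specific humidity, kg/kg

dq_per_K = q * L_vap / (Rv * T**2)         # Clausius-Clapeyron scaling
latent = L_vap * dq_per_K                  # J per kg of air, per 1 K warming
sensible = cp * 1.0                        # J per kg of air, per 1 K warming

print(f"latent/sensible = {latent / sensible:.2f}")
```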

pippen kool
September 17, 2015 5:14 pm

You average the models. Why do you not compare them against the surface temps, which is what they model, rather than using an apples-and-oranges comparison of troposphere observations to the models?

Reply to  pippen kool
September 17, 2015 7:36 pm

If the factoring for the two volcanic eruptions is removed, the two lines (model and actual) look like two divergent lines with zero correlation.

Reply to  pippen kool
September 18, 2015 6:54 am

Your question makes no sense.

Reply to  MarkW
September 20, 2015 9:10 pm

Presumably a reference to the lead graphic. It shows weather balloon and satellite observations of the lower troposphere. Not sure which CMIP5 models are represented (by an average of them all), but the IPCC report mainly focuses on the CMIP5 surface temperature models.
Comparing these against observations to July 2015 gives this:
Still below the multi-model average, but well within the projected range. 2014 and 2015 have pushed observations a lot closer to the surface models’ average, but still low.

September 17, 2015 5:17 pm

The IPCC calls them parametrizations – in UK English : parameterisations.
In common English : BS.

Curious George
September 17, 2015 5:18 pm

Most IPCC climate models don’t even have the latent heat of water right – at least CAM 5.1 does not have it right, and they keep it in their assembly. It assumes the latent heat of water vaporization to be independent of temperature, but actually at 30 degrees C it is 3% lower than at the freezing point, so it overestimates the transfer of heat by water evaporation from tropical seas by 3%.

Reply to  Curious George
September 17, 2015 5:26 pm

“…latent heat of water…”
Sensible heat of air (incl CO2): 0.24 Btu/lb – F
Sensible heat of liquid water: 1.0 Btu/lb – F
Latent heat of evap/cond: 1,000 Btu/lb
When water evaporates into dry air it takes about 1,000 Btu/lb of water with it cooling the air by bunches. Ever notice how cool the rain makes the air?

Curious George
Reply to  Nicholas Schroeder
September 17, 2015 5:41 pm

Thanks, Nick. The issue is that the latent heat of evaporation is not always 1,000 Btu/lb. At the boiling point it is 970, at 30 C it is 1040, and at the freezing point it is 1070, which the models use (all numbers rounded).
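
These figures match the common linear approximation L(T) ≈ 2501 − 2.361·T kJ/kg (a textbook fit; the conversion to Btu/lb and the rounding are mine):

```python
# Latent heat of vaporization vs temperature, via the common linear
# approximation L(T) = 2501 - 2.361*T (kJ/kg, T in deg C).
KJ_PER_KG_TO_BTU_PER_LB = 0.42992

def latent_heat_btu_per_lb(t_celsius):
    l_kj = 2501.0 - 2.361 * t_celsius
    return l_kj * KJ_PER_KG_TO_BTU_PER_LB

for t in (0, 30, 100):
    # prints roughly 1075, 1045 and 974 Btu/lb respectively,
    # matching the rounded 1070 / 1040 / 970 quoted above
    print(t, round(latent_heat_btu_per_lb(t)))
```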

Anthony Zeeman
Reply to  Nicholas Schroeder
September 17, 2015 5:42 pm

Careful, you’re mixing physics with climatology and the result could be dangerous. In physics, average temperature has no basis while it’s climatology’s base.
In climatology, adding one ice cube to your drink is the same as adding 10 as the average temperature is the same. Physics begs to differ.
Sort of like the difference between astronomy and astrology. They’re both about stars and planets, but come up with completely different results

Reply to  Nicholas Schroeder
September 17, 2015 7:06 pm

…mixing physics with climatology and the result could be dangerous
As dangerous as the engineering world’s occasional unfortunate spontaneous rapid uncontrolled disassemblies?

Reply to  Nicholas Schroeder
September 18, 2015 5:37 am

“As dangerous as the engineering world’s occasional unfortunate spontaneous rapid uncontrolled disassemblies?”
Hey, don’t be rash. The engineering world’s occasional unfortunate spontaneous rapid uncontrolled disassembly is the demolition worlds’ bread and butter.

Reply to  Nicholas Schroeder
September 24, 2015 2:04 pm

Nicholas – ( RE ” Ever notice how cool the rain makes the air ?” )… It is my belief that the air makes the rain cool – as the cooler air condenses the water vapor to form the rain.

Reply to  Curious George
September 17, 2015 8:59 pm
The rate of evaporation is highly dependent on temperature and the partial pressure of water at 30°C is 695% of the partial pressure at 0°C.

Erik Magnuson
Reply to  PA
September 17, 2015 10:18 pm

In environments near room temperature, vapor pressure of water roughly doubles for every ~11C (20F) increase in temperature. With a dew point of 30C, the vapor pressure of water is over 4% of the standard sea level pressure and the buoyancy of humid air is significant – no wonder the NHC states 26C as the SST needed to support hurricanes – and also Willis’s comment about a governor kicking in when SST’s approach 30C.
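
Both of these figures can be checked with the Magnus approximation for saturation vapour pressure (an assumed formula choice; other fits give slightly different ratios):

```python
import math

# Saturation vapour pressure of water via the Magnus approximation
# (result in hPa, input temperature in deg C).
def e_sat(t_c):
    return 6.1094 * math.exp(17.625 * t_c / (t_c + 243.04))

print(e_sat(30) / e_sat(0))    # ~6.9x: close to the 695% quoted above
print(e_sat(31) / e_sat(20))   # ~1.9x: roughly doubles per 11 C near room temp
```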

September 17, 2015 5:21 pm

“How reliable are the climate models?”
IPCC AR5 text box 9.2 – mostly not very, low confidence, un-robust, PoS.

Reply to  Nicholas Schroeder
September 18, 2015 9:01 am

Isn’t this the smoking gun related to consensus?
How can there be a consensus if the models don’t share common code related to Physics and other aspects which are considered resolved?
One other obvious issue, why is everyone trying to reinvent the wheel instead of parallel processing using nodes which specialize on an aspect of the climate system making it easier to maintain corrections to all models?
Do the models represent state of the understanding, if no then what is?

V. eng
Reply to  John
September 18, 2015 10:40 pm

Why, yes the models do represent the present “state of the understanding.” They are incorrect, loaded with parameterisations, biases, and fudged to output a desired result. Look at the author’s list of forcings and some of the most important such as the Sun’s influence are not really used properly because they are not predictable. Add to that the use of the obviously fudged input temperature data (now with an attempt to remove the present warming hiatus) and what results is fantasy, nothing more, though it does keep the grant money flowing and pushes a political narrative.

Mark Cooper
September 17, 2015 5:21 pm

What is the source of that plot at the top of your post- Did you make it yourself ? (That’s the impression I get from your post)
the nearest thing I can find to your plot is this:
Why are we still seeing plots on WUWT without error bars,especially on a plot of an average of 102 models? It’s very misleading…

Reply to  Mark Cooper
September 17, 2015 5:47 pm

…. because the model error bars wouldn’t fit on most people’s computer screens ??

Reply to  philincalifornia
September 17, 2015 5:58 pm

You Bad Person. +1

Reply to  philincalifornia
September 17, 2015 9:25 pm

I’ll drink to that, as soon as I find an error bar.

Reply to  Mark Cooper
September 17, 2015 7:08 pm

I didn’t supply it, and I don’t know its source. It’s similar to the one you provided a link to, and of course the “Epic Fail” chart could have been used too. No matter where you look, the models overestimate global temperature.

george e. smith
Reply to  Mike Jonas
September 18, 2015 7:27 am

“””””….. The Sun : Understood? No. Contribution in the models : 0%. Now this may come as a surprise to some people, because the Sun has been studied for centuries, we know that it is the source of virtually all the surface and atmospheric heat on Earth, and we do know quite a lot about it. …..”””””
Could you change the word ” heat ” to the word ” heating ” please.
We get energy from the sun; but not in the form of heat.
We make all of our surface and atmospheric heat right here on earth; home grown you might say.
But I’ll accept ” heating ” as a true statement.

Reply to  Mike Jonas
September 18, 2015 8:50 am

George, as has been explained to you, you can’t have your own definition of heat. In the context of Physics:
Heat is energy in transfer other than as work or by transfer of matter.

george e. smith
Reply to  Mike Jonas
September 18, 2015 12:23 pm

And I won’t even ask what reference Physics text book Viking Explorer got HIS definition of ” heat ” out of.
So the other day we got expert opinion that UV is light, and now we also have expert opinion that UV also is heat, and my AM radio picks up heat as does my television set.
And I guess that convection is now demoted to just dwarf heat standing since it involves transfer of matter.
Now that is quite a revelation that transfer of hot matter is not heat, but a photon is heat.
I’ll have to remember that.

Anne Ominous
Reply to  Mike Jonas
September 19, 2015 5:06 pm

George, technically you are correct. The energy does not ARRIVE in the form of heat. Nevertheless, what you might consider being a stickler for detail, others might consider nitpicking. We do know what he meant.

Reply to  Mike Jonas
September 19, 2015 5:32 pm

“Ultraviolet (UV) light is an electromagnetic radiation with a wavelength from 400 nm to 100 nm, shorter than that of visible light but longer than X-rays.”
“UV, or ultraviolet, light is an invisible form of electromagnetic radiation that has a shorter wavelength than the light humans can see. It carries more energy than visible light and can sometimes break bonds between atoms and molecules, altering the chemistry of materials exposed to it. UV light can also cause some substances to emit visible light, a phenomenon known as fluorescence. This form of light — which is present in sunlight — can be beneficial to health, as it stimulates the production of vitamin D and can kill harmful microorganisms, but excessive exposure can cause sunburn and increase the risk of skin cancer. UV light has many uses, including disinfection, fluorescent light bulbs, and in astronomy”
Light, Definition:
“Physics. a.Also called luminous energy, radiant energy. electromagnetic radiation to which the organs of sight react, ranging in wavelength from about 400 to 700 nm and propagated at a speed of 186,282 mi./sec (299,972 km/sec), considered variously as a wave, corpuscular, or quantum phenomenon.
b.a similar form of radiant energy that does not affect the retina, as ultraviolet or infrared rays.”
Just sayin’.
We have ultraviolet cameras. They are sensitive to UV radiation. This radiation is commonly referred to as UV light. I go to a dermatologist to get treatments for a mild case of psoriasis, and one of the treatments consists of spending time in a “lightbox”, which contains special bulbs which emit some stuff that everyone calls, and are sold as, ultraviolet lights.
Some organisms have eyes that can detect UV. Is it only “light” if humans can see it?
But the dictionary, which contains the definition of the words we use, lists clear references to light which cannot necessarily be detected by our retina.
We use language to communicate. When someone says UV light, few, if any, are baffled by this phrase, hence it is a valid reference to call it thusly.

Reply to  Mike Jonas
September 21, 2015 2:37 am

“No matter where you look, the models overestimate global temperature.”
That’s not correct when you compare CMIP3 and CMIP5 surface models to surface observations, e.g.:
Observations are now well inside the multi-model range for CMIP5 (more so for the earlier but longer running CMIP3), though still below the multi-model mean.
It’s correct to say that most surface models have so far overestimated surface observations; but several have also underestimated them and the model range remains valid, though on the low side.

Reply to  Mike Jonas
September 21, 2015 10:10 am

Hi Mike, if you remove the CO2 effect from the models do they get closer to the actual measured temperature or further away?

George E. Smith
Reply to  Mike Jonas
September 21, 2015 11:40 am

I’m finding it virtually impossible to read or write at WUWT any more. A background script peculiar to WUWT uses up all my CPU time.
So Viking Explorer says I make up my own Physics.
Anne Ominous says I’m sticklikly correct.
Both of them can’t be correct.
And I have cited the commonly recognized definition of ‘Light’ several times.
That comes from an international body of recognized experts; not from Sam’s Bar Club dictionary.
Bottom line:
Light ‘by definition’ IS visible. (to humans)
And it is not a physical phenomenon, but a psycho-physical one, with its own set of defined measurements and units. (it’s all in your head )
And WUWT is supposed to be a place where truth and accuracy are expected.
It is NOT true that we all understand what is meant by someone’s post. Some do, some don’t; we are supposed to be helping those who don’t; which often times includes ourselves.
The Physics Department at my alma mater freely admits that they still don’t teach Ohm’s Law correctly (R = constant), but at least they don’t tell me I’m a stickler for correctness.
R = E / I is not Ohm’s law; it is the definition of R.

Reply to  Mike Jonas
September 24, 2015 5:29 pm

George obviously doesn’t understand the concept of radiant heat transfer. We get heat from the sun. Period. Convection is not the only form of heat transfer. Anne is wrong. Heat can be transferred without the exchange of mass.
Do you have some reference for your “commonly recognized definition of ‘Light’?” Or is it a personal thing that you made up in a bar yourself? “it is not a physical phenomenon, but a psycho-physical one?” LOL

Anne Ominous
Reply to  Mike Jonas
September 26, 2015 12:09 pm

No, I was not wrong. You are neglecting the context of his comment.
Of course heat can be transferred without mass transfer. Who claimed otherwise? What I was referring to was the fact that the energy input to the Earth arrives as radiation, not “heat”. Heat doesn’t occur until the radiation is absorbed.
So in the context given, George is technically correct. The energy DOESN’T arrive as “heat”. It arrives as radiation.

September 17, 2015 5:27 pm

It’s always worth quoting from Jeff Glassman’s “Conjecture, Hypothesis, Theory, Law: The Basis of Rational Argument” (December 2007) regarding the climate models conjured up by the warmist quacks:

The consensus relies on models initialized after the start of the Industrial era, which then try to trace out a future climate. Science demands that a climate model reproduce the climate data first. These models don’t fit the first-, second-, or third-order events that characterize the history of Earth’s climate. They don’t reproduce the Ice Ages, the Glacial epochs, or even the rather recent Little Ice Age. The models don’t even have characteristics similar to these profound events, much less have the timing right. Since the start of the Industrial era, Earth has been warming in recovery from these three events. The consensus initializes its models to be in equilibrium, not warming.

Note that these observations were published by Dr. Glassman long before the tranche enabled those of us outside “the consensus” to directly examine the computer programming that drew the infamous hockey stick curve from Brownian “red noise” random numbers.

September 17, 2015 5:49 pm

@Mike Jonas, who wrote:

The great majority of model runs, from the high-profile UK Met Office’s Barbecue Summer to Roy Spencer’s Epic Fail analysis of the tropical troposphere, have produced global temperature forecasts that later turned out to be too high.

Do we really know each model run?
They pick out subsets, don’t show every one.
Many runs did not fit preconceptions;
Those are binned. They don’t allow exceptions.
It may just be that, if we saw them all,
The runs, on average, made a closer call.
But “closer calls” that leave the market free
Are not allowed: They need catastrophe!
===|==============/ Keith DeHavelle

September 17, 2015 5:56 pm

I thought aerosols were the magic control knob for adjusting the models to fit observation. It works like this:
You put in way too much warming due to CO2, then cancel it out with way too much aerosol cooling. At this point your model matches observation. Now you carry forward, reducing the aerosols as you go. This allows the way-too-much warming to emerge. Presto, Global Warming with GCMs.
But there are much easier ways to generate AGW. For instance, I was playing around with the UAH data set in R-Studio, and I generated 3.0 deg/century warming in just a few minutes.

Mike Smith
September 17, 2015 5:59 pm

It’s really rather simple. The models were built on the assumption that CO2 drives warming. Hence the models showed that CO2 causes warming. The fact that the models don’t correlate with reality is completely irrelevant to the modellers and the AGW True Believers.

September 17, 2015 6:10 pm

Anyone who claims that a computer game simulation of an effectively infinitely large, open-ended, non-linear, feedback-driven chaotic system (we don't know all the feedbacks, and even for the ones we do know, we are unsure of the signs of some critical ones), hence subject to, inter alia, extreme sensitivity to initial conditions, is capable of making meaningful predictions over any significant time period is either a charlatan or a computer salesman.
Ironically, the first person to point this out was Edward Lorenz – a climate scientist.
You can add as much computing power as you like, the result is purely to produce the wrong answer faster.
So the fact that they DO appear to give relatively consistent answers – albeit entirely incorrect ones – is evidence that someone is extracting the urine.
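The sensitivity-to-initial-conditions point above is easy to demonstrate with the standard toy example, the logistic map at r = 4 (a textbook chaotic system, used here purely as a sketch, not as a stand-in for any climate model):

```python
def logistic(x, r=4.0):
    """One step of the logistic map; at r = 4 it is a textbook chaotic system."""
    return r * x * (1.0 - x)

a, b = 0.400000, 0.400001   # two starting states one part in 400,000 apart
max_gap = 0.0
for _ in range(100):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))

# the microscopic initial difference grows until the trajectories are unrelated
print(max_gap)
```

Within a few dozen steps the two trajectories bear no resemblance to one another, which is Lorenz's point in miniature.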

Reply to  catweazle666
September 17, 2015 8:25 pm

Judith Curry quotes Edward Lorenz on her blog. Here’s the part she chose to highlight:

Provided, however, that the observed trend has in no way entered the construction or operation of the models, the procedure would appear to be sound. link

What I take from that is this; your model should contain the physics and the starting conditions. A valid model will not be an exercise in curve fitting.
One thing I haven’t seen in a discussion of chaotic system models, is the idea of attractors. After enough runs, valid models of chaotic systems should show where the attractors are.

Reply to  commieBob
September 17, 2015 9:33 pm

Could you elaborate on this idea for me?

Reply to  commieBob
September 17, 2015 10:34 pm

brambo4 says:
September 17, 2015 at 9:33 pm
Could you elaborate on this idea for me?

There are actually two ideas.
First, it seems to me that Lorenz is agreeing with Mike Jonas. If you have to tune a model to match the historical record, it means that you don’t understand the system well enough to model it correctly from basic principles. That means your model’s predictions aren’t reliable.
Second, even though a chaotic system is hard to predict, it will tend to certain behaviours. For instance, Detroit is warmer in July than January. July will have a certain average temperature (about 76 °F). If you run your model once, it probably won’t achieve exactly that temperature. The model might give 90 or it might give 60. It’s a chaotic system so that’s OK. If you run the model enough times you might notice that many of the results end up somewhere around 76. We would say that 76 is the attractor for July. wiki

Reply to  commieBob
September 17, 2015 11:48 pm

commieBob, Yes, that is why they average model runs. But the modelled surface temperatures have no realistic attractors. They should stop and notice that.
If the modelled surface temperatures have an attractor it’s somewhere so much hotter than anything anyone has ever seen in the historical record – it needs real evidence to support it. But there isn’t any evidence. Just unreliable modelled guesswork.
The models keep getting hotter.
And the planet can’t keep up.

Reply to  commieBob
September 18, 2015 5:49 am

I would actually argue the other direction. Even exceedingly simple models can take into account the basic effects through parameterization, but they must be taken as very rough estimates of future behavior.
For example, the simplest possible climate model is to simply do a two-point fit of a logarithmic curve between CO2 and temperature. This gives a rough estimate of ~1.5-2C/doubling of CO2. It assumes that all warming is due (directly or indirectly) to CO2 and that nothing else is happening. Since we have evidence that we are in a naturally warming period, this gives us a reasonable upper limit on the warming we can expect.
However, that is all a parameterized model can do: give us a general guide to what to expect from a certain effect absent any outside forces. While you can use a lot more points and a lot more computing power, the core, horrific assumption is still present: nothing that isn’t parameterized has an effect on the result. In climate, where most of the known forces have vaguely known large-scale effects and many effects have simply unknown causes, this core assumption is fatal for any long-term forecast.
The problems come when you try to take these rough estimates and make strong predictions from them.
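The two-point logarithmic fit described above is one line of arithmetic. The inputs below are illustrative round numbers (~0.85 C of warming while CO2 rose from ~280 to ~400 ppm), not the commenter's exact figures:

```python
import math

def sensitivity_per_doubling(dT, c0, c1):
    """Two-point fit: attribute all observed warming dT (deg C)
    to CO2 rising from c0 to c1 (ppm)."""
    return dT / math.log2(c1 / c0)

print(round(sensitivity_per_doubling(0.85, 280.0, 400.0), 2))  # 1.65
```

About 1.65 C per doubling, inside the ~1.5-2 C range quoted above; the "all warming is CO2" assumption is what makes it an upper limit.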

Anne Ominous
Reply to  commieBob
September 19, 2015 5:00 pm

My guess is that the main attractor is somewhere between the Minoan Warm Period and the Little Ice Age… kind of like what we have now.
That it tends to cycle between those extremes — albeit somewhat unpredictably, with today’s knowledge — suggests that there is at least a weak attractor somewhere between.

DD More
Reply to  catweazle666
September 18, 2015 8:32 am

To sum up the computer modeling tuning.
From my high school math teacher’s post board –
Flannegan’s Finangling Fudge Factor – “That quantity which, when multiplied by, divided by, added to, or subtracted from the answer you get, gives you the answer you should have gotten.”
Only successfully used when the correct answer is known.

Gilbertl K. Arnold
Reply to  DD More
September 18, 2015 1:04 pm

Actually in my engineering classes the FFFF was called the UEFF (Universal Engineering Fudge Factor) or even more simply: Finagles Constant.

Reply to  Gilbertl K. Arnold
September 18, 2015 4:31 pm

Often referred to as the ‘Factor of Safety’!

Anne Ominous
Reply to  catweazle666
September 19, 2015 5:15 pm

Not just the simplest model: arguably any simple model should not have CO2 as a component at all. The Monckton et al. simple model is an example. Despite all the criticism, it arguably models climate better than 97% of the “official” climate models that do include CO2.

Reply to  Anne Ominous
September 19, 2015 5:45 pm

Anne Ominous:
The Monckton model has the logical shortcoming that the “global warming” in a given interval of time is multivalued, negating the law of non-contradiction (LNC). Thus, for example, if the proposition is true that there has been no global warming in a given interval the proposition may also be true that there has been global warming in this interval. For mutually exclusive propositions to be true is impossible under the LNC but possible under negation of the LNC. (A proposition is “negated” through operation on it by the NOT operator.)

Anne Ominous
Reply to  Anne Ominous
September 19, 2015 8:27 pm

Can you provide a reference for this? Obviously if what you say is true then it must be invalid, but I must see evidence before accepting the assertion.
Regardless, it was only an example and the main point still holds: There are simple climate models that do not include CO2 at all, which do a much better job than, say, the CMIP5 average. Or actually nearly all of the CMIP5 models.

Reply to  Anne Ominous
September 19, 2015 8:51 pm

Anne Ominous:
Looks like you’ve confused me with benofhouston.

September 17, 2015 6:11 pm

I would add this point from experience from working within government research:
Where there are multiple variables, the variable which specifically pertains to government research is often elevated above other variables, for a variety of reasons, but usually it also happens to benefit those doing the research.
Moreover they usually don’t see anything wrong with this, as it solves certain problems and makes management of complex issues much easier, and at the same time promotes the agency’s profile. But they genuinely believe there is nothing wrong with assigning high importance/influence to their chosen variable.
Whilst they are aware that there are other variables and many of these are uncertain, they often justify their diminished importance by the 90/10 (or sometimes also called the 80/20) rule, i.e. something that happens only 10% of the time must have more or less only a 10% effect, or be of 10% importance, and so on. (They assume linearity to an absurd degree). They fail to notice that frequency of occurrence or probability doesn’t equate to magnitude or effect, and moreover that within some systems, the entire system collapses without the 10% that they assign as ‘less important’. And in still other systems, the so called 10% controls most, or all, of the rest of the system. Their assignment of high importance to their chosen variable is in many cases purely arbitrary.

September 17, 2015 6:19 pm

The models that generated the red line of your graphic do not make “predictions.” They make “projections.” When the two words are used as synonyms the result is to enable applications of the equivocation fallacy that lead people to false or unproved conclusions and the resulting public policies.

Reply to  Terry Oldberg
September 17, 2015 6:55 pm

You should tell this to Barack Obama and all the other world leaders who treat them as predictions.

Reply to  braddles
September 17, 2015 7:27 pm

The most charitable interpretation of the evidence is that Obama et al have been duped by applications of the equivocation fallacy.

Curious George
Reply to  braddles
September 18, 2015 7:32 am

As models make no predictions at all, the question is: how reliable are their projections?

Reply to  Terry Oldberg
September 17, 2015 6:57 pm

“Predictions” passes the duck test, so “predictions” it is.

Reply to  Mike Jonas
September 17, 2015 8:17 pm

Mike Jonas:
It may pass the duck test. However, it does not pass the logic test.

Reply to  Mike Jonas
September 17, 2015 9:10 pm

With reference to the following argument:
Major premise: A plane is a carpenter’s tool.
Minor premise: A Boeing 737 is a plane.
Conclusion: A Boeing 737 is a carpenter’s tool.
does “plane” pass the duck test? If not why not?

Reply to  Mike Jonas
September 17, 2015 10:28 pm

A Carpenter’s Plane does not look like a Boeing 737, it does not swim like a Boeing 737 and it does not quack like a Boeing 737. So it fails your duck test.

Reply to  Mike Jonas
September 18, 2015 8:59 am

Mike Jonas:
I had hoped that you would apply the logic test rather than the duck test. Had you done so you would have observed that the faulty conclusion had been drawn from an argument that a Boeing 737 was a carpenter’s plane. Upon further reflection you could have discerned the cause: the word “plane” is polysemic and changes meaning in the midst of the argument. An argument in which a term changes meaning is called an “equivocation.” It looks like an argument having a true conclusion (a syllogism) but isn’t one. Thus, though one can properly draw a conclusion from a syllogism the same is not true of an equivocation.
In the literature of global warming the word “predict” is polysemic. Thus through the use of this word faulty conclusions can be and often are drawn from arguments. Conventionally this problem is avoided by reserving the word “predict” for one meaning and using the word “project” for the other. Under this convention, past and current global warming models make projections. They do not make predictions. These models are incapable of making predictions because the underlying statistical populations have yet to be identified.

Reply to  Mike Jonas
September 18, 2015 5:50 am

If I say “I believe that on December 24, 2015, mankind will be wiped out,” is that a prediction? Yes. It just happens to be a prediction that is also my own belief.
By the same token, if I say “My model run projects that on December 24, 2015, mankind will be wiped out,” then my model has made a prediction, even if it is only projected from model inputs. It is a prediction that just happens to be based on my model’s projection.
All predictions, no matter their source or origin, must be vetted against reality. Otherwise, it is not science.

Reply to  Arsten
September 18, 2015 2:53 pm

A “prediction” is an example of a proposition but “I believe that on December 24, 2015, mankind will be wiped out” is not an example of a proposition.

Reply to  Mike Jonas
September 18, 2015 11:56 am

“1540-50; < Latin praedictus, past participle of praedīcere to foretell, equivalent to prae- pre- + dic-, variant stem of dīcere to say + -tus past participle suffix; see dictum
verb (used with object)
1. to declare or tell in advance; prophesy; foretell: to predict the weather; to predict the fall of a civilization.
verb (used without object)
2. to foretell the future; make a prediction.
Synonyms: 1, 2. presage, divine, augur, project, prognosticate, portend. Predict, prophesy, foresee, forecast mean to know or tell (usually correctly) beforehand what will happen. To predict is usually to foretell with precision of calculation, knowledge, or shrewd inference from facts or experience: The astronomers can predict an eclipse; it may, however, be used without the implication of underlying knowledge or expertise: I predict she’ll be a success at the party. Prophesy usually means to predict future events by the aid of divine or supernatural inspiration: Merlin prophesied the two knights would meet in conflict; this verb, too, may be used in a more general, less specific sense: I prophesy he’ll be back in the old job. To foresee refers specifically not to the uttering of predictions but to the mental act of seeing ahead; there is often (but not always) a practical implication of preparing for what will happen: He was clever enough to foresee this shortage of materials. Forecast has much the same meaning as predict; it is used today particularly of the weather and other phenomena that cannot easily be accurately predicted: Rain and snow are forecast for tonight. Economists forecast a rise in family income.”
“noun (ˈprɒdʒɛkt)
1. a proposal, scheme, or design
2. a. a task requiring considerable or concerted effort, such as one by students
b. the subject of such a task
3. (US) short for housing project
verb (prəˈdʒɛkt)
4. (transitive) to propose or plan
5. (transitive) to predict; estimate; extrapolate: we can project future needs on the basis of the current birth rate
6. (transitive) to throw or cast forwards
7. to jut or cause to jut out
8. (transitive) to send forth or transport in the imagination: to project oneself into the future
9. (transitive) to cause (an image) to appear on a surface
10. to cause (one’s voice) to be heard clearly at a distance
11. (psychol) a. (intransitive) (esp of a child) to believe that others share one’s subjective mental life
b. (transitive) to impute to others (one’s hidden desires and impulses), esp as a means of defending oneself. Compare introject
12. (transitive) (geometry) to draw a projection of
13. (intransitive) to communicate effectively, esp to a large gathering
Word Origin
C14: from Latin prōicere to throw down, from pro-1 + iacere to throw
Synonyms: 1. proposal. See plan. 6. contrive, scheme, plot, devise. 8. predict. 18. bulge, obtrude, overhang.”
I see considerable overlap in the dictionary definition of these two words you want to pretend are so different, Terry.
The definition of each includes the other within the range of definitions.
They are even each listed as synonyms of each other.
You are engaging in sophistry.
And as for what the most charitable interpretation of anything is, what has that got to do with the reality of the situation?
Are you seriously suggesting that everyone must give the benefit of the doubt to warmistas and all of their jackassery?
That everyone is merely "taken in"?
It is all a big misunderstanding?
Everyone here that is willing to be even slightly honest and frank knows better.
Except maybe you, I suppose.

Reply to  Mike Jonas
September 18, 2015 3:12 pm

Your argument fails to come to grips with the role of the equivocation fallacy in global warming arguments. See the peer-reviewed article at for details.

Reply to  Mike Jonas
September 18, 2015 4:10 pm

Lewis P Buckingham
It is the paper that was peer reviewed.

Reply to  Mike Jonas
September 18, 2015 4:50 pm

Lewis P Buckingham
Actually, I did post a link.

Reply to  Mike Jonas
September 18, 2015 4:56 pm

All I’m asking you is to provide me with the reference to the peer reviewed journal where the article was published.

Heck, I’d be impressed if he could tell us who peer-reviewed all these climate studies papers that have been found exaggerated and false.

Reply to  Mike Jonas
September 18, 2015 5:02 pm

Lewis P Buckingham
I didn’t claim the blog was peer reviewed. I claimed the paper was peer reviewed. It was. End of story.

Reply to  Mike Jonas
September 18, 2015 5:24 pm

Lewis P Buckingham
The paper was published without fee and under peer review in the blog of William M. Briggs. Briggs is a meteorologist, PhD level statistician and professor of statistics.

Reply to  Mike Jonas
September 18, 2015 5:47 pm

Lewis P Buckingham
I made no such error. Also, my alleged error is not an application of the equivocation fallacy contrary to your assertion. If you have nothing other than false or misleading statements to contribute to our conversation its time to end it.

Reply to  Mike Jonas
September 18, 2015 7:08 pm

Terry Oldberg,
You’re just having fun with Lewis Buckingham, I can see that.
I’ll bet you like to pull the wings off flies, too. ☺
I easily found the link to the peer reviewed paper after reading your 3:12 post above. But apparently Buckingham is convinced he’s got you — when it’s just the opposite.
He’s not too smart, you see.
Regarding the climate peer review system, Lewis B ought to read the Climategate email leaks. It’s clear that climate peer review is very corrupted, so those of us who have read their emails know that climate peer review isn’t worth spit. If Lewis likes, I will provide links to the Climategate emails and related commentary. He could learn a lot, if he wanted to.
Buckingham is also unaware that in addition to Dr. Briggs, Terry Oldberg is a published, peer reviewed author. Since Bucky is so impressed by peer review, it’s a good time to ask him how many peer reviewed papers he has had published?
Really, I shouldn’t have written this. You could have kept spinning up Buckingham until his head exploded by merely not doing his homework for him.
Bucky, the answer is there, just like Terry told you. It was always there. He was just having some fun with you.

Reply to  dbstealey
September 18, 2015 8:44 pm

As always I am in awe of the moral courage that has been displayed by dbstealey in ceaselessly pointing out, against the prevailing view, that none of the IPCC climate models have been validated. That they have not been validated has the significance that these models are not scientific. Thus, the impetus toward regulation of CO2 emissions lacks a basis in science.

Reply to  Mike Jonas
September 19, 2015 11:01 am

you said:
“Your argument fails to come to grips with the role of the equivocation fallacy in global warming arguments. See the peer-reviewed article at for details.”
I was not attempting to “come to grips” with anything.
I was demonstrating the two words which you claim have such distinct and separate meanings actually do not.
They are synonymous in common usage.
Below you found a very long word which is not in common usage to make a point that everyone understands: Words have several meanings, and some of these can be completely distinct, based on context.
I am very much aware of this, and that is why I did not edit out all the other definitions of the two words in my post…I wanted to show that there are several ways to use each word.
And words mean what people understand them to mean, and how people use them to convey ideas and thoughts.
The origins of two words may be separate and distinct, but if the meanings overlap, then all that matters is the context, how the words were intended, and how they are interpreted. It matters not that in the vernacular of some obscure profession the words are used to convey some subtle or not-so-subtle difference of meaning; what matters is that in common usage, these words convey the meaning that everyone thinks they mean.
And the people who use climate models for their alarmist purposes make none of the distinction that you are so fervently trying to press your point about.
You have failed to come to grips with my point that words mean what people intend them to mean, and the idea that they convey in a communication.
And synonymous words convey the same meaning when used in identical context.

Reply to  Mike Jonas
September 19, 2015 11:05 am

BTW, it is no big revelation to point out that alarmists use any number of fallacies to push their meme.
Call it by whatever name you want…it amounts to the same thing…equivocation fallacy or outright lie…it is just another not-so-tasty layer in the layer cake of warmista BS.

Anne Ominous
Reply to  Mike Jonas
September 19, 2015 8:46 pm

“And the people who use climate models for their alarmist purposes make none of the distinction that you are so fervently trying to press your point about.”
Actually that IS his whole point. That distinctions were not made where they should have been.
I can appreciate that it is difficult to separate his argument from semantic nitpicking, but again that is a large part of his point. Subtle context-shifting has taken place in order to press an agenda.

Reply to  Anne Ominous
September 19, 2015 9:28 pm

Anne Ominous:

Anne Ominous
Reply to  Mike Jonas
September 19, 2015 8:49 pm

I think maybe some people here would appreciate the point more when it is put in terms of context-shifting rather than semantics. We all know what context shifting does.

Reply to  Mike Jonas
September 19, 2015 9:02 pm

Lewis P Buckingham :
Your statement that: “Since neither Stealey nor Oldberg can point out where Terry’s “article” was published…” is a lie.

Reply to  Mike Jonas
September 20, 2015 8:59 am

Lewis P Buckingham
That I “got caught misusing the term ‘peer-review’” is an outrageous lie.

Reply to  Mike Jonas
September 20, 2015 10:28 am

L. Buckingham says:
“Briggs blog is not peer reviewed”.
Misdirection. No one ever said it was.
(And to be clear, the reference, as always, is to the Briggs et al. peer reviewed paper.)
Terry Oldberg:
Don’t show him where it’s linked! You already posted it once, make Bucky find it himself.
Don’t spoil the fun!

Reply to  Mike Jonas
September 20, 2015 10:56 am

L. Buckingham says:
“Stealey, I see you are unable to post the citation either.”
Au contraire, my amusing friend. I am fully able to post the citation. I found it easily yesterday.
But I am currently unwilling to give it to you, since it is so much fun watching you dig your hole deeper with every comment.
Since I found it so easily, surely you could, too… or can’t you?

Reply to  Mike Jonas
September 20, 2015 11:18 am

Buckingham sez:
PS Stealey…..when you post “, is to the Briggs et al. peer reviewed paper.” you have incorrectly attributed authorship to Briggs, as the paper Oldberg was referring to was his paper.
Keep diggin’ your hole deeper, Bucky. ☺

Reply to  Mike Jonas
September 20, 2015 11:19 am

Anne Ominous,
I do appreciate your effort to bridge the gap here, if one indeed exists.
I said:
“And the people who use climate models for their alarmist purposes make none of the distinction that you are so fervently trying to press your point about.”
To which you said:
“Actually that IS his whole point. That distinctions were not made where they should have been.”
It seems to me that if this were the whole point being made, then there would be no discussion (or is it an argument?) on this topic. I am not going to recount the entire thread here, but reading it above and below shows clearly that it is not the whole point Terry is making.
He is also making a point about people on this thread not knowing what they are talking about if they do not agree that he is the final arbiter of what these words mean. And what is logical. And who is to blame for the damage being done…implying several times that skeptics who make his so-called semantic error are contributing to the problem.
Again, I appreciate your input, but now I for one am tired of kicking this horse.
– Me, Nicholas

Reply to  Mike Jonas
September 20, 2015 1:40 pm

Lewis P Buckingham
It is unscrupulous of you to repeatedly claim that a citation has not been provided to you when it has been provided and to use this false claim in an attack on my character. Are you aware of the fact that personal attacks are illegal when unjustified?

Anne Ominous
Reply to  Mike Jonas
September 20, 2015 9:24 pm

With all respect, you are mistaken. You do not seem to be able to separate his technical points from semantic gamery; again the fact that people use those words in different ways is indeed a very major part of his point, which apparently has not sunk in.
I can appreciate your weariness from what you seem to see as boxing with shadows, but the shadows are of your own making.
Once again: the points he is making are very technical, but you do not seem willing to embrace them even as hypotheticals, then see where that leads. Instead you dismiss them out of hand.
That is something about which I have no power to help you.

Reply to  Terry Oldberg
September 17, 2015 8:06 pm

A projection is a prediction that lost its nerve.
They need to make predictions in order to do science. If you can’t predict something from your theory, you have no theory. But they know they’re garbage. So they pretend they aren’t predictions – hence the fake name.

Reply to  Hivemind
September 17, 2015 8:41 pm

No. A “projection” lacks the logical structure of a “prediction.”

Reply to  Hivemind
September 18, 2015 9:59 am

A projection is an “estimation of what might happen, based on the current trend”.
Also, humans make predictions, not models or computers.
Just my 2 cents.

Reply to  Hivemind
September 18, 2015 12:13 pm

Exactly Dahlquist.
Computer models do not issue press releases written in breathless and urgent tones of impending doom, but the people who engage in “climate science” sure do.
The writers and newsmen and women who pass these stories along make no distinction, and neither do the teachers who scare our children and indoctrinate them with lies and made-up scare stories, telling them they are doomed, that they are cursed to be living on a poisoned world, that it is their parents’ and grandparents’ fault, and especially the fault of those who attempt to speak some sanity and truth to them: the so-called skeptics and deniers.
Terry, no such distinction, as you want to try to tell everyone here they must hew to, is made by the policymakers and the regulatory agencies and governments who are engaged in this massive hoax and fraud.
I challenge you to find me some instances of any of the above named people equivocating in their pronouncements, that these are mere “projections”, subject to verification.
The truth is that it is all sold as a settled subject…it is going to happen.
Are you seriously arguing that it is not?

Jaakko Kateenkorva
Reply to  Hivemind
September 19, 2015 3:24 am

A “projection” lacks the logical structure of a “prediction.”

If you argue climate models cannot ‘predict’ because they ‘lack logical structure’, IMO consider the mission complete.

Reply to  Hivemind
September 19, 2015 4:46 pm

Jaakko Kateenkorva:
Yes, it is accurate to state that climate models cannot predict because they lack logical structure. This structure can be described by reference to the idea of a “proposition.” In logic, every proposition has a probability of being true. This probability has a numerical value.
Probabilities belong to the theoretical side of science. Relative frequencies belong to the empirical side. When values are claimed for probabilities by a model, these claims are testable by making counts called “frequencies” in a sample drawn from the underlying statistical population. A relative frequency is a ratio of two of these frequencies. A model that survives testing of all of its claimed probability values is said to be “validated.” A model that does not survive testing is said to be “falsified by the evidence.” For brevity I’ve glossed over complications resulting from sampling error.
A “prediction” is a kind of proposition. Associated with this proposition is the logical structure that was developed in the previous two paragraphs. For the climate models none of this structure exists.
Replacing this structure are applications of the equivocation fallacy that lead innocent folks to believe they are looking at scientific findings when they are not. In their innocence these folks cannot believe their governments would stoop so low.
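The validation criterion sketched in this comment (claimed probabilities checked against observed relative frequencies) can be put in code. The fixed tolerance below stands in for the sampling-error analysis the commenter glosses over, and `validated` is an invented name:

```python
import random

def validated(claimed_p, outcomes, tol=0.05):
    """Naive check: does the claimed probability match the observed
    relative frequency within a tolerance?"""
    rel_freq = sum(outcomes) / len(outcomes)
    return abs(rel_freq - claimed_p) <= tol

random.seed(1)
# sample from a population in which the event truly occurs 30% of the time
sample = [1 if random.random() < 0.3 else 0 for _ in range(10_000)]
print(validated(0.30, sample))  # the claimed value survives testing
print(validated(0.60, sample))  # the claimed value is falsified by the evidence
```

The point being made is that no such check is even possible for the climate models, because no underlying statistical population has been identified to sample from.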

Keith Willshaw
Reply to  Terry Oldberg
September 18, 2015 1:17 am

The use of the word ‘projection’ is aimed at giving them a get-out. The reality is that the climate scientists treat their model results as predictions when they say ‘this is what will happen if you don’t cut CO2 emissions’.
That is a prediction, but by using weasel words they aim to cover their asses if they get it wrong.

Reply to  Keith Willshaw
September 18, 2015 1:56 pm

Keith Willshaw:
No. Trenberth was truthful when he wrote in a post to the blog of Nature that the climate models made projections rather than predictions. Predictions require the existence of the statistical population underlying the model. Here there isn’t one.

Reply to  Keith Willshaw
September 18, 2015 8:09 pm

You’ve ignored Keith’s point, which is the correct one.
Remind me again why we’re supposed to believe that increased CO2 will cause the earth to burn up. Oh yeah, the reason is: BECAUSE THAT’S WHAT THE MODELS CLAIM WILL HAPPEN.
To now say that any disagreement of the model “projections” with observations should be disregarded, provokes the obvious question. Why should we pay attention to warmist “projections” in the first place? Apparently they bear no relation to real temperatures out in the real world. Who cares what they say.

Reply to  TYoke
September 18, 2015 9:33 pm

The IPCC is working a scam that is based upon application of the equivocation fallacy. I am aware of this scam. You are oblivious to it.
The scam leads well meaning but scientifically naïve people to believe that increasing the CO2 concentration will cause the earth to burn up or something similar. The antidote to this scam is to expose it. Being oblivious to it you resist exposure of this scam thus playing into the hands of people that you and I despise.

Reply to  Keith Willshaw
September 19, 2015 10:46 am

Terry, your “antidote” seems to many of us like providing cover for the alarmists.
You seem to be of the opinion that your high minded fascination with words for which there is a distinction without a difference places you in a superior moral position, or somehow gives you a leg up in the argument.
It is evident to me from many of the preceding comments that I am not the only one who thinks you have it exactly backwards.
The letter calling for RICO prosecution of skeptics makes your position not just silly but dangerous, as this debate is showing signs of morphing into an actual war.

Chris Wright
Reply to  Terry Oldberg
September 18, 2015 2:45 am

That’s complete nonsense. Of course they’re predictions. If I publish a “projection” of what will happen in the future it is also a prediction.
Of course, I might use the term “projection” if I had little confidence that my prediction would turn out right.
The complicating factor is that there are in fact a set of predictions, each one for a different future CO2 scenario. So it’s more precise to call them a set of conditional predictions. But they are predictions, pure and simple. And predictions that have turned out to be hopelessly wrong.

Reply to  Chris Wright
September 18, 2015 2:00 pm

Chris Wright
September 18, 2015 at 2:45 am.
That’s complete nonsense. Of course they’re predictions. If I publish a “projection” of what will happen in the future it is also a prediction.
Hello Chris.
Maybe you are right… but from my point of view you seem not to be, and most likely what you say in the passage above is nonsense.
Especially on the climate issue, the difference between predictions and projections is huge.
The two represent completely different ways of viewing and approaching climate and climate science.
Being able to claim the possibility of accurate climate projections of one kind or another does not imply an equal ability to make climate predictions.
Climatology and climate science claim to have some kind of projections about climate simply by stating what the expected range of natural climate variation would or could be, in temperature and in CO2 concentration at the very least, regardless of how accurate that range may be.
Officially that range stands at roughly 4-7C of temperature variation and about 120 ppm of CO2 variation.
That is a projection, not a prediction: the estimated range of expected natural variation, which climate must always remain within.
That range means the climate could be in an interglacial or a glacial period, or any other kind of climatic period, at any given moment, while still staying inside that range of variation.
In climate, that is what projections are: the range, the boundaries, that climate change and climate variation must stay within at any given moment and period.
Predictions, on the other hand, imply an ability to estimate the actual state the climate is in during the time period in question.
You see, we can make assumptions about why and how the YD period, the Roman Warm Period, or the LIA happened, but we do not actually know.
That means there is a lack of ability to predict, since predictions require far more knowledge than projections do.
Leaving aside the AGW era for a moment: the climate projections, accurate enough or not, combined with the climate data, suggest that climate must now be at the end of an interglacial and the beginning of the next glacial period. Yet the inability to predict does not allow an estimate of the exact path, which could be a steady change or a drastic one, whether through another LIA, another YD, or a global warming period, an equilibrium or a transient climate state, now or in the near future.
For as long as the projections leave room for any of these, any could be possible; but the inability to predict means the particular path cannot be estimated in detail, even when the projections show the general direction, unless better knowledge and understanding of climate is achieved.
At the moment there is not enough of that knowledge to allow predictions, but it is claimed that there is enough for projections of climate and climate change.
And the projections do not even have to show the general path the climate will take in a future period in order to stand as accurate or good enough; the projections only need to conform well enough with the past long- and short-term data, regardless of the future.
The same cannot be said for predictions.
No computer model can yet predict the next climate-change path, but according to our understanding of climate and climate change, computer modelling of climate projections is possible, and perhaps, some time after that, a move towards computer modelling of predictions.
To me, when I am told about or read of “climate predictions”, it sounds more like wild speculation: a speculative conclusion reached, with no regard for any rationale, by simply taking the climate projections and misleading everyone with the claim that there is not much difference between the two, when actually there is a huge one. That is why we have two different words and concepts to rely on, however closely related they may be.
Sorry for going on so long, and maybe I am too far off the mark, or even wrong, but nevertheless this is my opinion and my understanding of this particular aspect.
Projections, and the ability to make projections, do not imply predictions or an ability to predict by default.
It takes a lot more to predict with any expected “accuracy” than to project.

Reply to  Chris Wright
September 18, 2015 2:02 pm

Chris Wright:
Your understanding is incorrect. To make predictions a model has to have an underlying statistical population. Here there isn’t one.

Reply to  Chris Wright
September 19, 2015 10:50 am

Even if what you say is in some way technically true, it matters not one whit, as the predictions are not made by the models, but by the alarmists who misunderstand or perhaps merely misuse them.
What the hell is the difference how many different ways they have to be wrong?
We have numerous quotes right here from several of the people who create models, and use the results to further the alarmist meme, using the word you say does not apply.

Reply to  Terry Oldberg
September 18, 2015 7:44 am

The semantics game of attempting to differentiate a “projection” from a “prediction” is just handwaving to obfuscate the fact that the climate models fail. The two words are used interchangeably in the peer-reviewed literature.
A few quotes from a single paper, Lean&Rind(2009) ‘How will Earth’s surface temperature change in future decades’:
– “Smith et al. [2007] forecast rapid warming after 2008, with “at least half of the five years after 2009 … . . predicted to exceed (1998) the warmest year currently on record.” ”
– “our empirical model predicts that global surface temperature will increase at an average rate of 0.17 +/- 0.03°C per decade in the next two decades. The uncertainty given in our prediction …”
– “The predicted warming rate will not, however, be constant on sub-decadal time scales over the next two decades. As both the anthropogenic influence continues and solar irradiance increases from the onset to the maximum of cycle 24, global surface temperature is projected to increase 0.15 +/- 0.03°C in the five years from 2009 to 2014.”
– “However, our estimated annual temperate[sic] increase of 0.19 +/- 0.03°C from 2004 to 2014 (Figure 1) is less than the 0.3°C warming that Smith et al. [2007] predict over the same interval.”
– “According to our projections of annual mean regional surface temperature changes”
– “Our projections are consistent with IPCC long range forecast that warming will be greatest …”
– And from the summary, “According to our prediction, which is anchored in the reality of observed changes in the recent past …”
Clearly Lean&Rind(2009) use “forecast”, “prediction”, and “projection” interchangeably. The current attempt to distinguish a difference is mere handwaving obfuscation, due to the colossal failure of the alarmists’ forecasts/projections/predictions from their climate models. This is seen in the observed temperature changes over the very 5 and 10 year time periods that these peer-reviewed papers chose:
Actual temperature change from 2004 to 2014 was a cooling of 0.03°C, not a 0.19°C increase as L&R2009 forecast/projected/predicted, and not a 0.3°C increase as Smith2007 forecast/projected/predicted.
And actual temperature change from 2009 to 2014 was a cooling of 0.13°C, not a 0.15°C increase as L&R2009 forecast/projected/predicted.
Obviously the “anchor” line L&R2009 mentioned was not anchored to reality, as the observed temperature changes expose the inability of the climate models to “forecast”/“project”/“predict” accurately. And the whole CatastrophicAGW-by-CO2 meme is built on these flawed, faulty, falsified, failed climate models, which von Storch admitted didn’t match real observed temperatures at even a 2% confidence level.
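The discrepancies described in this comment are simple subtractions; a minimal sketch makes the comparison explicit, using only the figures as quoted above (the labels are shorthand for the papers named in the comment):

```python
# Forecast vs. observed temperature change (deg C), figures as quoted above
cases = [
    ("L&R2009, 2004-2014", 0.19, -0.03),
    ("Smith2007, 2004-2014", 0.30, -0.03),
    ("L&R2009, 2009-2014", 0.15, -0.13),
]

# how far each forecast overshot the observed change
misses = {label: round(forecast - observed, 2)
          for label, forecast, observed in cases}

for label, miss in misses.items():
    print(f"{label}: overshoot {miss:.2f} C")
```

On these quoted figures, every forecast overshoots the observed change by at least 0.2 °C over its interval.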

Reply to  RealOldOne2
September 18, 2015 3:55 pm

My argument is unrelated to the issue of “arguing over semantics” though you assume the opposite. You could bone up on the details at .

Reply to  Terry Oldberg
September 19, 2015 6:43 am

Let’s say an employee making $20,000 a year receives a 10% raise this year. Overjoyed, the employee projects a 10% raise over each of the next 20 years and predicts he will be making about $150,000 in 20 years. And each day he happily goes to work knowing with mathematical certainty the brightness of his future.
That’s what linear climate models are all about: fanciful projections leading to silly, unfounded predictions.

Reply to  Alx
September 19, 2015 5:16 pm

As you use “prediction” in your example the term is polysemic (has more than one meaning). Use of a polysemic term is common and is harmless in many contexts. It is harmful when the context is an argument and the term changes meaning in the midst of this argument. That it changes meaning makes of this argument an “equivocation.” An equivocation looks like a syllogism (an argument having a true conclusion) but isn’t one. Thus, while one can properly draw a conclusion from a syllogism the same is not true of an equivocation. To draw such a conclusion is the “equivocation fallacy.” If you wish to deceive someone in making an argument, application of this fallacy is an effective way to do so because it is exceptionally hard to spot. If you wish not to deceive someone in making an argument, use of monosemic terms is an effective antidote to deception because it makes equivocation impossible.

Reply to  Terry Oldberg
September 24, 2015 9:09 am

Terry Oldberg, I think I understand your point, even though I don’t completely understand the scientific and methodological reasons underpinning it. I think it’s the same reason that news broadcasters “project” winners in elections, instead of predicting them. It’s the level of certainty that distinguishes one word from the other.

Reply to  katherine009
September 24, 2015 10:49 am

Hi Katherine. Thanks for giving me the opportunity to clarify.
In understanding what I’m trying to say it is best not to dwell on the ways in which “predict” and “project” are used in the English vernacular. Instead, one should understand that specialized meanings for the two words have evolved in the field of global warming climatology and that the two meanings differ. In making an argument about global warming many people assign both meanings to the single word “predict.” When this word changes meaning in the midst of an argument this argument is classified as an “equivocation” in logical terminology.
An equivocation has the unfortunate property of looking like a syllogism. However, while the conclusion of a syllogism is true the conclusion of an equivocation is false or unproved. Thus, while one can properly draw a conclusion from a syllogism, one cannot properly draw a conclusion from an equivocation. To draw a conclusion from an equivocation is called the “equivocation fallacy” in logical terminology.
Application of the equivocation fallacy can be eliminated through disambiguation of the language in which an argument is made, such that each term belonging to this language has a single meaning. However, though doing so costs nothing, many people are resistant to doing so. Some of these people are well meaning but naïve. Others are swindlers.
Several years ago, the chair of Earth Sciences at Georgia Tech asked me to prepare a paper on the topic of “Logic and Climatology” for publication in her blog. In the ensuing study I discovered that the literature of global warming climatology was infested by applications of the equivocation fallacy. When global warming climatology was described in a disambiguated language it became obvious that it was a pseudoscience dressed up to look like a science through applications of the equivocation fallacy. Details on my methodology and findings are available at .

Reply to  Terry Oldberg
September 24, 2015 1:54 pm

Terry, thanks for the response and link to your interesting paper. At the risk of exposing my ignorance for all the world to see, could you please explain what is meant by “statistical population” and what it has to do with the equivocation fallacy? I follow your explanation of the slippery language of climate science, but don’t quite see the whole picture. Thanks again.

Reply to  katherine009
September 24, 2015 5:30 pm

Katherine: Thanks for giving me another opportunity to clarify. Please keep up the good work for as long as you have questions.
A “statistical population” is the set of concrete aka physical objects that are associated with a scientific study. A subset of a statistical population that is available for observation is called a “sample.” Your doctor’s purpose in ordering a sample of your blood may be to gain a sample of your red cells. In this case the sample is a subset of your red cells while the statistical population is the complete set of your red cells. In the conduct of a scientific study one takes a sample for the purpose of gaining information about properties of the associated statistical population. If the sample is “large” this information may be nearly perfect.
The elements of a statistical population reference a “sample space,” that is, a set of mutually exclusive collectively exhaustive outcomes of events. They may additionally reference a “condition space,” that is a set of mutually exclusive collectively exhaustive conditions of the same events. Conditions and outcomes are examples of states of nature.
The elements of a statistical population (the red cells in my example) are called “sampling units.” Sampling units belonging to a sample and sharing the same outcome or condition-outcome pair can be counted. This count is called the “frequency.” The ratio of one of these frequencies to the sum of all of them is called the “relative frequency.”
A science has an empirical side and a theoretical side. A relative frequency belongs to the empirical side. Its theoretical counterpart is called a “probability.” When a scientific model asserts a value for a probability this assertion can be checked by reference to the corresponding relative frequency. This property of a scientific model is called “falsifiability.” A model in which each such assertion is tested without being falsified is said to have been “validated.”
For each of today’s climate models the corresponding statistical population does not exist. There are no samples, sampling units, frequencies, relative frequencies or probabilities. Not a single one of today’s climate models is falsifiable but falsifiability is the mark of a model that is “scientific.”
More seriously damaging than this fiasco is that these models convey no information to a policy maker about the outcomes from his/her policy decisions. They can convey no information because “information” is defined in terms of probabilities and relative frequencies, and these do not exist for any of today’s climate models. Without information about the outcomes of policy decisions, there is no possibility for a government or group of governments to control the climate. Governments are spending their citizens’ money in massive amounts in attempts at controlling the climate under a circumstance in which it is impossible for the climate to be controlled.
This bizarre situation is able to persist because of applications of the equivocation fallacy exploiting polysemic words or word-pairs in the literature of global warming climatology that include the words “predict,” “model” and “science” plus word pairs that include “validate/evaluate” and “predict/project.” (That a word pair is “polysemic” implies that each word belonging to it has a different meaning but the two words are used as synonyms in making an argument.)
When a word or word pair is used in making an argument and changes meaning in the midst of this argument an equivocation is born. When a conclusion is drawn from an equivocation an application of the equivocation fallacy is made. Applications of the equivocation fallacy obscure the fact that there is not currently a logical or scientific basis for regulation of CO2 emissions by a government or group of governments. Global warming climatologists have blown their assignment.
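The falsifiability check this comment describes — comparing a model's asserted probability against an observed relative frequency — can be sketched in a few lines. Everything here is illustrative: the outcome labels, the sample, the claimed probability, and the tolerance are all made up for the example.

```python
from collections import Counter

# Hypothetical sample of event outcomes drawn from a statistical population
sample = ["warmer", "cooler", "warmer", "warmer", "cooler",
          "warmer", "cooler", "warmer", "warmer", "cooler"]

freq = Counter(sample)                      # frequency per outcome class
rel_freq = {k: v / len(sample) for k, v in freq.items()}

claimed_p_warmer = 0.5                      # probability asserted by a model
tolerance = 0.2                             # crude allowance for a small sample

# The assertion is falsifiable: check the claim against the relative frequency
falsified = abs(rel_freq["warmer"] - claimed_p_warmer) > tolerance
```

With a real model one would use a proper statistical test rather than a fixed tolerance, but the structure — claimed probability versus observed relative frequency — is the one the comment describes.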

Reply to  Terry Oldberg
September 25, 2015 7:48 am

Ok, still trying to get hold of this. Wouldn’t a statistical population for climate models be all temperature readings, everywhere, for all time? Or is the problem that temperature readings are not the actual population?

Reply to  katherine009
September 25, 2015 9:41 am

That’s a good question. I’ve found that several bloggers think, as you do, that the set of temperatures is the population, but that’s wrong. In proper statistical jargon, the set of measured temperatures is a “time series” rather than a population.
Recall that sampling units are concrete aka physical objects, e.g. red blood cells. For global warming climatology the sampling units, if they were to be identified, would be the Earth plus its atmosphere in various periods of time, such that the complete set of these periods was a partition (mathematical term) of the time line. One of the tasks that would have to be completed in the design of a scientific study is to identify these periods. Each such period would be associated with an event.
In climatology, the tradition is for an event to last 30 years but if this tradition were to be followed by global warming climatology there would be only five or six observed events going back to the start of the various global temperature time series in the year 1850. That’s too few by a factor of at least 30 for statistical significance of conclusions. Thus, what we are looking at is a duration of each event being no greater than one year. A prediction extends over the duration of an event. Thus, to make predictions over 50 years as climatologists imply they can do is not in the cards. They can only do this because their “predictions” are really logically nonsensical “projections.”
In the design of a scientific study another task would be to identify the sample space. I run into people who think the sample space would be comprised of the set of possible global temperature values but that is impractical because the sample space would contain values of infinite number. Divide up a finite number of measured values among an infinite number of elements of a sample space and the average number of measured values per element of the sample space is 0. The conclusions of such a study would lack statistical significance.
The elements of a sample space are classes of outcomes of events. Global warming climatology is severely short on observed events. Therefore what we are probably looking at is a sample space containing two outcome classes rather than an infinite number of them. One of many possibilities is that one of these classes is defined such that the average value of the global temperature over the duration of an event is greater than the median while the other class is defined such that the average value is less than or equal to the median.
Now that we have made the design decision that the sample space contains two outcome classes we are in a position to do something that we couldn’t do before. This is to construct an imaginary histogram. This histogram has two vertical bars. The height of each bar is proportional to the count of the observed events belonging to the associated outcome class. This count is called the “frequency.” The ratio of the frequency of an outcome class to the sum of the frequencies of the two outcome classes is called the “relative frequency.” Today’s global warming climatologists cannot construct a histogram because they have yet to identify the sample space. To identify the sample space is step one in the design of a study but after 20 years and the expenditure of 200 billion US dollars, they have yet to identify it. This is scientific malpractice on a grand scale!
In order for the model to convey information to a policy maker about the outcomes from his or her policy decisions (a necessity for control of the climate) there must be a condition space as well as an outcome space. There is a measure of the intersection of the condition space with the outcome space that is named after the inventor of information theory, Claude Shannon. Shannon’s measure of this intersection is called the “mutual information.” The condition space should be defined such that the mutual information is maximized. This can be achieved by executing a very complicated optimization. If you ping me I’ll give you a citation toward the literature on how to do this.
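Shannon's mutual information between a two-class condition space and a two-class outcome space, as described above, can be sketched as follows. The condition and outcome labels and the event counts are hypothetical, chosen only to show the calculation.

```python
from math import log2
from collections import Counter

# Hypothetical observed events as (condition, outcome) pairs
events = [("warm", "up"), ("warm", "up"), ("warm", "down"),
          ("cool", "down"), ("cool", "down"), ("cool", "up"),
          ("warm", "up"), ("cool", "down")]

n = len(events)
joint = Counter(events)                  # joint frequencies
cond = Counter(c for c, _ in events)     # condition-class frequencies
outc = Counter(o for _, o in events)     # outcome-class frequencies

# Shannon mutual information (in bits) of the condition/outcome intersection
mi = sum((f / n) * log2((f / n) / ((cond[c] / n) * (outc[o] / n)))
         for (c, o), f in joint.items())
# mi == 0 would mean the conditions carry no information about the outcomes
```

For this made-up sample the conditions carry a little under 0.2 bits of information about the outcomes; maximizing this quantity over candidate condition spaces is the optimization the comment alludes to.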
Do you have any more of your wonderful questions? If so, I’d like to address them.

Reply to  Terry Oldberg
September 25, 2015 7:48 am

PS, thanks for sticking with me on this!

September 17, 2015 6:24 pm

I don’t believe climate models can make very accurate predictions. There are just too many unknowns still, as you rightly point out. However, I do understand why some of the factors you mention are not included. Milankovic cycles, as you already point out, act on time scales far longer than the projections being made with climate models, so the decision not to include them seems justified. ENSO and other oscillations average to zero in the long term, at least as far as we know now; there is, AFAIK, no good indication that they will have a significant long-term contribution. Big volcanic eruptions typically influence climate/weather for several years and are completely unpredictable, so again no reason to include them in projections over several decades imho. All you can expect from climate models is ballpark estimates under certain conditions.

Reply to  Aran
September 17, 2015 6:31 pm

Today’s climate models don’t make predictions.

Reply to  Terry Oldberg
September 17, 2015 6:44 pm

I don’t really want to get into a semantics discussion, but if you do, you will find that I did not claim the models make predictions.

James Hein
Reply to  Terry Oldberg
September 17, 2015 7:13 pm

But politicians and activists in places like Australia treat them as such

Reply to  James Hein
September 17, 2015 8:27 pm

James Hein:
They do.

Reply to  Terry Oldberg
September 17, 2015 7:30 pm

Do they make long term forecasts then?

Reply to  PiperPaul
September 17, 2015 8:30 pm


Reply to  Terry Oldberg
September 17, 2015 8:54 pm

Aran (Sept. 17 at 6:24 pm):
“…I don’t believe climate models can make very accurate predictions.
Aran (Sept. 17 at 6:44 pm):
“…I did not claim the models make predictions.”

Reply to  Terry Oldberg
September 17, 2015 9:50 pm

Terry, if I say that I don’t believe that pigs can fly, I am not claiming pigs can fly, am I?

Reply to  Terry Oldberg
September 18, 2015 7:07 am

Hiding behind semantic quibbles. In this instance, the difference does not matter.

Reply to  MarkW
September 18, 2015 3:20 pm

To call my argument a “semantic quibble” is to set up a strawman and knock it down.

Reply to  Terry Oldberg
September 18, 2015 5:08 pm

“Today’s climate models don’t make predictions.”
Today’s climate models purely provide employment for people who would not otherwise be able to obtain a proper job, such as shelf stacking in a supermarket.

Reply to  Terry Oldberg
September 20, 2015 10:37 am

Terry Oldberg says:
Today’s climate models don’t make predictions.
Agreed. But as always, it’s the public’s perception that matters most in politics — and the “dangerous man-made global warming” narrative is most certainly politics, not science.
The ‘science’ part is only a thin veneer that covers up the hoax; science is the “hook”, and the taxpaying public is the mark. Elmer Gantry would be green with envy at this scam.
The goal is the passage of a ‘carbon’ tax, which would give the government what every government throughout human history has craved: the means to tax the air we breathe.
Don’t let them do it. Their models are bogus. Push back!

Reply to  Aran
September 17, 2015 7:17 pm

I agree that ENSO and other oscillations averaging to zero in the long term is a reasonable proposition. Their great significance here is in Footnote 4.

Reply to  Mike Jonas
September 17, 2015 7:48 pm

Ah yes, that’s a fair point. Would be a good subject for some number crunching as the ENSO index goes back to 1950

Reply to  Mike Jonas
September 17, 2015 8:14 pm

Although ENSO, PDO, AMO, etc. do average to zero in the long term, these models aren’t making predictions for the long term. They are making predictions at 15 years, well inside the 60-year cycles of the AMO & PDO.
1) Yes, I have already noticed that some claim the models only make “projections”. Sophistry will get you nowhere. You are using it as a prediction, so prediction it will be called.
2) One of the greats of the climate alarmism world (if somebody in that world could ever be called great), once said that the “pause” didn’t matter until it reached 15 years. Well it reached 15 years a long time ago – the response was that now it doesn’t matter until it reaches 20 years.
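The point about 15-year predictions sitting inside ~60-year oscillations can be illustrated numerically. This is an editorial sketch with a synthetic, trend-free series, not real data: a linear trend fitted to 15 years on the rising phase of a 60-year cycle looks like warming, while the full record has essentially no trend.

```python
import math

# Synthetic, trend-free 60-year oscillation (amplitude 0.2 C), two full cycles
years = list(range(120))
temp = [0.2 * math.sin(2 * math.pi * y / 60) for y in years]

def ols_slope(xs, ys):
    # ordinary least-squares slope of ys against xs
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

full = ols_slope(years, temp)              # near zero: no long-term trend
rising = ols_slope(years[:15], temp[:15])  # 15 years on the rising phase
```

Here `rising` comes out around 0.015 °C/yr (0.15 °C/decade, the same order as the forecasts quoted elsewhere in this thread) even though the underlying series has no trend at all.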

Reply to  Mike Jonas
September 17, 2015 10:08 pm

As far as 1) goes, I really don’t care what name you give them. As said earlier, I am not going to discuss semantics. I hope we can agree that the model outcomes are aimed at estimating temporal changes in climate, not at predicting the weather in x years, where x is any number you can think of. Personally I would not take any model output for the shorter term too seriously since there are many unpredictable phenomena such as oscillations and volcanoes that have much stronger influence on those time scales than the processes that these models actually try to cover. As for 2) I really don’t see the relevance, nor do I feel any need to defend the words of some anonymous person, particularly if they are about statistically insignificant phenomena.

Reply to  Aran
September 17, 2015 11:13 pm

These factors may be convergent or divergent, but their coincidence may, for example, result in a sharp increase in Arctic ice and an increase in summer albedo. For example, low solar activity combined with the negative phase of the AMO.

September 17, 2015 6:26 pm

The entire CAGW hypothesis is based on the wrong assumption that CO2 forcing generates a 3~5 fold “runaway positive feedback loop” involving increased atmospheric water vapor concentrations, which means that 1C of gross CO2 forcing will ultimately generate about 3C~5C of NET global warming….
The problem with runaway feedback loops is that once the sum of the feedbacks reaches 1, the series runs to infinity (Dr. Hansen’s boiling oceans)… To prevent CO2’s feedback loop from running to infinity, CAGW modelers assume that particulates released from fossil-fuel burning act as a negative feedback, to obtain a Goldilocks constant: just enough warming to scare small children and politicians, to get more research grants and waste tens of $trillions on CO2 mitigation, but not so much that model projections exceed reality by 3+ standard deviations, disconfirm their precious hypothesis, and put them in the unemployment line…
The problem is that CAGW model mean projections already exceed reality by 2 standard deviations, and in 5~7 years, the discrepancies will likely exceed 3 standard deviations, at which point, the CAGW hypothesis gets run through the wood chipper…
Oh, what a tangled web the CAGW hypothesis has become….
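The feedback arithmetic in this comment can be made concrete with a geometric-series sketch (the feedback-fraction values here are illustrative, not taken from any model): gross warming amplified by a feedback fraction f sums to gross/(1-f) while |f| < 1, and stops converging once f reaches 1.

```python
def net_warming(gross, f, rounds=1000):
    # sum the feedback series gross * (1 + f + f**2 + ...)
    return sum(gross * f ** k for k in range(rounds))

# A feedback fraction of 2/3 turns 1 C of gross forcing into ~3 C net,
# in the claimed 3-5 fold amplification range
amplified = net_warming(1.0, 2.0 / 3.0)

# Closed form gross / (1 - f), valid only while |f| < 1
closed = 1.0 / (1.0 - 2.0 / 3.0)

# Once f reaches 1 each extra term adds as much as the first, so the
# sum grows without bound as rounds increases: the "runaway" case
runaway = net_warming(1.0, 1.0, rounds=10)
```

Likewise f = 4/5 would give a 5-fold amplification, which is why the claimed net warming is so sensitive to the assumed feedback fraction.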

Reply to  SAMURAI
September 17, 2015 6:59 pm

Interesting post! It sounds as though the “Goldilocks constant” is the equilibrium climate sensitivity. Rather than being a conclusion reached by scientific research, the existence of this “constant” was established by the logically and ethically illegitimate process of placing “the” in front of “equilibrium climate sensitivity.” Placing “the” in front of it implied that the equilibrium climate sensitivity was single-valued, when there was no reason to believe it was not multi-valued. The result was the fabrication of information conducive to the financial interests of global warming climatologists.

Reply to  Terry Oldberg
September 17, 2015 11:29 pm

Terry– Based on empirical evidence and the physics, Equilibrium Climate Sensitivity seems closer to 0.5C by 2100, rather than the 5C, which is usually quoted to press.
The CAGW sycophants need to keep ECS above 2C or else there isn’t a catastrophe, and they can’t make it more than 5C, or they’ll be laughed at….
They need to keep ECS at the Goldilocks level to keep this scam going until they can retire with full pensions. The Leftists also need to keep CAGW going to steal as much taxpayer money as possible before the jig is up…
So much money… little time… It’s sad…
Historians will shake their heads and think this generation went collectively mad….

Reply to  SAMURAI
September 18, 2015 9:11 am

I prefer to call the concept TECS rather than ECS as it makes the swindle that is worked by placing “the” in front of “equilibrium climate sensitivity” more obvious.

September 17, 2015 6:33 pm

The climate models’ outputs fall into the category of “not even wrong”. The CAGW meme, and by extension the climate and energy policies of most Western governments, are built on the outputs of these climate models. In spite of the inability of weather models to forecast more than about 10 days ahead, the climate modelers have deluded themselves, their employers, the grant-giving agencies, the politicians and the general public into believing that they could build climate models capable of forecasting global temperatures for decades and centuries to come with useful accuracy.
The modelling approach is inherently of no value for predicting future temperature with any calculable certainty because of the difficulty of specifying the initial conditions of a sufficiently fine grained spatio-temporal grid of a large number of variables with sufficient precision prior to multiple iterations. For a complete discussion of this see Essex:
Models are often tuned by running them backwards against several decades of observation. This is much too short a period to correlate outputs with observation when the controlling natural quasi-periodicities of most interest are in the centennial and especially the key millennial range. Tuning to these longer periodicities is beyond any computing capacity when using reductionist models with a large number of variables, unless these long-wave natural periodicities are somehow built into the model structure ab initio.
In addition to these general problems of modeling complex systems, the particular IPCC models have glaringly obvious structural deficiencies as seen below (Fig. 2-20 from AR4 WG1; this is not very different from Fig. 8-17 in the AR5 WG1 report).
The only natural forcing in both of the IPCC Figures is TSI, and everything else is classed as anthropogenic. The deficiency of this model structure is immediately obvious. Under natural forcings should come such things as, for example, Milankovitch Orbital Cycles, lunar related tidal effects on ocean currents, earth’s geomagnetic field strength and most importantly on millennial and centennial time scales all the Solar Activity data time series – e.g., Solar Magnetic Field strength, TSI, SSNs, GCRs, (effect on aerosols, clouds and albedo) CHs, CMEs, EUV variations, and associated ozone variations and Forbush events.
The IPCC climate models are further incorrectly structured because they are based on three irrational and false assumptions. First, that CO2 is the main climate driver. Second, that in calculating climate sensitivity, the GHE due to water vapor should be added to that of CO2 as a positive feedback effect. Third, that the GHE of water vapor is always positive. As to the last point, the feedbacks cannot always be positive, otherwise we wouldn’t be here to talk about it.
Temperature drives CO2 and water vapor concentrations and evaporative and convective cooling independently. The whole CAGW – GHG scare is based on the obvious fallacy of putting the effect before the cause. Unless the range and causes of natural variation, as seen in the natural temperature quasi-periodicities, are known within reasonably narrow limits it is simply not possible to even begin to estimate the effect of anthropogenic CO2 on climate.
Because of the built-in assumption in all the models that CO2 is the main driver, the actual temperature projections are relatively insensitive to the particular IPCC climate model used, and in fact the range of outcomes depends almost entirely on the RCPs chosen. The RCPs depend on little more than fanciful speculations by economists. The principal component in the RCPs is whatever population forecast/speculation will best support the climate and energy policies of the IPCC’s client Western governments.
The successive uncertainty estimates in the successive “Summary for Policymakers” documents take no account of the structural uncertainties in the models, and almost the entire range of model outputs may well lie outside the range of the real world’s future climate variability.
The climate models on which the entire Catastrophic Global Warming delusion rests are built without regard to the natural 60-year and, more importantly, 1000-year periodicities so obvious in the temperature record. The modelers’ approach is simply a scientific disaster and lacks even average common sense. It is exactly like taking the temperature trend from, say, February to July and projecting it ahead linearly for 20 years or so. They back-tune their models over less than 100 years when the relevant time scale is millennial. This is scientific malfeasance and incompetence on a grand scale. Here is a picture which shows the sort of thing they did when they projected a cyclic trend in a straight line: –pAcyHk9Mcg/VdzO4SEtHBI/AAAAAAAAAZw/EvF2J1bt5T0/s1600/straightlineproj.jpg
The temperature projections of the IPCC – UK Met Office models, and all the impact studies which derive from them, have no solid foundation in empirical science, being derived from inherently useless and specifically structurally flawed models. They provide no basis for the discussion of future climate trends and represent an enormous waste of time and money. As a foundation for governmental climate and energy policy their forecasts are already seen to be grossly in error and are therefore worse than useless. A new forecasting paradigm needs to be adopted. For forecasts of the timing and extent of the coming cooling based on the natural solar activity cycles – most importantly the millennial cycle – and using the neutron count and 10Be record as the most useful proxy for solar activity, check my blog post at

Reply to  Dr Norman Page
September 17, 2015 7:18 pm

Bob – I don’t doubt the underlying sinusoidal curve called climate variability, but you need to superimpose an upward trend….

Steve Case
Reply to  Barry
September 18, 2015 1:22 am

And that upward trend is somewhere around 0.75 K over the last 165 years, right?

Reply to  Barry
September 18, 2015 7:11 am

The majority of which was caused by something other than CO2, and hence could reverse at any time (and may have already).

Reply to  Barry
September 18, 2015 7:46 am

Barry – yes, the 60-year cycle has been detrended. The models completely ignore the underlying rising leg of the millennial cycle. See the figure from Christiansen et al below.
It is blatantly obvious that the Earth is just getting near to, is just at, or is just past the peak warmth of a 1000-year cycle, and we are likely headed to another LIA-type minimum at about 2600–2700.
The solar data suggest that the activity peak was at about 1991. The temperature peak is delayed because of the thermal inertia of the oceans. It is seen in the RSS data at about 2003.

Reply to  Dr Norman Page
September 17, 2015 7:39 pm

Dr Norman Page commented: “….the climate modelers have deluded themselves, their employers, the grant giving agencies, the politicians and the general public into believing that they could build climate models capable of forecasting global temperatures for decades and centuries to come with useful accuracy.”
You assume that the purpose of the models is to determine the truth. If you believe the amount of money, politics, and media attention being applied to AGW is to find truth, then you are the deluded one. We can point out holes in the AGW narrative forever and it won’t change the current vector and success of its goal to make CO2 a bogeyman. CO2 as a pollutant? Look what propaganda has done to/for us. As much as I enjoy reading these posts, I realize the skeptic side is all talk… and I agree with much of it. We’re relying on nature to vindicate our beliefs. And we’re finding out that even that can be corrupted by mushrooming the people.

Reply to  markl
September 18, 2015 3:09 am

“The whole aim of practical politics is to keep the populace alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, all of them imaginary.” ~ H. L. Mencken

Gregory Lawn
Reply to  markl
September 18, 2015 7:35 am

Remember Obama commenting that “conservatives are clinging to their guns and religion”? That was nonsense. So is the theory of a vast left-wing conspiracy. Stick to the science; it’s a winning argument.
Most alarmists actually believe CAGW is a real threat. In the past the consensus was that the Earth is flat and the center of the solar system. The consensus did not change the fact that the Earth is round and the Sun is the center of our solar system. The fake CAGW consensus cannot endure indefinitely, and the fact is a majority of people are not concerned about it.

Reply to  Dr Norman Page
September 18, 2015 1:49 am

I was going to post that – you beat me to it – excellent presentation!

Reply to  Dr Norman Page
September 18, 2015 2:55 am

Doesn’t your Figure A refute the facts asserted in the main article? This chart seems to suggest that there are multiple forcings besides CO2, water vapor, and clouds considered in the climate models. Thus, I question the accuracy of the essay’s assertion of a 22% effect of water vapor, 41% for clouds etc. since the numbers add to 100 yet none of the forcings in your FIG. A are mentioned in the guest essay except CO2. Certainly FIG. A directly contradicts the assertion that solar activity is not factored into the models.
Also, I read the IPCC quote from the 4th assessment report (at the beginning of the guest essay) as saying that the attribution studies (not the models themselves) are tuned by running individual forcings or sets of forcings separately through the models and then linearly weighting the results to best fit observations. For example, the model output from CO2 forcing (nonlinear response) would be linearly weighted with the model output of solar forcing (nonlinear response) etc. Water vapor and cloud feedback would not change during this procedure. Thus, the “unknowns” being changed are not the parameters on cloud feedback for example, but the strength of the forcings. How the models themselves are tuned is not addressed in that IPCC subsection. If I’m right (and possibly I’m not) this is a VERY important distinction.
This, of course, doesn’t change the conclusion that the IPCC’s conclusions are garbage, but it does change the reason. I’d never looked at that clause of the IPCC report before, but its stupidity is mind numbing – more so than the original essay gives it (credit?) for. What this seems to suggest is that the IPCC first starts with a bunch of computer models that each presume they know all they need to know about the strength of each forcing, e.g. climate sensitivity to CO2 etc. as well as the knowing all the feedbacks present, etc., and then tune their MODELS by adjusting internal unknown parameters to best fit the observations, as the original essay suggests. But because the climate system is nonlinear and chaotic, this presents a huge problem for determining what effect the change of each component has on the total output, i.e. ATTRIBUTION; if they re-run the models with only each input individually, the sum of outputs no longer fits the observational curve. So how do they solve this problem? They just linearly weight the outputs to best fit the curve and call it a day.
All of this suggests that the IPCC assumes that: (1) climate does not change naturally due to something other than TSI – something universally believed NOT to be true; (2) respective outputs of a nonlinear chaotic system to changes in each of a number of inputs can be linearly weighted and still produce a useful result – something that is just laughably stupid, making me think that they do this because it is the ONLY thing that can be done given the inherent complexity of nonlinear combinations of outputs and because you can’t change the parameters of the model without putting the model itself out of whack with the observations; and (3) the transient response of the Earth’s climate system to an impulse change in an input is known – something that only the terminally gullible would believe.
I want to expand on this latter point. If we don’t know the output effect of CO2’s forcing relative to aerosols’ forcing, for example, or what the transient response of a change in Total Solar Irradiance (TSI) is on the climate system – the natural response prior to manmade inputs – then how the hell can climate scientists know the first thing about the inner workings of the climate system? How can they know what the strengths of the feedbacks are? Just where does this fountain of knowledge come from? It certainly can’t be from measurements and experimentation. From an engineering perspective this is completely backwards. To figure out what goes on in the black box, you first have to determine the response at the output to a change in each of input 1, input 2, etc. The very idea that you can know more about the inner workings of the black box than you do about the effect that each input individually has on the output – well, it just boggles the mind.

Reply to  perturbed
September 18, 2015 12:48 pm

The IPCC had their conclusion firmly set before they did one tiny bit of investigating.
Same with Hansen, and any other warmistas, back in the 1980’s.
There was simply nowhere near enough justification to even begin to ring an alarmist bell.
To have believed any of it, even back then, meant either being ignorant of, or forgetting, all of Earth history and what was known DECADES AGO about past cycles of warming and cooling.
I was just wrapping up an education that involved studying all of the relevant subjects, on top of what had already been years and years of studying Earth history, particularly past glacial epochs and more recent glacial advances and declines.
And yet, by now, the scare stories have been told so many times and repeated so often and to so many ears and in so many aspects, that in the minds of a large number of people, it is already happening…we are well along the way to catastrophe.
Large numbers of individuals, including some in positions such as leading key regulatory agencies, do not know the difference between CO2 and a real pollutant…or else they are such sociopathic liars that they do what they do anyway.
Either way, it is bad news for those who envision a sane future guided by truth and pragmatism.

Reply to  perturbed
September 24, 2015 10:23 am

The people in need of an apocalypse to satisfy their misanthropy were in trouble in the 1980s. Most of them were post-Christian, and so the predictions (projections?) of the Book of Revelation no longer satisfied. With the coming of Gorbachev, Glasnost, and summits with Reagan, the satisfying prospect of Nuclear Winter faded. Acid Rain and the enlarging Ozone Hole were tried out, but honestly they were pretty silly and never more than minor threats, not enough to get the moralistic people-haters a good “arousal”, shall we say.
But then it was noticed that temps were up since the 1970’s, and CO2 was up, and Greenhouse Gas theory seemed so very delicious and unimpeachably scientific. Since everybody knows that Coal is dirty, and Big Oil is evil, and consumption by the capitalist West unfair, a theory that rising CO2 dooms us all if we do not repent our evil lifestyles was perfect.
I’ve known a good many scientists. A sadly high percentage of them think that ordinary folks are stupid, barely above animals, and in constant need of having people like scientists run the world so that the ordinary people can be saved from their tendency to stupidly self-destruct. CO2-based CAGW as a theory is a perfect fit for these scientists—wonderfully esoteric (only special people can understand it); prescriptive of hair-shirt, self-denying, “sustainable” lifestyles; and allowing those satisfying fantasies of the burning destruction of the immoral that sinners in the hands of an angry god once faced.
How unfortunate for the scientist/misanthropes that CO2 has continued to rise, but temps have not. They are in a tizzy. Another decade and they’ll be looking for a new Apocalypse.

Reply to  Dr Norman Page
September 18, 2015 6:11 am

The scale for ‘solar irradiance’ is way, way off. It is the #1 determining factor controlling whether we are in an ice age or an interglacial. So many people are very anxious about the sun and are desperate to make it out to be a steady-state star, one that never changes gears with little warning.

Reply to  Dr Norman Page
September 18, 2015 12:30 pm

Mr. Page,
I sure do appreciate reading some honest and frank and cogent words from a sane and educated person (Not implying that there are not many such individuals here).
Thank you sir.

Reply to  Dr Norman Page
September 21, 2015 2:50 am

According to the sine wave pattern in Bob Tisdale’s chart, global temperatures should have started to dip well before now. In GISS, the trend between the lowest month in 1975 and the warmest in 2005 was 0.186C per decade:
Between the lowest month in 1975 and the present (to August 2015) the trend is just fractionally lower than that, at 0.172 C per decade:
It has to be said that the exaggerated-looking trend projection in Bob’s chart looks a lot closer to the observed trend than the down-sloping sine wave does.

September 17, 2015 6:37 pm

I have pointed out much of the same to folks, yet they still seem eager to place faith in models.
With the increase in the number, resolution, and accuracy of speleothem studies, I see a natural range of variability that is growing wider, while the statistically significant contribution by CO2 grows further out of reach.
All I can say is post-modern science has taken over pop culture, and we now have a stupidization of society thanks to CO2 hysteria.

Nick Stokes
September 17, 2015 6:57 pm

“Parameters controlling the unknowns in the models were then fiddled with (as in the above IPCC report quote) until they got a match.”
I think there is an elementary misunderstanding that crops up here frequently. They aren’t talking in that section (which is in 9.2, BTW, not 9.1.3) about modifying the GCM. They are talking about back-calculating forcings. Despite common misunderstanding here, the forcings are not a GCM input. They are a device for interpretation, or, if you like, attribution. They fit the forcings; that is not fitting the GCM.

Reply to  Nick Stokes
September 17, 2015 7:37 pm

Yes, 9.2 not 9.1.3. Apologies. But they don’t just “fit the forcings”, they fit the response patterns as calculated by the climate models. “[] a climate model is used to calculate response patterns [] which are then combined linearly to provide the best fit []”. So it is as I said.

Lewis P Buckingham
Reply to  Nick Stokes
September 17, 2015 10:13 pm

Maybe so.
However the concept of ‘forcings’ made the stage in Australia in a totally different way.
Professor Flannery, Climate Commissioner and Australian of the Year spoke on national TV about how increasing CO2 in the atmosphere would create a climate forcing that increased the temperature of the atmosphere. All sorts of bad things then would follow.
So this use was not a ‘device for interpretation’ but a certain prediction of effect.
I had never heard the term ‘forcing’ before his interview and was carried by his eloquence and certainty.
It’s taken some years for me to unlearn his insights based on these models.
It’s a pity he had not realised that ‘forcing’ was in fact a ‘device for interpretation’ of GCMs that have been shown to be unreliable.
In other words, the forcing is just a construct, unable to be validated against reality by these models.
Just like the other models that told us of dams that would never fill.

Reply to  Lewis P Buckingham
September 18, 2015 8:36 am

Lewis P Buckingham:
Right. In the earliest of its assessment reports the IPCC claimed that its climate models had been validated. In the paper entitled “Spinning the Climate,” Vincent Gray reports confronting IPCC management with the fact that the models had not been validated. Management reacted by instituting the policy of changing “validation” to “evaluation” in its subsequent assessment reports; but though “validation” was a logically meaningful concept, the same was not true of “evaluation.” For an IPCC management that was bent on curbing CO2 emissions, “evaluation” had the merit of sounding like “validation”, and thus being easy to confuse with that word. Like many of the articles that are published here at wattsupwiththat, the one under discussion is based upon evaluation, validation being impossible for lack of the statistical population underlying the model.

Reply to  Lewis P Buckingham
September 18, 2015 12:56 pm

To the average unsuspecting and credulous person, a scientist using the word “forcing” and spelling out a disaster scenario sounds like there is no possibility of it being wrong, as the word “forcing” usually means something that is made to happen in a definitive and unequivocal way.
If someone or something is forced to do something…they have no choice…it is going to happen.
In this way, the word is misdirection.

Reply to  Nick Stokes
September 18, 2015 1:19 am

They are a device for interpretation, or, if you like, attribution self-deceit.
Geometric or exponential error progression does not lend itself to interpretation; the end result is untimely oblivion which is meaningless. Performing iterative interdependent calculations encompassing non-linear relationships (some of them unknown) cannot be dressed up to be valid, no matter how much statistical bullshit you throw at it.
Mind you, numerous folk appear to be making a good living by pretending otherwise. It’s a form of farce, Nick, very akin to an Inspector Clouseau plot (the most extreme example being the concept of an ensemble mean).

September 17, 2015 7:00 pm

Mike Jonas writes;
“Carbon Dioxide (CO2) : At last we come to something which is quite well understood. The ability of CO2 to absorb and re-emit a specific part of the light spectrum is well understood and well quantified, supported by a multitude of laboratory experiments.”
Yes indeed this is not in doubt. However the result of this phenomenon in the climate is still very much in doubt. Especially with regard to the “average” temperature. Aside from the fact that an “average temperature” has no useful meaning. I’m reminded of the old observation that if one of your feet is in ice water and the other is in boiling water you are “on average” quite comfortable overall.
Here is where the alleged “GHE” breaks down. There are numerous examples of human designed optical systems (aka applied radiation physics) that exhibit “back radiation”. Including the optical integrating sphere and the multi layer optical interference filter. In both cases “back radiation” certainly exists, but it can be difficult to measure. In neither case does the “back radiation” alone cause the source to “reach a higher temperature”.
In the specific case of an optical integrating sphere the interior surface of the sphere (highly reflective) becomes a “virtual light source”. This concept of a virtual source is somewhat specific to the optical engineering community. It helps with understanding (and predicting) the paths that photons will follow through a system. However (and this is a very big however) it DOES NOT predict the energy present at any point in the system.
In the case of an optical integrating sphere with an incandescent filament (aka a light bulb) inside this “back radiation” merely delays the elapsed travel time of the photons flowing through the system. This is a result of the photons “bouncing back and forth” inside the sphere until they find an “exit port”.
This is known as the “transient response” of an optical integrating sphere. This is a somewhat obscure but still well understood concept. If you inject an input “pulse” of light (off, then quickly on, then quickly off again) this transient response function will create a “stretched” pulse of output light. Specifically this square input pulse is no longer a square output pulse since some photons will quickly find an exit port and others will “bounce near and far” before exiting the sphere.
The gaseous atmosphere of the Earth is quite like an optical integrating sphere in this regard. The photons arriving from the Sun and being converted to emitted IR radiation (still a form of light or electromagnetic radiation and following all of the same rules/laws) simply bounce “back and forth” between the atmosphere and the surface. All this bouncing merely delays the flow of energy through the system as the energy alternates between light energy and thermal energy.
Given the dimensions of the atmosphere (about 5 miles high) and the velocity of light (still considered quite speedy) this alleged “GHE” merely delays the flow of energy (arriving as sunlight) through the system by a few tens of milliseconds. The specific delay for any given photon is of course described by a statistical distribution.
Since the period of the arriving light is about 24 hours this delay of a few tens of milliseconds has no effect on the “average temperature” at the surface of the Earth.
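For what it’s worth, the arithmetic behind this “few tens of milliseconds” claim can be sketched out. This is only a back-of-envelope illustration of the commenter’s reasoning; the bounce count is a made-up round number, not a measured figure:

```python
# Back-of-envelope check of the "optical delay line" arithmetic above.
# N_BOUNCES is a hypothetical illustration, not a measured value.
C = 3.0e8          # speed of light, m/s
HEIGHT = 8.0e3     # ~5 miles of atmosphere, in metres
N_BOUNCES = 1000   # assumed number of surface-atmosphere round trips

one_way = HEIGHT / C                    # single traverse time, seconds
total_delay = 2 * N_BOUNCES * one_way   # total round-trip path delay

print(f"one-way transit: {one_way * 1e6:.1f} microseconds")
print(f"delay after {N_BOUNCES} bounces: {total_delay * 1e3:.1f} ms")
# Even a thousand round trips at light speed adds only ~50 ms of path delay,
# which is the order of magnitude the comment is describing.
```

A single traverse comes out near 27 microseconds, so even a thousand round trips is only tens of milliseconds of pure light-travel time, against a roughly 24-hour forcing period.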
Another example of “back radiation” and its practical uses is the multi layer optical interference coating. This is the highly engineered coating on most modern optical lenses. It appears slightly purple when observed off-axis. The purpose of this coating is to reduce reflections from the surface of a lens.
These coatings have greatly improved the quality of photographs and videos by increasing contrast and reducing “ghost images” (images that are created by the individual surfaces inside a modern optical lens).
These coatings function by delaying “following photons” by a time equivalent to a fraction of the wavelength of the arriving light. By creating exactly the correct delay interval the reflected light is exactly “out of phase” from the arriving light and destructive optical interference occurs. This moves the optical energy to a location inside the optical lens where it is no longer subject to surface reflections.
Both of these “applied radiation physics” effects/techniques have been applied for decades and are quite well understood.
The alleged “radiative greenhouse effect” merely delays the flow of energy through the system and has no effect on the “average temperature”. It does change the response time of the gases in the climate. Since the gases have the smallest thermal capacity of all the components present (Oceans, land masses, atmosphere) the idea that they are controlling the “average temperature” is quite ludicrous.
Modeling these radiative effects in the climate is probably impossible. The required spatial distances are sub-micron and the time steps necessary are in the nanosecond range. There would need to be an increase in computing power of about ten orders of magnitude to even begin to attempt this.
There is of course a gravitational greenhouse effect, whereby the effects of gravity acting on the gases in the atmosphere predict the temperature of the Earth’s atmosphere quite well (see the US Standard Atmosphere model, last updated in 1976) with no use of radiative effects at all.
It is quite sad that all this effort has been wasted on modeling the “unmodelable”.
Cheers, KevinK.

Bubba Cow
Reply to  KevinK
September 17, 2015 8:01 pm

I would like you and george e smith to elevate this to a post and send to Anthony.
Thanks for your input.

Reply to  Bubba Cow
September 17, 2015 8:24 pm

Bubba, thank you.
I did submit a somewhat whimsical explanation of this delay line effect to Anthony several years ago.
I have submitted a more detailed explanation to other climate science sites as well.
The “radiative greenhouse effect” is merely a form of hybrid optical/thermal delay line. It has no effect on the “average” temperature at the surface of the Earth.
Cheers, KevinK

Reply to  Bubba Cow
September 17, 2015 10:45 pm

KevinK – Your comment is at a greater level of detail than my article, so as suggested would be better as a separate article. I note your “Since the gases have the smallest thermal capacity of all the components present (Oceans, land masses, atmosphere) the idea that they are controlling the “average temperature” is quite ludicrous.“, but to my mind the GHG theory whereby some outgoing IR is in effect turned back and thus affects surface temperature is at least prima facie credible. I’m prepared to work with this version (even though, just like everything else, science may one day overturn it) while there are such glaring errors elsewhere.

Reply to  KevinK
September 18, 2015 2:55 am

I think that is the best comment I have read here in several weeks at least (a high compliment, considering the quality of the comments here).
I do hope that you will offer that comment as a post, that it is posted, and that then the moderation allow a full and complete debate on all parts of it. There are many of us who think the mass of the atmosphere along with gravity is the main reason for the misnamed “green house effect” along with H2O in all its phases.
~ Mark

Reply to  KevinK
September 18, 2015 7:18 am

If this delaying of the photon by a few milliseconds has no impact on the “average temperature”, please explain the well-documented phenomenon of heat retention on humid nights compared to dry ones.

Reply to  MarkW
September 18, 2015 6:25 pm

The thermal capacity of water is much greater than CO2.
This is why the main purpose of indoor air conditioning is to remove the water vapor first, and then secondarily reduce the temperature of the now drier air.

Reply to  MarkW
September 19, 2015 6:26 am

Kevin, liquid water, yes, because of its much greater density. However, the difference between water in the vapor phase and CO2 is much, much smaller.
Regardless, the warming effect of water occurs even when it is the air aloft that is damp and the air at the surface is dry, i.e., clouds.

Reply to  KevinK
September 18, 2015 7:35 am

Has anyone calculated the average delay for a photon that is within one of CO2’s absorption bands?
I strongly suspect that it is more than a few milliseconds. Given that the direction of the photon when it is re-emitted is random, it could be down as easily as up; if it is sideways, it will have many miles of dense atmosphere to traverse compared to up.
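The question can at least be framed as a random-walk (diffusion) problem. Here is a toy one-dimensional Monte Carlo along those lines; every parameter in it (mean free path, layer height, per-event delay) is a hypothetical placeholder, and this is in no way a real radiative-transfer calculation:

```python
import random

# Toy 1-D random walk of a photon through an absorbing layer.
# ALL parameters are hypothetical illustrations, NOT measured values.
MEAN_FREE_PATH = 100.0   # metres between absorption events (assumed)
LAYER_HEIGHT = 8.0e3     # metres of absorbing atmosphere (assumed)
STEP_DELAY = 1e-6        # seconds per absorption/re-emission event (assumed)

def escape_steps(rng):
    """Walk a photon up/down in MEAN_FREE_PATH hops until it leaves the layer."""
    z, steps = 0.0, 0
    while 0.0 <= z < LAYER_HEIGHT:
        z += rng.choice((-1.0, 1.0)) * MEAN_FREE_PATH
        steps += 1
        if z < 0.0:        # re-absorbed and re-emitted at the surface
            z = 0.0
    return steps

rng = random.Random(42)
trials = [escape_steps(rng) for _ in range(200)]
mean_steps = sum(trials) / len(trials)
print(f"mean steps to escape: {mean_steps:.0f}")
print(f"implied delay at {STEP_DELAY:.0e} s/step: {mean_steps * STEP_DELAY:.3f} s")
```

The point of the sketch is only qualitative: because the walk is diffusive, the expected number of hops scales roughly as the square of the layer thickness in mean free paths, so the total delay is dominated by the per-event timescale (collisional vs. radiative) far more than by photon flight time.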

Gloria Swansong
Reply to  MarkW
September 18, 2015 2:23 pm

At about 22 minutes, Dr. Happer shows the “xylophone effect” on a CO2 molecule.

Here is an email exchange between Dave Burton and Will Happer concerning the issue of “re-emitting” a photon v. collisions with other molecules in the air, mostly N2 of course:
A portion of their discussion:
After hearing Will’s lecture, Dave asks:
1. At low altitudes, the mean time between molecular collisions, through which an excited CO2 molecule can transfer its energy to another gas molecule (usually N2) is on the order of 1 nanosecond.
2. The mean decay time for an excited CO2 molecule to emit an IR photon is on the order of 1 second (a billion times as long).
Did I understand that correctly?
Dave: You didn’t mention it, but I assume H2O molecules have a similar decay time to emit an IR photon. Is that right, too?
Dave: So, after a CO2 (or H2O) molecule absorbs a 15 micron IR photon, about 99.9999999% of the time it will give up its energy by collision with another gas molecule, not by re-emission of another photon. Is that true (assuming that I counted the right number of nines)?
Dave: In other words, the very widely repeated description of GHG molecules absorbing infrared photons and then re-emitting them in random directions is only correct for about one absorbed photon in a billion. True?
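Taking the two timescales quoted in this exchange at face value (~1 ns between collisions, ~1 s mean radiative decay), the “count the nines” estimate can be checked directly. A minimal sketch, using only the figures stated above:

```python
# Check of the "count the nines" estimate, using the figures quoted
# in the exchange above: ~1 ns between molecular collisions vs. a
# ~1 s mean radiative (spontaneous emission) lifetime for excited CO2.
t_collision = 1e-9   # seconds between collisions (quoted)
t_radiative = 1.0    # seconds, mean emission time (quoted)

# For t_collision << t_radiative, the chance an excited molecule relaxes
# by collision rather than by emitting a photon is roughly:
p_collision = 1.0 - t_collision / t_radiative

print(f"fraction de-excited by collision: {p_collision:.10f}")
# 0.9999999990 -- i.e. 99.9999999%: nine nines, matching Dave's count.
```

So on those quoted numbers, Dave did count the right number of nines: only about one absorbed photon in a billion would be re-emitted rather than thermalized by collision at low altitude.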

Reply to  MarkW
September 22, 2015 6:51 am

MarkW, why not first define a photon? Normally this refers to a pulse of visible light (which has a fixed speed in a vacuum) with a wavelength of 0.45 to 0.7 micron. Such a photon is not absorbed or emitted by CO2. Maybe you do not understand photons, which Nobel-prize-winning physicist W. Lamb Jr. wrote do not exist.

Reply to  KevinK
September 18, 2015 2:07 pm

Average between boiling water and ice water is 50C…pretty darn hot for anyone.
Just sayin’.
Otherwise I agree with the sentiment.
If the world warms up, and it is mostly all in the Arctic, I would say that is a very good thing…less of the Earth is frozen wasteland.

Reply to  Menicholas
September 19, 2015 6:28 am

Maybe it was dry ice?

Reply to  KevinK
September 18, 2015 5:04 pm

“The gaseous atmosphere of the Earth is quite like an optical integrating sphere in this regard. The photons arriving from the Sun and being converted to emitted IR radiation (still a form of light or electromagnetic radiation and following all of the same rules/laws) simply bounce “back and forth” between the atmosphere and the surface.”
I don’t believe this is a correct analogy, as the radiation from GHGs don’t “bounce back and forth” between the atmosphere and the surface. The radiation is ABSORBED at either end and is re-emitted in the opposite direction only after the respective boundary has increased its temperature. Your “filament in a light bulb” example is one of reflection/scattering and not one of absorption/re-radiation. A perfect reflector doesn’t have to increase its temperature to send the light back in the opposite direction. A CO2 molecule, and the earth’s surface does because neither is close to an ideal reflector.

Reply to  perplexed
September 19, 2015 4:42 pm

Perplexed, you are quite perplexed.
Of course, IR does bounce back and forth, between GHGs and other GHGs and the surface, until ultimately (within microseconds to a max of 22 minutes according to Prof. Happer in the comments below) finding a “hole” to escape to space, exactly as KevinK explained what happens in an optical integrating sphere, a perfect analogy indeed to the so-called Arrhenius radiative “GHE”:
Also, IR-active gases like CO2 are indeed “perfect reflectors” of IR, emitting & absorbing the exact same 15 micron IR, but much more commonly transfer absorbed IR to molecular collisions with N2/O2, thereby accelerating convective COOLING.

Reply to  perplexed
September 19, 2015 5:15 pm

Typo correction: Of course, IR does bounce back and forth, between GHGs and other GHGs and the surface, until ultimately (within microseconds to seconds) according to Prof. Happer in the comments below) finding a “hole” to escape to space,…”

Ed Bo
Reply to  perplexed
September 20, 2015 4:28 pm

You are, of course, correct. KevinK’s and hockeyschtick’s understandings of the basics of thermodynamics are so confused that they go off the rails before they even start. Not discerning the difference between reflection and absorption/re-emission is just the start.
Let’s say the CO2 molecule is in an atmospheric level that has half the absolute temperature of the earth’s surface (just to make the math easy). Its emission at a given wavelength would be 1/16 of the surface’s (given the same emissivity/absorptivity), and half of that would be downward. So only 1/32 of the radiant energy received at that wavelength would be passed upward; 1/32 would be returned down; the other 30/32 would increase the thermal energy of that level of the atmosphere.
If this area of the atmosphere were at 255/288 of the surface’s absolute temperature, its emission at a given wavelength would be about 61% of the surface’s, and again, half up, half down. Completely different from a reflective integrating sphere.
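[Editor’s note: the ratios in the comment above can be checked directly. This is a sketch of the stated arithmetic only, taking emission to scale as T^4 at fixed emissivity as the comment assumes; it is not a radiative-transfer model.]

```python
# Check of the T^4 ratios used above: for fixed emissivity, thermal emission
# scales as T^4 (Stefan-Boltzmann), so the cold/hot emission ratio is
# (T_cold / T_hot)**4.

def emission_ratio(t_cold, t_hot):
    """Ratio of a cooler body's emission to a warmer body's, T^4 scaling."""
    return (t_cold / t_hot) ** 4

print(emission_ratio(144, 288))  # half the absolute temperature: 0.0625 = 1/16,
                                 # half of which (1/32) is emitted downward
print(emission_ratio(255, 288))  # ~0.615, the "61%" figure in the comment
```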

Reply to  perplexed
September 20, 2015 7:40 pm

If the analysis by KevinK were correct, then there would be no significant greenhouse effect at all, and there would have to be some other plausible explanation for the 33C increase in average temperature of the Earth. None is offered. If the analysis by KevinK were correct, dark surfaces like asphalt wouldn’t heat significantly in sunlight.
The time it takes for radiation to be absorbed and re-emitted doesn’t seem to be relevant to me. The only important fact is that for re-emission to occur, the material absorbing the radiation must increase its temperature to regain equilibrium. KevinK’s analogy is one that ignores absorption entirely. Perfect reflectors don’t absorb. They are polar opposites where absorption and reflection sum to 1.

Ed Bo
Reply to  perplexed
September 20, 2015 8:52 pm

Yeah, it’s amazing what you can come up with when you don’t understand the basics. To KevinK, everything is an optics problem. The fact that the photon is absorbed, and a completely separate set of conditions govern whether and how often another photon will be emitted, is completely beyond his comprehension, when all he knows how to deal with is reflective optics.

Reply to  perplexed
September 21, 2015 11:59 am

Ed bo, you are equally as perplexed as perplexed
Ed bo says “Not discerning the difference between reflection and absorption/re-emission is just the start.”
Straw man and/or lie. Of course there is a difference, which is a time delay of a fraction of a second in the case of absorption/emission vs. reflection. You completely missed the whole point of KevinK’s comment, which is that absorption/emission by CO2 is analogous to an optical delay line and only delays the ultimate escape of IR photons to space by milliseconds to seconds, easily reversed and erased at night for no net warming effect on a daily to annual to multi-decadal basis.
Ed bo and perplexed also are of the grossly mistaken belief that all photons are created equal, and that 15 micron low-energy/frequency IR (equivalent to the thermal radiation of a cold blackbody at 193K) can warm a much warmer blackbody (earth) at 255K by 33K up to 288K. Absolutely false! Google “frequency cutoff for thermalization” for 643,000 references explaining why I’m right and you’re wrong, then Google Planck’s theory of blackbody radiation for millions of references explaining why low-E/frequency 15 micron CO2 IR cannot warm/increase the temperature/E/frequency of a warmer blackbody emitting at a peak ~10 microns (Earth).
There is indeed a 33C gravito-thermal GHE (of Maxwell, Clausius, Carnot, Boltzmann, Feynman, US Std Atm, HS greenhouse eqn) caused by mass/pressure/gravity. The Arrhenius radiative GHE confuses the cause (gravito-thermal) with the effect (IR absorption/emission from GHGs).

Ed Bo
Reply to  perplexed
September 21, 2015 7:30 pm

No, hockeyschtick, you can’t understand this very basic point even when it is completely spelled out for you. Emission is a separate process from absorption, so you cannot just say that there is re-emission with a short delay after absorption. This is fundamentally different from reflection, not just in timing.
Emission is proportional to the 4th power of absolute temperature, so if the cooler absorbing body has half the absolute temperature of the warmer body, its emission at a given wavelength will be 1/16 (1/[2^4]) of the warmer body (for the same emissivity/absorptivity). So for every 16 photons of a given wavelength it absorbs, it only emits one. It is NOT a simple time delay.
This is some of the most basic stuff in radiative heat transfer, and you don’t understand it at all!
You are also completely wrong about “cutoff frequencies”. Lasers with 10.6 micron wavelength, which by your reckoning is “equivalent to the thermal radiation of a cold blackbody” at 273K, are used all the time to cut through steel, which requires them to heat steel to its melting point of 1640K. According to you, this is not possible, but it is done every day!

Reply to  perplexed
September 21, 2015 8:21 pm

Ed Bo, you are way beyond pathetically confused, and say “Emission is a separate process from absorption, so you cannot just say that there is re-emission with a short delay after absorption.”
I have a degree in physical chem, and a grad degree, so stop continuing to make ridiculous straw man arguments that I allegedly think emission is the same thing as absorption. Didn’t you read Will Happer’s explanation right above? To wit:
The mean decay time for an excited CO2 molecule to emit an IR photon is on the order of 1 second (a billion times as long).
Did I understand that correctly?
Dave: You didn’t mention it, but I assume H2O molecules have a similar decay time to emit an IR photon. Is that right, too?

Ed bo says “Emission is proportional to the 4th power of absolute temperature, so if the cooler absorbing body has half the absolute temperature of the warmer body, its emission at a given wavelength will be 1/16 (1/[2^4]) of the warmer body (for the same emissivity/absorptivity). So for every 16 photons of a given wavelength it absorbs, it only emits one. It is NOT a simple time delay.”
Hahahahahahah, you don’t even know the difference between the Stefan-Boltzmann Law of blackbody radiation and the NON-blackbody molecular bending transitions and microstates of a CO2 MOLECULE, which is NOT a BLACKBODY and to which the SB law, valid for true blackbodies only, cannot be applied! Read Happer right above, again.
Ed bo says “So for every 16 photons of a given wavelength it absorbs, it only emits one.” Thus, according to Ed bo, CO2 MOLECULES are equivalent to BLACK HOLES and/or VIOLATE the 1st LoT by absorbing 16 TIMES more photons than they EMIT! Hilarious!
LOLOL: “This is some of the most basic stuff in radiative heat transfer, and you don’t understand it at all!”
The N2/CO2 laser faux-argument has been shot down a billion times before, including here:
Pathetic. I sure hope you’re not a heat transfer engineer, or climate scientist. You could kill someone with such logic.

Ed Bo
Reply to  perplexed
September 21, 2015 11:39 pm

You quote me as discussing “emissivity/absorptivity” and you cannot understand that I am NOT talking about blackbodies. And your understanding of radiative heat transfer is so poor that you do not understand that even for non-blackbodies, thermal radiative output at a given wavelength is still proportional to the 4th power of absolute temperature. (It is also proportional to the emissivity at that wavelength.)
You cannot have been paying attention in your PChem classes!
You say that I believe that “CO2 MOLECULES are equivalent to BLACK HOLES and/or VIOLATE the 1st LoT by absorbing 16 TIMES more photons than they EMIT! Hilarious!”
Once again you demonstrate absolutely no understanding of the most basic concepts of thermodynamics. I clearly stated that the remainder “would increase the thermal energy of that level of the atmosphere.” Not everything is a steady state system, and I did not claim this was. By your logic, you could not heat a pot of water on a stove, because it would have to give off as much energy as it received from the burner.
As is typical, you completely misunderstand Happer’s points. A CO2 or H2O molecule excited by absorbing an IR photon is virtually certain to transfer this energy by collision with other gas molecules before it can emit a photon to “relax”. But the chances of it getting “re-excited” by another collision so it can emit a photon at this same wavelength are heavily dependent on the local temperature of the atmosphere. These chances are far less at lower temperatures.
And you haven’t begun to come to grips with 10.6um radiation boiling water and melting steel. According to you, this should not be possible.

Reply to  perplexed
September 22, 2015 10:11 am

Nonono EDbo
First of all, the SB Law for TRUE BBs or greybodies is absolutely NOT applicable to CO2 (or H2O) MOLECULES, and as proven by observations, emissivity decreases with temperature, unlike a true blackbody:
I said: You say that I believe that “CO2 MOLECULES are equivalent to BLACK HOLES and/or VIOLATE the 1st LoT by absorbing 16 TIMES more photons than they EMIT! Hilarious!”
Look, Ed boo boo, I’ve said, and Happer said, the chance of an absorbed 15 micron photon by CO2 giving up its exact same quantum E via collisions with N2/O2 is about ONE BILLION times greater than emitting an identical 15 micron photon. And by preferentially transferring the E via collisions, that ACCELERATES convective COOLING, NOT ‘heat trapping’, of the troposphere.
Your ridiculous claim that CO2 black holes trap 16 photons before giving up ONE photon is too dumb to address further.
Boo boo says “But the chances of it getting “re-excited” by another collision so it can emit a photon at this same wavelength are heavily dependent on the local temperature of the atmosphere. These chances are far less at lower temperatures.”
15 micron FIXED CO2 absorption/emission by Wien’s law corresponds to a true BB emission temperature of 193K. The 1976 US Std Atmosphere clearly shows the ENTIRE atmosphere 0-100km is much WARMER than 193K, and reaches a minimum of 220K in the tropopause. Thus, the silly fallacy that CO2 has to emit less 15 um IR due to a colder surrounding kinetic temperature is FALSIFIED throughout the entire atmosphere 0-100km.
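[Editor’s note: the 193 K figure above follows from Wien’s displacement law, T = b / lambda_peak with b roughly 2898 micron-kelvin. A quick numeric check of both wavelengths discussed in this thread; note, as a hedge, that the law only locates the peak of a blackbody’s emission curve and by itself settles nothing about heat transfer between bodies, which is the disputed point.]

```python
# Wien's displacement law: the temperature of a blackbody whose emission
# peaks at a given wavelength is T = b / lambda, with b ~ 2898 micron-K.

WIEN_B = 2898.0  # Wien displacement constant, micron * K

def peak_temperature(wavelength_um):
    """Blackbody temperature (K) whose emission peaks at wavelength_um (microns)."""
    return WIEN_B / wavelength_um

print(peak_temperature(15.0))   # ~193 K, the CO2 band figure quoted above
print(peak_temperature(10.6))   # ~273 K, the CO2 laser line discussed earlier
```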
Yes or No: Can radiation from a 193K BB cause a 255K blackbody to warm 33K up to 288K?
Don’t bother answering – I already know UR answer is “YES!”
And I already gave you the link that completely shoots down your N2/CO2 laser faux argument.
I’m done wasting any more time here with boo boo.

Ed Bo
Reply to  perplexed
September 22, 2015 11:30 pm

Why do I bother? First, I talk about bodies of a certain emissivity/absorptivity, and you say I am talking about blackbodies. You say I am wrong to treat CO2 as a blackbody (which I didn’t) and then you apply Wien’s Law of blackbody radiation to CO2. Gobsmacking!
You emphasize that a CO2 molecule that absorbs a 15um photon is a billion times more likely to pass on the absorbed energy to an adjacent molecule than to re-emit the photon. Fine. But incredibly, you claim this energy cools the atmosphere rather than heats it. And then you claim that this same molecule must re-emit the same number of 15um photons that it absorbs.
I specifically refer to emissivity and emission at a given wavelength and you respond with overall emissivity/emission over all wavelengths. Have you ever even read a heat transfer textbook? The difference between those two is one of the first things discussed.
But since you are completely unfamiliar with the concepts involved, I will lay them out for you. Emissivity at a given wavelength is constant for a particular substance, so its emission AT THAT WAVELENGTH is proportional to T^4, and equal to (emissivity * sigma * T^4).
Overall emissivity covers all wavelengths, and is the ratio of the integral of emissivities over these wavelengths, weighted by the Planck blackbody emissions at the specific temperature, compared to the Planck blackbody curve. For non-blackbodies/non-graybodies, this can vary with temperature.
But until you really understand the differences between these, you can’t do any competent analysis.
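[Editor’s note: the distinction drawn in the two paragraphs above, spectral emissivity at one wavelength versus an overall Planck-weighted emissivity, can be illustrated numerically. This is a sketch only; the step-shaped emissivity profile is purely hypothetical, not a real material.]

```python
import math

# Overall emissivity is the average of the spectral emissivity eps(lambda)
# weighted by the Planck curve B(lambda, T), so it can vary with temperature
# even when eps(lambda) itself is fixed.

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, speed of light, Boltzmann (SI)

def planck(lam, t):
    """Planck spectral radiance at wavelength lam (m) and temperature t (K)."""
    return (2 * H * C ** 2 / lam ** 5) / (math.exp(H * C / (lam * KB * t)) - 1)

def overall_emissivity(eps, t, lam_lo=1e-6, lam_hi=100e-6, n=5000):
    """Planck-weighted emissivity: integral(eps*B) / integral(B), midpoint rule."""
    dlam = (lam_hi - lam_lo) / n
    num = den = 0.0
    for i in range(n):
        lam = lam_lo + (i + 0.5) * dlam
        b = planck(lam, t)
        num += eps(lam) * b * dlam
        den += b * dlam
    return num / den

# Hypothetical surface: emissivity 0.9 below 10 microns, 0.2 above.
eps_step = lambda lam: 0.9 if lam < 10e-6 else 0.2
print(overall_emissivity(eps_step, 288))   # weighted toward 0.2 (peak near 10 um)
print(overall_emissivity(eps_step, 1000))  # weighted toward 0.9 (peak near 2.9 um)
```

The same eps(lambda) yields different overall emissivities at 288 K and 1000 K, which is the non-graybody behavior described above.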
You still have not dealt with the EMPIRICAL FACT that CO2 lasers with pure 10.6um output are used to melt steel at 1640K. Your theory says this is a physical impossibility, yet it is a standard industrial process. Your link does not even begin to address the issue.
So maybe it is best that you retreat to your own website, where you won’t let anyone call you out on your egregious mistakes. I especially like your post on effective radiating level, where you:
1.) Claim that non-radiating gases will have an ERL at the vertical center of mass of the atmosphere.
2.) Start your derivation for temperature lapse rate as a function of height by assuming temperature is constant over height. — Yes, you did! When you took the integral over height to derive your expression for pressure as a function of height, you took T out from under the integral. This is only valid if T is constant over height! You don’t even understand high school math!
3.) You treat the atmosphere as a blackbody radiator even though you insist that you cannot treat even a radiatively active gas as a blackbody!
Keep it up! Pure comedy gold!!!

Reply to  perplexed
September 23, 2015 10:51 am

Don’t keep it up boo boo, all you do is grossly mis-state and distort things I’ve said to create your own false straw men to attack.
You are too pathetically confused to tutor, don’t understand the T^4 S-B law only applies to SOLID TRUE BLACKBODIES, not CO2 GASES, don’t understand that CO2 preferentially passes E via collisions which increases the adiabatic kinetic expansion, rising, and cooling of air parcels, thereby ACCELERATING convective cooling.
The N2/CO2 LASER transitions at 9.6 and 10.6 microns do not occur in our atmosphere and are not applicable to the “GHE”. In addition, those wavelengths are of far higher energy, COHERENT without destructive interference, and very highly concentrated with extremely high flux. Has NOTHING to do with passive 15 micron IR absorption/emission in the NON-LASER atmosphere.
In a pure N2 atmosphere, as I’ve said repeatedly, the equilibrium T with the Sun is located at the center of mass, the ERL = the surface, and the “average” kinetic temperature of the whole troposphere is 255K.
In a N2 + H2O atmosphere the ERL = center of mass where T=255K
and show a pure N2 atmosphere Boltzmann distribution has a much steeper lapse rate and thus ~25C warmer surface.
My lapse rate derivations are all standard meteorology text mathematics, and perfectly reproduce the 1976 US Std Atmosphere model, thus your claim of an incorrect derivation of the LR is clearly false.

Ed Bo
Reply to  perplexed
September 24, 2015 7:16 pm

“I got the answer I wanted, so my math must be correct!”
Ask any high school math teacher — when you took “T” out from under the integral over height, you were using T as constant over height. You need to go back to high school! You can’t do this and then say your subsequent analysis proves that temperature will vary over height.
“Forget my blanket denials that longwave radiation cannot heat any substance of a higher temperature than the temperature corresponding to that which has peak radiation at that temperature. I’ll carve out some exceptions by creating a special class of ‘coherent’ photons.”
You’re not convincing anyone with your floundering around.
Oh, and your latest post has another great one. You claim that the reason the “wet” adiabatic lapse rate is so much less than the dry adiabatic lapse rate is that water vapor increases the Cp of the atmosphere. Did you even bother to do any calculations??? Obviously not!
With a relative humidity of 50% at 25C, the Cp of the atmosphere increases by only 2% (1.03 kJ/kgK vs 1.01 for 0% RH). You would need a 50% increase to get the figures you claim.
You obviously have absolutely no understanding of the physics behind the difference in the two rates (or even the reason either of these exist at all).
Can you get anything right???

Reply to  perplexed
September 25, 2015 12:33 pm

Boo boo, why do I bother; you once again create fake straw men in “quotes” to grossly misrepresent what I’ve said and done.
Every single thing I’ve done is in 1st year meteorology texts, including calculation of the dry adiabatic lapse rate,
dT/dh = -g/Cp = 9.8K/km
as well as calculation of the wet adiabatic lapse rate formula and value = 5K/km
and the observed average lapse rate of 6.5K/km
Thus, either Boo boo is dead wrong, or all the meteorology texts, Poisson, Maxwell, Helmholtz, Carnot, Clausius, Feynman, US Std Atmosphere, Intl Std Atmosphere, the HS greenhouse eqn, etc etc are wrong.
Clearly, boo boo doesn’t understand anything about 1st year freshman meteorology
“Can you get anything right???”
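[Editor’s note: the dry adiabatic lapse rate quoted in the exchange above can be checked in a couple of lines. A minimal sketch, assuming the standard textbook values g = 9.81 m/s^2 and cp = 1005 J/(kg K), which are not taken from the thread itself.]

```python
# Dry adiabatic lapse rate from first principles: Gamma_d = g / cp.

g = 9.81         # gravitational acceleration, m/s^2
cp_dry = 1005.0  # specific heat of dry air at constant pressure, J/(kg K)

lapse_rate = g / cp_dry * 1000.0  # convert K/m to K/km
print(lapse_rate)                 # ~9.76 K/km, usually rounded to 9.8 K/km
```

In standard meteorology texts the much lower saturated-adiabatic value near 5 K/km arises mainly from latent-heat release during condensation, which this simple g/cp formula does not include.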

Ed Bo
Reply to  perplexed
September 25, 2015 5:17 pm

You said over at your own site, and I quote:
“Since water vapor has a much higher heat capacity Cp than air or pure N2, addition of water vapor greatly decreases the lapse rate (dT/dh) by almost one-half (from ~9.8K/km to ~5K/km)”
You are claiming that the increased Cp of the atmosphere due to the presence of water vapor cuts the lapse rate in half. This means that you believe that it doubles the Cp of the atmosphere (or increases it by 50% to get from 9.8 to 6.5).
But if you had bothered to do an actual calculation (as I did), you would have seen that water vapor can only increase the Cp of the atmosphere by a few percent. I gave the specific example of 50% RH at 25C increasing Cp from 1.01 kJ/kgK to 1.03, a 2% increase.
Like so many, you just plug some numbers into equations that you find, without the foggiest notion of when or why those equations would apply. You have no clue when the “wet” ALR would occur, or the physical reason it is significantly different from the dry ALR. (If you understood a basic meteorology text, you would know it instantly.)
I guess that shouldn’t be surprising, because you have no clue either as to when and why the dry ALR applies. Again, if you actually understood an introductory meteorology text, and the difference between stable and unstable lapse rates, a very basic concept, you wouldn’t be so confused.
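[Editor’s note: the 50% RH at 25C figure above can be checked numerically. Tetens’ approximation for saturation vapor pressure and the heat-capacity values below are standard assumptions of this sketch, not numbers supplied in the thread.]

```python
import math

# How much does water vapor raise the specific heat of air?
# Compute the mixing ratio at 25 C, 50% RH, sea-level pressure, then the
# mass-weighted cp of the moist air.

def saturation_vapor_pressure_kpa(t_c):
    """Tetens approximation for saturation vapor pressure (kPa), t_c in Celsius."""
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

t_c, rh, p_kpa = 25.0, 0.5, 101.325
e = rh * saturation_vapor_pressure_kpa(t_c)  # actual vapor pressure, kPa
w = 0.622 * e / (p_kpa - e)                  # mixing ratio, kg vapor / kg dry air

cp_dry, cp_vap = 1005.0, 1860.0              # J/(kg K)
cp_moist = (cp_dry + w * cp_vap) / (1 + w)   # per kg of moist air

print(w)                      # ~0.01 kg/kg
print(cp_moist / cp_dry - 1)  # fractional increase in cp, on the order of 1%
```

This gives an increase in cp of roughly one percent, the same order as the 2% quoted above and far too small on its own to halve g/cp.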

Reply to  KevinK
September 19, 2015 3:55 pm

Excellent comment KevinK, with which I fully agree and have elevated to a post here (with my comments and other comments here):
Thanks and cheers!,

Reply to  hockeyschtick
September 21, 2015 7:11 pm

HS, thanks.
some “bo” body commented:
“Yeah, it’s amazing what you can come up with when you don’t understand the basics. To KevinK, everything is an optics problem. The fact that the photon is absorbed, and a completely separate set of conditions govern whether and how often another photon will be emitted, is completely beyond his comprehension, when all he knows how to deal with is reflective optics.”
Beyond my comprehension, well if you say so. But the fact is that the interior of most modern optical integrating spheres is a diffuse absorptive surface, somewhat akin to Teflon ™. The photons are in fact absorbed close to the surface of the interior of the sphere and warm the material. Then they are re-emitted and the material cools. Exactly akin to the gases in the atmosphere.
And indeed I have dealt with all types of optics including; reflective, refractive, diffractive, and scattering. I probably have modeled more optical systems with far more predictive power than any climate model so far produced.
Oh and yes we do indeed model things right down to the photon level. I was part of a team that helped calibrate the current Digital Globe Earth imaging satellites to NIST standards for absolute radiometry, I can assure you I can count individual photons with the best of them.
Maybe open your horizons a wee bit and consider the totally obvious possibility that the now infamous “radiative GHE” merely changes the response time of the gases in the climate; that is just as plausible as blindly accepting that the “temperature must rise”.
Cheers, KevinK.

Ed Bo
Reply to  hockeyschtick
September 22, 2015 7:52 am

And yet, with all that background, you still think a colder body will re-emit photons of a given wavelength one-for-one for each photon of that wavelength absorbed from a warmer body? Amazing…

September 17, 2015 7:06 pm

Thank you for an excellent analysis.

September 17, 2015 7:43 pm

Mike Jonas, you forgot one factor : the ability of the human mind passionately to believe in something without necessarily relying on rational thinking. To conclude from models a certain result is easily done if you are a normal human. Our minds are somehow geared to attain such feats : we can just close our eyes and say, “I believe”.
But I do think that we all have a different propensity for such cerebral gymnastics : some of us are much better at it than others. You appear to be no good at it!

September 17, 2015 7:46 pm

The most disturbing fact being ignored is, all previous Interglacials didn’t last much longer than the present one. History is clear: we are in far more danger of another Ice Age than not.

Reply to  emsnews
September 19, 2015 6:13 pm

I have never ignored it, and many others are cognizant as well.
But I think large numbers of people may well starve to death from famines caused by cooling that is well short of a return to full glacial conditions.
All it would really take, in my estimation, is a serious cold snap, or perhaps a hard frost or snowstorm, in the midst of the northern hemisphere growing season, or a very late Spring in the same year as a very early Autumn, which wrecks crops before they are harvested.
The world has about a thirty day supply of food on hand at any given time, according to accounts I have seen from numerous sources.
Precarious, to say the least.

September 17, 2015 7:47 pm

Looking at the summary table, it seems obvious to me that the elements that are 0% are all natural “forces” over which Mankind has absolutely no control. Even of the three that are >0%, only one (CO2) do we have any sense that we can control. The other two, water vapour and clouds, have some intrinsic relationship which is not well understood. From that perspective it’s no surprise that CAGW supporters see CO2 as some sort of control knob.

Svante Callendar
September 17, 2015 8:07 pm

So much for the mid-tropospheric chart you show in the article, the one with the force-aligned begin points at 1979. Got an up to date one for global surface temperatures?

Christopher Hanley
September 17, 2015 8:40 pm

“If your model started off a long way from reality then inevitably the end result is that a large part of your model’s findings come from unknowns …”.
And what if the “reality” they are attempting to tune their models to is not really and truly reality?

September 17, 2015 9:15 pm

I’ve said it before
So I’ll say it again,
Trying to model chaos
Borders on the insane;
Garbage in garbage out
Has never been more true,
Perhaps there’s an agenda
They want to pursue?

September 17, 2015 9:26 pm

As a long time coastal resident, I am still amazed by the lack of positive “feedbacks” for surface temperatures in places like Crescent City, CA where the daily high temp. matches the daily low temp. This past August one day I remember was 56F for the hi and lo temp. for the whole day. It’s utterly common to have a daily temperature swing of just couple degrees. You might want to bring a jacket or sweater while you are adjusting your models…just in case.

September 17, 2015 9:27 pm

So you picked a random point at which the GCMs and observations peaked at roughly the same time, equalized them at that level, then misleadingly compare them on that basis to come to predetermined conclusions that they do not work? You also ignore that volcanoes are in fact included in the set of natural forcers used in historical reconstructions. Wow, the lengths you will go to just to support your delusion of having smashed the consensus…I am in awe. If you wish, I have a bridge I can sell to you…

Reply to  Ross
September 17, 2015 11:57 pm

Well the Climate modelers have their own bridge to sell. And they have been doing a fair job of it so far.
BTW. What is the value of the coupling coefficients for the various coupled processes?

Reply to  Ross
September 18, 2015 2:10 pm

Ross, that is quite enough Kool-Aid for you, young man.

Reply to  Ross
September 19, 2015 6:30 am

There is a chart that was produced by the IPCC that is almost identical to the one above,
In your conspiratorial fantasies, is the IPCC also trying to discredit the models?

September 17, 2015 10:35 pm

The divergence of models from actuality has been mentioned a lot lately. There are usually two reasons:
1: Some of the graphs showing divergence either include (sometimes have only) RCP 8.5 of the CMIP5 models, which seem to have been based on an overprediction of greenhouse gases, especially methane. The RCPs lower than 8.5, such as 6.0 and 4.5, seem more realistic for total forcing through manmade greenhouse gases.
2: In general, these models seem to be tuned for success at hindcasting the past, but without consideration for multidecadal oscillations. I think about .2, maybe .22 degree C of the warming from the early 1970s to the 2004-2005 peak was from upward swing of one or a combination of multidecadal oscillations, and the pause period starting in the slowdown just before cresting the 2004-2005 peak seems to me as likely to last for a similar amount of time.
I suspect a possible third reason: The graph does not state what the temperature is of. However, balloon and satellite datasets are usually of the lower troposphere. The IPCC models are not even named as CMIP5 ones, let alone which RCP / RCPs. The model curve looks familiar to me as a composite of CMIP5 models that has often been compared not only to lower troposphere data, but also surface data. This has me thinking that the model forecast is of surface temperature. During the period starting with the beginning of 1979, balloon data indicate that the surface-adjacent 100-200 meters of the troposphere have warmed about .03 degree/decade more than the “satellite-measured lower troposphere” as a whole. See Figure 7 of
This indicates surface temperature (after smoothing by a few years) being about .3 – .31 degree C warmer than in 1979, as opposed to ~.2 degree C. (The surface temperature dataset that agrees with this the best is (more like was but still is) HadCRUT3.) This is still cooler than composites of lower-RCP CMIP5 models, but I think figuring for this instead of lower troposphere being what is being predicted (and hindcasted), and correcting models so that they are aware of multidecadal oscillation(s) accounting for some early-1970s to ~2004-2005 warming, will get composites of lower-RCP (4.5 and 6) CMIP5 models into doing an impressively good job.

Reply to  Donald L. Klipstein
September 17, 2015 10:58 pm

re: “I think about .2, maybe .22 degree C of the warming from the early 1970s to the 2004-2005 peak was from upward swing of one or a combination of multidecadal oscillations” – When I look at, say, the sunspot cycle, where the cycle amplitude varies a lot from cycle to cycle, I understand that just eyeballing (or even exhaustively analysing) the temperature graph will not tell you much at all about how much of the late 20thC warming was from those oscillations. You guess 0.2-0.22 deg C. That’s a very tight range. I would have thought that we just don’t know to anything like that accuracy.

September 17, 2015 10:48 pm

Real clouds function way below the huge grid cell sizes of the climate models. A tropical cumulonimbus can short-circuit the surface with the upper layers and transport a lot of energy up or down. They are too small and too numerous to be processed by even the fastest computers.

September 17, 2015 11:36 pm

17 Sept: JoanneNova: Scandal Part 3: Bureau of Meteorology homogenized-the-heck out of rural sites too
The Australian Bureau of Meteorology have been struck by the most incredible bad luck. The fickle thermometers of Australia have been ruining climate records for 150 years, and the BOM have done a masterful job of recreating our “correct” climate trends, despite the data. Bob Fernley-Jones decided to help show the world how clever the BOM are. (Call them the Bureau of Magic)….

September 17, 2015 11:51 pm

What are the coupling coefficients? (numeric value)

JJM Gommers
September 18, 2015 1:48 am

What surprises me is the steep climb of 1°C for the period 1995-2025; this is such a big incremental change. You would expect, if such a result were possible, that the increase would be counteracted by the Stefan-Boltzmann equation, which contains the temperature T to the 4th power.

old construction worker
September 18, 2015 1:49 am

We don’t live in a greenhouse type atmosphere. We live in an atmosphere that more or less acts like a swamp cooler coupled with a chiller unit. I bet if some engineer could model that it would be closer to reality than computer models based on “CO2 drives the climate” theory.

Reply to  old construction worker
September 18, 2015 2:46 am

I agree that an honest engineer could model the climate closer to reality than the present crop of computer games. But I also think that a drunken plowboy (or farmhand these days) could best the IPCC also.

Reply to  old construction worker
September 18, 2015 2:12 pm

Where I live, the atmosphere is more like a sauna bath with the window cracked a little, a fan in the corner with a randomly variable rheostat controlling it, and a bucket of ice thrown in every once in a while in Winter.

September 18, 2015 1:54 am

Climatologists are as accurate in predicting the climate as seismologists are in predicting earthquakes.
All climate models costing taxpayers billions of dollars, sterling and euros have gone wrong. All predictions about climate have gone wrong. Today we are supposed to be seeing an ice-free Arctic summer, a rise of 4 meters in ocean levels, the desertification of the northern Mediterranean shores, 50 million climate refugees, food shortages, a warming Antarctica which is actually getting colder, snowless northern countries which in fact are having more snow and many other predictions that have gone awry.

richard verney
September 18, 2015 2:07 am

Presently, it is not known whether ENSO events cancel out and thus have no long term impact upon climate when viewed in the long run, or more particularly on a climatology timescale of say circa 30 to 50 or perhaps even 30 to 100 years.
However, there are reasons why ENSO events may not simply cancel each other out and why it may be the case that they do have an impact on short term climatology (ie., periods of 30 years, or at any rate less than 100 years).
1. Due to the difference in latent energy contained within the atmosphere and the oceans, the atmosphere cannot heat the oceans. It is well known that it is extremely difficult to heat water in a container open only to the atmosphere above by warm air from above.
2. A warm ocean surface (El Nino) heats the atmosphere above and since hot air rises, it also alters convection rates.
3. The same is not so with a cool ocean surface.
4. Consider a chest freezer. Open the lid, and since cold air sinks, there will be very little impact upon the temperature in the room (at least over short periods). Contrast this with the same chest freezer but one that has been converted to a BBQ at the bottom. Open the lid and it will have an immediate impact on the temperature of the room. One warms the atmosphere, the other does not.
Thus in summary, if there is a short period (let’s say 30 or so years) where there are more El Ninos than La Ninas (or where the El Ninos, or some of them, are particularly strong), on a short time scale (let’s say 30 years or so) one would expect to see warming. But even if there was exactly the same number of El Ninos as La Ninas (or they were of equal strength), it does not automatically follow that on short time scales (say circa 30 years) the effect is neutral; that La Ninas will cancel out El Ninos. They may do, but since the energy flux is different and since one may have a greater impact upon convection, and thereby energy transport, it does not automatically follow that ENSO cancels out on short climatology time scales.
Further, it should not be overlooked that if one views the satellite data (from 1979 to date), there is no steady linear warming trend. There is simply a one-off step change in and around the Super El Nino of 1998. Prior to that event temperatures were trending essentially flat. Following that event, temperatures are trending essentially flat. In the satellite data, one can clearly see an ENSO signal and one that has left a marked signature following the extremely strong 1997/1998 Super El Nino.
The satellite data supports the view that ENSO may leave a signature, and that ENSO does not necessarily cancel out when viewed on short climatology time scales.
I am just saying that the ENSO assumption is something requiring further consideration, and one should remain sceptical as to its correctness, at any rate as to its impact on the short climatology time scales with which we are dealing and for which we have some data.

DD More
Reply to  richard verney
September 18, 2015 11:01 am

“Thus in summary, if there is a short period (lets say 30 or so years) ”
Let’s say 1978 (low temp point in Hansen’s earliest charts) to the 1998 temp spike. This is the entire CAGW time frame, but only 20 years. Another 10 years for ENSO to cancel.

William Astley
September 18, 2015 2:11 am

Carbon Dioxide: contribution to general circulation model (GCM) warming: IPCC assertion 37%.
Highest possible warming based on fundamental science, rather than on fudging of the science to create an issue: 0.2C/3C = 6.7% (0.25 watts/m^2) without ‘feedbacks’. The actual best-estimate warming for a doubling of atmospheric CO2 is less than 0.1C.
If the assertion that the warming for a doubling of atmospheric CO2 without ‘feedbacks’ is 0.1C to 0.2C, and likely less than 0.1C, is correct (see below for support for that assertion), there is no CAGW problem.
The majority of the warming in the last 150 years was due to solar cycle changes, not to the increase in atmospheric CO2. There is no CAGW; there is in fact almost no AGW due to the increase in atmospheric CO2. If that assertion is correct, the warming is reversible if there is a sudden slowdown of, or interruption to, the solar cycle.
The GCM models have more than a hundred ‘variables’ and hence can be ‘tuned’ to produce 3C to 6C of warming for a doubling of atmospheric CO2. They could also be tuned to produce 0.1C of warming.
The one ring that rules the GCM is the initial so called 1-dimensional no ‘feedbacks’ study which determined the surface warming for a doubling of atmospheric CO2 is 1.2C, a forcing of 3.7 watts/m^2.
We had all assumed, or at least I had assumed, that the 1-dimensional no-‘feedbacks’ study was scientifically accurate, on the correct page.
I had assumed that the reason the planet has warmed less than the IPCC models predicted is that the earth resists forcing change (negative feedback) rather than amplifying it (positive feedback), through an increase in cloud cover, cloud albedo, and cloud duration in the tropics. That would, for example, explain why there has been almost no warming in the tropical region.
Negative feedback does not, however, explain 18 years without warming, and does not explain the fact that there has been only 1/5 of the predicted warming of the tropical troposphere at 5km. Those observational facts support the assertion that the 1-dimensional no-‘feedbacks’ calculation of the expected warming for a doubling of atmospheric CO2 is fundamentally incorrect.
The infamous without-‘feedbacks’ calculation of the cult of CAGW (this is the calculation that predicted 1.2C to 1.4C of surface warming for a doubling of atmospheric CO2) incorrectly, illogically, and against the laws of physics held the lapse rate constant to determine (fudge) the estimated surface forcing for a doubling of atmospheric CO2. There is no scientific justification for fixing the lapse rate to calculate the no-‘feedback’ forcing of greenhouse gases.
Convection cooling is a physical fact, not a theory, and cannot be ignored in the without-‘feedbacks’ calculation. The change in forcing at the surface of the planet is less than the change in forcing higher in the atmosphere, due to the increased convection cooling caused by greenhouse gases. We do not need to appeal to crank ‘science’ claiming there is no greenhouse gas forcing in order to destroy the cult of CAGW's ‘scientific’ argument that there is a global warming crisis to solve.
There is a forcing change due to the increase in atmospheric CO2, but it is almost completely offset by the increase in convection. The convection changes alter the lapse rate by about 3%, which reduces the surface forcing by a factor of four while the forcing higher in the atmosphere remains the same; therefore warming at the surface of the planet is only 0.1C to 0.2C for a doubling of atmospheric CO2, while the warming at 5 km above the surface is 1C. As a warming of 0.1C to 0.2C is insufficient to cause any significant feedback change, the zero-feedback response for a doubling of CO2 is ballpark the same as the with-feedback response.
P.S. The cult of CAGW's no-‘feedbacks’ 1-dimensional calculation also ignored the overlap of the absorption spectra of water vapor and CO2. As the planet is 70% covered in water, there is a great deal of water vapor in the atmosphere at lower levels, particularly in the tropics. Taking that overlap into account (before warming) in the no-‘feedbacks’ 1-dimensional calculation also reduces the surface warming for a doubling of atmospheric CO2 to 0.1C to 0.2C. Double trump. If both the water vapor/CO2 absorption overlap and the increased convection cooling of greenhouse gases are taken into account, the no-feedbacks warming for a doubling of atmospheric CO2 is less than 0.1C.

Collapse of the Anthropogenic Warming Theory of the IPCC

4. Conclusions
In physical reality, the surface climate sensitivity is 0.1~0.2K from the energy budget of the earth and the surface radiative forcing of 1.1 W/m2 for 2xCO2. Since there is no positive feedback from water vapor and ice albedo at the surface, the zero-feedback climate sensitivity CS (FAH) is also 0.1~0.2K. A 1K warming occurs in response to the radiative forcing of 3.7 W/m2 for 2xCO2 at the effective radiation height of 5km. This gives the slightly reduced lapse rate of 6.3K/km, from 6.5K/km, as shown in Fig.2.

The modern anthropogenic global warming (AGW) theory began from the one dimensional radiative convective equilibrium model (1DRCM) studies with the fixed absolute and relative humidity utilizing the fixed lapse rate assumption of 6.5K/km (FLRA) for 1xCO2 and 2xCO2 [Manabe & Strickler, 1964; Manabe & Wetherald, 1967; Hansen et al., 1981]. Table 1 shows the obtained climate sensitivities for 2xCO2 in these studies, in which the climate sensitivity with the fixed absolute humidity CS (FAH) is 1.2~1.3K [Hansen et al., 1984].
In the 1DRCM studies, the most basic assumption is the fixed lapse rate of 6.5K/km for 1xCO2 and 2xCO2. The lapse rate of 6.5K/km is defined for 1xCO2 in the U.S. Standard Atmosphere (1962) [Ramanathan & Coakley, 1978]. There is no guarantee, however, for the same lapse rate maintained in the perturbed atmosphere with 2xCO2 [Chylek & Kiehl, 1981; Sinha, 1995]. Therefore, the lapse rate for 2xCO2 is a parameter requiring a sensitivity analysis as shown in Fig.1.

The following are supporting data for the Kimoto lapse rate theory above (William: peer-reviewed papers, published more than 20 years ago, that support the assertion that convection cooling increases when there is an increase in greenhouse gases, and that a doubling of atmospheric CO2 will cause surface warming of less than 0.3C).
(A) Kiehl & Ramanathan (1982) shows the following radiative forcing for 2xCO2.
Radiative forcing at the tropopause: 3.7 W/m2.
Radiative forcing at the surface: 0.55~1.56 W/m2 (average 1.1 W/m2).
This denies the FLRA, which gives uniform warming throughout the troposphere in the 1DRCM and 3DGCM studies.
(B) Newell & Dopplick (1979) obtained a climate sensitivity of 0.24K, considering the evaporation cooling from the surface of the ocean.
(C) Ramanathan (1981) shows a surface temperature increase of 0.17K with the direct heating of 1.2 W/m2 for 2xCO2 at the surface.

Transcript of a portion of Weart’s interview of James Hansen.

This was a radiative convective model, so where’s the convective part come in. Again, are you using somebody else’s…
That’s trivial. You just put in…
… a lapse rate…
Yes. So it’s a fudge. That’s why you have to have a 3-D model to do it properly. In the 1-D model, it’s just a fudge, and you can choose different lapse rates and you get somewhat different answers (William: different answers that invalidate CAGW; the 3-D models have more than 100 parameters to play with, so any answer is possible. The 1-D model is simple, so it is possible to see the fudging/shenanigans). So you try to pick something that has some physical justification (William: you pick what is necessary to create CAGW; the scam fails when the planet abruptly cools due to the abrupt solar change). But the best justification is probably trying to put the fundamental equations into a 3-D model.

Reply to  William Astley
September 19, 2015 4:48 pm

Amen. Thanks for posting that. It proves that even the canonical assumption, that sensitivity to doubled CO2 in the absence of feedbacks is ~1C, is wrong! And not only due to the false fixed-lapse-rate assumption, but also due to the false assumption of fixed atmospheric emissivity, a basic mathematical error in the calculation of the Planck feedback parameter!

September 18, 2015 2:41 am

If the UK Met Office weather models are anything to go by, it’s their forecasts of the tracking of the jet stream where they’re going wrong. They tend to forecast that when high pressure builds it will come up from the south and so will push the track of the jet stream northwards. But recently there has been a trend of the jet stream taking a more southward track than their models forecast, at least during the summer months. This has increased the number of high-pressure patterns forming to the north of the jet stream, which helps to bring down cooler air from the north rather than pushing warmer air up from the south.

September 18, 2015 2:42 am

The great majority of model runs, from the high-profile UK Met Office’s Barbecue Summer to Roy Spencer’s Epic Fail analysis of the tropical troposphere, have produced global temperature forecasts that later turned out to be too high. Why?

As the essay points out, there are so very many things we don’t understand about the weather machine. There are most likely factors that we don’t even know about, much less understand. Then there are things that we claim to know about, but are very much wrong about. Take CO2, which the essay says is 37% of the models (whatever that means). We claim to understand CO2 and its function concerning weather and climate, but we are very much wrong on that. There are several credible theories of climate that do not have CO2 doing what the IPCC thinks CO2 does, and I wager one of those theories will win out after we return to climate science and stop giving the paymasters the answers they want for political reasons.
Why don’t the models work? They are political constructs and not scientific ones. (my best answer)
~ Mark

Reply to  markstoval
September 18, 2015 7:40 am

From the article, I would guess that the 37% came from the fact that before fiddling, the models created only about 1/3rd of the observed warming when only CO2 was changed. I’m guessing that “about a third” from the text of the article, and 37% from the table, refer to the same thing.

Reply to  MarkW
September 18, 2015 1:25 pm

Thanks. I bet that is it.

Reply to  MarkW
September 19, 2015 6:02 am

Yes, the “about a third” does relate to the 37%. The 37% is the proportion of the models’ predicted 0.2 deg C per decade that comes from CO2 itself.

henri Masson
September 18, 2015 2:50 am

September 17, 2015 at 8:25 pm
“One thing I haven’t seen in a discussion of chaotic system models, is the idea of attractors. After enough runs, valid models of chaotic systems should show where the attractors are”.
If you consider the Vostok temperature data and submit them to a “phase plane” analysis, you’ll find two attractors: the glacial periods and the temperate ones. The system evolves from one to the other along two tracks: a progressive cooling and a fast heating one (which can also be seen directly from the time series).
All the rest is only chaotic fluctuations around the attractors, or around the trajectories going from one attractor to the other (not to be confused with random fluctuations, in which case the phase plane would be filled completely and evenly).
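The two-attractor picture can be illustrated with a toy bistable system. The sketch below is not the Vostok phase-plane analysis; it just simulates a noisy double-well system (all parameter values are arbitrary choices for illustration) whose trajectory clusters around two attractors, with occasional noise-driven jumps between them:

```python
import numpy as np

# Toy illustration of a bistable system with two attractors, loosely
# analogous to the glacial/temperate picture described above. This is NOT
# the Vostok analysis -- just a minimal sketch of how noisy dynamics can
# cluster around two states, visible in a phase-plane (x vs dx/dt) plot.

def simulate_double_well(n_steps=50_000, dt=0.01, noise=0.35, seed=0):
    """Overdamped particle in V(x) = x^4/4 - x^2/2, driven by noise.

    The deterministic drift dx/dt = x - x^3 has stable fixed points
    (attractors) at x = -1 and x = +1, separated by a barrier at x = 0.
    """
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = 1.0                      # start in the "warm" basin
    for i in range(n_steps):
        drift = x[i] - x[i] ** 3    # -dV/dx
        x[i + 1] = x[i] + drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
    return x

x = simulate_double_well()
in_minus = np.sum(x < -0.5)         # time spent near the x = -1 attractor
in_plus = np.sum(x > 0.5)           # time spent near the x = +1 attractor
between = len(x) - in_minus - in_plus
print(in_minus, in_plus, between)
```

Plotting x against its increments would give the phase-plane view described above: two clouds of points around x = -1 and x = +1, rather than an evenly and completely filled plane.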

Julian Williams in Wales
September 18, 2015 3:23 am

A very well written summary – thank you. I have never seen the models explained in this way, and it gives me a much clearer understanding of how the models have been constructed out of junk conjecture.

M Seward
September 18, 2015 4:01 am

I broadly agree with the author’s assessment of the models and the ‘science’ that is built into them. My suspicion, however, is that there is also an issue with the mesh size, in that it is too coarse to possibly model much of the water-cycle mechanisms, particularly in the tropics. As a consequence, fiddle/fudge factors have to be introduced to guesstimate their quantitative contribution. Basically, if the mechanism takes place at a smaller scale than the mesh size, then the Navier-Stokes equations are no longer governing the maths, but some fudge factor is.
From my work using CFD I came across the mesh-size phenomenon, where a first-cut mesh model (my very first attempt after reading the software manual) converged to a solution that was manifestly wrong, since I had model test data to compare it with. In my case the software was always using the N-S model, but it was the mesh size that was causing poor results. I had gone ‘coarse’ in order to reduce computation time and got burned for my trouble. It was a salutary lesson and a sharp introduction to the ‘uncertainty principle’ of mesh models.
Accuracy is inversely related to computation time and thus cost and convenience.
On the ‘plus’ side (for ‘climate science’, that is), it may also serve a useful marketing purpose, in that it allows junk output to be marketed, and its various financial and reputational rewards reaped, with a viable ‘get out of gaol free’ excuse kept in the back pocket. And let’s face it, the press release does not need to contain such boring detail: ‘Oh, we never guessed that our models were too coarse and converging to false solutions. We had no way of verifying them against future field data.’ That of course is true, but micro models could be test-run against others with different mesh sizes for solution comparison and iteration-cycle behaviour.
Just thinking out loud folks.
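The mesh-size effect described in the comment above can be seen in miniature in any discretized calculation: a too-coarse grid converges to a confident but wrong answer, and refining the mesh shrinks the truncation error at the price of computation time. The sketch below uses a trapezoid-rule integral as a stand-in for a grid-based solver (it is not CFD; the function and grid sizes are arbitrary illustrative choices):

```python
import numpy as np

# Miniature illustration of the mesh-size effect: the same discretized
# calculation, run on a coarse and a fine grid, converges toward the true
# answer only as the mesh is refined. The "model" here is just a
# trapezoid-rule integral of sin(x) on [0, pi] (exact answer: 2), a
# stand-in for any grid-based solver, not an actual CFD computation.

def trapezoid(f, a, b, n_cells):
    x = np.linspace(a, b, n_cells + 1)   # n_cells intervals -> n_cells+1 nodes
    y = f(x)
    h = (b - a) / n_cells
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

exact = 2.0
coarse = trapezoid(np.sin, 0.0, np.pi, 4)     # very coarse mesh
fine = trapezoid(np.sin, 0.0, np.pi, 256)     # refined mesh

print(f"coarse error: {abs(coarse - exact):.4f}")  # O(h^2) truncation error
print(f"fine error:   {abs(fine - exact):.2e}")
```

The coarse run still "converges" to an answer; it is simply the wrong one, which is exactly why a converged solution on its own proves nothing about mesh adequacy.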

Reply to  M Seward
September 19, 2015 6:17 am

No mesh size will ever work over decadal+ time scales. See the IPCC quote in footnote 2. But the problems go much much further than just mesh size.

henri Masson
September 18, 2015 4:13 am

“henri Masson wrote, September 18, 2015 at 2:50 am
“If you consider the Vostok temperature data and submit them to a “phase plane” analysis, you’ll find two attractors: the glacial periods and the temperate ones. The system evolves from one to the other along two tracks: a progressive cooling and a fast heating one (which can also be seen directly from the time series)”.
For those interested, please find below the Dropbox link presenting, very briefly and graphically, my story in the form of a few PowerPoint slides:

John W. Garrett
September 18, 2015 4:27 am

Excellent piece !!!
Anybody who has any experience of computer models of highly complex, dynamic, multivariate, non-linear systems knows full well that they are largely a waste of time.
“Give me four parameters, and I can fit an elephant. Give me five, and I can wiggle its trunk.”
-John von Neumann
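Von Neumann’s quip is easy to demonstrate: give a polynomial as many parameters as there are data points and it will “hindcast” them perfectly while forecasting nonsense. The y-values below are invented, noise-like numbers, not real data:

```python
import numpy as np

# Von Neumann's elephant in miniature: with as many parameters as data
# points, a model can "explain" any record perfectly -- and still have no
# predictive skill. The five y-values below are made-up, noise-like
# numbers, not real temperatures.

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 0.3, 0.2, 0.5, 0.4])      # wiggly, noise-like "record"

coeffs = np.polyfit(x, y, deg=4)             # 5 parameters for 5 points
fitted = np.polyval(coeffs, x)

in_sample_error = np.max(np.abs(fitted - y)) # essentially zero: a perfect hindcast
extrapolated = np.polyval(coeffs, 8.0)       # the "forecast": about -72.5

print(f"max in-sample error: {in_sample_error:.2e}")
print(f"value predicted at x = 8: {extrapolated:.1f}")
```

A perfect fit to history, and a forecast that dives off the bottom of any plausible chart: exactly the failure mode the quote warns about.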

September 18, 2015 5:07 am

The IPCC says:
“we are dealing with a coupled nonlinear chaotic system”
Easy to “kill” on its own terms. Ask them: “what are the coupling coefficients used in the models?” Electronics deals with this all the time in transformers; we call it the “coupling coefficient”. Climate has way more processes, each with its own coupling. And we haven’t even touched “nonlinear”.
Defeating them on their own terms is the easiest way.

Reply to  M Simon
September 18, 2015 2:16 pm

” Give me six, and I can put Babar to bed without his dinner for breaking his sister’s piano.”

Reply to  Menicholas
September 19, 2015 2:01 am

LOL. I can give you 5 with one hand.

Reply to  Menicholas
September 19, 2015 6:34 pm


September 18, 2015 5:18 am

Very nice. I was keeping a list of “what was good” in their AGW world (to paraphrase Gene Kranz), but the blank sheet of paper was misplaced.
I usually evaluate models based on their components’ contributions to the stated error rather than to the stated product. But in this case, since the product is so divergent from observed reality, maybe their skill is error. To the best of current computational ability, we seem to have identified that CO2 is NOT a significant factor in short-term climate trends. This will have to be verified by decades of observations.
You should consider adding a major source of unquantified error to your discussion: the spatial error introduced by the models’ gridding systems is generally not defined or accounted for. Spatial statistics is a specialized discipline, and the value of gridded data versus data aggregated by major correlations (land/sea, altitude, latitude, population density, etc.) needs some investment. At a minimum, the grids need to be adjusted for the modern ellipsoidal earth models. We’re spinning through space on an egg, not a cue ball.

henri Masson
September 18, 2015 5:28 am

A fable about proxies and other indicators, illustrating the importance of systemic thinking:
“ The Blind Men and the Elephant ”
a Hindu fable whose subject originates from Jainism,
retranscribed by the American poet John Godfrey SAXE ( 1816 – 1887 )
There were six men of Hindustan
To learning much inclined,
Who went to see the Elephant
( Though all of them were blind ),
That each by observation
Might satisfy his mind.
The First approached the Elephant,
And happening to fall
Against his broad and sturdy side,
At once began to bawl :
“ God bless me ! – but the Elephant
Is very like a wall ! ”
The Second, feeling of the tusk,
Cried : “ Ho ! – what have we here
So very round and smooth and sharp ?
To me ‘t is mighty clear
This wonder of an Elephant
Is very like a spear !
The Third approached the animal,
And happening to take
The squirming trunk within his hands,
Thus boldly up and spoke :
“ I see, ” quoth he, “ the Elephant
Is very like a snake
”The Fourth reached out his eager hand,
And felt about the knee.
“ What most this wondrous beast is like
Is mighty plain, ” quoth he ;
“ It is clear enough the Elephant
Is very like a tree ! ”
The Fifth, who chanced to touch the ear,
Said : “ E’en the blindest man
Can tell what this resembles most ;
Deny the fact who can,
This marvel of an Elephant
Is very like a fan ! ”
The Sixth no sooner had begun
About the beast to grope,
Then, seizing on the swinging tail
That fell within his scope,
“ I see, ” quoth he, “ the Elephant
Is very like a rope ! ”
And so these men of Hindustan
Disputed loud and long,
Each in his own opinion
Exceeding stiff and strong,
Though each was partly in the right,
And all were in the wrong !
So, often in theologic wars
The disputants, I weep,
Rail on in utter ignorance
Of what each other mean,
And prate about an Elephant
Not one of them has seen !”
And, of course, I personally think that IPCC suppo(r)ters are involved in a theological war (all that they do and say is ultimately aimed at venerating Goddess Gaia, to atone for all the excesses of our industrial, economically developed world, in which they have huge difficulties finding their own place…. without grants and subsidies)

September 18, 2015 5:56 am

In light of recent research, should ozone depletion be included in the list?
“Stratospheric Ozone Depletion: The Main Driver of Twentieth-Century Atmospheric Circulation Changes in the Southern Hemisphere” Polvani et al, 2011

September 18, 2015 6:51 am

It is true that the cyclic nature of the various oceanic cycles means that, over time, they play little or no role in the climate. The problem comes from the tuning of the models. There was no attempt made to remove the effects of El Nino/PDO/AMO and other cycles from the raw data prior to using that data to tune the models. As a result, short-term warming that was caused by the oceanic cycles was assumed to have been due to CO2, and consequently the modellers assumed that the warming from the mid-1970s to around 2000 would continue, even accelerate.

September 18, 2015 7:02 am

The fact that ocean currents and ENSO contribute 0% to the climate models’ predicted future makes the models of 0% value in predicting global temperatures for the near term, the decadal term, or the 60-year climate cycle period. We might as well stop comparing their predictions, which are running 4 times higher than reality, because the models will be out in left field constantly unless their methods change. In my opinion they will not change, because they are getting too much free money without any accountability for their clearly failed predictions. The models are paraded as PR material to the public, to exaggerate the non-existent climate threat. It seems to me that the reason they want to stop all future climate debate is to prevent the public from finding out how uncertain their models are, and how uncertain the science really is, which they have been pushing on the public, the politicians and the media as settled. Science and models this uncertain should never have played a part in public policy. We will pay dearly for this lack of oversight and accountability to the public of these alarmist scientists.

September 18, 2015 7:15 am

Just flip a coin and be right half the time. Cost to taxpayers: as low as 1 cent (assuming you want to reuse it indefinitely).

September 18, 2015 7:29 am

Having written and designed numerical models for petroleum reservoirs, I know it was necessary to take the model back and run it to confirm a history match against the actual results before the model’s projections were used to make economic decisions. It appears that the climate models’ failure to project even remotely accurately has caused the programmers to abandon attempts to correct the models. Rather than tweak the models to match historical reality, today they seem to be going back and adjusting the temperature history record to match the models.

Curious George
September 18, 2015 7:44 am

Climate models are only as reliable as people maintaining them. A failure to correct a known flaw – and running known flawed models thousands of times instead – does destroy any confidence.

September 18, 2015 7:45 am

Ignoring cyclic events in the models means that the models are only good for predicting steady-state averages of climate, i.e. weather averaged over a long enough period that all the cycles have averaged out.
That means that any claims that we are going to warm X degrees in the next Y years are worthless, since the models simply are not capable of discerning what the temperature is going to be in a particular year.
Beyond that, as long as CO2 levels are changing, we are not in a steady state condition.
If the models were any good (which they aren’t), the only thing they would be useful for is a statement of the form: once CO2 levels have stabilized at a particular level, wait a couple hundred years for things to settle, then average the temperatures over 3 or 4 centuries, and this is what the average will be.

September 18, 2015 8:05 am

Answer: Reliable enough to ignore in policy-messaging overreach efforts

Tom O
September 18, 2015 8:15 am

How reliable are the models? A very simple fact – you cannot model that which you do not know. You cannot model based on anything but DATA, and proxy information is NOT data. How reliable, then, are climate models? Far less reliable than expecting to make a windfall at any casino.

September 18, 2015 8:20 am

You make one serious error in the way you lay out your table. GCRs and aerosols belong in completely different boxes, because GCRs are not well understood and have a questionable effect (and what we know of them at all has largely come from work done after the GCMs were originally written) and hence are neglected but aerosols are not. In fact, in the GCMs aerosols are included and produce a strong cooling effect.
This is one of their major problems. One of the ways they balanced the excessive warming in the reference period is by tweaking the aerosol knob and turning it up until it balanced the rapidly increasing CO2 driven effect. Then as time passed, CO2 was kept going up and aerosols were not across the rapidly warming part of the late 80’s through the 90’s. In this way, the CO2 sensitivity had to be turned up still more to match the rapid rise against the large background cancellation that allowed the 70’s and early 80’s to be fit well enough not to laugh at the result.
Of course, then came the 1997-1998 super-ENSO that produced one last glorious burst of warming, and then warming more or less stopped and the infamous “pause” began. CO2 continued increasing, faster than ever, aerosols didn’t, the climate models that used this pattern clearly showed the world metaphorically catastrophically “cooking” from a climate shift (if an increase in average temperature that represents moving from one city in NC to another 40 miles away can be called a climate shift at all) but the world refused to cooperate. Since the late 90’s temperatures have been at the very least close to flat, even as frantic efforts have continued to find some excuse not to fail the models, if necessary by changing the data they do not predict.
The models clearly overestimate the impact of aerosols. This is bad news for the entire hypothesis that radiative chemistry is the dominant forcer in climate change, because if aerosols are not a major cooling factor, then the large CO2 sensitivity needed to fit the rapid increase in temperature in the single 15-year stretch in which the planet significantly warmed at all in the last 70 years (back to 1945) is simply wrong, far too high. They have to go back to a sensitivity that is too low to produce a catastrophe, one that fits the other 55 of the last 70 years decently but fails to do a good job on the 15. Which is just as well, because a burst of warming almost identical to the late 20th-century warming occurred over a 15-year period in the early 20th century, and that warming was already being skated right over by the GCMs (although this hindcast error was ignored).
If you reduce the aerosol’s contribution to “very little”, you have to reduce CO2 sensitivity including all feedbacks to ballpark 1 to 1.5 C per doubling in order to come close to fitting — or at least not disagreeing with to the point of obvious failure — the data. But this is still in GCMs that, as you point out, do not credibly include the modulation of cooling efficiencies of things like the multidecadal oscillations and thermohaline transport that are self-organized phenomena that obviously have a huge impact on the absorption, transport, and dissipation of heat and that are clearly correlated with both climate change and weather events, major and minor, across the instrumental and proxy-inferred record. Once those were accurately included, what would sensitivity be? What would the net feedbacks be? Nobody knows.
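For reference, the “per doubling” arithmetic used in the paragraph above rests on the conventional assumption that equilibrium warming scales with the logarithm of CO2 concentration. A minimal sketch (the ppm values are illustrative round numbers, not model output):

```python
import math

# The back-of-envelope arithmetic behind "per doubling" sensitivity claims.
# Equilibrium warming is conventionally taken as logarithmic in CO2
# concentration: dT = S * log2(C / C0), where S is the sensitivity per
# doubling. The concentrations below are illustrative round numbers
# (preindustrial ~280 ppm, recent ~400 ppm), not tuned model output.

def warming(sensitivity_per_doubling, c0_ppm, c_ppm):
    return sensitivity_per_doubling * math.log2(c_ppm / c0_ppm)

doublings = math.log2(400 / 280)   # ~0.51 doublings of CO2 so far
low = warming(1.2, 280, 400)       # the "ballpark 1 to 1.5 C" range above
high = warming(3.0, 280, 400)      # a typical IPCC central estimate

print(f"{doublings:.2f} doublings; low S: {low:.2f} C, high S: {high:.2f} C")
```

With the reduced sensitivity, the equilibrium warming attributable to CO2 so far comes out well under a degree; with the higher sensitivity, the same half-doubling yields roughly 1.5 C.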
And does it matter? Research demonstrating fairly convincingly that aerosol cooling is overestimated is by now a year or two old, but people are still trying to argue that increased volcanic aerosols are the cause of the pause (when they aren’t trying to erase the pause altogether). The only real problem is that they can’t find the volcanoes to explain the increase in aerosols, or demonstrate that atmospheric transmittance at the top of Mauna Loa has been modulated, or come up with a credible model for the impact of even large volcanoes on climate except to note that it is transient and very small until the volcanoes get up to at least VEI 5, if not VEI 6 or higher. And those volcanic eruptions are rare.
So no, it does not matter. What matters is that in the world’s eyes, the energy industry is accurately portrayed in Naked Gun 2 1/2 — a cabal that would willingly use assassination and lies and bribes and corruption to maintain its “iron grip” on society and the wealth of its owners. One isn’t just saving the world ecologically — this may not even be the real point of it all. It is all about the money and power. In the real world, it is a lot simpler to co-opt and subvert a political movement than it is to fight it. And that’s what the power industry has done.
Who makes, and stands to make, the most money out of completely retooling the energy industry to be less efficient? Oh, wait, would that be the energy industry? Do they really give a rat’s ass if they get rich selling power generated by solar panels (if that’s what we are taught to “want”) rather than coal? Not at all. They’ll make even larger profits from the margin on more expensive power. They want power to be as expensive as possible, which means as scarce as possible, especially in a nearly completely inelastic market.
If climate catastrophism didn’t exist, the energy industry would probably invent it. Come to think of it, they probably did. And there is no Leslie Nielsen to come to the rescue.

Reply to  rgbatduke
September 18, 2015 8:57 am

I have sat in a room with a US Senator and nuclear power executives from all the major US energy companies, who, when asked what they wanted, universally answered “a price for carbon”. Why? It was explained that the regulated market for power in the US restricted their ability to obtain project financing from today’s customers for building new plants without showing a cost to consumers for the alternatives.

Reply to  rgbatduke
September 18, 2015 1:44 pm

RGB, I was sorry to see the last three paragraphs of your comment. Your comments on scientific matters have always been objective, rational and illuminating. Your views on the energy industry are ill-informed and seem to adopt the naïve “evil fossil fuel company” paradigm. The “energy” industry is far from monolithic, and while all companies will try to influence policy to their advantage, the interests of the different segments (e.g. wind, solar, nuclear, biomass, fossil fuel), and indeed of the companies within those segments (e.g. international major oil companies, national oil companies, independent operators), are very different.
What is true is that the energy companies at the moment ,by and large, blindly go along with the consensus CAGW meme as a basis for looking at the future and where convenient will use it to influence politicians where it suits their particular ends.
An interesting example of how belief in CAGW influences individual companies is Shell Oil, who are betting billions on their Arctic offshore drilling program while other companies have withdrawn. They must believe that Arctic sea ice is going to decrease in the next 20 years or so. I think the opposite.
For forecasts of the timing and extent of the coming cooling, see

Reply to  Dr Norman Page
September 18, 2015 2:17 pm

For the most part, the actual companies that are building wind and solar are not the same companies that are building oil, gas and coal.
Conflating all companies that build plants that make electricity into a monolithic “power industry” is the kind of tactic used by one who either has no knowledge about which he is talking, or who hopes that his listeners don’t.

Reply to  rgbatduke
September 18, 2015 2:15 pm

Do you have any evidence that the power industry is behind the global warming movement? Or are you just letting your normal paranoia take over again?

Reply to  rgbatduke
September 19, 2015 7:05 am

Thanks for your comment, rgb. In my article “understood” was in the context of prediction. Aerosols may well have a strong cooling effect, but what matters is how much they contribute (+ or -) to the predicted future warming. Since the modellers don’t know how aerosols will change in future they can’t predict their future impact. Since GCRs are known to create aerosols, and also can’t be predicted, I lumped the two together for simplicity. Maybe I should have just done them separately. They are both “understood : No; contribution : 0%” because no-one knows how they will change in future, and there is no contribution (+ or -) from them to future warming in the models. The three non-zero factors I cited do deliver 100% of the predicted future warming, according to the IPCC.

henri Masson
September 18, 2015 8:37 am

Tom O said (September 18, 2015 at 8:15 am): "You cannot model based on anything but DATA, and proxy information is NOT data."
And when the system is a dynamical one (a chaotic system), it responds so monstrously to the tiniest change in initial conditions that, given the experimental and data-processing errors associated with those conditions, you cannot make any prediction. Also, the data are not spread normally (Gaussian distribution) around a given mean value or trend line, so you cannot define any confidence interval, because the underlying statistical hypotheses are not met.
And, as I recalled and showed earlier in this discussion (henri Masson, September 18, 2015 at 2:15 am and at 4:13 am), the climate system is dynamical in nature.
This means that speaking about scenarios fed into models for limiting the global temperature rise to 2°C by the end of the century (or by a doubling of CO2) is absolute mathematical and statistical nonsense. (By the way, the "magic limit" is now 1.5°C, because even a large number of the models with their latest settings predict an increase in temperature of less than 2°C.)
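The sensitivity to initial conditions described above can be illustrated with the logistic map, a standard toy chaotic system. This is a minimal sketch of the general phenomenon, not a climate model: two trajectories that start one part in ten billion apart become completely decorrelated within a few dozen steps.

```python
# Sensitivity to initial conditions in the logistic map x -> r*x*(1-x).
# With r = 4 the map is fully chaotic: nearby trajectories diverge
# exponentially until the difference saturates at order 1.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # perturb by one part in ten billion

# Early on the trajectories are indistinguishable; by step ~40 the
# difference is as large as the attractor itself.
for n in (0, 10, 20, 30, 40, 50):
    print(n, abs(a[n] - b[n]))
```

The practical point is the one the comment makes: when measurement error in the initial state exceeds one part in ten billion (as it always does for climate observations), a chaotic system's detailed trajectory cannot be predicted far ahead no matter how good the model equations are.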

Reply to  henri Masson
September 19, 2015 6:39 pm

Even more preposterous is the idea that 2 degrees of warming will somehow be a problem, let alone a catastrophe, on a planet which has large areas perpetually frozen solid, and much of whose habitable surface is cold enough, during large stretches of any given year, to kill anyone caught without sufficient protection and stores of supplies.

September 18, 2015 8:54 am

It seems to me that if you ignore ocean currents and ENSO in your models, you end up barking up the wrong tree with your climate modeling. We are all well aware of the background warming of about 0.75 C/decade during the past century. We are also aware that, for reasons we do not yet completely understand, this background temperature has not always warmed but really fluctuates, as we saw with the Little Ice Age, Medieval Warm Period, Middle Ages cold period, Roman Warm Period, etc. Man was not responsible for these fluctuations. Recent evidence shows that El Ninos, especially the strong ones, raise this background global temperature in a series of steps. We are also aware that superimposed on the background temperature is a 60-year climate cycle with approximately 30 years of warmer temperatures and 30 years of cooler temperatures. These changes are caused by changes in ocean temperatures and currents. So in a 100-year period the rate of temperature change is greatly modified, and the expected rise is much less than if you ignore them in your models. Two cold troughs, centered around 1910 and 1979, greatly modified our climate during the last 100 years. There will be two during the next 100 years. If you cannot simulate these, your temperature predictions are worthless and all your alarmism is unjustified.
So along come the alarmists, and they first blamed mankind for the warming in the post-1970s, which was really caused by the major oceans when both were in their warm mode, plus 3-4 strong El Ninos. They then changed their tune and blamed mankind for all the warming of the background (0.75 C/DECADE) after the start of the industrial revolution, going back some 100 years. We know this to be wrong also, since the planet has been naturally warming since the Little Ice Age. Now they have again changed their tune, blaming all weather events and extreme events on mankind and telling us their models say so.
If you use models that do not reflect reality, your output is worthless, and the public should be told so.

Reply to  herkimer
September 18, 2015 8:56 am

The background warming should read 0.75 C/century, not per decade.
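The picture herkimer describes can be sketched as simple arithmetic: a linear background trend of 0.75 C per century with a 60-year ocean cycle superimposed. All the parameters below (cycle amplitude, trough year) are illustrative assumptions chosen to echo the comment, not values fitted to any dataset.

```python
import math

def toy_temperature_anomaly(year, trend_per_century=0.75,
                            cycle_amplitude=0.15, cycle_period=60.0,
                            trough_year=1910.0):
    """Toy model: linear background warming plus a 60-year ocean cycle.

    All parameter values are illustrative, not fitted to observations.
    """
    background = trend_per_century * (year - 1900) / 100.0
    # Negative cosine so the cycle bottoms out at the trough year,
    # putting cold troughs roughly 60 years apart (e.g. ~1910, ~1970).
    cycle = -cycle_amplitude * math.cos(
        2 * math.pi * (year - trough_year) / cycle_period)
    return background + cycle

# The cycle alternately masks and amplifies the background trend:
# warming looks fast from trough to peak and stalls from peak to trough.
for y in (1910, 1940, 1970, 2000):
    print(y, round(toy_temperature_anomaly(y), 3))
```

The design point is the one the comment makes: over any ~30-year window the apparent rate of change is dominated by the cycle, so extrapolating a trough-to-peak segment as if it were all background trend overstates the century-scale warming rate.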

September 18, 2015 9:12 am

Your article has much better writing than prior articles — so much better I suspected a ghost writer!
Unfortunately, your science knowledge and logic have not improved, so the greatly improved writing quality now makes you dangerous!
YOU wrote:
“Carbon Dioxide (CO2) : At last we come to something which is quite well understood. The ability of CO2 to absorb and re-emit a specific part of the light spectrum is well understood and well quantified, supported by a multitude of laboratory experiments. [NB. I do not claim that we have perfect understanding, only that we have good understanding]. In summary – Understood? Yes. Contribution in the models : about 37%”
Your statement is wrong.
CO2 is NOT “quite well understood”.
You seem confident scientists know what CO2 does to Earth’s average temperature.
You are wrong.
If CO2 was really so well understood, this website probably would not exist.
Climate science would be “settled”.
Mr. Watts would probably have a blog where he posted pictures of his family and his vacations. (ha, ha)
Geologists and other non-climate-modeler scientists (i.e., real scientists), from proxy studies of Earth's climate history, report huge past changes in CO2 levels with absolutely no correlation with average temperature.
Do you dismiss all the work of non-climate-modeler scientists over the past century? If so, you are practicing "data mining".
Laboratory experiments prove little about what CO2 actually does to the average temperature.
They suggest warming, but with a rapidly diminishing warming effect as CO2 levels increase.
There is no scientific proof that manmade CO2 is the dominant cause of the minor warming over the past 100 years.
Where is that proof written down for us to see?
No actual proof exists (based on what the word “proof” means in science).
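The "rapidly diminishing warming effect" mentioned above reflects the roughly logarithmic shape of CO2 radiative forcing. A minimal sketch using the standard simplified expression dF = 5.35 ln(C/C0) W/m^2 (Myhre et al. 1998); the sensitivity factor of 0.5 C per W/m^2 below is a purely illustrative assumption, not a measured value, which is exactly the commenter's point:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified logarithmic CO2 radiative forcing, dF = 5.35*ln(C/C0),
    in W/m^2 (a textbook approximation, not an exact result)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Warming per additional 100 ppmv, assuming an ILLUSTRATIVE sensitivity
# of 0.5 C per W/m^2. The real sensitivity is the disputed unknown.
LAMBDA = 0.5
for low in (280, 380, 480, 580):
    high = low + 100
    dT = LAMBDA * (co2_forcing(high) - co2_forcing(low))
    print(f"{low} -> {high} ppmv: {dT:.2f} C")
```

Because the forcing is logarithmic, each successive +100 ppmv yields a smaller temperature increment than the last; how large any of those increments is in degrees depends entirely on the sensitivity factor, which is assumed here, not derived.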
Is there an upper limit of CO2 concentration where there is no more “greenhouse” warming, or too little warming to be measured?
— What is that upper limit?
Does the first 100 ppmv of CO2 cause warming?
— Probably.
How much warming does the next +100 ppmv of CO2 cause?
— No one knows.
Why was there a mild cooling trend from 1940 to 1976?