How reliable are the climate models?

Guest essay by Mike Jonas

[Lead graphic: 102 IPCC climate model runs vs observed temperature ("michaels-102-ipcc-models-vs-reality")]

There are dozens of climate models. They have been run many times. The great majority of model runs, from the high-profile UK Met Office’s Barbecue Summer to Roy Spencer’s Epic Fail analysis of the tropical troposphere, have produced global temperature forecasts that later turned out to be too high. Why?

The answer is, mathematically speaking, very simple.

The fourth IPCC report [para 9.1.3] says : “Results from forward calculations are used for formal detection and attribution analyses. In such studies, a climate model is used to calculate response patterns (‘fingerprints’) for individual forcings or sets of forcings, which are then combined linearly to provide the best fit to the observations.”

To a mathematician that is a massive warning bell. You simply cannot do that. [To be more precise, because obviously they did actually do it, you cannot do that and retain any credibility]. Let me explain :

The process was basically as follows :

(1) All known (ie. well-understood) factors were built into the climate models, and estimates were included for the unknowns (The IPCC calls them parametrizations – in UK English : parameterisations).

(2) Model results were then compared with actual observations and were found to produce only about a third of the observed warming in the 20th century.

(3) Parameters controlling the unknowns in the models were then fiddled with (as in the above IPCC report quote) until they got a match.

(4) So necessarily, about two-thirds of the models’ predicted future warming comes from factors that are not understood.
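
The arithmetic behind step (4) can be written out as a minimal sketch. The numbers are illustrative round figures only (roughly 0.6 deg C of 20th-century warming is assumed, along with the one-third figure from step (2)):

```python
# Back-of-envelope version of steps (2)-(4), with illustrative round numbers.
observed_warming = 0.6                      # deg C over the 20th century (assumed)
from_known_factors = observed_warming / 3   # what the untuned models produced
from_unknowns = observed_warming - from_known_factors  # supplied by tuning

print(from_unknowns / observed_warming)     # -> 0.666..., i.e. about two-thirds
```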

Now you can see why I said “You simply cannot do that”: When you get a discrepancy between a model and reality, you obviously can’t change the model’s known factors – they are what they are known to be. If you want to fiddle the model to match reality then you have to fiddle the unknowns. If your model started off a long way from reality then inevitably the end result is that a large part of your model’s findings come from unknowns, ie, from factors that are not understood. To put it simply, you are guessing, and therefore your model is unreliable.
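
As a toy version of the “combined linearly to provide the best fit” step quoted above, consider the sketch below. The fingerprints and “observations” are invented series, not real model output; the point is only that a least-squares fit will scale the poorly-understood pattern by whatever factor closes the gap:

```python
# Toy fingerprint regression: observations are fitted as a linear combination
# of response patterns, and the fit inflates the not-understood pattern to
# close the gap. All series here are invented for illustration.
import numpy as np

t = np.arange(0.0, 101.0)                # years since 1900
f_known = 0.002 * t                      # deg C: well-understood response
f_unknown = 1e-5 * t**2                  # deg C: parameterised "feedback" shape
rng = np.random.default_rng(0)
obs = 0.002 * t + 4e-5 * t**2 + rng.normal(0.0, 0.05, t.size)   # toy "reality"

A = np.column_stack([f_known, f_unknown])
scales, *_ = np.linalg.lstsq(A, obs, rcond=None)
print(scales)   # ~[1, 4]: the not-understood pattern is scaled up to fit
```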

OK, that’s the general theory. Now let’s look at the climate models and see how it works in a bit more detail.

The Major Climate Factors

 

The climate models predict, on average, global warming of 0.2 deg C per decade for the indefinite future.

What are the components of climate that contribute to this predicted future warming, and how well do we understand them?

ENSO (El Nino Southern Oscillation) : We’ll start with El Nino, because it’s in the news with a major El Nino forecast for later this year. It is expected to take global temperature to a new high. The regrettable fact is that we do not understand El Nino at all well, or at least, not in the sense that we can predict it years ahead. Here we are, only a month or so before it is due to cut in, and we still aren’t absolutely sure that it will happen, we don’t know how strong it will be, and we don’t know how long it will last. Only a few months ago we had no idea at all whether there would be one this year. Last year an El Nino was predicted and didn’t happen. In summary : Do we understand ENSO (in the sense that we can predict El Ninos and La Ninas years ahead)? No. How much does ENSO contribute, on average, to the climate models’ predicted future warming? 0%.

El Nino and La Nina are relatively short-term phenomena, so a 0% contribution could well be correct but we just don’t actually know. There are suggestions that an El Nino has a step function component, ie. that when it is over it actually leaves the climate warmer than when it started. But we don’t know.

Ocean Oscillations : What about the larger and longer ocean effects like the AMO (Atlantic Multidecadal Oscillation), PDO (Pacific Decadal Oscillation), IOD (Indian Ocean Dipole), etc.? Understood? No. Contribution in the models : 0%.

Ocean Currents : Are the major ocean currents, such as the THC (Thermohaline Circulation), understood? Well we do know a lot about them – we know where they go and how big they are, and what is in them (including heat), and we know much about how they affect climate – but we know very little about what changes them and by how much or over what time scale. In summary – Understood? No. Contribution in the models : 0%.

Volcanoes : Understood? No. Contribution in the models : 0%.

Wind : Understood? No. Contribution in the models : 0%.

Water cycle (ocean evaporation, precipitation) : Understood? Partly. Contribution in the models : actually slightly negative, but it is built into a larger total which I address later.

The Sun : Understood? No. Contribution in the models : 0%. Now this may come as a surprise to some people, because the Sun has been studied for centuries, we know that it is the source of virtually all the surface and atmospheric heat on Earth, and we do know quite a lot about it. Details of the 11(ish) year sunspot cycle, for example, have been recorded for centuries. But we don’t know what causes sunspots and we can’t predict even one sunspot cycle ahead. Various longer cycles in solar activity have been proposed, but we don’t even know for sure what those longer cycles are or have been, we don’t know what causes them, and we can’t predict them. On top of that, we don’t know what the sun’s effect on climate is – yes we can see big climate changes in the past and we are pretty sure that the sun played a major role (if it wasn’t the sun then what on Earth was it?) but we don’t know how the sun did it and in any case we don’t know what the sun will do next. So the assessment for the sun in climate models is : Understood? No. Contribution in the models : 0%. [Reminder : this is the contribution to predicted future warming]

Galactic Cosmic Rays (GCRs) : GCRs come mainly from supernovae remnants (SNRs). We know from laboratory experiment and real-world observation (eg. of Forbush decreases) that GCRs create aerosols that play a role in cloud formation. We know that solar activity affects the level of GCRs. But we can’t predict solar activity (and of course we can’t predict supernova activity either), so no matter how much more we learn about the effect of GCRs on climate, we can’t predict them and therefore we can’t predict their effect on climate. And by the way, we can’t predict aerosols from other causes either. In summary for GCRs : Understood? No. Contribution in the models : 0%.

Milankovitch Cycles : Milankovitch cycles are all to do with variations in Earth’s orbit around the sun, and can be quite accurately predicted. But we just don’t know how they affect climate. The most important-looking cycles don’t show up in the climate, and for the one that does seem to show up in the climate (orbital inclination) we just don’t know how or even whether it affects climate. In any case, its time-scale (tens of thousands of years) is too long for the climate models so it is ignored. In summary for Milankovitch cycles : Understood? No. Contribution in the models : 0%. (Reminder : “Understood” is used in the context of predicting climate).

Carbon Dioxide (CO2) : At last we come to something which is quite well understood. The ability of CO2 to absorb and re-emit a specific part of the light spectrum is well understood and well quantified, supported by a multitude of laboratory experiments. [NB. I do not claim that we have perfect understanding, only that we have good understanding]. In summary – Understood? Yes. Contribution in the models : about 37%.
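
For reference, the simplified forcing expression commonly used for CO2 (Myhre et al. 1998) fits in a few lines. This is standard background, not something the essay relies on:

```python
# The widely used simplified CO2 forcing expression, F = 5.35 * ln(C/C0) W/m^2
# (Myhre et al. 1998). A sketch only.
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing in W/m^2 relative to a reference concentration."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(co2_forcing(560.0))  # doubling from 280 ppm -> ~3.7 W/m^2
```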

Water vapour : We know that water vapour is a powerful greenhouse gas, and that in total it has more effect than CO2 on global temperature. We know something about what causes it to change, for example the Clausius-Clapeyron equation is well accepted and states that water vapour increases by about 7% for each 1 deg C increase in atmospheric temperature. But we don’t know how it affects clouds (looked at next) and while we have reasonable evidence that the water cycle changes in line with water vapour, the climate models only allow for about a third to a quarter of that amount. Since the water cycle has a cooling effect, this gives the climate models a warming bias. In summary for water vapour – Understood? Partly. Contribution in the models : 22%, but suspect because of the missing water cycle.
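
The “about 7% per deg C” figure can be checked directly from the Clausius-Clapeyron relation with standard constants; a minimal sketch:

```python
# Checking "about 7% per deg C" from Clausius-Clapeyron:
# d(ln e_s)/dT = L / (R_v * T^2). Standard constants; a sketch only.
L_vap = 2.5e6     # latent heat of vaporisation, J/kg (near 0 deg C)
R_v = 461.5       # gas constant for water vapour, J/(kg K)
T = 288.0         # typical surface temperature, K

fractional_increase_per_K = L_vap / (R_v * T**2)
print(fractional_increase_per_K * 100)  # ~6.5 %/K, i.e. roughly 7%
```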

Clouds : We don’t know what causes Earth’s cloud cover to change. Some kinds of cloud have a net warming effect and some have a net cooling effect, but we don’t know what the cloud mix will be in future years. Overall, we do know with some confidence that clouds at present have a net cooling effect, but because we don’t know what causes them to change we can’t know how they will affect climate in future. In particular, we don’t know whether clouds would cool or warm in reaction to an atmospheric temperature increase. In summary, for clouds : Understood? No. Contribution in the models : 41%, all of which is highly suspect.

Summary

The following table summarises all of the above:

Factor                                   Understood?   Contribution to models’ predicted future warming
ENSO                                     No            0%
Ocean Oscillations                       No            0%
Ocean Currents                           No            0%
Volcanoes                                No            0%
Wind                                     No            0%
Water Cycle                              Partly        (built into Water Vapour, below)
The Sun                                  No            0%
Galactic Cosmic Rays (and aerosols)      No            0%
Milankovitch cycles                      No            0%
Carbon Dioxide                           Yes           37%
Water Vapour                             Partly        22%, but suspect
Clouds                                   No            41%, all highly suspect
Other (in case I have missed anything)   –             0%

The not-understood factors (water vapour, clouds) that were chosen to fiddle the models to match 20th-century temperatures were both portrayed as being in reaction to rising temperature – the IPCC calls them “feedbacks” – and the only known factor in the models that caused a future temperature increase was CO2. So those not-understood factors could be and were portrayed as being caused by CO2.

And that is how the models have come to predict a high level of future warming, and how they claim that it is all caused by CO2. The reality of course is that two-thirds of the predicted future warming is from guesswork and they don’t even know if the sign of the guesswork is correct. ie, they don’t even know whether the guessed factors actually warm the planet at all. They might even cool it (see Footnote 3).

One thing, though, is absolutely certain. The climate models’ predictions are very unreliable.

###

Mike Jonas

September 2015

Mike Jonas (MA Maths Oxford UK) retired some years ago after nearly 40 years in I.T.

Footnotes

 

1. If you still doubt that the climate models are unreliable, consider this : The models typically work on a grid system, where the planet’s surface and atmosphere are divided up into not-very-small chunks. The interactions between the chunks are then calculated over a small time period, and the whole process is then repeated a mammoth number of times in order to project forward over a long time period (that’s why they need such large computers). The process is similar to the process used for weather prediction but much less accurate. That’s because climate models run over much longer periods, so they have to use larger chunks or they run out of computer power. The weather models become too inaccurate to predict local or regional weather in just a few days. The climate models are even less accurate.
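
A purely schematic toy version of that grid-and-timestep scheme (1-D, invented numbers; real GCMs solve 3-D fluid dynamics with vastly more state per cell):

```python
# Toy grid model: coarse cells exchanging heat with neighbours, stepped
# forward many times. Schematic only; the cost of real models is this
# loop repeated over an enormous 3-D grid.
import numpy as np

n_cells, dt_years, n_steps = 36, 0.01, 10_000
temp = 15.0 + 10.0 * np.cos(np.linspace(-np.pi, np.pi, n_cells))  # deg C
diffusivity = 0.5                                                 # per year

for _ in range(n_steps):
    flux = diffusivity * (np.roll(temp, 1) - 2 * temp + np.roll(temp, -1))
    temp += dt_years * flux   # each step is cheap; the cost is repetition

print(temp.round(1))  # neighbour exchange has partly smoothed the field
```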

2. If you still doubt that the climate models are unreliable, then perhaps the IPCC themselves can convince you. Their Working Group 1 (WG1) assesses the physical scientific aspects of the climate system and climate change. In 2007, WG1 said “we should recognise that we are dealing with a coupled nonlinear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”

3. The models correctly (as per the Clausius-Clapeyron equation) show increased atmospheric water vapour from increased temperature. Water vapour is a greenhouse gas so there is some warming from that. In the real world, along with the increased water vapour there is more precipitation. Precipitation comes from clouds, so logically there will be more clouds. But this is where the models’ parameterisations go screwy. In the real world, the water cycle has a cooling effect, and clouds are net cooling overall, so both an increased water cycle and increased cloud cover will cool the planet. But, as it says in the IPCC report, they had to find a way to increase temperature in the models enough to match the observed 20th century temperature increase. To get the required result, the parameter settings that were selected (ie, the ones that gave them the “best fit to the observations”) were the ones that minimised precipitation and sent clouds in the wrong direction. Particularly in the case of clouds, where there are no known ‘rules’, they can get away with it because, necessarily, they aren’t breaking any ‘rules’ (ie, no-one can prove absolutely that their settings are wrong). And that’s how, in the models, cloud “feedback” ends up making the largest contribution to predicted future warming, larger even than CO2 itself.

4. Some natural factors, such as ENSO, ocean oscillations, clouds (behaving naturally), etc, may well have caused most of the temperature increase of the 20th century. But the modellers chose not to use them to obtain the required “best fit”. If those natural factors did in fact cause most of the temperature increase of the 20th century then the models are barking up the wrong tree. Model results – consistent overestimation of temperature – suggest that this is the case.

5. To get their “best fit”, the chosen fiddle factors (that’s the correct mathematical term, aka fudge factors) were “combined linearly”. But as the IPCC themselves said, “we are dealing with a coupled nonlinear chaotic system”. Hmmmm ….

Comments
Quinn the Eskimo
September 17, 2015 5:04 pm

Mike, you might be on Trenberth’s 10 most-wanted list for a RICO prosecution.

ATheoK
Reply to  Quinn the Eskimo
September 17, 2015 6:23 pm

Racketeering? Just how does Mike Jonas qualify for Racketeering charges?
Collusion? No.
Conspiring with others to injure or harm others or damage their careers? No.
Gain benefit through collusion? No.
Now try those questions against many of the ‘climate team’.


AnonyMoose
Reply to  Quinn the Eskimo
September 17, 2015 9:23 pm

Oh, they shouldn’t start invoking RICO in the climate field. There’s so much opportunity for that to gloriously backfire on them.

Reply to  AnonyMoose
September 18, 2015 6:31 pm

What a delicious thought!

Steve Zell
Reply to  Quinn the Eskimo
September 25, 2015 8:44 am

Thank you for a great article on many reasons why climate models over-predict future temperatures.
You mentioned the Clausius-Clapeyron equation for concentrations of water vapor in air. But it actually only predicts the vapor pressure of water as a function of temperature, which is proportional to the concentration of water in saturated air. Climate modelers tend to arbitrarily assume that if the air gets warmer, the relative humidity stays constant, meaning that the water vapor concentration (saturation concentration * relative humidity) increases.
But this additional water vapor has to come from somewhere–evaporation of water from a body of water (ocean or lake) in contact with air, and this evaporation requires heat input either from the air or water. Since absorption of IR radiation by CO2 only affects air temperature, if the temperature of the air in contact with the ocean increases, the heat to evaporate the water needed to maintain constant relative humidity must be provided by the air. Depending on temperature and the assumed relative humidity, this can require 50 to 80% of the heat required to raise the air temperature, meaning that a “constant relative humidity” assumption imposes a negative feedback of -0.5 to -0.8 on the energy increase due to increasing CO2.
By assuming constant relative humidity, climate modelers like to take “credit” for the IR radiation absorbed by the additional water vapor in the air, but do not account for the heat loss required to vaporize the water, so the models tend to exaggerate the overall warming effects.
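
A rough numerical check of the 50 to 80% figure in this comment. The constants are standard; the mixing ratio is an assumed illustrative value, and the ratio scales with it:

```python
# Heating moist air by 1 K at constant relative humidity needs sensible heat
# (cp * dT) plus latent heat to evaporate the extra vapour (L * dq, with
# dq ~ 7% of q per K, per Clausius-Clapeyron). Illustrative q assumed.
cp = 1005.0       # J/(kg K), specific heat of air
L_vap = 2.45e6    # J/kg, latent heat of vaporisation near 20 deg C
q = 0.004         # kg water vapour per kg air (cool, moderately humid air)

latent = L_vap * 0.07 * q   # extra evaporation demanded per 1 K of warming
sensible = cp * 1.0
print(latent / sensible)    # ~0.68 here; warmer, moister air gives more
```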

pippen kool
September 17, 2015 5:14 pm

You average the models. Why do you not average the surface temps, which they are modelled for, rather than using an apples-and-oranges comparison of troposphere to models?

Reply to  pippen kool
September 17, 2015 7:36 pm

If the factoring for the two volcanic eruptions is removed, the two lines (model and actual) look like two divergent lines with zero correlation.

MarkW
Reply to  pippen kool
September 18, 2015 6:54 am

Your question makes no sense.

DWR54
Reply to  MarkW
September 20, 2015 9:10 pm

Presumably a reference to the lead graphic. It shows weather balloon and satellite observations of the lower troposphere. Not sure which CMIP5 models are represented (by an average of them all), but the IPCC report mainly focuses on the CMIP5 surface temperature models.
Comparing these against observations to July 2015 gives this: http://www.climate-lab-book.ac.uk/wp-content/uploads/fig-nearterm_all_UPDATE_2015b.png
Still below the multi-model average, but well within the projected range. 2014 and 2015 have pushed observations a lot closer to the surface models’ average, but still low.

old44
September 17, 2015 5:17 pm

The IPCC calls them parametrizations – in UK English : parameterisations.
In common English : BS.

Curious George
September 17, 2015 5:18 pm

Most IPCC climate models don’t even have the latent heat of water right – at least CAM 5.1 does not have it right, and they keep it in their assembly. It assumes the latent heat of water vaporization to be independent of temperature, but actually at 30 degrees C it is 3% lower than at the freezing point, so it overestimates the transfer of heat by water evaporation from tropical seas by 3%, see http://judithcurry.com/2013/06/28/open-thread-weekend-23/#comment-338257

Nicholas Schroeder
Reply to  Curious George
September 17, 2015 5:26 pm

“…latent heat of water…”
Sensible heat of air (incl CO2): 0.24 Btu/lb – F
Sensible heat of liquid water: 1.0 Btu/lb – F
Latent heat of evap/cond: 1,000 Btu/lb
When water evaporates into dry air it takes about 1,000 Btu/lb of water with it cooling the air by bunches. Ever notice how cool the rain makes the air?

Curious George
Reply to  Nicholas Schroeder
September 17, 2015 5:41 pm

Thanks, Nick. The issue is that the latent heat of evaporation is not always 1,000 Btu/lb. At the boiling point it is 970, at 30 C it is 1040, and at the freezing point it is 1070, which the models use (all numbers rounded).
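
For concreteness, the temperature dependence can be sketched with the common linear approximation L(T) ~ 2500.8 - 2.36*T kJ/kg (T in deg C); a sketch only, with rounded constants:

```python
# Temperature dependence of the latent heat of vaporisation, using the
# common linear approximation. Constants are rounded; sketch only.
def latent_heat_kj_per_kg(t_celsius):
    return 2500.8 - 2.36 * t_celsius

l0, l30 = latent_heat_kj_per_kg(0.0), latent_heat_kj_per_kg(30.0)
print(1 - l30 / l0)  # ~0.028, i.e. roughly 3% lower at 30 C than at 0 C
```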

Anthony Zeeman
Reply to  Nicholas Schroeder
September 17, 2015 5:42 pm

Careful, you’re mixing physics with climatology and the result could be dangerous. In physics, average temperature has no basis while it’s climatology’s base.
In climatology, adding one ice cube to your drink is the same as adding 10 as the average temperature is the same. Physics begs to differ.
Sort of like the difference between astronomy and astrology. They’re both about stars and planets, but come up with completely different results.

PiperPaul
Reply to  Nicholas Schroeder
September 17, 2015 7:06 pm

…mixing physics with climatology and the result could be dangerous
As dangerous as the engineering world’s occasional unfortunate spontaneous rapid uncontrolled disassemblies?

Arsten
Reply to  Nicholas Schroeder
September 18, 2015 5:37 am

“As dangerous as the engineering worldā€™s occasional unfortunate spontaneous rapid uncontrolled disassemblies?”
Hey, don’t be rash. The engineering world’s occasional unfortunate spontaneous rapid uncontrolled disassembly is the demolition worlds’ bread and butter.

Reply to  Nicholas Schroeder
September 24, 2015 2:04 pm

Nicholas – (RE: “Ever notice how cool the rain makes the air?”)… It is my belief that the air makes the rain cool – as the cooler air condenses the water vapor to form the rain.

PA
Reply to  Curious George
September 17, 2015 8:59 pm

http://faculty.sdmiramar.edu/fgarces/zCourse/All_Year/Ch100_OL/aMy_FileLec/04OL_LecNotes_Ch100/07_Gas/701_GasLaws/701_pic/vaporpressure.jpg
The rate of evaporation is highly dependent on temperature and the partial pressure of water at 30°C is 695% of the partial pressure at 0°C.
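
That ratio can be checked with the Tetens approximation for saturation vapour pressure over water (a standard empirical fit; a sketch only):

```python
# Verifying the 695% figure with the Tetens approximation.
import math

def e_sat_kpa(t_celsius):
    # Saturation vapour pressure over water, kPa (Tetens empirical fit)
    return 0.6108 * math.exp(17.27 * t_celsius / (t_celsius + 237.3))

print(e_sat_kpa(30.0) / e_sat_kpa(0.0))  # ~6.95, i.e. about 695%
```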

Erik Magnuson
Reply to  PA
September 17, 2015 10:18 pm

In environments near room temperature, the vapor pressure of water roughly doubles for every ~11 C (20 F) increase in temperature. With a dew point of 30 C, the vapor pressure of water is over 4% of the standard sea level pressure and the buoyancy of humid air is significant – no wonder the NHC states 26 C as the SST needed to support hurricanes – and also Willis’s comment about a governor kicking in when SSTs approach 30 C.

Nicholas Schroeder
September 17, 2015 5:21 pm

“How reliable are the climate models?”
IPCC AR5 text box 9.2 – mostly not very, low confidence, un-robust, PoS.

John
Reply to  Nicholas Schroeder
September 18, 2015 9:01 am

Isn’t this the smoking gun related to consensus?
How can there be a consensus if the models don’t share common code related to Physics and other aspects which are considered resolved?
One other obvious issue, why is everyone trying to reinvent the wheel instead of parallel processing using nodes which specialize on an aspect of the climate system making it easier to maintain corrections to all models?
Do the models represent state of the understanding, if no then what is?

V. eng
Reply to  John
September 18, 2015 10:40 pm

Why, yes the models do represent the present “state of the understanding.” They are incorrect, loaded with parameterisations, biases, and fudged to output a desired result. Look at the author’s list of forcings and some of the most important such as the Sun’s influence are not really used properly because they are not predictable. Add to that the use of the obviously fudged input temperature data (now with an attempt to remove the present warming hiatus) and what results is fantasy, nothing more, though it does keep the grant money flowing and pushes a political narrative.

Mark Cooper
September 17, 2015 5:21 pm

Mike,
What is the source of that plot at the top of your post – did you make it yourself? (That’s the impression I get from your post.)
The nearest thing I can find to your plot is this:
http://www.globalwarming.org/wp-content/uploads/2013/05/ChristyJR_130530_McKinley-PDF-of-PPT.pdf
Why are we still seeing plots on WUWT without error bars, especially on a plot of an average of 102 models? It’s very misleading…

philincalifornia
Reply to  Mark Cooper
September 17, 2015 5:47 pm

…. because the model error bars wouldn’t fit on most people’s computer screens ??

TonyL
Reply to  philincalifornia
September 17, 2015 5:58 pm

You Bad Person. +1

AnonyMoose
Reply to  philincalifornia
September 17, 2015 9:25 pm

I’ll drink to that, as soon as I find an error bar.

Mike Jonas
Editor
Reply to  Mark Cooper
September 17, 2015 7:08 pm

I didn’t supply it, and I don’t know its source. It’s similar to the one you provided a link to, and of course the “Epic Fail” chart could have been used too. No matter where you look, the models overestimate global temperature.

george e. smith
Reply to  Mike Jonas
September 18, 2015 7:27 am

“””””….. The Sun : Understood? No. Contribution in the models : 0%. Now this may come as a surprise to some people, because the Sun has been studied for centuries, we know that it is the source of virtually all the surface and atmospheric heat on Earth, and we do know quite a lot about it. …..”””””
Could you change the word ” heat ” to the word ” heating ” please.
We get energy from the sun; but not in the form of heat.
We make all of our surface and atmospheric heat right here on earth; home grown you might say.
But I’ll accept ” heating ” as a true statement.

VikingExplorer
Reply to  Mike Jonas
September 18, 2015 8:50 am

George, as has been explained to you, you can’t have your own definition of heat. In the context of physics:
Heat is energy in transfer other than as work or by transfer of matter.

george e. smith
Reply to  Mike Jonas
September 18, 2015 12:23 pm

And I won’t even ask what reference Physics text book Viking Explorer got HIS definition of ” heat ” out of.
So the other day we got expert opinion that UV is light, and now we also have expert opinion that UV also is heat, and my AM radio picks up heat as does my television set.
And I guess that convection is now demoted to just dwarf heat standing since it involves transfer of matter.
Now that is quite a revelation that transfer of hot matter is not heat, but a photon is heat.
I’ll have to remember that.

Anne Ominous
Reply to  Mike Jonas
September 19, 2015 5:06 pm

George, technically you are correct. The energy does not ARRIVE in the form of heat. Nevertheless, what you might consider being a stickler for detail, others might consider nitpicking. We do know what he meant.

Reply to  Mike Jonas
September 19, 2015 5:32 pm

“Ultraviolet (UV) light is an electromagnetic radiation with a wavelength from 400 nm to 100 nm, shorter than that of visible light but longer than X-rays.”
https://en.wikipedia.org/wiki/Ultraviolet
“UV, or ultraviolet, light is an invisible form of electromagnetic radiation that has a shorter wavelength than the light humans can see. It carries more energy than visible light and can sometimes break bonds between atoms and molecules, altering the chemistry of materials exposed to it. UV light can also cause some substances to emit visible light, a phenomenon known as fluorescence. This form of light – which is present in sunlight – can be beneficial to health, as it stimulates the production of vitamin D and can kill harmful microorganisms, but excessive exposure can cause sunburn and increase the risk of skin cancer. UV light has many uses, including disinfection, fluorescent light bulbs, and in astronomy”
“http://www.wisegeek.org/what-is-uv-light.htm”
Light, Definition:
“Physics. a.Also called luminous energy, radiant energy. electromagnetic radiation to which the organs of sight react, ranging in wavelength from about 400 to 700 nm and propagated at a speed of 186,282 mi./sec (299,792 km/sec), considered variously as a wave, corpuscular, or quantum phenomenon.
b.a similar form of radiant energy that does not affect the retina, as ultraviolet or infrared rays.”
http://dictionary.reference.com/browse/light?s=t
Just sayin’.
We have ultraviolet cameras. They are sensitive to UV radiation. This radiation is commonly referred to as UV light. I go to a dermatologist to get treatments for a mild case of psoriasis, and one of the treatments consists of spending time in a “lightbox”, which contains special bulbs which emit some stuff that everyone calls, and are sold as, ultraviolet lights.
Some organisms have eyes that can detect UV. Is it only “light” if humans can see it?
But the dictionary, which contains the definition of the words we use, lists clear references to light which cannot necessarily be detected by our retina.
We use language to communicate. When someone says UV light, few, if any, are baffled by this phrase, hence it is a valid reference to call it thusly.

DWR54
Reply to  Mike Jonas
September 21, 2015 2:37 am

“No matter where you look, the models overestimate global temperature.”
__________________________
That’s not correct when you compare CMIP3 and CMIP5 surface models to surface observations, e.g.: http://www.climate-lab-book.ac.uk/wp-content/uploads/fig-nearterm_all_UPDATE_2015b.png
Observations are now well inside the multi-model range for CMIP5 (more so for the earlier but longer running CMIP3), though still below the multi-model mean.
It’s correct to say that most surface models have so far overestimated surface observations; but several have also underestimated them and the model range remains valid, though on the low side.

Keitho
Editor
Reply to  Mike Jonas
September 21, 2015 10:10 am

Hi Mike, if you remove the CO2 effect from the models do they get closer to the actual measured temperature or further away?

George E. Smith
Reply to  Mike Jonas
September 21, 2015 11:40 am

I’m finding it virtually impossible to read or write at WUWT any more. A background script peculiar to WUWT uses up all my CPU time.
So Viking Explorer says I make up my own Physics.
Anne Ominous says I’m technically correct.
Both of them can’t be correct.
And I have cited the commonly recognized definition of ‘Light’ several times.
That comes from an international body of recognized experts; not from Sam’s Bar Club dictionary.
Bottom line:
Light ‘by definition’ IS visible. (to humans)
And it is not a physical phenomenon, but a psycho-physical one, with its own set of defined measurements and units. (It’s all in your head.)
And WUWT is supposed to be a place where truth and accuracy are expected.
It is NOT true that we all understand what is meant by someone’s post. Some do, some don’t; we are supposed to be helping those who don’t, which often times includes ourselves.
The Physics Department at my alma mater freely admits that they still don’t teach Ohm’s Law correctly (R = constant), but at least they don’t tell me I’m a stickler for correctness.
R = E / I is not Ohm’s law; it is the definition of R.

Reply to  Mike Jonas
September 24, 2015 5:29 pm

George obviously doesn’t understand the concept of radiant heat transfer. We get heat from the sun. Period. Convection is not the only form of heat transfer. Anne is wrong. Heat can be transferred without the exchange of mass.
Do you have some reference for your “commonly recognized definition of ‘Light’?” Or is it a personal thing that you made up in a bar yourself? “it is not a physical phenomenon, but a psycho-physical one?” LOL

Anne Ominous
Reply to  Mike Jonas
September 26, 2015 12:09 pm

Chris:
No, I was not wrong. You are neglecting the context of his comment.
Of course heat can be transferred without mass transfer. Who claimed otherwise? What I was referring to was the fact that the energy input to the Earth arrives as radiation, not “heat”. Heat doesn’t occur until the radiation is absorbed.
So in the context given, George is technically correct. The energy DOESN’T arrive as “heat”. It arrives as radiation.

Tucci78
September 17, 2015 5:27 pm

It’s always worth quoting from Jeff Glassman’s “Conjecture, Hypothesis, Theory, Law: The Basis of Rational Argument” (December 2007) regarding the climate models conjured up by the warmist quacks:

The consensus relies on models initialized after the start of the Industrial era, which then try to trace out a future climate. Science demands that a climate model reproduce the climate data first. These models don’t fit the first-, second-, or third-order events that characterize the history of Earth’s climate. They don’t reproduce the Ice Ages, the Glacial epochs, or even the rather recent Little Ice Age. The models don’t even have characteristics similar to these profound events, much less have the timing right. Since the start of the Industrial era, Earth has been warming in recovery from these three events. The consensus initializes its models to be in equilibrium, not warming.

Note that these observations were published by Dr. Glassman long before the FOIA2009.zip tranche enabled those of us outside “the consensus” to directly examine the computer programming that drew the infamous hockey stick curve from Brownian “red noise” random numbers.

September 17, 2015 5:49 pm

Jonas, who wrote:

The great majority of model runs, from the high-profile UK Met Office’s Barbecue Summer to Roy Spencer’s Epic Fail analysis of the tropical troposphere, have produced global temperature forecasts that later turned out to be too high.

Do we really know each model run?
They pick out subsets, don’t show every one.
Many runs did not fit preconceptions;
Those are binned. They don’t allow exceptions.
It may just be that, if we saw them all,
The runs, on average, made a closer call.
But “closer calls” that leave the market free
Are not allowed: They need catastrophe!
===|==============/ Keith DeHavelle

TonyL
September 17, 2015 5:56 pm

Aerosols?
I thought aerosols were the magic control knob for adjusting the models to fit observation. It works like this:
You put in way too much warming due to CO2, then cancel it out with way too much aerosol cooling. At this point your model matches observation. Now you carry forward, reducing the aerosols as you go. This allows the way-too-much warming to emerge. Presto, Global Warming with GCMs.
But there are much easier ways to generate AGW. For instance, I was playing around with the UAH data set in R-Studio, and I generated 3.0 deg/century warming in just a few minutes.

Mike Smith
September 17, 2015 5:59 pm

It’s really rather simple. The models were built on the assumption that CO2 drives warming. Hence the models showed that CO2 causes warming. The fact that the models don’t correlate with reality is completely irrelevant to the modellers and the AGW True Believers.

catweazle666
September 17, 2015 6:10 pm

Anyone who claims that a computer game simulation of an effectively infinitely large open-ended non-linear feedback-driven (where we don’t know all the feedbacks, and even the ones we do know, we are unsure of the signs of some critical ones) chaotic system – hence subject to inter alia extreme sensitivity to initial conditions – is capable of making meaningful predictions over any significant time period is either a charlatan or a computer salesman.
Ironically, the first person to point this out was Edward Lorenz – a climate scientist.
You can add as much computing power as you like, the result is purely to produce the wrong answer faster.
So the fact that they DO appear to give relatively consistent answers – albeit entirely incorrect ones – is evidence that someone is extracting the urine.
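
Lorenz’s own 1963 toy system makes the sensitivity point concrete. A crude Euler-stepped sketch with the standard parameters, for illustration only:

```python
# Two Lorenz-63 runs differing by one part per million in the initial state
# soon bear no resemblance to each other. Crude Euler stepping; sketch only.
def lorenz_step(state, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = state
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

run1, run2 = (1.0, 1.0, 1.0), (1.000001, 1.0, 1.0)
for _ in range(3000):
    run1, run2 = lorenz_step(run1), lorenz_step(run2)

print(run1)
print(run2)  # wildly different after ~30 time units despite near-identical starts
```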

commieBob
Reply to  catweazle666
September 17, 2015 8:25 pm

Judith Curry quotes Edward Lorenz on her blog. Here’s the part she chose to highlight:

Provided, however, that the observed trend has in no way entered the construction or operation of the models, the procedure would appear to be sound. link

What I take from that is this: your model should contain the physics and the starting conditions. A valid model will not be an exercise in curve fitting.
One thing I haven’t seen in discussions of chaotic system models is the idea of attractors. After enough runs, valid models of chaotic systems should show where the attractors are.

brambo4
Reply to  commieBob
September 17, 2015 9:33 pm

Could you elaborate on this idea for me?

commieBob
Reply to  commieBob
September 17, 2015 10:34 pm

brambo4 says:
September 17, 2015 at 9:33 pm
Could you elaborate on this idea for me?

There are actually two ideas.
First, it seems to me that Lorenz is agreeing with Mike Jonas. If you have to tune a model to match the historical record, it means that you don’t understand the system well enough to model it correctly from basic principles. That means your model’s predictions aren’t reliable.
Second, even though a chaotic system is hard to predict, it will tend to certain behaviours. For instance, Detroit is warmer in July than January. July will have a certain average temperature (about 76 F). If you run your model once, it probably won’t achieve exactly that temperature. The model might give 90 or it might give 60. It’s a chaotic system so that’s OK. If you run the model enough times you might notice that many of the results end up somewhere around 76. We would say that 76 is the attractor for July. wiki
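
A trivial stand-in for the idea (the 76-degree figure is from the comment; the scatter is invented):

```python
# Individual simulated "Julys" scatter widely, but an ensemble of runs
# clusters around the attractor. Toy illustration only.
import random

random.seed(1)
ATTRACTOR = 76.0   # deg F, the July average in the comment's example

runs = [ATTRACTOR + random.gauss(0.0, 8.0) for _ in range(1000)]
print(min(runs), max(runs))       # single runs can land near 50 or 100
print(sum(runs) / len(runs))      # but the ensemble mean sits close to 76
```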

Reply to  commieBob
September 17, 2015 11:48 pm

commieBob, Yes, that is why they average model runs. But the modelled surface temperatures have no realistic attractors. They should stop and notice that.
If the modelled surface temperatures have an attractor it’s somewhere so much hotter than anything anyone has ever seen in the historical record – it needs real evidence to support it. But there isn’t any evidence. Just unreliable modelled guesswork.
The models keep getting hotter.
And the planet can’t keep up.

benofhouston
Reply to  commieBob
September 18, 2015 5:49 am

I would actually argue the other direction. Even exceedingly simple models can take into account the basic effects through parameterization, but they must be taken as very rough estimates of future behavior.
For example, the simplest possible climate model is to simply do a two-point fit of a logarithmic curve between CO2 and temperature. This gives a rough estimate of ~1.5-2C/doubling of CO2. This assumes that all warming is due (directly or indirectly) to CO2 and nothing else is happening. Since we have evidence that we are in a naturally warming period, this gives us a reasonable upper limit of warming that we can expect.
However, that is all that a parameterized model can do: give us a general guide of what to expect from a certain effect absent any outside forces. While you can use a lot more points and a lot more computing power, the core, horrific assumption is still present: nothing that isn’t parameterized has an effect on the result. In climate, where most of the known forces have only vaguely known large-scale effects and many effects have simply unknown causes, this core assumption is fatal for any long-term forecast.
The problems come when you try to take these rough estimates and make strong predictions from them.
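
The two-point fit in numbers, with assumed illustrative inputs (about 0.8 deg C of observed warming while CO2 rose from roughly 280 to 400 ppm, all of it attributed to CO2 as the comment assumes):

```python
# The comment's "two-point fit of a logarithmic curve" in numbers.
import math

delta_t = 0.8                      # deg C of observed warming (assumed)
c_start, c_end = 280.0, 400.0      # ppm (assumed)

sensitivity = delta_t / math.log2(c_end / c_start)
print(sensitivity)                 # ~1.55 deg C per doubling, in the 1.5-2 range
```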

Anne Ominous
Reply to  commieBob
September 19, 2015 5:00 pm

My guess is that the main attractor is somewhere between the Minoan Warm Period and the Little Ice Age… kind of like what we have now.
That it tends to cycle between those extremes — albeit somewhat unpredictably, with today’s knowledge — suggests that there is at least a weak attractor somewhere between.

DD More
Reply to  catweazle666
September 18, 2015 8:32 am

To sum up the computer-model tuning:
From my high school math teacher’s post board –
Flannegan’s Finangling Fudge Factor – “That quantity which, when multiplied by, divided by, added to, or subtracted from the answer you get, gives you the answer you should have gotten.”
Also known as SKINNER’S CONSTANT
Only successfully used when the correct answer is known.

Gilbertl K. Arnold
Reply to  DD More
September 18, 2015 1:04 pm

Actually, in my engineering classes the FFFF was called the UEFF (Universal Engineering Fudge Factor) or, even more simply, Finagle’s Constant.

catweazle666
Reply to  Gilbertl K. Arnold
September 18, 2015 4:31 pm

Often referred to as the ‘Factor of Safety’!

Anne Ominous
Reply to  catweazle666
September 19, 2015 5:15 pm

benofhouston:
Not even the simplest model, but any simple model, probably should not have CO2 as a component at all. The Monckton et al. simple model is an example. Despite all the criticism, it arguably models climate better than 97% of the “official” climate models do, including CO2.

Terry Oldberg
Reply to  Anne Ominous
September 19, 2015 5:45 pm

Anne Ominous:
The Monckton model has the logical shortcoming that the “global warming” in a given interval of time is multivalued, negating the law of non-contradiction (LNC). Thus, for example, if the proposition is true that there has been no global warming in a given interval the proposition may also be true that there has been global warming in this interval. For mutually exclusive propositions to be true is impossible under the LNC but possible under negation of the LNC. (A proposition is “negated” through operation on it by the NOT operator.)

Anne Ominous
Reply to  Anne Ominous
September 19, 2015 8:27 pm

Terry:
Can you provide a reference for this? Obviously if what you say is true then it must be invalid, but I must see evidence before accepting the assertion.
Regardless, it was only an example and the main point still holds: There are simple climate models that do not include CO2 at all, which do a much better job than, say, the CMIP5 average. Or actually nearly all of the CMIP5 models.

Reply to  Anne Ominous
September 19, 2015 8:51 pm

Anne Ominous:
Looks like you’ve confused me with benofhouston.

thingadonta
September 17, 2015 6:11 pm

I would add this point from experience from working within government research:
Where there are multiple variables, the variable which specifically pertains to government research is often elevated above other variables, for a variety of reasons, but usually it also happens to benefit those doing the research.
Moreover they usually don’t see anything wrong with this, as it solves certain problems and makes management of complex issues much easier, and at the same time promotes the agency’s profile. But they genuinely believe there is nothing wrong with assigning high importance/influence to their chosen variable.
Whilst they are aware that there are other variables and many of these are uncertain, they often justify their diminished importance by the 90/10 (or sometimes also called the 80/20) rule, i.e. something that happens only 10% of the time must have more or less only a 10% effect, or be of 10% importance, and so on. (They assume linearity to an absurd degree). They fail to notice that frequency of occurrence or probability doesn’t equate to magnitude or effect, and moreover that within some systems, the entire system collapses without the 10% that they assign as ‘less important’. And in still other systems, the so called 10% controls most, or all, of the rest of the system. Their assignment of high importance to their chosen variable is in many cases purely arbitrary.

Terry Oldberg
September 17, 2015 6:19 pm

Mike:
The models that generated the red line of your graphic do not make “predictions.” They make “projections.” When the two words are used as synonyms the result is to enable applications of the equivocation fallacy that lead people to false or unproved conclusions and the resulting public policies.

braddles
Reply to  Terry Oldberg
September 17, 2015 6:55 pm

You should tell this to Barack Obama and all the other world leaders who treat them as predictions.

Reply to  braddles
September 17, 2015 7:27 pm

braddles:
The most charitable interpretation of the evidence is that Obama et al have been duped by applications of the equivocation fallacy.

Curious George
Reply to  braddles
September 18, 2015 7:32 am

As models make no predictions at all, the question is: how reliable are their projections?

Mike Jonas
Editor
Reply to  Terry Oldberg
September 17, 2015 6:57 pm

“Predictions” passes the duck test, so “predictions” it is.

Reply to  Mike Jonas
September 17, 2015 8:17 pm

Mike Jonas:
It may pass the duck test. However, it does not pass the logic test.

Reply to  Mike Jonas
September 17, 2015 9:10 pm

Mike:
With reference to the following argument:
Major premise: A plane is a carpenter’s tool.
Minor premise: A Boeing 737 is a plane.
Conclusion: A Boeing 737 is a carpenter’s tool.
does “plane” pass the duck test? If not why not?

Mike Jonas
Editor
Reply to  Mike Jonas
September 17, 2015 10:28 pm

A Carpenter’s Plane does not look like a Boeing 737, it does not swim like a Boeing 737 and it does not quack like a Boeing 737. So it fails your duck test.

Reply to  Mike Jonas
September 18, 2015 8:59 am

Mike Jonas:
I had hoped that you would apply the logic test rather than the duck test. Had you done so you would have observed that the faulty conclusion had been drawn from an argument that a Boeing 737 was a carpenter’s plane. Upon further reflection you could have discerned the cause: the word “plane” is polysemic and changes meaning in the midst of the argument. An argument in which a term changes meaning is called an “equivocation.” It looks like an argument having a true conclusion (a syllogism) but isn’t one. Thus, though one can properly draw a conclusion from a syllogism the same is not true of an equivocation.
In the literature of global warming the word “predict” is polysemic. Thus through the use of this word faulty conclusions can be and often are drawn from arguments. Conventionally this problem is avoided by reserving the word “predict” for one meaning and using the word “project” for the other. Under this convention, past and current global warming models make projections. They do not make predictions. These models are incapable of making predictions because the underlying statistical populations have yet to be identified.

Arsten
Reply to  Mike Jonas
September 18, 2015 5:50 am

Terry,
If I say “I believe that on December 24, 2015, mankind will be wiped out,” is that a prediction? Yes. It just happens to be a prediction that is also my own belief.
By the same token, if I say “My model run projects that on December 24, 2015, mankind will be wiped out,” then my model has made a prediction, even if it is only projected from model inputs. It is a prediction that just happens to be based off of my model’s projection.
All predictions, no matter their source or origin, must be vetted against reality. Otherwise, it is not science.

Reply to  Arsten
September 18, 2015 2:53 pm

Arsten:
A “prediction” is an example of a proposition but “I believe that on December 24, 2015, mankind will be wiped out” is not an example of a proposition.

Reply to  Mike Jonas
September 18, 2015 11:56 am

Predict:
“1540-50; < Latin praedictus, past participle of praedīcere to foretell, equivalent to prae- pre- + dic-, variant stem of dīcere to say + -tus past participle suffix; see dictum”
"verb (used with object)
1. to declare or tell in advance; prophesy; foretell:
to predict the weather; to predict the fall of a civilization.
verb (used without object)
2.to foretell the future; make a prediction.
"Synonyms
1, 2. presage, divine, augur, project, prognosticate, portend. Predict, prophesy, foresee, forecast mean to know or tell (usually correctly) beforehand what will happen. To predict is usually to foretell with precision of calculation, knowledge, or shrewd inference from facts or experience: The astronomers can predict an eclipse;it may, however, be used without the implication of underlying knowledge or expertise: I predict she'll be a success at the party. Prophesy usually means to predict future events by the aid of divine or supernatural inspiration: Merlin prophesied the two knights would meet in conflict;this verb, too, may be used in a more general, less specific sense. I prophesy he'll be back in the old job. To foresee refers specifically not to the uttering of predictions but to the mental act of seeing ahead; there is often (but not always) a practical implication of preparing for what will happen: He was clever enough to foresee this shortage of materials. Forecast has much the same meaning as predict; it is used today particularly of the weather and other phenomena that cannot easily be accurately predicted: Rain and snow are forecast for tonight. Economists forecast a rise in family income."
Project:
noun (ˈprɒdʒɛkt)
1.a proposal, scheme, or design
2.a.a task requiring considerable or concerted effort, such as one by students
b.the subject of such a task
3. (US) short for housing project
verb (prəˈdʒɛkt)
4. (transitive) to propose or plan
5. (transitive) to predict; estimate; extrapolate: we can project future needs on the basis of the current birth rate
6. (transitive) to throw or cast forwards
7.to jut or cause to jut out
8.(transitive) to send forth or transport in the imagination: to project oneself into the future
9.(transitive) to cause (an image) to appear on a surface
10. to cause (one's voice) to be heard clearly at a distance
11. (psychol) a.(intransitive) (esp of a child) to believe that others share one's subjective mental life
b.to impute to others (one's hidden desires and impulses), esp as a means of defending oneself Compare introject
12. (transitive) ( geometry) to draw a projection of
13. (Intransitive) to communicate effectively, esp to a large gathering
Word Origin
C14: from Latin prōicere to throw down, from pro- 1 + iacere to throw
Synonyms
1. proposal. See plan. 6. contrive, scheme, plot, devise. 8. predict. 18. bulge, obtrude, overhang. "
http://dictionary.reference.com/browse/project
I see considerable overlap in the dictionary definitions of these two words you want to pretend are so different, Terry.
The definition of each includes the other within the range of definitions.
They are even each listed as synonyms of each other.
You are engaging in sophistry.
And as for what the most charitable interpretation of anything is, what has that got to do with the reality of the situation?
Are you seriously suggesting that everyone must give the benefit of the doubt to warmistas and all of their jackassery?
That everyone is merely "taken in"?
It is all a big misunderstanding?
Everyone here that is willing to be even slightly honest and frank knows better.
Except maybe you, I suppose.

Reply to  Mike Jonas
September 18, 2015 3:12 pm

Menicholas:
Your argument fails to come to grips with the role of the equivocation fallacy in global warming arguments. See the peer-reviewed article at http://wmbriggs.com/post/7923/ for details.

Reply to  Mike Jonas
September 18, 2015 4:10 pm

Lewis P Buckingham
It is the paper that was peer reviewed.

Reply to  Mike Jonas
September 18, 2015 4:50 pm

Lewis P Buckingham
Actually, I did post a link.

RACookPE1978
Editor
Reply to  Mike Jonas
September 18, 2015 4:56 pm

All I’m asking you is to provide me with the reference to the peer reviewed journal where the article was published.

Heck, I’d be impressed if he could tell us who peer-reviewed all these climate studies papers that have been found exaggerated and false.

Reply to  Mike Jonas
September 18, 2015 5:02 pm

Lewis P Buckingham
I didn’t claim the blog was peer reviewed. I claimed the paper was peer reviewed. It was. End of story.

Reply to  Mike Jonas
September 18, 2015 5:24 pm

Lewis P Buckingham
The paper was published without fee and under peer review in the blog of William M. Briggs. Briggs is a meteorologist, PhD level statistician and professor of statistics.

Reply to  Mike Jonas
September 18, 2015 5:47 pm

Lewis P Buckingham
I made no such error. Also, my alleged error is not an application of the equivocation fallacy contrary to your assertion. If you have nothing other than false or misleading statements to contribute to our conversation its time to end it.

Reply to  Mike Jonas
September 18, 2015 7:08 pm

Terry Oldberg,
You’re just having fun with Lewis Buckingham, I can see that.
I’ll bet you like to pull the wings off flies, too. ☺
I easily found the link to the peer reviewed paper after reading your 3:12 post above. But apparently Buckingham is convinced he’s got you ā€” when it’s just the opposite.
He’s not too smart, you see.
Regarding the climate peer review system, Lewis B ought to read the Climategate email leaks. It’s clear that climate peer review is very corrupted, so those of us who have read their emails know that climate peer review isn’t worth spit. If Lewis likes, I will provide links to the Climategate emails and related commentary. He could learn a lot, if he wanted to.
Buckingham is also unaware that in addition to Dr. Briggs, Terry Oldberg is a published, peer reviewed author. Since Bucky is so impressed by peer review, it’s a good time to ask him how many peer reviewed papers he has had published?
Really, I shouldn’t have written this. You could have kept spinning up Buckingham until his head exploded by merely not doing his homework for him.
Bucky, the answer is there, just like Terry told you. It was always there. He was just having some fun with you.

Reply to  dbstealey
September 18, 2015 8:44 pm

As always I am in awe of the moral courage that has been displayed by dbstealey in ceaselessly pointing out, against the prevailing view, that none of the IPCC climate models have been validated. That they have not been validated has the significance that these models are not scientific. Thus, the impetus toward regulation of CO2 emissions lacks a basis in science.

Reply to  Mike Jonas
September 19, 2015 11:01 am

Terry,
you said:
“Menicholas:
Your argument fails to come to grips with the role of the equivocation fallacy in global warming arguments. See the peer-reviewed article at http://wmbriggs.com/post/7923/ for details.”
I was not attempting to “come to grips” with anything.
I was demonstrating the two words which you claim have such distinct and separate meanings actually do not.
They are synonymous in common usage.
Below you found a very long word which is not in common usage to make a point that everyone understands: Words have several meanings, and some of these can be completely distinct, based on context.
I am very much aware of this, and that is why I did not edit out all the other definitions of the two words in my post…I wanted to show that there are several ways to use each word.
And words mean what people understand them to mean, and how people use them to convey ideas and thoughts.
The origins of two words may be separate and distinct, but if the meanings overlap, then all that matters is the context, and how the words were intended, and how the words are interpreted…it matters not that in the vernacular of some obscure profession the words are used to convey some subtle or not-so-subtle difference of meaning…what matters is that in common usage, these words convey the meaning that everyone thinks they mean.
And the people who use climate models for their alarmist purposes make none of the distinction that you are so fervently trying to press your point about.
You have failed to come to grips with my point that words mean what people intend them to mean, and the idea that they convey in a communication.
And synonymous words convey the same meaning when used in identical context.

Reply to  Mike Jonas
September 19, 2015 11:05 am

BTW, it is no big revelation to point out that alarmists use any number of fallacies to push their meme.
Call it by whatever name you want…it amounts to the same thing…equivocation fallacy or outright lie…it is just another not-so-tasty layer in the layer cake of warmista BS.

Anne Ominous
Reply to  Mike Jonas
September 19, 2015 8:46 pm

Menicholas:
“And the people who use climate models for their alarmist purposes make none of the distinction that you are so fervently trying to press your point about.”
Actually that IS his whole point. That distinctions were not made where they should have been.
I can appreciate that it is difficult to separate his argument from semantic nitpicking, but again that is a large part of his point. Subtle context-shifting has taken place in order to press an agenda.

Reply to  Anne Ominous
September 19, 2015 9:28 pm

Anne Ominous:
Bravo!

Anne Ominous
Reply to  Mike Jonas
September 19, 2015 8:49 pm

Menicholas:
I think maybe some people here would appreciate the point more when it is put in terms of context-shifting rather than semantics. We all know what context shifting does.

Reply to  Mike Jonas
September 19, 2015 9:02 pm

Lewis P Buckingham :
Your statement that: “Since neither Stealey nor Oldberg can point out where Terry’s ‘article’ was published…” is a lie.

Reply to  Mike Jonas
September 20, 2015 8:59 am

Lewis P Buckingham
That I “got caught misusing the term ‘peer-review’” is an outrageous lie.

Reply to  Mike Jonas
September 20, 2015 10:28 am

L. Buckingham says:
“Briggs blog is not peer reviewed”.
Misdirection. No one ever said it was.
(And to be clear, the reference, as always, is to the Briggs et al. peer reviewed paper.)
Terry Oldberg:
Don’t show him where it’s linked! You already posted it once, make Bucky find it himself.
Don’t spoil the fun!

Reply to  Mike Jonas
September 20, 2015 10:56 am

L. Buckingham says:
“Stealey, I see you are unable to post the citation either.”
Au contraire, my amusing friend. I am fully able to post the citation. I found it easily yesterday.
But I am currently unwilling to give it to you, since it is so much fun watching you dig your hole deeper with every comment.
Since I found it so easily, surely you could, too… or can’t you?

Reply to  Mike Jonas
September 20, 2015 11:18 am

Buckingham sez:
PS Stealey….. when you post “, is to the Briggs et al. peer reviewed paper.” you have incorrectly attributed authorship to Briggs, as the paper Oldberg was referring to was his paper.
Keep diggin’ your hole deeper, Bucky. ☺

Reply to  Mike Jonas
September 20, 2015 11:19 am

Anne Ominous,
I do appreciate your effort to bridge the gap here, if one indeed exists.
I said:
“And the people who use climate models for their alarmist purposes make none of the distinction that you are so fervently trying to press your point about.”
To which you said:
“Actually that IS his whole point. That distinctions were not made where they should have been.”
It seems to me that if this were the whole point being made, then there would be no discussion (or is it an argument?) on this topic. I am not going to recount the entire thread here, but reading it above and below shows clearly that this is not the whole point Terry is making.
He is also making a point about people on this thread not knowing what they are talking about if they do not agree that he is the final arbiter of what these words mean. And what is logical. And who is to blame for the damage being done…implying several times that skeptics who make his so-called semantic error are contributing to the problem.
Again, I appreciate your input, but now I for one am tired of kicking this horse.
– Me, Nicholas

Reply to  Mike Jonas
September 20, 2015 1:40 pm

Lewis P Buckingham
It is unscrupulous of you to repeatedly claim that a citation has not been provided to you when it has been provided and to use this false claim in an attack on my character. Are you aware of the fact that personal attacks are illegal when unjustified?

Anne Ominous
Reply to  Mike Jonas
September 20, 2015 9:24 pm

Menincholas:
With all respect, you are mistaken. You do not seem to be able to separate his technical points from semantic gamery; again the fact that people use those words in different ways is indeed a very major part of his point, which apparently has not sunk in.
I can appreciate your weariness from what you seem to see as boxing with shadows, but the shadows are of your own making.
Once again: the points he is making are very technical, but you do not seem willing to embrace them even as hypotheticals, then see where that leads. Instead you dismiss them out of hand.
That is something about which I have no power to help you.

Hivemind
Reply to  Terry Oldberg
September 17, 2015 8:06 pm

A projection is a prediction that lost its nerve.
They need to make predictions in order to do science. If you can't predict something from your theory, you have no theory. But they know their predictions are garbage. So they pretend they aren't predictions – hence the fake name.

Reply to  Hivemind
September 17, 2015 8:41 pm

Hivemind:
No. A "projection" lacks the logical structure of a "prediction."

Reply to  Hivemind
September 18, 2015 9:59 am

A projection is an “estimation of what might happen, based on the current trend”.
Also, humans make predictions, not models or computers.
Just my 2 cents.

Reply to  Hivemind
September 18, 2015 12:13 pm

Exactly Dahlquist.
Computer models do not issue press releases written in breathless and urgent tones of impending doom, but the people who engage in “climate science” sure do.
The writers and newsmen and women who pass these stories along make no distinction, and neither do the teachers who scare our children and indoctrinate them with lies and made-up scare stories, telling them they are doomed, that they are cursed to be living on a poisoned world, that it is their parents' and grandparents' fault, and especially the fault of those who attempt to speak some sanity and truth to them: the so-called skeptics and deniers.
Terry, no such distinction as the one you insist everyone here must hew to is made by the policymakers, the regulatory agencies, and the governments who are engaged in this massive hoax and fraud.
I challenge you to find me some instances of any of the above-named people qualifying their pronouncements by noting that these are mere "projections", subject to verification.
The truth is that it is all sold as a settled subject…it is going to happen.
Are you seriously arguing that it is not?

Jaakko Kateenkorva
Reply to  Hivemind
September 19, 2015 3:24 am

A "projection" lacks the logical structure of a "prediction."

If you argue climate models cannot ‘predict’ because they ‘lack logical structure’, IMO consider the mission complete.

Reply to  Hivemind
September 19, 2015 4:46 pm

Jaakko Kateenkorva:
Yes, it is accurate to state that climate models cannot predict because they lack logical structure. This structure can be described by reference to the idea of a “proposition.” In logic, every proposition has a probability of being true. This probability has a numerical value.
Probabilities belong to the theoretical side of science. Relative frequencies belong to the empirical side. When values are claimed for probabilities by a model, these claims are testable by making counts called "frequencies" in a sample drawn from the underlying statistical population. A relative frequency is a ratio of two of these frequencies. A model that survives testing of all of its claimed probability values is said to be "validated." A model that does not survive testing is said to be "falsified by the evidence." For brevity I've glossed over complications resulting from sampling error.
A “prediction” is a kind of proposition. Associated with this proposition is the logical structure that was developed in the previous two paragraphs. For the climate models none of this structure exists.
Replacing this structure are applications of the equivocation fallacy that lead innocent folks to believe they are looking at scientific findings when they are not. In their innocence these folks cannot believe their governments would stoop so low.
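The testing procedure described here can be made concrete in a few lines of Python. This is only a minimal sketch: the claimed probability and the sample counts are invented for illustration, not drawn from any actual climate study.

import math

# Invented numbers: suppose a model claims P(warmer-than-median year) = 0.5,
# and 22 of 30 observed annual events fall in that class.
claimed_p = 0.5
n, k = 30, 22
relative_frequency = k / n   # the empirical counterpart of the claimed probability

def binom_pmf(n, k, p):
    # probability of exactly k "successes" in n trials if the claim is true
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Two-sided test: how likely is a count at least this far from the
# expected count (n * p), if the claimed probability is correct?
expected = n * claimed_p
p_value = sum(binom_pmf(n, i, claimed_p) for i in range(n + 1)
              if abs(i - expected) >= abs(k - expected))

print(f"relative frequency = {relative_frequency:.2f}, p-value = {p_value:.4f}")
# A tiny p-value falsifies the claimed probability; a model whose claimed
# probabilities all survive such tests is, in this usage, "validated".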

Keith Willshaw
Reply to  Terry Oldberg
September 18, 2015 1:17 am

The use of the word 'projection' is aimed at giving them a get-out. The reality is that the climate scientists treat their model results as predictions when they say 'this is what will happen if you don't cut CO2 emissions'.
That is a prediction, but by using weasel words they aim to cover their asses if they get it wrong.

Reply to  Keith Willshaw
September 18, 2015 1:56 pm

Keith Willshaw:
No. Trenberth was truthful when he wrote in a post to the blog of Nature that the climate models made projections rather than predictions. Predictions require the existence of the statistical population underlying the model. Here there isn’t one.

TYoke
Reply to  Keith Willshaw
September 18, 2015 8:09 pm

Terry,
You’ve ignored Keith’s point, which is the correct one.
Remind me again why we’re supposed to believe that increased CO2 will cause the earth to burn up. Oh yeah, the reason is: BECAUSE THAT’S WHAT THE MODELS CLAIM WILL HAPPEN.
To now say that any disagreement of the model "projections" with observations should be disregarded provokes the obvious question: why should we pay attention to warmist "projections" in the first place? Apparently they bear no relation to real temperatures out in the real world. Who cares what they say.

Reply to  TYoke
September 18, 2015 9:33 pm

TYoke:
The IPCC is working a scam that is based upon application of the equivocation fallacy. I am aware of this scam. You are oblivious to it.
The scam leads well-meaning but scientifically naïve people to believe that increasing the CO2 concentration will cause the earth to burn up or something similar. The antidote to this scam is to expose it. Being oblivious to it, you resist exposure of this scam, thus playing into the hands of people that you and I despise.

Reply to  Keith Willshaw
September 19, 2015 10:46 am

Terry, your “antidote” seems to many of us like providing cover for the alarmists.
You seem to be of the opinion that your high-minded fascination with words for which there is a distinction without a difference places you in a superior moral position, or somehow gives you a leg up in the argument.
It is evident to me from many of the preceding comments that I am not the only one who thinks you have it exactly backwards.
The letter calling for RICO prosecution of skeptics makes your position not just silly but dangerous, as this debate is showing signs of morphing into an actual war.

Chris Wright
Reply to  Terry Oldberg
September 18, 2015 2:45 am

That’s complete nonsense. Of course they’re predictions. If I publish a “projection” of what will happen in the future it is also a prediction.
Of course, I might use the term "projection" if I had little confidence in my prediction and wanted cover in case it turned out wrong.
The complicating factor is that there are in fact a set of predictions, each one for a different future CO2 scenario. So it’s more precise to call them a set of conditional predictions. But they are predictions, pure and simple. And predictions that have turned out to be hopelessly wrong.

whiten
Reply to  Chris Wright
September 18, 2015 2:00 pm

Chris Wright
September 18, 2015 at 2:45 am.
That's complete nonsense. Of course they're predictions. If I publish a "projection" of what will happen in the future it is also a prediction.
—————-
Hello Chris.
Maybe you are right…but from my point of view you seem not to be, and most likely what you say in the passage quoted above is nonsense.
Especially on the climate issue, the difference between predictions and projections is huge.
They are two completely different ways of viewing and approaching climate and climate science.
Claiming the possibility of reasonably accurate climate projections of one kind or another does not imply an equal ability to make climate predictions.
Climatology claims to have projections about climate simply by claiming what the expected range of natural climate variation would or could be, in temperature and in CO2 concentration at the very least, regardless of how accurate that claim may be.
Officially that range stands at roughly 4-7C of temperature variation and about 120 ppm of CO2 variation.
That is a projection, not a prediction: the estimated range of expected natural variation, the boundaries that climate variation must always stay within if it is natural.
Within that range, the climate could at any given moment be in an interglacial or a glacial period, or any other kind of climatic period.
In climate, that is what projections are: the range, the boundaries, that climate change and climate variation must remain within at any given moment and period.
Predictions, on the other hand, mean the ability to tell where within that range the climate actually is, and where it is heading, for the time period in question.
You see, we can speculate about why the Younger Dryas, the Roman Warm Period, or the LIA happened, but we do not actually know.
That reflects a lack of ability to predict, as predictions need far more knowledge to be made than projections.
Leaving aside for a moment AGW and the modern era: the climate projections, accurate enough or not, combined with the climate data, suggest and lead to the conclusion that the climate must be at the end of an interglacial and the beginning of the next glacial period; yet the inability to predict means we cannot estimate the exact path there, which could be a steady change or a drastic one, through another LIA, another Younger Dryas, or a global warming period, an equilibrium or a transient climate period, at present or in the near future.
As long as the projections leave room for any of these, any of them could happen; but the lack of ability to predict means that the particular path cannot be estimated in detail, even when the projections show the general path to be taken, unless a much better knowledge and understanding of climate is achieved.
At the moment there is not enough such knowledge to allow predictions, but it is claimed that there is enough for projections of climate and climate change.
And the projections do not even have to show the general path the climate will take in one future period or another in order to stand as accurate or good enough; the projections only need to conform correctly and well enough with the past long- and short-term data, regardless of the future.
The same could not be said of predictions.
No computer model can yet predict the next climate change path, but according to our understanding of climate and climate change, computer modelling of climate projections is possible, and maybe, some time after that, there is a chance of moving towards computer-modelled climate predictions.
To me, when I am told about or read of "climate predictions", it sounds like wild speculation: speculative conclusions reached with no regard to any rationale, by simply taking the climate projections and conceptually misleading everyone through the claim that there is not much difference between the two, when actually there is a huge one. That is why we have two different words and concepts to rely on, however closely related the two may be.
Sorry for going on so long with this; maybe I am too far off the mark, or even wrong, but nevertheless this is my opinion and my understanding of this particular aspect.
Projections, and the ability to make projections, do not by default mean predictions or an ability to predict.
It takes a lot, lot more to be able to predict with some kind of expected "accuracy" than to make projections.
cheers

Reply to  Chris Wright
September 18, 2015 2:02 pm

Chris Wright:
Your understanding is incorrect. To make predictions a model has to have an underlying statistical population. Here there isn’t one.

Reply to  Chris Wright
September 19, 2015 10:50 am

Even if what you say is in some way technically true, it matters not one whit, as the predictions are not made by the models, but by the alarmists who misunderstand or perhaps merely misuse them.
What the hell difference does it make how many different ways they have of being wrong?
We have numerous quotes right here from several of the people who create models, and use the results to further the alarmist meme, using the word you say does not apply.

RealOldOne2
Reply to  Terry Oldberg
September 18, 2015 7:44 am

Terry:
The semantics game of attempting to differentiate a "projection" from a "prediction" is just handwaving to obfuscate the fact that the climate models fail. The two words are used interchangeably in the peer reviewed literature.
A few quotes from a single paper, Lean&Rind(2009) ‘How will Earth’s surface temperature change in future decades’:
– "Smith et al. [2007] forecast rapid warming after 2008, with "at least half of the five years after 2009 … predicted to exceed (1998) the warmest year currently on record." "
– “our empirical model predicts that global surface temperature will increase at an average rate of 0.17 +/- 0.03Ā°C per decade in the next two decades. The uncertainty given in our prediction …”
– “The predicted warming rate will not, however, be constant on sub-decadal time scales over the next two decades. As both the anthropogenic influence continues and solar irradiance increases from the onset to the maximum of cycle 24, global surface temperature is projected to increase 0.15 +/- 0.03Ā°C in the five years from 2009 to 2014.”
– “However, our estimated annual temperate[sic] increase of 0.19 +/- 0.03Ā°C from 2004 to 2014 (Figure 1) is less than the 0.3Ā°C warming that Smith et al. [2007] predict over the same interval.”
– “According to our projections of annual mean regional surface temperature changes”
– “Our projections are consistent with IPCC long range forecast that warming will be greatest …”
– And from the summary, “According to our prediction, which is anchored in the reality of observed changes in the recent past …”
Clearly Lean&Rind(2009) use "forecast", "prediction", and "projection" interchangeably. The current attempt to distinguish a difference is mere handwaving obfuscation, due to the colossal failure of the alarmists' forecasts/projections/predictions from their climate models. This is seen in the observed temperature changes over the very 5 and 10 year time periods that these peer reviewed papers chose:
Actual temperature change from 2004 to 2014 was a cooling of 0.03Ā°C, not a 0.19Ā°C increase as L&R2009 forecast/projected/predicted, and not a 0.3Ā°C increase as Smith2007 forecast/projected/predicted. http://www.woodfortrees.org/plot/rss/from:2004/to:2014/plot/rss/from:2004/to:2014/trend
And actual temperature change from 2009 to 2014 was a cooling of 0.13Ā°C, not a 0.15Ā°C increase as L&R2009 forecast/projected/predicted. http://www.woodfortrees.org/plot/rss/from:2009/to:2014/plot/rss/from:2009/to:2014/trend
Obviously the "anchor" line L&R2009 mentioned was not anchored to reality, as the observed temperature changes expose the inability of the climate models to "forecast"/"project"/"predict" accurately. And the whole CatastrophicAGW-by-CO2 meme is built on these flawed, faulty, falsified, failed climate models, which von Storch admitted didn't match real observed temperatures at even a 2% confidence level.
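For readers who want to check trend figures like these themselves, the calculation is an ordinary least-squares fit to the monthly anomalies. A minimal sketch follows; the series below is synthetic noise standing in for real data, so only the method, not the numbers, carries over:

import numpy as np

# Synthetic stand-in for ten years of monthly anomalies; substitute a real
# RSS or GISS series to reproduce published trend figures.
rng = np.random.default_rng(0)
months = np.arange(120)
anoms = 0.2 + rng.normal(0.0, 0.1, size=120)   # flat series plus noise

slope_per_month, intercept = np.polyfit(months, anoms, 1)
print(f"trend = {slope_per_month * 120:+.3f} C per decade")   # 120 months per decade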

Reply to  RealOldOne2
September 18, 2015 3:55 pm

RealOldOne2:
My argument is unrelated to the issue of "arguing over semantics", though you assume the opposite. You could bone up on the details at http://wmbriggs.com/post/7923/ .

Alx
Reply to  Terry Oldberg
September 19, 2015 6:43 am

Let's say an employee making $20,000 a year receives a 10% raise this year. Overjoyed, the employee projects a 10% raise in each of the next 20 years and predicts he will be making about $150,000 in 20 years. And each day he happily goes to work knowing with mathematical certainty the brightness of his future.
That’s what linear climate models are all about; fanciful projections leading to silly, unfounded predictions.
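The arithmetic behind the example, for anyone who wants to check it. One plausible reading is that the projected raises compound on top of this year's new salary:

salary = 20_000 * 1.10     # this year's 10% raise
for _ in range(20):        # a projected 10% raise in each of the next 20 years
    salary *= 1.10
print(f"${salary:,.0f}")   # -> $148,005, i.e. "about $150,000"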

Reply to  Alx
September 19, 2015 5:16 pm

Alx:
As you use "prediction" in your example, the term is polysemic (has more than one meaning). Use of a polysemic term is common and is harmless in many contexts. It is harmful when the context is an argument and the term changes meaning in the midst of this argument. That it changes meaning makes of this argument an "equivocation." An equivocation looks like a syllogism (an argument having a true conclusion) but isn't one. Thus, while one can properly draw a conclusion from a syllogism, the same is not true of an equivocation. To draw such a conclusion is the "equivocation fallacy." If you wish to deceive someone in making an argument, application of this fallacy is an effective way to do so because it is exceptionally hard to spot. If you wish not to deceive someone, use of monosemic terms is an effective antidote because it makes equivocation impossible.

katherine009
Reply to  Terry Oldberg
September 24, 2015 9:09 am

Terry Oldberg, I think I understand your point, even though I don’t completely understand the scientific and methodological reasons underpinning it. I think it’s the same reason that news broadcasters “project” winners in elections, instead of predicting them. It’s the level of certainty that distinguishes one word from the other.

Reply to  katherine009
September 24, 2015 10:49 am

Hi Katherine. Thanks for giving me the opportunity to clarify.
In understanding what I’m trying to say it is best not to dwell on the ways in which “predict” and “project” are used in the English vernacular. Instead, one should understand that specialized meanings for the two words have evolved in the field of global warming climatology and that the two meanings differ. In making an argument about global warming many people assign both meanings to the single word “predict.” When this word changes meaning in the midst of an argument this argument is classified as an “equivocation” in logical terminology.
An equivocation has the unfortunate property of looking like a syllogism. However, while the conclusion of a syllogism is true the conclusion of an equivocation is false or unproved. Thus, while one can properly draw a conclusion from a syllogism, one cannot properly draw a conclusion from an equivocation. To draw a conclusion from an equivocation is called the “equivocation fallacy” in logical terminology.
Application of the equivocation fallacy can be eliminated through disambiguation of the language in which an argument is made, such that each term belonging to this language has a single meaning. However, though doing so costs nothing, many people are resistant to doing so. Some of these people are well meaning but naïve. Others are swindlers.
Several years ago, the chair of Earth Sciences at Georgia Tech asked me to prepare a paper on the topic of “Logic and Climatology” for publication in her blog. In the ensuing study I discovered that the literature of global warming climatology was infested by applications of the equivocation fallacy. When global warming climatology was described in a disambiguated language it became obvious that it was a pseudoscience dressed up to look like a science through applications of the equivocation fallacy. Details on my methodology and findings are available at http://wmbriggs.com/post/7923/ .

katherine009
Reply to  Terry Oldberg
September 24, 2015 1:54 pm

Terry, thanks for the response and link to your interesting paper. At the risk of exposing my ignorance for all the world to see, could you please explain what is meant by “statistical population” and what it has to do with the equivocation fallacy? I follow your explanation of the slippery language of climate science, but don’t quite see the whole picture. Thanks again.

Reply to  katherine009
September 24, 2015 5:30 pm

Katherine: Thanks for giving me another opportunity to clarify. Please keep up the good work for as long as you have questions.
A “statistical population” is the set of concrete aka physical objects that are associated with a scientific study. A subset of a statistical population that is available for observation is called a “sample.” Your doctor’s purpose in ordering a sample of your blood may be to gain a sample of your red cells. In this case the sample is a subset of your red cells while the statistical population is the complete set of your red cells. In the conduct of a scientific study one takes a sample for the purpose of gaining information about properties of the associated statistical population. If the sample is “large” this information may be nearly perfect.
The elements of a statistical population reference a “sample space,” that is, a set of mutually exclusive collectively exhaustive outcomes of events. They may additionally reference a “condition space,” that is a set of mutually exclusive collectively exhaustive conditions of the same events. Conditions and outcomes are examples of states of nature.
The elements of a statistical population (the red cells in my example) are called “sampling units.” Sampling units belonging to a sample and sharing the same outcome or condition-outcome pair can be counted. This count is called the “frequency.” The ratio of one of these frequencies to the sum of all of them is called the “relative frequency.”
A science has an empirical side and a theoretical side. A relative frequency belongs to the empirical side. Its theoretical counterpart is called a "probability." When a scientific model asserts a value for a probability, this assertion can be checked by reference to the corresponding relative frequency. This property of a scientific model is called "falsifiability." A model in which each such assertion is tested without being falsified is said to have been "validated."
For each of today’s climate models the corresponding statistical population does not exist. There are no samples, sampling units, frequencies, relative frequencies or probabilities. Not a single one of today’s climate models is falsifiable but falsifiability is the mark of a model that is “scientific.”
More seriously damaging than this fiasco is that these models convey no information to a policy maker about the outcomes from his/her policy decisions. They can convey no information because "information" is defined in terms of probabilities and relative frequencies, but these do not exist for any of today's climate models. Without information about the outcomes from policy decisions, there is no possibility for a government or group of governments to control the climate. Governments are spending their citizens' money in massive amounts in attempts at controlling the climate under a circumstance in which it is impossible for the climate to be controlled.
This bizarre situation is able to persist because of applications of the equivocation fallacy exploiting polysemic words or word-pairs in the literature of global warming climatology that include the words “predict,” “model” and “science” plus word pairs that include “validate/evaluate” and “predict/project.” (That a word pair is “polysemic” implies that each word belonging to it has a different meaning but the two words are used as synonyms in making an argument.)
When a word or word pair is used in making an argument and changes meaning in the midst of this argument an equivocation is born. When a conclusion is drawn from an equivocation an application of the equivocation fallacy is made. Applications of the equivocation fallacy obscure the fact that there is not currently a logical or scientific basis for regulation of CO2 emissions by a government or group of governments. Global warming climatologists have blown their assignment.

katherine009
Reply to  Terry Oldberg
September 25, 2015 7:48 am

Ok, still trying to get hold of this. Wouldn’t a statistical population for climate models be all temperature readings, everywhere, for all time? Or is the problem that temperature readings are not the actual population?

Reply to  katherine009
September 25, 2015 9:41 am

Katherine:
That's a good question. I've found that several bloggers think, as you do, that the set of temperatures is the population, but that's wrong. In proper statistical jargon, the set of measured temperatures is a "time series" rather than a population.
Recall that sampling units are concrete aka physical objects, e.g. red blood cells. For global warming climatology the sampling units, if they were to be identified, would be the Earth plus its atmosphere in various periods of time, such that the complete set of these periods was a partition (mathematical term) of the time line. One of the tasks that would have to be completed in the design of a scientific study is to identify these periods. Each such period would be associated with an event.
In climatology, the tradition is for an event to last 30 years but if this tradition were to be followed by global warming climatology there would be only five or six observed events going back to the start of the various global temperature time series in the year 1850. That’s too few by a factor of at least 30 for statistical significance of conclusions. Thus, what we are looking at is a duration of each event being no greater than one year. A prediction extends over the duration of an event. Thus, to make predictions over 50 years as climatologists imply they can do is not in the cards. They can only do this because their “predictions” are really logically nonsensical “projections.”
In the design of a scientific study another task would be to identify the sample space. I run into people who think the sample space would be comprised of the set of possible global temperature values but that is impractical because the sample space would contain values of infinite number. Divide up a finite number of measured values among an infinite number of elements of a sample space and the average number of measured values per element of the sample space is 0. The conclusions of such a study would lack statistical significance.
The elements of a sample space are classes of outcomes of events. Global warming climatology is severely short on observed events. Therefore what we are probably looking at is a sample space containing two outcome classes rather than an infinite number of them. One of many possibilities is that one of these classes is defined such that the average value of the global temperature over the duration of an event is greater than the median while the other class is defined such that the average value is less than or equal to the median.
Now that we have made the design decision that the sample space contains two outcome classes we are in a position to do something that we couldn’t do before. This is to construct an imaginary histogram. This histogram has two vertical bars. The height of each bar is proportional to the count of the observed events belonging to the associated outcome class. This count is called the “frequency.” The ratio of the frequency of an outcome class to the sum of the frequencies of the two outcome classes is called the “relative frequency.” Today’s global warming climatologists cannot construct a histogram because they have yet to identify the sample space. To identify the sample space is step one in the design of a study but after 20 years and the expenditure of 200 billion US dollars, they have yet to identify it. This is scientific malpractice on a grand scale!
In order for the model to convey information to a policy maker about the outcomes from his or her policy decisions (a necessity for control of the climate) there must be a condition space as well as an outcome space. There is a measure of the intersection of the condition space with the outcome space that is named after the inventor of information theory, Claude Shannon. Shannon’s measure of this intersection is called the “mutual information.” The condition space should be defined such that the mutual information is maximized. This can be achieved by executing a very complicated optimization. If you ping me I’ll give you a citation toward the literature on how to do this.
Do you have any more of your wonderful questions? If so, I’d like to address them.
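Shannon's mutual information for a two-class condition/outcome design like the one sketched above is easy to compute once the frequencies exist. A minimal illustration in Python, with a wholly invented 2x2 table of event counts; no actual climate study supplies such a table, which is the point of the comment:

import numpy as np

# Invented event counts: rows are the two condition classes, columns the
# two outcome classes (global mean above vs. at-or-below the median).
counts = np.array([[40.0, 10.0],
                   [15.0, 35.0]])
joint = counts / counts.sum()               # joint relative frequencies
p_cond = joint.sum(axis=1, keepdims=True)   # marginal over outcomes
p_out = joint.sum(axis=0, keepdims=True)    # marginal over conditions

# Mutual information in bits: sum of p(c,o) * log2(p(c,o) / (p(c) * p(o))).
indep = p_cond @ p_out                      # the joint if condition and outcome were independent
mask = joint > 0
mi = float((joint[mask] * np.log2(joint[mask] / indep[mask])).sum())
print(f"mutual information = {mi:.3f} bits")  # 0 bits would be useless to a policy maker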

katherine009
Reply to  Terry Oldberg
September 25, 2015 7:48 am

PS, thanks for sticking with me on this!

Aran
September 17, 2015 6:24 pm

I don't believe climate models can make very accurate predictions. There are just too many unknowns still, as you rightly point out. However, I do understand why some of the factors you mention are not included. Milankovic cycles, as you already point out, act on time scales that are way longer than the projections being made with climate models, so the decision not to include them seems justified. ENSO and other oscillations average to zero in the long term, at least as far as we know now; there is, AFAIK, no good indication that they will have a significant long-term contribution. Big volcanic eruptions typically influence climate/weather for several years and are completely unpredictable, so again no reason to include them in projections over several decades, imho. All you can expect from climate models is ballpark estimates under certain conditions.

Reply to  Aran
September 17, 2015 6:31 pm

Aran:
Today’s climate models don’t make predictions.

Aran
Reply to  Terry Oldberg
September 17, 2015 6:44 pm

I don’t really want to get into a semantics discussion, but if you do, you will find that I did not claim the models make predictions.

James Hein
Reply to  Terry Oldberg
September 17, 2015 7:13 pm

But politicians and activists in places like Australia treat them as such

Reply to  James Hein
September 17, 2015 8:27 pm

James Hein:
They do.

PiperPaul
Reply to  Terry Oldberg
September 17, 2015 7:30 pm

Do they make long term forecasts then?

Reply to  PiperPaul
September 17, 2015 8:30 pm

PiperPaul:
Nope.

Reply to  Terry Oldberg
September 17, 2015 8:54 pm

Aran (Sept. 17 at 6:24 pm):
"…I don't believe climate models can make very accurate predictions."
Aran (Sept. 17 at 6:44 pm):
“…I did not claim the models make predictions.”

Aran
Reply to  Terry Oldberg
September 17, 2015 9:50 pm

Terry, if I say that I don’t believe that pigs can fly, I am not claiming pigs can fly, am I?

MarkW
Reply to  Terry Oldberg
September 18, 2015 7:07 am

Hiding behind semantic quibbles. In this instance, the difference does not matter.

Reply to  MarkW
September 18, 2015 3:20 pm

MarkW
To call my argument a “semantic quibble” is to set up a strawman and knock it down.

catweazle666
Reply to  Terry Oldberg
September 18, 2015 5:08 pm

“Todayā€™s climate models donā€™t make predictions.”
Today's climate models purely provide employment for people who would not otherwise be able to obtain a proper job, such as shelf stacking in a supermarket.

Reply to  Terry Oldberg
September 20, 2015 10:37 am

Terry Oldberg says:
Today's climate models don't make predictions.
Agreed. But as always, it’s the public’s perception that matters most in politics ā€” and the “dangerous man-made global warming” narrative is most certainly politics, not science.
The ‘science’ part is only a thin veneer that covers up the hoax; science is the “hook”, and the taxpaying public is the mark. Elmer Gantry would be green with envy at this scam.
The goal is the passage of a ‘carbon’ tax, which would give the government what every government throughout human history has craved: the means to tax the air we breathe.
Don’t let them do it. Their models are bogus. Push back!

Editor
Reply to  Aran
September 17, 2015 7:17 pm

I agree that ENSO and other oscillations averaging to zero in the long term is a reasonable proposition. Their great significance here is in Footnote 4.

Aran
Reply to  Mike Jonas
September 17, 2015 7:48 pm

Ah yes, that’s a fair point. Would be a good subject for some number crunching as the ENSO index goes back to 1950

Hivemind
Reply to  Mike Jonas
September 17, 2015 8:14 pm

Although ENSO, PDO, AMO, etc. do average to zero in the long term, these models aren't making predictions in the long term. They are making predictions at 15 years, well inside the 60-year cycles of the AMO and PDO.
1) Yes, I have already noticed that some claim the models only make “projections”. Sophistry will get you nowhere. You are using it as a prediction, so prediction it will be called.
2) One of the greats of the climate alarmism world (if somebody in that world could ever be called great), once said that the “pause” didn’t matter until it reached 15 years. Well it reached 15 years a long time ago – the response was that now it doesn’t matter until it reaches 20 years.

Aran
Reply to  Mike Jonas
September 17, 2015 10:08 pm

As far as 1) goes, I really don’t care what name you give them. As said earlier, I am not going to discuss semantics. I hope we can agree that the model outcomes are aimed at estimating temporal changes in climate, not at predicting the weather in x years, where x is any number you can think of. Personally I would not take any model output for the shorter term too seriously since there are many unpredictable phenomena such as oscillations and volcanoes that have much stronger influence on those time scales than the processes that these models actually try to cover. As for 2) I really don’t see the relevance, nor do I feel any need to defend the words of some anonymous person, particularly if they are about statistically insignificant phenomena.

ren
Reply to  Aran
September 17, 2015 11:13 pm

These factors may be convergent or divergent, but their coincidence may for example result in a sharp increase in ice in the Arctic and an increase in the albedo in the summer. For example, the low solar activity and the negative phase of the AMO.

SAMURAI
September 17, 2015 6:26 pm

The entire CAGW hypothesis is based on the wrong assumption that CO2 forcing generates a 3~5 fold “runaway positive feedback loop” involving increased atmospheric water vapor concentrations, which means that 1C of gross CO2 forcing will ultimately generate about 3C~5C of NET global warming….
The problem with runaway feedback loops is that once the sum of the feedbacks exceeds 1, the feedbacks soon run to infinity (Dr. Hansen's boiling oceans)… To prevent CO2's runaway feedback loop from going to infinity, CAGW modelers assume that particulates released from fossil fuel burning act as a negative feedback, to obtain a Goldilocks constant: just enough warming to scare small children and politicians, to get more research grants and waste tens of $trillions on CO2 mitigation, but not so much that model projections exceed reality by 3+ standard deviations and disconfirm their precious hypothesis, putting the modelers in the unemployment line…
The problem is that CAGW model mean projections already exceed reality by 2 standard deviations, and in 5~7 years, the discrepancies will likely exceed 3 standard deviations, at which point, the CAGW hypothesis gets run through the wood chipper…
Oh, what a tangled web the CAGW hypothesis has become….
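The feedback arithmetic behind this point is the geometric series: if each round of feedback adds a fraction f of the previous round's warming, the net warming is the direct warming divided by (1 - f), which blows up as f approaches 1. A quick sketch, with illustrative numbers only:

direct = 1.0   # deg C of direct, no-feedback warming (illustrative)
for f in (0.3, 0.6, 0.9, 0.99):
    # total = direct * (1 + f + f^2 + ...) = direct / (1 - f)
    print(f"f = {f:.2f} -> net warming = {direct / (1 - f):6.1f} C")

A 3~5-fold amplification corresponds to f between roughly 0.67 and 0.8, which is why small changes in the assumed feedback make such large differences to the projected warming.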

Reply to  SAMURAI
September 17, 2015 6:59 pm

SAMURAI:
Interesting post! It sounds as though the "Goldilocks constant" is the equilibrium climate sensitivity. Rather than being a conclusion reached by scientific research, the existence of this "constant" was established by the logically and ethically illegitimate process of placing "the" in front of "equilibrium climate sensitivity." Placing "the" in front of it implied that the "equilibrium climate sensitivity" was single valued when there was no reason to believe it was not multi-valued. The result was the fabrication of information conducive to the financial interests of global warming climatologists.

SAMURAI
Reply to  Terry Oldberg
September 17, 2015 11:29 pm

Terry – Based on empirical evidence and the physics, Equilibrium Climate Sensitivity seems closer to 0.5C by 2100 than to the 5C which is usually quoted to the press.
The CAGW sycophants need to keep ECS above 2C or else there isn't a catastrophe, and they can't make it more than 5C, or they'll be laughed at…
They need to keep ECS at the Goldilocks level to keep this scam going until they can retire with full pensions. The Leftists also need to keep CAGW going to steal as much taxpayer money as possible before the jig is up…
So much money…..so little time… It’s sad…
Historians will shake their heads and think this generation went collectively mad….

Reply to  SAMURAI
September 18, 2015 9:11 am

SAMURAI:
I prefer to call the concept TECS rather than ECS as it makes the swindle that is worked by placing “the” in front of “equilibrium climate sensitivity” more obvious.

September 17, 2015 6:33 pm

The climate models' outputs fall into the category of "not even wrong". The CAGW meme, and by extension the climate and energy policies of most Western governments, are built on the outputs of these climate models. In spite of the inability of weather models to forecast more than about 10 days ahead, the climate modelers have deluded themselves, their employers, the grant-giving agencies, the politicians and the general public into believing that they could build climate models capable of forecasting global temperatures for decades and centuries to come with useful accuracy.
The modelling approach is inherently of no value for predicting future temperature with any calculable certainty because of the difficulty of specifying the initial conditions of a sufficiently fine grained spatio-temporal grid of a large number of variables with sufficient precision prior to multiple iterations. For a complete discussion of this see Essex:
https://www.youtube.com/watch?v=hvhipLNeda4
Models are often tuned by running them backwards against several decades of observation; this is much too short a period to correlate outputs with observation when the controlling natural quasi-periodicities of most interest are in the centennial and especially the key millennial range. Tuning to these longer periodicities is beyond any computing capacity when using reductionist models with a large number of variables, unless these long-wave natural periodicities are somehow built into the model structure ab initio.
In addition to these general problems of modeling complex systems , the particular IPCC models have glaringly obvious structural deficiencies as seen below (fig 2-20 from AR4 WG1- this is not very different from Fig 8-17 in the AR5 WG1 report)
http://1.bp.blogspot.com/-j17h1i84DYM/U8Qez85PveI/AAAAAAAAASA/D9x5WQCnbL8/s1600/Fig2-2014.jpg
The only natural forcing in both of the IPCC Figures is TSI, and everything else is classed as anthropogenic. The deficiency of this model structure is immediately obvious. Under natural forcings should come such things as, for example, Milankovitch Orbital Cycles, lunar related tidal effects on ocean currents, earth’s geomagnetic field strength and most importantly on millennial and centennial time scales all the Solar Activity data time series – e.g., Solar Magnetic Field strength, TSI, SSNs, GCRs, (effect on aerosols, clouds and albedo) CHs, CMEs, EUV variations, and associated ozone variations and Forbush events.
The IPCC climate models are further incorrectly structured because they are based on three irrational and false assumptions. First, that CO2 is the main climate driver. Second, that in calculating climate sensitivity, the GHE due to water vapor should be added to that of CO2 as a positive feedback effect. Third, that the GHE of water vapor is always positive. As to the last point, the feedbacks cannot always be positive, otherwise we wouldn't be here to talk about it.
Temperature drives CO2 and water vapor concentrations and evaporative and convective cooling independently. The whole CAGW – GHG scare is based on the obvious fallacy of putting the effect before the cause. Unless the range and causes of natural variation, as seen in the natural temperature quasi-periodicities, are known within reasonably narrow limits it is simply not possible to even begin to estimate the effect of anthropogenic CO2 on climate.
Because of the built-in assumption in all the models that CO2 is the main driver, the actual temperature projections are relatively insensitive to the particular IPCC climate model used; in fact, the range of outcomes depends almost entirely on the RCPs chosen. The RCPs depend on little more than fanciful speculations by economists. The principal component in the RCPs is whatever population forecast/speculation will best support the climate and energy policies of the IPCC's client Western governments.
The successive uncertainty estimates in the successive "Summary for Policymakers" take no account of the structural uncertainties in the models, and almost the entire range of model outputs may well lie outside the range of the real-world future climate variability.
The climate models on which the entire Catastrophic Global Warming delusion rests are built without regard to the natural 60-year and, more importantly, 1000-year periodicities so obvious in the temperature record. The modelers' approach is simply a scientific disaster and lacks even average common sense. It is exactly like taking the temperature trend from, say, February to July and projecting it ahead linearly for 20 years or so. They back-tune their models over less than 100 years when the relevant time scale is millennial. This is scientific malfeasance and incompetence on a grand scale. Here is a picture which shows the sort of thing they did when they projected a cyclic trend in a straight line:
http://4.bp.blogspot.com/–pAcyHk9Mcg/VdzO4SEtHBI/AAAAAAAAAZw/EvF2J1bt5T0/s1600/straightlineproj.jpg
The temperature projections of the IPCC – UK Met Office models, and all the impact studies which derive from them, have no solid foundation in empirical science, being derived from inherently useless and structurally flawed models. They provide no basis for the discussion of future climate trends and represent an enormous waste of time and money. As a foundation for governmental climate and energy policy their forecasts are already seen to be grossly in error and are therefore worse than useless. A new forecasting paradigm needs to be adopted. For forecasts of the timing and extent of the coming cooling, based on the natural solar activity cycles – most importantly the millennial cycle – and using the neutron count and 10Be record as the most useful proxy for solar activity, check my blog post at
http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
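The February-to-July analogy is easy to demonstrate numerically: fit a straight line to the rising leg of a cycle and the extrapolation leaves the curve almost immediately. A minimal sketch, using an idealized 60-year oscillation of unit amplitude rather than any actual temperature series:

import numpy as np

t = np.arange(200)                        # years
cycle = np.sin(2 * np.pi * t / 60)        # idealized 60-year oscillation

# Fit a straight line to the first 15 years of the rising leg...
slope, intercept = np.polyfit(t[:15], cycle[:15], 1)

# ...and extrapolate it 50 years past the fitting window.
print(f"linear projection at year 65: {slope * 65 + intercept:+.2f}")
print(f"actual cycle at year 65:      {cycle[65]:+.2f}")

The projection lands several amplitudes above a curve that never exceeds 1.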

Barry
Reply to  Dr Norman Page
September 17, 2015 7:18 pm

Bob – I don’t doubt the underlying sinusoidal curve called climate variability, but you need to superimpose an upward trend….
http://www.wunderground.com/blog/JeffMasters/comment.html?entrynum=3116

Reply to  Barry
September 18, 2015 1:22 am

And that upward trend is somewhere around 0.75 K over the last 165 years, right?

MarkW
Reply to  Barry
September 18, 2015 7:11 am

The majority of which, was caused by something other than CO2 and hence could reverse at any time. (And may have already)

Reply to  Barry
September 18, 2015 7:46 am

Barry. Yes the 60 year cycle has been detrended. The models completely ignore the underlying rising leg of the millennial cycle. See the Fig from Christiansen et al below
http://1.bp.blogspot.com/-Mj4eZioh8C8/VdOHYKQnKrI/AAAAAAAAAX4/JU-PJhgqKEg/s1600/fig5.jpg http://www.clim-past.net/8/765/2012/cp-8-765-2012.pdf
It is blatantly obvious that the earth is just getting near to, is just at, or is just past the peak warmth of a 1000-year cycle, and we are likely headed to another LIA-type minimum at about 2600 – 2700.
The solar data suggest that the activity peak was at about 1991. The temperature peak is delayed because of the thermal inertia of the oceans. It is seen in the RSS data at about 2003.

markl
Reply to  Dr Norman Page
September 17, 2015 7:39 pm

Dr Norman Page commented: “….the climate modelers have deluded themselves, their employers, the grant giving agencies, the politicians and the general public into believing that they could build climate models capable of forecasting global temperatures for decades and centuries to come with useful accuracy.”
You assume that the purpose of the models is to determine the truth. If you believe the amount of money, politics, and media attention being applied to AGW is to find truth then you are the deluded one. We can point out holes in the AGW narrative forever and it won’t change the current vector and success of it’s goal to make CO2 a bogeyman. CO2 as a pollutant? Look what propaganda has done to/for us. As much as I enjoy reading these posts I realize the skeptic side is all talk…..and I agree with much of it. We’re relying on nature to vindicate our beliefs. And we’re finding out that even that can be corrupted by mushrooming the people.

Reply to  markl
September 18, 2015 3:09 am

“The whole aim of practical politics is to keep the populace alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, all of them imaginary.” ~ H. L. Mencken

Gregory Lawn
Reply to  markl
September 18, 2015 7:35 am

Remember Obama commenting that "conservatives are clinging to their guns and religion"? That was nonsense. So is the theory of a vast left-wing conspiracy. Stick to the science; it's a winning argument.
Most alarmists actually believe CAGW is a real threat. In the past the consensus was that the earth was flat and the center of the solar system. The consensus did not change the fact that the earth is round and the sun is the center of our solar system. The fake CAGW consensus cannot endure indefinitely, and the fact is a majority of people are not concerned about it.

4TimesAYear
Reply to  Dr Norman Page
September 18, 2015 1:49 am

I was going to post that – you beat me to it – excellent presentation!

perturbed
Reply to  Dr Norman Page
September 18, 2015 2:55 am

Doesn’t your Figure A refute the facts asserted in the main article? This chart seems to suggest that there are multiple forcings besides CO2, water vapor, and clouds considered in the climate models. Thus, I question the accuracy of the essay’s assertion of a 22% effect of water vapor, 41% for clouds etc. since the numbers add to 100 yet none of the forcings in your FIG. A are mentioned in the guest essay except CO2. Certainly FIG. A directly contradicts the assertion that solar activity is not factored into the models.
Also, I read the IPCC quote from the 4th assessment report (at the beginning of the guest essay) as saying that the attribution studies (not the models themselves) are tuned by running individual forcings or sets of forcings separately through the models and then linearly weighting the results to best fit observations. For example, the model output from CO2 forcing (nonlinear response) would be linearly weighted with the model output of solar forcing (nonlinear response) etc. Water vapor and cloud feedback would not change during this procedure. Thus, the “unknowns” being changed are not the parameters on cloud feedback for example, but the strength of the forcings. How the models themselves are tuned is not addressed in that IPCC subsection. If I’m right (and possibly I’m not) this is a VERY important distinction.
This, of course, doesn’t change the conclusion that the IPCC’s conclusions are garbage, but it does change the reason. I’d never looked at that clause of the IPCC report before, but its stupidity is mind numbing – more so than the original essay gives it (credit?) for. What this seems to suggest is that the IPCC first starts with a bunch of computer models that each presume they know all they need to know about the strength of each forcing, e.g. climate sensitivity to CO2 etc. as well as the knowing all the feedbacks present, etc., and then tune their MODELS by adjusting internal unknown parameters to best fit the observations, as the original essay suggests. But because the climate system is nonlinear and chaotic, this presents a huge problem for determining what effect the change of each component has on the total output, i.e. ATTRIBUTION; if they re-run the models with only each input individually, the sum of outputs no longer fits the observational curve. So how do they solve this problem? They just linearly weight the outputs to best fit the curve and call it a day.
All of this suggests that the IPCC assumes that: (1) climate does not change naturally due to something other than TSI – something universally believed NOT to be true; (2) respective outputs of a nonlinear chaotic system to changes in each of a number of inputs can be linearly weighted and still produce a useful result – something that is just laughably stupid, making me think that they do this because it is the ONLY thing that can be done given the inherent complexity of nonlinear combinations of outputs and because you can’t change the parameters of the model without putting the model itself out of whack with the observations; and (3) the transient response of the Earth’s climate system to an impulse change in an input is known – something that only the terminally gullible would believe.
I want to expand on this latter point. If we don't know the output effect of CO2's forcing relative to aerosols' forcing, for example, or what the transient response of a change in Total Solar Irradiance (TSI) is on the climate system – the natural response prior to manmade inputs – then how the hell can climate scientists know the first thing about the inner workings of the climate system? How can they know what the strengths of the feedbacks are? Just where does this fountain of knowledge come from? It certainly can't be from measurements and experimentation. From an engineering perspective this is completely backwards. To figure out what goes on in the black box, you first have to determine the response at the output to a change in each of input 1, input 2, etc. The very idea that you can know more about the inner workings of the black box than you do about the effect that each input individually has on the output – well, it just boggles the mind.

Reply to  perturbed
September 18, 2015 12:48 pm

Perturbed,
The IPCC had their conclusion firmly set before they did one tiny bit of investigating.
Same with Hansen, and any other warmistas, back in the 1980’s.
There was simply nowhere near enough justification to even begin to ring an alarmist bell.
To have believed any of it, even back then, meant either being ignorant of or forgetting all of Earth history and what was known DECADES AGO, about past cycles of warming and cooling.
I was just wrapping up an education that included study of all the relevant subjects, on top of what had already been years and years of studying Earth history, particularly past glacial epochs and more recent glacial advances and declines.
And yet, by now, the scare stories have been told so many times and repeated so often and to so many ears and in so many aspects, that in the minds of a large number of people, it is already happening…we are well along the way to catastrophe.
Large numbers of individuals, including some in positions such as leading key regulatory agencies, do not know the difference between CO2 and a real pollutant…or else they are such sociopathic liars that they do what they do anyway.
Either way, it is bad news for those who envision a sane future guided by truth and pragmatism.

kwinterkorn
Reply to  perturbed
September 24, 2015 10:23 am

The people in need of an apocalypse to satisfy their misanthropy were in trouble in the 1980's. Most of them were post-Christian, and so the predictions (projections?) of the Book of Revelation no longer satisfied. With the coming of Gorbachev, Glasnost, and summits with Reagan, the satisfying prospect of Nuclear Winter faded. Acid Rain and the enlarging Ozone Hole were tried out, but honestly they were pretty silly and never more than minor threats, not enough to give the moralistic people-haters a good "arousal", shall we say.
But then it was noticed that temps were up since the 1970’s, and CO2 was up, and Greenhouse Gas theory seemed so very delicious and unimpeachably scientific. Since everybody knows that Coal is dirty, and Big Oil is evil, and consumption by the capitalist West unfair, a theory that rising CO2 dooms us all if we do not repent our evil lifestyles was perfect.
I've known a good many scientists. A sadly high percentage of them think that ordinary folks are stupid, barely above animals, and in constant need of having people like scientists run the world so that the ordinary people can be saved from their tendency to stupidly self-destruct. CO2-based CAGW as a theory is a perfect fit for these scientists: wonderfully esoteric (only special people can understand it); prescriptive of hair-shirt, self-denying, "sustainable" lifestyles; and allowing those satisfying fantasies of the burning destruction of the immoral that sinners in the hands of an angry god once faced.
How unfortunate for the scientist/misanthropes that CO2 has continued to rise, but temps have not. They are in a tizzy. Another decade and they’ll be looking for a new Apocalypse.

emsnews
Reply to  Dr Norman Page
September 18, 2015 6:11 am

The scale for 'solar irradiance' is way, way off. It is the #1 determining factor controlling whether we are in an ice age or an interglacial. So many people are very anxious about the sun and are desperate to portray it as a steady-state star, not one that changes gears with little warning.

Reply to  Dr Norman Page
September 18, 2015 12:30 pm

Mr. Page,
I sure do appreciate reading some honest, frank and cogent words from a sane and educated person (not implying that there are not many such individuals here).
Thank you sir.

DWR54
Reply to  Dr Norman Page
September 21, 2015 2:50 am

According to the sine wave pattern in Bob Tisdale's chart, global temperatures should have started to dip well before now. In GISS, the trend between the lowest month in 1975 and the warmest in 2005 was 0.186 C per decade: http://www.woodfortrees.org/plot/gistemp/from:1970/mean:12/plot/gistemp/from:1975.58/to:2005.83/trend/plot/none
Between the lowest month in 1975 and the present (to August 2015), the trend is just fractionally lower, at 0.172 C per decade: http://www.woodfortrees.org/plot/gistemp/from:1970/mean:12/plot/gistemp/from:1975.58/to:2005.83/trend/plot/gistemp/from:1975.58/trend
It has to be said that the exaggerated-looking trend projection in Bob’s chart looks a lot closer to the observed trend than the down-sloping sine wave does.

crypto
September 17, 2015 6:37 pm

I have pointed out much of the same to folks, yet they still seem eager to place faith in models.
With the increase in the number, resolution, and accuracy of speleothem studies, I see an NRV that is growing wider, while the statistically significant contribution by CO2 grows further out of reach.
All I can say is that post-modern science has taken over pop culture, and we now have a stupidization of society thanks to CO2 hysteria.

Nick Stokes
September 17, 2015 6:57 pm

“Parameters controlling the unknowns in the models were then fiddled with (as in the above IPCC report quote) until they got a match.”
I think there is an elementary misunderstanding that crops up here frequently. They aren’t talking in that section (which is in 9.2, BTW, not 9.1.3) about modifying the GCM. They are talking about back-calculating forcings. Despite common misunderstanding here, the forcings are not a GCM input. They are a device for interpretation, or, if you like, attribution. They fit the forcings; that is not fitting the GCM.

Editor
Reply to  Nick Stokes
September 17, 2015 7:37 pm

Yes, 9.2 not 9.1.3. Apologies. But they don’t just “fit the forcings”, they fit the response patterns as calculated by the climate models. “[] a climate model is used to calculate response patterns [] which are then combined linearly to provide the best fit []”. So it is as I said.

Lewis P Buckingham
Reply to  Nick Stokes
September 17, 2015 10:13 pm

Maybe so.
However the concept of ‘forcings’ took the stage in Australia in a totally different way.
Professor Flannery, Climate Commissioner and Australian of the Year spoke on national TV about how increasing CO2 in the atmosphere would create a climate forcing that increased the temperature of the atmosphere. All sorts of bad things then would follow.
So this use was not a ‘device for interpretation’ but a certain prediction of effect.
I had never heard the term ‘forcing’ before his interview and was carried by his eloquence and certainty.
It’s taken some years for me to unlearn his insights based on these models.
It’s a pity he had not realised that ‘forcing’ was in fact a ‘device for interpretation’ of GCMs that have been shown to be unreliable.
In other words the forcing is just a construct unable to be validated against reality by these models.
Just like the other models that told us of dams that would never fill.

Reply to  Lewis P Buckingham
September 18, 2015 8:36 am

Lewis P Buckingham:
Right. In the earliest of its assessment reports the IPCC claimed that its climate models had been validated. In the paper entitled “Spinning the Climate,” Vincent Gray reports confronting IPCC management with the fact that the models had not been validated. Management reacted by instituting the policy of changing “validation” to “evaluation” in its subsequent assessment reports, but though “validation” was a logically meaningful concept, the same was not true of “evaluation.” For an IPCC management that was bent on curbing CO2 emissions, “evaluation” had the merit of sounding like “validation,” and thus of being easy to confuse with that word. Like many of the articles published here at wattsupwiththat, the one under discussion is based upon evaluation, validation being impossible for lack of the statistical population underlying the model.

Reply to  Lewis P Buckingham
September 18, 2015 12:56 pm

To the average unsuspecting and credulous person, a scientist using the word “forcing” and spelling out a disaster scenario sounds like there is no possibility of it being wrong, as the word “forcing” usually means something that is made to happen in a definitive and unequivocal way.
If someone or something is forced to do something…they have no choice…it is going to happen.
In this way, the word is misdirection.

AJB
Reply to  Nick Stokes
September 18, 2015 1:19 am

“They are a device for interpretation, or, if you like, attribution”: self-deceit, more like.
Geometric or exponential error progression does not lend itself to interpretation; the end result is untimely oblivion, which is meaningless. Performing iterative interdependent calculations encompassing non-linear relationships (some of them unknown) cannot be dressed up as valid, no matter how much statistical bullshit you throw at it.
Mind you, numerous folk appear to be making a good living by pretending otherwise. It’s a form of farce, Nick, very akin to an Inspector Clouseau plot (the most extreme example being the concept of an ensemble mean).

KevinK
September 17, 2015 7:00 pm

Mike Jonas writes;
“Carbon Dioxide (CO2) : At last we come to something which is quite well understood. The ability of CO2 to absorb and re-emit a specific part of the light spectrum is well understood and well quantified, supported by a multitude of laboratory experiments.”
Yes indeed, this is not in doubt. However, the result of this phenomenon in the climate is still very much in doubt, especially with regard to the “average” temperature. Aside from that, an “average temperature” has no useful meaning. I’m reminded of the old observation that if one of your feet is in ice water and the other is in boiling water, you are “on average” quite comfortable overall.
Here is where the alleged “GHE” breaks down. There are numerous examples of human-designed optical systems (aka applied radiation physics) that exhibit “back radiation”, including the optical integrating sphere and the multi-layer optical interference filter. In both cases “back radiation” certainly exists, but it can be difficult to measure. In neither case does the “back radiation” alone cause the source to “reach a higher temperature”.
In the specific case of an optical integrating sphere, the interior surface of the sphere (highly reflective) becomes a “virtual light source”. This concept of a virtual source is somewhat specific to the optical engineering community. It helps with understanding (and predicting) the paths that photons will follow through a system. However (and this is a very big however) it DOES NOT predict the energy present at any point in the system.
In the case of an optical integrating sphere with an incandescent filament (aka a light bulb) inside, this “back radiation” merely delays the elapsed travel time of the photons flowing through the system. This is a result of the photons “bouncing back and forth” inside the sphere until they find an “exit port”.
This is known as the “transient response” of an optical integrating sphere. This is a somewhat obscure but still well understood concept. If you inject an input “pulse” of light (off, then quickly on, then quickly off again), this transient response function will create a “stretched” pulse of output light. Specifically, the square input pulse is no longer a square output pulse, since some photons will quickly find an exit port and others will “bounce near and far” before exiting the sphere.
The gaseous atmosphere of the Earth is quite like an optical integrating sphere in this regard. The photons arriving from the Sun and being converted to emitted IR radiation (still a form of light or electromagnetic radiation, and following all of the same rules/laws) simply bounce “back and forth” between the atmosphere and the surface. All this bouncing merely delays the flow of energy through the system as the energy alternates between light energy and thermal energy.
Given the dimensions of the atmosphere (about 5 miles high) and the velocity of light (still considered quite speedy), this alleged “GHE” merely delays the flow of energy (arriving as sunlight) through the system by a few tens of milliseconds. The specific delay for any given photon is of course described by a statistical distribution.
Since the period of the arriving light is about 24 hours, this delay of a few tens of milliseconds has no effect on the “average temperature” at the surface of the Earth.
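For what it’s worth, the arithmetic behind that “few tens of milliseconds” figure is easy to lay out; a minimal Python sketch, in which the number of surface-to-atmosphere traversals is an assumed free parameter (the figure quoted implies on the order of a thousand round trips):

```python
# Order-of-magnitude sketch of the delay-line arithmetic above.
# The traversal count is an assumed parameter, not a measured value.
C = 3.0e8          # speed of light, m/s
HEIGHT = 8.0e3     # roughly 5 miles of atmosphere, in metres

one_pass_s = HEIGHT / C  # time for a single vertical traversal
for bounces in (1, 100, 1000):
    print(f"{bounces:5d} traversals -> {bounces * one_pass_s * 1e3:8.3f} ms")

# 1000 traversals comes to ~27 ms, i.e. "a few tens of milliseconds",
# against a ~24 hour (~86,400,000 ms) solar forcing period.
```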
Another example of “back radiation” and its practical uses is the multi-layer optical interference coating. This is the highly engineered coating on most modern optical lenses. It appears slightly purple when observed off-axis. The purpose of this coating is to reduce reflections from the surface of a lens.
These coatings have greatly improved the quality of photographs and videos by increasing contrast and reducing “ghost images” (images that are created by the individual surfaces inside a modern optical lens).
These coatings function by delaying “following photons” by a time equivalent to a fraction of the wavelength of the arriving light. By creating exactly the correct delay interval, the reflected light is exactly “out of phase” with the arriving light and destructive optical interference occurs. This moves the optical energy to a location inside the optical lens where it is no longer subject to surface reflections.
Both of these “applied radiation physics” effects/techniques have been applied for decades and are quite well understood.
The alleged “radiative greenhouse effect” merely delays the flow of energy through the system and has no effect on the “average temperature”. It does change the response time of the gases in the climate. Since the gases have the smallest thermal capacity of all the components present (oceans, land masses, atmosphere), the idea that they are controlling the “average temperature” is quite ludicrous.
Modeling these radiative effects in the climate is probably impossible. The required spatial distances are sub-micron and the time steps necessary are in the nanosecond range. There would need to be an increase in computing power of about ten orders of magnitude to even begin to attempt this.
There is of course a gravitational greenhouse effect, whereby the effects of gravity acting on the gases in the atmosphere of the Earth predict quite well (see the US Standard Atmosphere model, last updated in 1976) the temperature of the atmosphere of the Earth with no use of radiative effects at all.
It is quite sad that all this effort has been wasted on modeling the “unmodelable”.
Cheers, KevinK.

Bubba Cow
Reply to  KevinK
September 17, 2015 8:01 pm

I would like you and george e smith to elevate this to a post and send it to Anthony.
Thanks for your input.

KevinK
Reply to  Bubba Cow
September 17, 2015 8:24 pm

Bubba, thank you.
I did submit a somewhat whimsical explanation of this delay line effect to Anthony several years ago.
I have submitted a more detailed explanation to other climate science sites as well.
The “radiative greenhouse effect” is merely a form of hybrid optical/thermal delay line. It has no effect on the “average” temperature at the surface of the Earth.
Cheers, KevinK

Editor
Reply to  Bubba Cow
September 17, 2015 10:45 pm

KevinK – Your comment is at a greater level of detail than my article, so as suggested it would be better as a separate article. I note your “Since the gases have the smallest thermal capacity of all the components present (Oceans, land masses, atmosphere) the idea that they are controlling the ‘average temperature’ is quite ludicrous.”, but to my mind the GHG theory whereby some outgoing IR is in effect turned back and thus affects surface temperature is at least prima facie credible. I’m prepared to work with this version (even though, just like everything else, science may one day overturn it) while there are such glaring errors elsewhere.

Reply to  KevinK
September 18, 2015 2:55 am

KevinK,
I think that is the best comment I have read here in several weeks at least (a high compliment considering the quality of the comments here).
I do hope that you will offer that comment as a post, that it is posted, and that the moderators then allow a full and complete debate on all parts of it. There are many of us who think the mass of the atmosphere, along with gravity and H2O in all its phases, is the main reason for the misnamed “greenhouse effect”.
~ Mark

MarkW
Reply to  KevinK
September 18, 2015 7:18 am

If this delaying of the photon by a few milliseconds has no impact on the “average temperature”, please explain the well documented phenomenon of heat retention on humid nights compared to dry ones.

KevinK
Reply to  MarkW
September 18, 2015 6:25 pm

The thermal capacity of water is much greater than that of CO2.
This is why the main purpose of indoor air conditioning is to remove the water vapor first, and then secondarily to reduce the temperature of the now drier air.

MarkW
Reply to  MarkW
September 19, 2015 6:26 am

Kevin, liquid water yes, because of its much greater density. However, the difference between water in the vapor phase and CO2 is much, much smaller.
Regardless, the warming effect of water occurs even when it is the air aloft that is damp and the air at the surface is dry, i.e., clouds.

MarkW
Reply to  KevinK
September 18, 2015 7:35 am

Has anyone calculated the average delay for a photon that is within one of CO2’s absorption bands?
I strongly suspect that it is more than a few milliseconds. Given that the direction of the photon when it is re-emitted is random, it could go down as easily as up, and if it goes sideways, it will have many miles of dense atmosphere to traverse compared to up.

Gloria Swansong
Reply to  MarkW
September 18, 2015 2:23 pm

At about 22 minutes, Dr. Happer shows the “xylophone effect” on a CO2 molecule.

Here is an email exchange between Dave Burton and Will Happer concerning the issue of “re-emitting” a photon v. collisions with other molecules in the air, mostly N2 of course:
http://www.sealevel.info/Happer_UNC_2014-09-08/Another_question.html
A portion of their discussion:
After hearing Will’s lecture, Dave asks:
1. At low altitudes, the mean time between molecular collisions, through which an excited CO2 molecule can transfer its energy to another gas molecule (usually N2) is on the order of 1 nanosecond.
2. The mean decay time for an excited CO2 molecule to emit an IR photon is on the order of 1 second (a billion times as long).
Did I understand that correctly?
Will replies: [YES, PRECISELY. I ATTACH A PAPER ON RADIATIVE LIFETIMES OF CO2 FROM THE CO2 LASER COMMUNITY. YOU SHOULD LOOK AT THE BENDING-MODE TRANSITIONS, FOR EXAMPLE, 010 – 000. AS I THINK I MAY HAVE INDICATED ON SLIDE 24, THE RADIATIVE DECAY RATES FOR THE BENDING MODE ALSO DEPEND ON VIBRATION AND ROTATIONAL QUANTUM NUMBERS, AND THEY CAN BE A FEW ORDERS OF MAGNITUDE SLOWER THAN 1 S^{-1} FOR HIGHER EXCITED STATES. THIS IS BECAUSE OF SMALL MATRIX ELEMENTS FOR THE TRANSITION MOMENTS.]
Dave: You didn’t mention it, but I assume H2O molecules have a similar decay time to emit an IR photon. Is that right, too?
[YES. I CAN’T IMMEDIATELY FIND A SIMILAR PAPER TO THE ONE I ATTACHED ABOUT CO2, BUT THESE TRANSITIONS HAVE BEEN CAREFULLY STUDIED IN CONNECTION WITH INTERSTELLAR MASERS. I ATTACH SOME NICE VIEWGRAPHS THAT SUMMARIZE THE ISSUES, A FEW OF WHICH TOUCH ON H2O, ONE OF THE IMPORTANT INTERSTELLAR MOLECULES. ALAS, THE SLIDES DO NOT INCLUDE A TABLE OF LIFETIMES. BUT YOU SHOULD BE ABLE TO TRACK THEM DOWN FROM REFERENCES ON THE VIEWGRAPHS IF YOU LIKE. ROUGHLY SPEAKING, THE RADIATIVE LIFETIMES OF ELECTRIC DIPOLE MOMENTS SCALE AS THE CUBE OF THE WAVELENGTH AND INVERSELY AS THE SQUARE OF THE ELECTRIC DIPOLE MATRIX ELEMENT (FROM BASIC QUANTUM MECHANICS), SO IF AN ATOM HAS A RADIATIVE LIFETIME OF 16 NSEC AT A WAVELENGTH OF 0.6 MICRONS (SODIUM), A CO2 BENDING MODE TRANSITION, WITH A WAVELENGTH OF 15 MICRONS AND ABOUT 1/30 THE MATRIX ELEMENT, SHOULD HAVE A LIFETIME OF ORDER 16 (30)^2 (15/.6)^3 NS = 0.2 S.]
Dave: So, after a CO2 (or H2O) molecule absorbs a 15 micron IR photon, about 99.9999999% of the time it will give up its energy by collision with another gas molecule, not by re-emission of another photon. Is that true (assuming that I counted the right number of nines)?
Will: [YES, ABSOLUTELY.]
Dave: In other words, the very widely repeated description of GHG molecules absorbing infrared photons and then re-emitting them in random directions is only correct for about one absorbed photon in a billion. True?
Will: [YES, IT IS THIS EXTREME SLOWNESS OF RADIATIVE DECAY RATES THAT ALLOWS THE CO2 MOLECULES IN THE ATMOSPHERE TO HAVE VERY NEARLY THE SAME VIBRATION-ROTATION TEMPERATURE OF THE LOCAL AIR MOLECULES.]
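Happer’s closing back-of-the-envelope estimate is easy to check numerically; it is just the scaling he states (lifetime proportional to the cube of the wavelength and inversely to the square of the dipole matrix element), anchored to sodium’s 16 ns line. A minimal sketch:

```python
# Check of Happer's scaling above: lifetime ~ wavelength^3 / (matrix element)^2,
# anchored to sodium's 16 ns radiative lifetime at 0.6 microns.
TAU_NA_S = 16e-9       # sodium radiative lifetime, seconds
LAMBDA_NA_UM = 0.6     # sodium line, microns
LAMBDA_CO2_UM = 15.0   # CO2 bending-mode band, microns
MATRIX_RATIO = 30.0    # CO2 matrix element ~1/30 of sodium's (per Happer)

tau_co2 = TAU_NA_S * MATRIX_RATIO**2 * (LAMBDA_CO2_UM / LAMBDA_NA_UM) ** 3
print(f"CO2 radiative lifetime ~ {tau_co2:.2f} s")  # ~0.22 s

COLLISION_S = 1e-9     # ~1 ns between collisions at low altitude (per Dave)
print(f"collisions per radiative decay ~ {tau_co2 / COLLISION_S:.1e}")
# ~2e8, within an order of magnitude of the "billion to one" quoted above
```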

Reply to  MarkW
September 22, 2015 6:51 am

MarkW, why not first define a photon? Normally this refers to a pulse of visible light (which has a fixed speed in a vacuum) with a wavelength of 0.45 to 0.7 micron. Such a photon is not absorbed or emitted by CO2. Maybe you do not understand photons, which Nobel-prize-winning physicist W. Lamb Jr wrote do not exist.

Reply to  KevinK
September 18, 2015 2:07 pm

Average between boiling water and ice water is 50C…pretty darn hot for anyone.
Just sayin’.
Otherwise I agree with the sentiment.
If the world warms up, and it is mostly all in the Arctic, I would say that is a very good thing…less of the Earth is frozen wasteland.

MarkW
Reply to  Menicholas
September 19, 2015 6:28 am

Maybe it was dry ice?

perplexed
Reply to  KevinK
September 18, 2015 5:04 pm

“The gaseous atmosphere of the Earth is quite like an optical integrating sphere in this regard. The photons arriving from the Sun and being converted to emitted IR radiation (still a form of light or electromagnetic radiation and following all of the same rules/laws) simply bounce ‘back and forth’ between the atmosphere and the surface.”
I don’t believe this is a correct analogy, as the radiation from GHGs doesn’t “bounce back and forth” between the atmosphere and the surface. The radiation is ABSORBED at either end and is re-emitted in the opposite direction only after the respective boundary has increased its temperature. Your “filament in a light bulb” example is one of reflection/scattering and not one of absorption/re-radiation. A perfect reflector doesn’t have to increase its temperature to send the light back in the opposite direction. A CO2 molecule, and the earth’s surface, do, because neither is close to an ideal reflector.

Reply to  perplexed
September 19, 2015 4:42 pm

Perplexed, you are quite perplexed.
Of course, IR does bounce back and forth, between GHGs and other GHGs and the surface, until ultimately (within microseconds to a max of 22 minutes according to Prof. Happer in the comments below) finding a “hole” to escape to space, exactly as KevinK explained what happens in an optical integrating sphere, a perfect analogy indeed to the so-called Arrhenius radiative “GHE”:
http://hockeyschtick.blogspot.com/2015/09/why-greenhouse-gases-dont-trap-heat-in.html
Also, IR-active gases like CO2 are indeed “perfect reflectors” of IR, emitting & absorbing the exact same 15 micron IR, but much more commonly transfer absorbed IR to molecular collisions with N2/O2, thereby accelerating convective COOLING.

Reply to  perplexed
September 19, 2015 5:15 pm

Typo correction: “Of course, IR does bounce back and forth, between GHGs and other GHGs and the surface, until ultimately (within microseconds to seconds, according to Prof. Happer in the comments below) finding a ‘hole’ to escape to space,…”

Ed Bo
Reply to  perplexed
September 20, 2015 4:28 pm

perplexed:
You are, of course, correct. KevinK’s and hockeyschtick’s understandings of the basics of thermodynamics are so confused that they go off the rails before they even start. Not discerning the difference between reflection and absorption/re-emission is just the start.
Let’s say the CO2 molecule is in an atmospheric level that has half the absolute temperature of the earth’s surface (just to make the math easy). Its emission at a given wavelength would be 1/16 of the surface’s (given the same emissivity/absorptivity), and half of that would be downward. So only 1/32 of the radiant energy received at that wavelength would be passed upward; 1/32 would be returned down; the other 30/32 would increase the thermal energy of that level of the atmosphere.
If this area of the atmosphere were at 255/288 of the surface’s absolute temperature, its emission at a wavelength would be 61%, and again, half up half down. Completely different from a reflective integrating sphere.
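Taking the T^4 scaling stated here at face value, the two ratios quoted are straightforward to verify; a minimal sketch:

```python
# Arithmetic check of the two emission ratios quoted above,
# taking the stated T^4 scaling at face value.
def emission_ratio(t_cold, t_hot):
    return (t_cold / t_hot) ** 4

print(emission_ratio(144.0, 288.0))  # 0.0625, i.e. 1/16 at half the temperature
print(emission_ratio(255.0, 288.0))  # ~0.614, i.e. the ~61% figure
```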

perplexed
Reply to  perplexed
September 20, 2015 7:40 pm

If the analysis by KevinK were correct, then there would be no significant greenhouse effect at all, and there would have to be some other plausible explanation for the 33C increase in average temperature of the Earth. None is offered. If the analysis by KevinK were correct, dark surfaces like asphalt wouldn’t heat significantly in sunlight.
The time it takes for radiation to be absorbed and re-emitted doesn’t seem to be relevant to me. The only important fact is that for re-emission to occur, the material absorbing the radiation must increase its temperature to regain equilibrium. KevinK’s analogy is one that ignores absorption entirely. Perfect reflectors don’t absorb. They are polar opposites where absorption and reflection sum to 1.

Ed Bo
Reply to  perplexed
September 20, 2015 8:52 pm

perplexed:
Yeah, it’s amazing what you can come up with when you don’t understand the basics. To KevinK, everything is an optics problem. The fact that the photon is absorbed, and a completely separate set of conditions govern whether and how often another photon will be emitted, is completely beyond his comprehension, when all he knows how to deal with is reflective optics.

Reply to  perplexed
September 21, 2015 11:59 am

Ed bo, you are equally as perplexed as perplexed
Ed bo says “Not discerning the difference between reflection and absorption/re-emission is just the start.”
Straw man and/or lie. Of course there is a difference, which is a time delay of a fraction of a second in the case of absorption/emission vs. reflection. You completely missed the whole point of KevinK’s comment, which is that absorption/emission by CO2 is analogous to an optical delay line and only delays the ultimate escape of IR photons to space by milliseconds to seconds, easily reversed and erased at night, for no net warming effect on a daily to annual to multi-decadal basis.
Ed bo and perplexed are also of the grossly mistaken belief that all photons are created equal, and that 15 micron low-energy/frequency IR (equivalent to the thermal radiation of a cold blackbody at 193K) can warm a much warmer blackbody (Earth) at 255K by 33K up to 288K. Absolutely false! Google “frequency cutoff for thermalization” for 643,000 references explaining why I’m right and you’re wrong, then Google Planck’s theory of blackbody radiation for millions of references explaining why low-E/frequency 15 micron CO2 IR cannot warm/increase the temperature/E/frequency of a warmer blackbody emitting at a peak ~10 microns (Earth).
There is indeed a 33C gravito-thermal GHE (of Maxwell, Clausius, Carnot, Boltzmann, Feynman, the US Std Atm, the HS greenhouse eqn) caused by mass/pressure/gravity. The Arrhenius radiative GHE confuses the cause (gravito-thermal) with the effect (IR absorption/emission from GHGs).
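For readers wondering where that 193K figure keeps coming from: it is Wien’s displacement law applied to the 15 micron band. A minimal check of the arithmetic only (it says nothing about whether a blackbody law may be applied to a single molecular band, which is the very point disputed in this thread):

```python
# Wien's displacement law: lambda_peak * T = 2898 micron-kelvin.
WIEN_B = 2898.0  # micron * K

print(WIEN_B / 15.0)   # ~193 K: blackbody temperature peaking at 15 microns
print(WIEN_B / 288.0)  # ~10 microns: peak emission wavelength of a 288 K surface
```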

Ed Bo
Reply to  perplexed
September 21, 2015 7:30 pm

No, hockeyschtick, you can’t understand this very basic point even when it is completely spelled out for you. Emission is a separate process from absorption, so you cannot just say that there is re-emission with a short delay after absorption. This is fundamentally different from reflection, not just in timing.
Emission is proportional to the 4th power of absolute temperature, so if the cooler absorbing body has half the absolute temperature of the warmer body, its emission at a given wavelength will be 1/16 (1/[2^4]) of the warmer body’s (for the same emissivity/absorptivity). So for every 16 photons of a given wavelength it absorbs, it only emits one. It is NOT a simple time delay.
This is some of the most basic stuff in radiative heat transfer, and you don’t understand it at all!
You are also completely wrong about “cutoff frequencies”. Lasers with 10.6 micron wavelength, which by your reckoning is “equivalent to the thermal radiation of a cold blackbody” at 273K, are used all the time to cut through steel, which requires them to heat steel to its melting point of 1640K. According to you, this is not possible, but it is done every day!

Reply to  perplexed
September 21, 2015 8:21 pm

Ed Bo, you are way beyond pathetically confused, and say “Emission is a separate process from absorption, so you cannot just say that there is re-emission with a short delay after absorption.”
I have a degree in physical chem, and a grad degree, so stop continuing to make ridiculous straw man arguments that I allegedly think emission is the same thing as absorption. Didn’t you read Will Happer’s explanation right above? To wit:
The mean decay time for an excited CO2 molecule to emit an IR photon is on the order of 1 second (a billion times as long).
Did I understand that correctly?
Will replies: [YES, PRECISELY. I ATTACH A PAPER ON RADIATIVE LIFETIMES OF CO2 FROM THE CO2 LASER COMMUNITY. YOU SHOULD LOOK AT THE BENDING-MODE TRANSITIONS, FOR EXAMPLE, 010 – 000. AS I THINK I MAY HAVE INDICATED ON SLIDE 24, THE RADIATIVE DECAY RATES FOR THE BENDING MODE ALSO DEPEND ON VIBRATION AND ROTATIONAL QUANTUM NUMBERS, AND THEY CAN BE A FEW ORDERS OF MAGNITUDE SLOWER THAN 1 S^{-1} FOR HIGHER EXCITED STATES. THIS IS BECAUSE OF SMALL MATRIX ELEMENTS FOR THE TRANSITION MOMENTS.]
Dave: You didn’t mention it, but I assume H2O molecules have a similar decay time to emit an IR photon. Is that right, too?
[YES. I CAN’T IMMEDIATELY FIND A SIMILAR PAPER TO THE ONE I ATTACHED ABOUT CO2, BUT THESE TRANSITIONS HAVE BEEN CAREFULLY STUDIED IN CONNECTION WITH INTERSTELLAR MASERS. I ATTACH SOME NICE VIEWGRAPHS THAT SUMMARIZE THE ISSUES, A FEW OF WHICH TOUCH ON H2O, ONE OF THE IMPORTANT INTERSTELLAR MOLECULES. ALAS, THE SLIDES DO NOT INCLUDE A TABLE OF LIFETIMES. BUT YOU SHOULD BE ABLE TO TRACK THEM DOWN FROM REFERENCES ON THE VIEWGRAPHS IF YOU LIKE. ROUGHLY SPEAKING, THE RADIATIVE LIFETIMES OF ELECTRIC DIPOLE MOMENTS SCALE AS THE CUBE OF THE WAVELENGTH AND INVERSELY AS THE SQUARE OF THE ELECTRIC DIPOLE MATRIX ELEMENT (FROM BASIC QUANTUM MECHANICS), SO IF AN ATOM HAS A RADIATIVE LIFETIME OF 16 NSEC AT A WAVELENGTH OF 0.6 MICRONS (SODIUM), A CO2 BENDING MODE TRANSITION, WITH A WAVELENGTH OF 15 MICRONS AND ABOUT 1/30 THE MATRIX ELEMENT, SHOULD HAVE A LIFETIME OF ORDER 16 (30)^2 (15/.6)^3 NS = 0.2 S.]

Ed bo says “Emission is proportional to the 4th power of absolute temperature, so if the cooler absorbing body has half the absolute temperature of the warmer body, its emission at a given wavelength will be 1/16 (1/[2^4]) of the warmer body (for the same emissivity absorptivity). So for every 16 photons of a given wavelength it absorbs, it only emits one. It is NOT a simple time delay.”
Hahahahahahah, you don’t even know the difference between the Stefan-Boltzmann Law of blackbody radiation and the NON-blackbody molecular bending transitions and microstates of a CO2 MOLECULE, which is NOT a BLACKBODY and to which the SB law, valid for true blackbodies only, cannot be applied! Read Happer right above, again.
Ed bo says “So for every 16 photons of a given wavelength it absorbs, it only emits one.” Thus, according to Ed bo, CO2 MOLECULES are equivalent to BLACK HOLES and/or VIOLATE the 1st LoT by absorbing 16 TIMES more photons than they EMIT! Hilarious!
LOLOL: “This is some of the most basic stuff in radiative heat transfer, and you don’t understand it at all!”
Ditto.
The N2/CO2 laser faux-argument has been shot down a billion times before, including here:
http://hockeyschtick.blogspot.com/2015/08/why-greenhouse-gases-do-not-remove-any.html
Pathetic. I sure hope you’re not a heat transfer engineer, or climate scientist. You could kill someone with such logic.

Ed Bo
Reply to  perplexed
September 21, 2015 11:39 pm

hockeyschtick:
You quote me as discussing “emissivity/absorptivity” and you cannot understand that I am NOT talking about blackbodies. And your understanding of radiative heat transfer is so poor that you do not understand that even for non-blackbodies, thermal radiative output at a given wavelength is still proportional to the 4th power of absolute temperature. (It is also proportional to the emissivity at that wavelength.)
You cannot have been paying attention in your PChem classes!
You say that I believe that “CO2 MOLECULES are equivalent to BLACK HOLES and/or VIOLATE the 1st LoT by absorbing 16 TIMES more photons than they EMIT! Hilarious!”
Once again you demonstrate absolutely no understanding of the most basic concepts of thermodynamics. I clearly stated that the remainder “would increase the thermal energy of that level of the atmosphere.” Not everything is a steady state system, and I did not claim this was. By your logic, you could not heat a pot of water on a stove, because it would have to give off as much energy as it received from the burner.
As is typical, you completely misunderstand Happer’s points. A CO2 or H2O molecule excited by absorbing an IR photon is virtually certain to transfer this energy by collision with other gas molecules before it can emit a photon to “relax”. But the chances of it getting “re-excited” by another collision so it can emit a photon at this same wavelength are heavily dependent on the local temperature of the atmosphere. These chances are far less at lower temperatures.
And you haven’t begun to come to grips with 10.6um radiation boiling water and melting steel. According to you, this should not be possible.

Reply to  perplexed
September 22, 2015 10:11 am

Nonono EDbo
First of all, the SB Law for TRUE BBs or greybodies is absolutely NOT applicable to CO2 (or H2O) MOLECULES, and as proven by observations, emissivity decreases with temperature, unlike a true blackbody:
http://1.bp.blogspot.com/-UWg2eWMGU2A/U2vvgvyMnpI/AAAAAAAAF-w/9AJ44NLDsX4/s1600/photo.PNG
http://3.bp.blogspot.com/-Nb4poOlIaco/U2vys448_PI/AAAAAAAAF_c/6NNWzpHJQoE/s1600/water+vapor+emissivity.jpg
I said: You say that I believe that “CO2 MOLECULES are equivalent to BLACK HOLES and/or VIOLATE the 1st LoT by absorbing 16 TIMES more photons than they EMIT! Hilarious!”
Look, Ed boo boo, I’ve said, and Happer said, that the chance of a 15 micron photon absorbed by CO2 giving up its exact same quantum E via collisions with N2/O2 is about ONE BILLION times greater than the chance of its emitting an identical 15 micron photon. And by preferentially transferring the E via collisions, that ACCELERATES convective COOLING, NOT ‘heat trapping’, of the troposphere.
Your ridiculous claim that CO2 black holes trap 16 photons before giving up ONE photon is too dumb to address further.
Boo boo says “But the chances of it getting ‘re-excited’ by another collision so it can emit a photon at this same wavelength are heavily dependent on the local temperature of the atmosphere. These chances are far less at lower temperatures.”
15 micron FIXED CO2 absorption/emission by Wien’s law corresponds to a true BB emission temperature of 193K. The 1976 US Std Atmosphere clearly shows the ENTIRE atmosphere 0-100km is much WARMER than 193K, reaching a minimum of 220K in the tropopause. Thus, the silly fallacy that CO2 has to emit less 15 um IR due to a colder surrounding kinetic temperature is FALSIFIED throughout the entire atmosphere 0-100km.
Yes or No: Can radiation from a 193K BB cause a 255K blackbody to warm 33K up to 288K?
Don’t bother answering – I already know UR answer is “YES!”
And I already gave you the link that completely shoots down your N2/CO2 laser faux argument.
I’m done wasting any more time here with boo boo.

Ed Bo
Reply to  perplexed
September 22, 2015 11:30 pm

Hockeyschtick:
Why do I bother? First, I talk about bodies of a certain emissivity/absorptivity, and you say I am talking about blackbodies. You say I am wrong to treat CO2 as a blackbody (which I didn’t) and then you apply Wien’s Law of blackbody radiation to CO2. Gobsmacking!
You emphasize that a CO2 molecule that absorbs a 15um photon is a billion times more likely to pass on the absorbed energy to an adjacent molecule than to re-emit the photon. Fine. But incredibly, you claim this energy cools the atmosphere rather than heats it. And then you claim that this same molecule must re-emit the same number of 15um photons that it absorbs.
I specifically refer to emissivity and emission at a given wavelength and you respond with overall emissivity/emission over all wavelengths. Have you ever even read a heat transfer textbook? The difference between those two is one of the first things discussed.
But since you are completely unfamiliar with the concepts involved, I will lay them out for you. Emissivity at a given wavelength is constant for a particular substance, so its emission AT THAT WAVELENGTH is proportional to T^4, and equal to (emissivity * sigma * T^4).
Overall emissivity covers all wavelengths, and is the ratio of the integral of emissivities over these wavelengths, weighted by the Planck blackbody emissions at the specific temperature, compared to the Planck blackbody curve. For non-blackbodies/non-graybodies, this can vary with temperature.
But until you really understand the differences between these, you can’t do any competent analysis.
You still have not dealt with the EMPIRICAL FACT that CO2 lasers with pure 10.6um output are used to melt steel at 1640K. Your theory says this is a physical impossibility, yet it is a standard industrial process. Your link does not even begin to address the issue.
So maybe it is best that you retreat to your own website, where you won’t let anyone call you out on your egregious mistakes. I especially like your post on effective radiating level, where you:
1.) Claim that non-radiating gases will have an ERL at the vertical center of mass of the atmosphere.
2.) Start your derivation for temperature lapse rate as a function of height by assuming temperature is constant over height. — Yes, you did! When you took the integral over height to derive your expression for pressure as a function of height, you took T out from under the integral. This is only valid if T is constant over height! You don’t even understand high school math!
3.) You treat the atmosphere as a blackbody radiator even though you insist that you cannot treat even a radiatively active gas as a blackbody!
Keep it up! Pure comedy gold!!!

Reply to  perplexed
September 23, 2015 10:51 am

Don’t keep it up, boo boo; all you do is grossly mis-state and distort things I’ve said to create your own false straw men to attack.
You are too pathetically confused to tutor, don’t understand that the T^4 S-B law only applies to SOLID TRUE BLACKBODIES, not CO2 GASES, and don’t understand that CO2 preferentially passes E via collisions, which increases the adiabatic kinetic expansion, rising, and cooling of air parcels, thereby ACCELERATING convective cooling.
The N2/CO2 LASER transitions at 9.6 and 10.6 microns do not occur in our atmosphere and are not applicable to the “GHE”. In addition, those wavelengths are of far higher energy, COHERENT without destructive interference, and very highly concentrated with extremely high flux. This has NOTHING to do with passive 15 micron IR absorption/emission in the NON-LASER atmosphere.
In a pure N2 atmosphere, as I’ve said repeatedly, the equilibrium T with the Sun is located at the center of mass, the ERL = the surface, and the “average” kinetic temperature of the whole troposphere is 255K.
In a N2 + H2O atmosphere the ERL = the center of mass, where T = 255K,
and my derivations show a pure N2 atmosphere Boltzmann distribution has a much steeper lapse rate and thus a ~25C warmer surface.
http://hockeyschtick.blogspot.com/2015/09/lapse-rates-for-dummies-or-smarties.html
My lapse rate derivations are all standard meteorology text mathematics, and perfectly reproduce the 1976 US Std Atmosphere model, thus your claim of an incorrect derivation of the LR is clearly false.

Ed Bo
Reply to  perplexed
September 24, 2015 7:16 pm

“I got the answer I wanted, so my math must be correct!”
Ask any high school math teacher — when you took “T” out from under the integral over height, you were using T as constant over height. You need to go back to high school! You can’t do this and then say your subsequent analysis proves that temperature will vary over height.
“Forget my blanket denials that longwave radiation cannot heat any substance of a higher temperature than the temperature corresponding to that which has peak radiation at that temperature. I’ll carve out some exceptions by creating a special class of ‘coherent’ photons.”
You’re not convincing anyone with your floundering around.
Oh, and your latest post has another great one. You claim that the reason the “wet” adiabatic lapse rate is so much less than the dry adiabatic lapse rate is that water vapor increases the Cp of the atmosphere. Did you even bother to do any calculations??? Obviously not!
With a relative humidity of 50% at 25C, the Cp of the atmosphere increases by only 2% (1.03 kJ/kgK vs 1.01 for 0% RH). You would need a 50% increase to get the figures you claim.
You obviously have absolutely no understanding of the physics behind the difference in the two rates (or even the reason either of these exist at all).
Can you get anything right???

Reply to  perplexed
September 25, 2015 12:33 pm

Boo boo, why do I bother? You once again create fake straw men in “quotes” to grossly misrepresent what I’ve said and done.
Every single thing I’ve done is in 1st year meteorology texts, including calculation of the dry adiabatic lapse rate,
dT/dh = -g/Cp, i.e. 9.8K/km,
as well as calculation of the wet adiabatic lapse rate formula and value = 5K/km,
and the observed average lapse rate of 6.5K/km.
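The first of those numbers does fall straight out of the textbook formula; a minimal sketch using standard constants (the wet rate is not a single constant, so only a representative value is noted):

```python
# The dry adiabatic lapse rate from first principles: |dT/dh| = g/Cp.
G = 9.81         # gravitational acceleration, m/s^2
CP_DRY = 1004.0  # specific heat of dry air at constant pressure, J/(kg*K)

print(f"dry ALR ~ {G / CP_DRY * 1000.0:.1f} K/km")  # ~9.8 K/km

# The saturated ("wet") rate varies with temperature and pressure; ~5 K/km is
# a typical warm lower-troposphere value, and 6.5 K/km is the average lapse
# rate adopted by the 1976 US Standard Atmosphere.
```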
Thus, either Boo boo is dead wrong, or all the meteorology texts, Poisson, Maxwell, Helmholtz, Carnot, Clausius, Feynman, the US Std Atmosphere, the Intl Std Atmosphere, the HS greenhouse eqn, etc etc are wrong.
Clearly, boo boo doesn’t understand anything about 1st year freshman meteorology.
“Can you get anything right???”

Ed Bo
Reply to  perplexed
September 25, 2015 5:17 pm

hockeyschtick:
You said over at your own site, and I quote:
“Since water vapor has a much higher heat capacity Cp than air or pure N2, addition of water vapor greatly decreases the lapse rate (dT/dh) by almost one-half (from ~9.8K/km to ~5K/km)”
You are claiming that the increased Cp of the atmosphere due to the presence of water vapor cuts the lapse rate in half. This means that you believe that it doubles the Cp of the atmosphere (or increases it by 50% to get from 9.8 to 6.5).
But if you had bothered to do an actual calculation (as I did), you would have seen that water vapor can only increase the Cp of the atmosphere by a few percent. I gave the specific example of 50% RH at 25C increasing Cp from 1.01 kJ/kgK to 1.03, a 2% increase.
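A minimal sketch of that calculation, using the Magnus approximation for saturation vapour pressure (a standard textbook formula, not necessarily the exact working used above), lands in the same low-single-digit-percent territory:

```python
import math

# How much does 50% RH at 25 C raise the Cp of air? A rough check using the
# Magnus formula for saturation vapour pressure and a linear mixing rule.
T_C, RH, P_KPA = 25.0, 0.50, 101.325
CP_DRY, CP_VAP = 1.005, 1.86  # kJ/(kg*K) for dry air and water vapour

e_sat = 0.6112 * math.exp(17.62 * T_C / (T_C + 243.12))  # kPa (Magnus)
e = RH * e_sat
q = 0.622 * e / (P_KPA - 0.378 * e)  # specific humidity, kg/kg

cp_moist = (1.0 - q) * CP_DRY + q * CP_VAP
print(f"q ~ {q:.4f} kg/kg, Cp rises {100 * (cp_moist / CP_DRY - 1):.1f}%")
# ~1%: nowhere near the ~50% increase needed to turn 9.8 K/km into 6.5 K/km
```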
Like so many, you just plug some numbers into equations that you find, without the foggiest notion of when or why those equations would apply. You have no clue when the “wet” ALR would occur, or what the physical reason is that it is significantly different from the dry ALR. (If you understood a basic meteorology text, you would know it instantly.)
I guess that shouldn’t be surprising, because you have no clue either as to when and why the dry ALR applies. Again, if you actually understood an introductory meteorology text, and the difference between stable and unstable lapse rates, a very basic concept, you wouldn’t be so confused.

Reply to  KevinK
September 19, 2015 3:55 pm

Excellent comment KevinK, with which I fully agree and have elevated to a post here (with my comments and other comments here):
http://hockeyschtick.blogspot.com/2015/09/why-greenhouse-gases-dont-trap-heat-in.html
Thanks and cheers!,
HS

KevinK
Reply to  hockeyschtick
September 21, 2015 7:11 pm

HS, thanks.
some “bo” body commented:
“Yeah, it’s amazing what you can come up with when you don’t understand the basics. To KevinK, everything is an optics problem. The fact that the photon is absorbed, and a completely separate set of conditions govern whether and how often another photon will be emitted, is completely beyond his comprehension, when all he knows how to deal with is reflective optics.”
Beyond my comprehension, well, if you say so. But the fact is that the interior of most modern optical integrating spheres is a diffuse absorptive surface, somewhat akin to Teflon™. The photons are in fact absorbed close to the surface of the interior of the sphere and warm the material. Then they are re-emitted and the material cools. Exactly akin to the gases in the atmosphere.
And indeed I have dealt with all types of optics including; reflective, refractive, diffractive, and scattering. I probably have modeled more optical systems with far more predictive power than any climate model so far produced.
Oh and yes we do indeed model things right down to the photon level. I was part of a team that helped calibrate the current Digital Globe Earth imaging satellites to NIST standards for absolute radiometry, I can assure you I can count individual photons with the best of them.
Maybe open your horizons a wee bit and consider the totally obvious possibility that the now infamous “radiative GHE” merely changes the response time of the gases in the climate; that is just as plausible as blindly accepting that the “temperature must rise”.
Cheers, KevinK.

Ed Bo
Reply to  hockeyschtick
September 22, 2015 7:52 am

And yet, with all that background, you still think a colder body will re-emit photons of a given wavelength one-for-one for each photon of that wavelength absorbed from a warmer body? Amazing…

September 17, 2015 7:06 pm

Thank you for an excellent analysis.

AndyE
September 17, 2015 7:43 pm

Mike Jonas, you forgot one factor : the ability of the human mind passionately to believe in something without necessarily relying on rational thinking. To conclude from models a certain result is easily done if you are a normal human. Our minds are somehow geared to attain such feats : we can just close our eyes and say, “I believe”.
But I do think that we all have a different propensity for such cerebral gymnastics : some of us are much better at it than others. You appear to be no good at it!

emsnews
September 17, 2015 7:46 pm

The most disturbing fact being ignored is that all previous Interglacials didn’t last much longer than the present one has. History is clear: we are in far more danger of another Ice Age than not.

Reply to  emsnews
September 19, 2015 6:13 pm

emsnews,
I have never ignored it, and many others are cognizant as well.
But I think large numbers of people may well starve to death from famines caused by cooling that is well short of a return to full glacial conditions.
All it would really take, in my estimation, is a serious cold snap, or perhaps a hard frost or snowstorm, in the midst of the northern hemisphere growing season, or a very late Spring in the same year as a very early Autumn, which wrecks crops before they are harvested.
The world has about a thirty day supply of food on hand at any given time, according to accounts I have seen from numerous sources.
Precarious, to say the least.

Aussiebear
September 17, 2015 7:47 pm

Looking at the summary table, it seems obvious to me that the elements that are at 0% are all natural “forces” over which Mankind has absolutely no control. Even of the three that are >0%, only one (CO2) is in any sense something we can control. The other two, water vapour and clouds, have some intrinsic relationship which is not well understood. From that perspective it’s no surprise that CAGW supporters see CO2 as some sort of control knob.

Svante Callendar
September 17, 2015 8:07 pm

So much for the mid-tropospheric chart you show in the article, the one with the force-aligned start points at 1979. Got an up-to-date one for global surface temperatures?

Christopher Hanley
September 17, 2015 8:40 pm

“If your model started off a long way from reality then inevitably the end result is that a large part of your model’s findings come from unknowns …”.
================================
And what if the “reality” they are attempting to tune their models to is not really and truly reality?

September 17, 2015 9:15 pm

I’ve said it before
So I’ll say it again,
Trying to model chaos
Borders on the insane;
Garbage in garbage out
Has never been more true,
Perhaps there’s an agenda
They want to pursue?
http://rhymeafterrhyme.net/computer-models/

mairon62
September 17, 2015 9:26 pm

As a long time coastal resident, I am still amazed by the lack of positive “feedbacks” for surface temperatures in places like Crescent City, CA, where the daily high temp matches the daily low temp. One day this past August, as I remember, was 56F for both the high and the low for the whole day. It’s utterly common to have a daily temperature swing of just a couple of degrees. You might want to bring a jacket or sweater while you are adjusting your models…just in case.

Ross
September 17, 2015 9:27 pm

So you picked a random point at which the GCMs and observations peaked at roughly the same time, equalized them at that level, then misleadingly compared them on that basis to come to the predetermined conclusion that they do not work? You also ignore that volcanoes are in fact included in the set of natural forcers used in historical reconstructions. Wow, the lengths you will go to just to support your delusion of having smashed the consensus…I am in awe. If you wish, I have a bridge I can sell to you…

Reply to  Ross
September 17, 2015 11:57 pm

Well the Climate modelers have their own bridge to sell. And they have been doing a fair job of it so far.
BTW, what are the values of the coupling coefficients for the various coupled processes?

Reply to  Ross
September 18, 2015 2:10 pm

Ross, that is quite enough Kool-Aid for you, young man.

MarkW
Reply to  Ross
September 19, 2015 6:30 am

There is a chart that was produced by the IPCC that is almost identical to the one above.
In your conspiratorial fantasies, is the IPCC also trying to discredit the models?

September 17, 2015 10:35 pm

The divergence of models from actuality has been mentioned a lot lately. There are usually two reasons:
1: Some of the graphs showing divergence either include (sometimes have only) RCP 8.5 of the CMIP5 models, which seem to have been based on an overprediction of greenhouse gases, especially methane. The RCPs lower than 8.5, such as 6.0 and 4.5, seem more realistic for total forcing through manmade greenhouse gases.
2: In general, these models seem to be tuned for success at hindcasting the past, but without consideration for multidecadal oscillations. I think about .2, maybe .22 degree C of the warming from the early 1970s to the 2004-2005 peak was from upward swing of one or a combination of multidecadal oscillations, and the pause period starting in the slowdown just before cresting the 2004-2005 peak seems to me as likely to last for a similar amount of time.
I suspect a possible third reason: The graph does not state what the temperature is of. However, balloon and satellite datasets are usually of the lower troposphere. The IPCC models are not even named as CMIP5 ones, let alone which RCP / RCPs. The model curve looks familiar to me as a composite of CMIP5 models that has often been compared not only to lower troposphere data, but also surface data. This has me thinking that the model forecast is of surface temperature. During the period starting with the beginning of 1979, balloon data indicate that the surface-adjacent 100-200 meters of the troposphere have warmed about .03 degree/decade more than the “satellite-measured lower troposphere” as a whole. See Figure 7 of http://www.drroyspencer.com/2015/04/version-6-0-of-the-uah-temperature-dataset-released-new-lt-trend-0-11-cdecade/#comments
This indicates surface temperature (after smoothing by a few years) being about .3 to .31 degree C warmer than in 1979, as opposed to ~.2 degree C. (The surface temperature dataset that agrees with this best is, or more precisely was but still is, HadCRUT3.) This is still cooler than composites of lower-RCP CMIP5 models, but I think figuring for this, instead of the lower troposphere, being what is being predicted (and hindcasted), and correcting models so that they are aware of multidecadal oscillation(s) accounting for some early-1970s to ~2004-2005 warming, will get composites of lower-RCP (4.5 and 6) CMIP5 models into doing an impressively good job.
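The arithmetic behind that ~.3 degree figure, combining the numbers stated above, for anyone checking:

```python
# Combining the figures quoted above: lower-troposphere warming since 1979
# plus the extra near-surface trend seen in balloon data.
lt_warming = 0.2     # deg C, smoothed lower-troposphere change since 1979
extra_trend = 0.03   # deg C/decade, near-surface minus lower troposphere
decades = (2015 - 1979) / 10.0

print(f"implied surface warming ~ {lt_warming + extra_trend * decades:.2f} C")
# ~0.31 C, matching the .3 to .31 range above
```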

Editor
Reply to  Donald L. Klipstein
September 17, 2015 10:58 pm

re: “I think about .2, maybe .22 degree C of the warming from the early 1970s to the 2004-2005 peak was from upward swing of one or a combination of multidecadal oscillations” – When I look at, say, the sunspot cycle, where the cycle amplitude varies a lot from cycle to cycle, I understand that just eyeballing (or even exhaustively analysing) the temperature graph will not tell you much at all about how much of the late 20thC warming was from those oscillations. You guess 0.2 to 0.22 deg C. That’s a very tight range. I would have thought that we just don’t know to anything like that accuracy.

AntonyIndia
September 17, 2015 10:48 pm

Real clouds function way below the huge grid cell sizes of the climate models. A tropical cumulonimbus can short-circuit the surface with the upper layers and transport a lot of energy up or down. They are too small and too numerous to be processed by even the fastest computers.

pat
September 17, 2015 11:36 pm

17 Sept: JoanneNova: Scandal Part 3: Bureau of Meteorology homogenized-the-heck out of rural sites too
The Australian Bureau of Meteorology have been struck by the most incredible bad luck. The fickle thermometers of Australia have been ruining climate records for 150 years, and the BOM have done a masterful job of recreating our “correct” climate trends, despite the data. Bob Fernley-Jones decided to help show the world how clever the BOM are. (Call them the Bureau of Magic)….
http://joannenova.com.au/2015/09/scandal-part-3-bureau-of-meteorology-homogenized-the-heck-out-of-rural-sites-too/

September 17, 2015 11:51 pm

What are the coupling coefficients? (numeric value)

JJM Gommers
September 18, 2015 1:48 am

What surprises me is the steep climb of 1°C for the period 1995-2025; this is such a big incremental change. You would expect, if such a result were possible, that the increase would be counteracted by the Stefan-Boltzmann equation, which contains the temperature T to the 4th power.

old construction worker
September 18, 2015 1:49 am

We don’t live in a greenhouse-type atmosphere. We live in an atmosphere that more or less acts like a swamp cooler coupled with a chiller unit. I bet that if some engineer modeled that, it would be closer to reality than computer models based on the “CO2 drives the climate” theory.

Reply to  old construction worker
September 18, 2015 2:46 am

I agree that an honest engineer could model the climate closer to reality than the present crop of computer games. But I also think that a drunken plowboy (or farmhand these days) could best the IPCC also.

Reply to  old construction worker
September 18, 2015 2:12 pm

Where I live, the atmosphere is more like a sauna bath with the window cracked a little, a fan in the corner with a randomly variable rheostat controlling it, and a bucket of ice thrown in every once in a while in Winter.

September 18, 2015 1:54 am

Climatologists are as accurate in predicting the climate as seismologists are in predicting earthquakes.
All climate models, costing taxpayers billions of dollars, pounds sterling and euros, have gone wrong. All predictions about climate have gone wrong. Today we are supposed to be seeing an ice-free Arctic summer, a rise of 4 meters in ocean levels, the desertification of the northern Mediterranean shores, 50 million climate refugees, food shortages, a warming Antarctica which is actually getting colder, snowless northern countries which in fact are having more snow, and many other predictions that have gone awry.

richard verney
September 18, 2015 2:07 am

Presently, it is not known whether ENSO events cancel out and thus have no long term impact upon climate when viewed in the long run, or more particularly on a climatology timescale of say circa 30 to 50 or perhaps even 30 to 100 years.
However, there are reasons why ENSO events may not simply cancel each other out and why it may be the case that they do have an impact on short term climatology (ie., periods of 30 years, or at any rate less than 100 years).
Consider:
1. Due to the difference in latent energy contained within the atmosphere and the oceans, the atmosphere cannot heat the oceans. It is well known that it is extremely difficult to heat water in a container open only to the atmosphere above by warm air from above.
2. A warm ocean surface (El Nino) heats the atmosphere above and since hot air rises, it also alters convection rates.
3. The same is not so with a cool ocean surface.
4. Consider a chest freezer. Open the lid, and since cold air sinks, there will be very little impact upon the temperature in the room (at least over short periods). Contrast this with the same chest freezer but one that has been converted to a BBQ at the bottom. Open the lid and it will have an immediate impact on the temperature of the room. One warms the atmosphere, the other does not.
Thus in summary, if there is a short period (let’s say 30 or so years) where there are more El Ninos than La Ninas (or where the El Ninos, or some of them, are particularly strong), on a short time scale (let’s say 30 years or so) one would expect to see warming. But even if there were exactly the same number of El Ninos as La Ninas (or they were of equal strength), it does not automatically follow that on short time scales (say circa 30 years) the effect is neutral, i.e. that La Ninas will cancel out El Ninos. They may do, but since the energy flux is different, and since one may have a greater impact upon convection, and thereby energy transport, it does not automatically follow that ENSO cancels out on short climatology time scales.
Further, it should not be overlooked that if one views the satellite data (from 1979 to date), there is no steady linear warming trend. There is simply a one-off step change in and around the Super El Nino of 1998. Prior to that event, temperatures were trending essentially flat. Following that event, temperatures are trending essentially flat. In the satellite data, one can clearly see an ENSO signal, and one that has left a marked signature following the extremely strong 1997/1998 Super El Nino.
The satellite data supports the view that ENSO may leave a signature, and that ENSO does not necessarily cancel out when viewed on short climatology time scales.
Just saying that the ENSO assumption is something requiring further consideration, and one should remain sceptical as to the correctness of that assumption, at any rate as to its impact on the short climatology time scales with which we are dealing and during which we have some data.

DD More
Reply to  richard verney
September 18, 2015 11:01 am

“Thus in summary, if there is a short period (lets say 30 or so years) ”
Let’s say 1978 (low temp point in Hansen’s earliest charts) to the 1998 temp spike. This is the entire CAGW time frame, but only 20 years. Another 10 years for ENSO to cancel.

William Astley
September 18, 2015 2:11 am

Carbon Dioxide: Contribution to percentage of general circulation model (GCM) warming: IPCC assertion 37%
Highest possible warming based on fundamental science rather than fudging of science to create an issue: 0.2C/3C = 6.7%, i.e. 0.25 watts/m^2 without ‘feedbacks’. The actual best estimate warming for a doubling of atmospheric CO2 is less than 0.1C.
If the assertion that the warming for a doubling of atmospheric CO2 without ‘feedbacks’ is 0.1C to 0.2C, and likely less than 0.1C, is correct (see below for support for that assertion), there is no CAGW problem.
The majority of the warming in the last 150 years was due to solar cycle changes, not due to the increase in atmospheric CO2. There is no CAGW; there is in fact almost no AGW due to the increase in atmospheric CO2. If that assertion is correct, global warming is reversible if there is a sudden slowdown of or interruption to the solar cycle.
The GCM models have more than a hundred ‘variables’ and hence can be ‘tuned’ to produce 3C to 6C warming for a doubling of atmospheric CO2. They could also be tuned to produce 0.1C warming.
The one ring that rules the GCMs is the initial so-called 1-dimensional no ‘feedbacks’ study which determined the surface warming for a doubling of atmospheric CO2 to be 1.2C, from a forcing of 3.7 watts/m^2.
We had all assumed, or at least I had assumed, that the 1-dimensional no ‘feedbacks’ study was scientifically accurate, on the correct page.
I had assumed the problem with why the planet has warmed less than the IPCC models predicted is due to the earth resisting forcing (negative feedback) rather than amplifying (positive feedback) forcing change, by an increase in cloud cover, increase in cloud albedo, and an increase in cloud duration in the tropics. That is the explanation for there being almost no warming in the tropics.
Negative feedback would, for example, explain why there has also been no warming in the tropical region.
Negative feedback does not, however, explain 18 years without warming, and does not explain the fact that there has been only 1/5 of the predicted warming of the tropical troposphere at 5km. Those observational facts support the assertion that the 1-dimensional no ‘feedbacks’ calculation of the expected warming for a doubling of atmospheric CO2 is fundamentally incorrect.
The infamous without ‘feedbacks’ cult-of-CAGW calculation (this is the calculation that predicted 1.2C to 1.4C surface warming for a doubling of atmospheric CO2) incorrectly/illogically/irrationally/against the laws of physics held the lapse rate constant to determine (fudge) the estimated surface forcing for a doubling of atmospheric CO2. There is no scientific justification for fixing the lapse rate to calculate the no ‘feedback’ forcing of greenhouse gases.
Convection cooling is a physical fact, not a theory, and cannot be ignored in the without ‘feedbacks’ calculation. The change in forcing at the surface of the planet is less than the change in forcing higher in the atmosphere, due to the increased convection cooling caused by greenhouse gases. We do not need to appeal to crank ‘science’ claiming that there is no greenhouse gas forcing in order to destroy the cult of CAGW ‘scientific’ argument that there is a global warming crisis to solve.
There is a forcing change due to the increase in atmospheric CO2, however that forcing change is almost completely offset by the increase in convection. Due to the increased lapse rate (a 3% change) caused by convection changes (the 3% change in the lapse rate reduces the surface forcing by a factor of four, while the forcing higher in the atmosphere remains the same), warming at the surface of the planet is only 0.1C to 0.2C for a doubling of atmospheric CO2, while the warming at 5 km above the surface of the planet is 1C. As a warming of 0.1C to 0.2C is insufficient to cause any significant feedback change, the zero feedback change for a doubling of CO2 is ballpark the same as the with-feedback response.
P.S. The cult of CAGW’s no ‘feedbacks’ 1-dimensional calculation also ignored the overlap of the absorption of water vapor and CO2. As the planet is 70% covered in water, there is a great deal of water vapor in the atmosphere at lower levels, particularly in the tropics. Taking the amount of water vapor overlap into account (before warming) in the no ‘feedbacks’ 1-dimensional calculation also reduces the surface warming due to a doubling of atmospheric CO2 to 0.1C to 0.2C. Double trump. If both the water vapor/CO2 absorption spectrum overlap and the increased convection cooling of greenhouse gases are taken into account, the warming due to a doubling of atmospheric CO2 is, without feedbacks, less than 0.1C.
http://hockeyschtick.blogspot.ca/2015/07/collapse-of-agw-theory-of-ipcc-most.html
https://drive.google.com/file/d/0B74u5vgGLaWoOEJhcUZBNzFBd3M/view?pli=1

Collapse of the Anthropogenic Warming Theory of the IPCC

4. Conclusions
In physical reality, the surface climate sensitivity is 0.1~0.2K, from the energy budget of the earth and the surface radiative forcing of 1.1 W/m2 for 2xCO2. Since there is no positive feedback from water vapor and ice albedo at the surface, the zero-feedback climate sensitivity CS (FAH) is also 0.1~0.2K. A 1K warming occurs in response to the radiative forcing of 3.7 W/m2 for 2xCO2 at the effective radiation height of 5 km. This gives the slightly reduced lapse rate of 6.3 K/km from 6.5 K/km, as shown in Fig. 2.

The modern anthropogenic global warming (AGW) theory began from the one dimensional radiative convective equilibrium model (1DRCM) studies with the fixed absolute and relative humidity utilizing the fixed lapse rate assumption of 6.5K/km (FLRA) for 1xCO2 and 2xCO2 [Manabe & Strickler, 1964; Manabe & Wetherald, 1967; Hansen et al., 1981]. Table 1 shows the obtained climate sensitivities for 2xCO2 in these studies, in which the climate sensitivity with the fixed absolute humidity CS (FAH) is 1.2~1.3K [Hansen et al., 1984].
In the 1DRCM studies, the most basic assumption is the fixed lapse rate of 6.5K/km for 1xCO2 and 2xCO2. The lapse rate of 6.5K/km is defined for 1xCO2 in the U.S. Standard Atmosphere (1962) [Ramanathan & Coakley, 1978]. There is no guarantee, however, for the same lapse rate maintained in the perturbed atmosphere with 2xCO2 [Chylek & Kiehl, 1981; Sinha, 1995]. Therefore, the lapse rate for 2xCO2 is a parameter requiring a sensitivity analysis as shown in Fig.1.

The following are supporting data (William: in peer-reviewed papers, published more than 20 years ago, that support the assertion that convection cooling increases when there is an increase in greenhouse gases, and the assertion that a doubling of atmospheric CO2 will cause surface warming of less than 0.3C) for the Kimoto lapse rate theory above.
(A) Kiehl & Ramanathan (1982) shows the following radiative forcing for 2xCO2.
Radiative forcing at the tropopause: 3.7 W/m2.
Radiative forcing at the surface: 0.55~1.56 W/m2 (average 1.1 W/m2).
This denies the FLRA giving the uniform warming throughout the troposphere in the 1DRCM and the 3DGCM studies.
(B) Newell & Dopplick (1979) obtained a climate sensitivity of 0.24K considering the evaporation cooling from the surface of the ocean.
(C) Ramanathan (1981) shows a surface temperature increase of 0.17K with the direct heating of 1.2 W/m2 for 2xCO2 at the surface.
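For what it is worth, the arithmetic behind these competing figures is a one-line division, dT = dF/lambda. Here is a minimal sketch, using the 1.1 W/m2 surface forcing quoted in (A) and the canonical 3.7 W/m2 TOA forcing; the two lambda values are illustrative placeholders (a Planck-only response, and a larger surface-energy-budget response of the kind argued for above), not values taken from any one paper:

```python
# Back-of-envelope check of the quoted sensitivity arithmetic: dT = dF / lam.
# The dF values come from the text above; the lam values are illustrative only.
dF_toa     = 3.7   # W/m2, canonical top-of-atmosphere forcing for 2xCO2
dF_surface = 1.1   # W/m2, surface forcing quoted in (A)

for name, lam in [("Planck-only response, ~3.2 W/m2/K", 3.2),
                  ("surface-budget response incl. evaporation, ~7 W/m2/K (placeholder)", 7.0)]:
    print(f"{name}: dT(TOA) = {dF_toa/lam:.2f} K, dT(surface) = {dF_surface/lam:.2f} K")
```

With the surface forcing and the larger response, the 0.1~0.2K figure drops out; with the conventional TOA numbers, the canonical ~1.2K drops out. The disagreement is in the premises, not in the arithmetic.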

Transcript of a portion of Weart’s interview of James Hansen.

Weart:
This was a radiative convective model, so where's the convective part come in? Again, are you using somebody else's…
Hansen:
That's trivial. You just put in…
Weart:
… a lapse rate…
Hansen:
Yes. So it's a fudge. That's why you have to have a 3-D model to do it properly. In the 1-D model, it's just a fudge, and you can choose different lapse rates and you get somewhat different answers (William: different answers that invalidate CAGW; the 3-D models have more than 100 parameters to play with, so any answer is possible. The 1-D model is simple, so it is possible to see the fudging/shenanigans). So you try to pick something that has some physical justification (William: you pick what is necessary to create CAGW; the scam fails when the planet abruptly cools due to the abrupt solar change). But the best justification is probably trying to put the fundamental equations into a 3-D model.

Reply to  William Astley
September 19, 2015 4:48 pm

Amen. Thanks for posting that. It proves that even the canonical assumption that sensitivity to doubled CO2 in the absence of feedbacks is ~1C is wrong! And not only due to the false fixed-lapse-rate assumption, but also due to the false assumption of fixed atmospheric emissivity, a basic mathematical error in the calculation of the Planck feedback parameter!
http://hockeyschtick.blogspot.com/search?q=kimoto

taxed
September 18, 2015 2:41 am

If the UK Met Office weather models are anything to go by, it's their forecasts of the tracking of the jet stream where they're going wrong. They tend to forecast that when high pressure builds it will come up from the south and so will push the track of the jet stream northwards. But recently there has been a trend of the jet stream taking a more southward track than their models forecast, at least during the summer months. This has increased the number of high-pressure patterns forming to the north of the jet stream, which helps to bring down cooler air from the north rather than pushing warmer air up from the south.

September 18, 2015 2:42 am

The great majority of model runs, from the high-profile UK Met Office's Barbecue Summer to Roy Spencer's Epic Fail analysis of the tropical troposphere, have produced global temperature forecasts that later turned out to be too high. Why?

As the essay points out, there are very many things we don't understand about the weather machine. There are most likely factors that we don't even know about, much less understand. Then there are things that we claim to know about but are very much wrong about. Take CO2, which the essay says is 37% of the models (whatever that means). We claim to understand CO2 and its function in weather and climate, but we are very much wrong on that. There are several credible theories of climate that do not have CO2 doing what the IPCC thinks CO2 does, and I wager one of those theories will win out after we return to climate science and stop giving the paymasters the answers they want for political reasons.
Why don't the models work? They are political constructs and not scientific ones. (my best answer)
~ Mark

MarkW
Reply to  markstoval
September 18, 2015 7:40 am

From the article, I would guess that the 37% came from the fact that, before fiddling, the models created only about 1/3rd of the observed warming when only CO2 was changed. I'm guessing that "about a third" from the text of the article and 37% from the table refer to the same thing.

Reply to  MarkW
September 18, 2015 1:25 pm

Thanks. I bet that is it.

Editor
Reply to  MarkW
September 19, 2015 6:02 am

Yes, the “about a third” does relate to the 37%. The 37% is the proportion of the models’ predicted 0.2 deg C per decade that comes from CO2 itself.

henri Masson
September 18, 2015 2:50 am

“commieBob
September 17, 2015 at 8:25 pm
“One thing I havenā€™t seen in a discussion of chaotic system models, is the idea of attractors. After enough runs, valid models of chaotic systems should show where the attractors are”.
If you consider the Vostok temperature data and submit them to a "phase plane" analysis, you'll find two attractors: the glacial periods and the temperate ones. The system evolves from one to the other along two tracks: a progressive cooling and a fast heating one (which can also be seen directly from the time series).
All the rest is only chaotic fluctuation around the attractors, or around the trajectories going from one attractor to the other (not to be confused with random fluctuations, in which case the phase plane would be filled completely and evenly).
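For readers who want to try the construction, here is a minimal sketch of a phase-plane analysis. The series below is synthetic (a noisy 100 kyr sawtooth, fast warming and slow cooling, standing in for the actual Vostok record), so the numbers are illustrative only:

```python
import numpy as np

# Phase-plane sketch: examine T against dT/dt.  A bistable record (glacial /
# interglacial) shows up as two clusters ("attractors") joined by a slow
# cooling track and a fast warming track.  Synthetic stand-in data below.
rng = np.random.default_rng(0)
t = np.linspace(0, 400_000, 4000)                  # years
saw = (t % 100_000) / 100_000                      # 100 kyr sawtooth phase
T = -8 + 10 * np.exp(-5 * saw) + rng.normal(0, 0.3, t.size)
dTdt = np.gradient(T, t)

# The (T, dT/dt) cloud is the phase plane; cluster centres approximate
# the two attractors the comment describes.
for lo, hi, label in [(-9, -5, "glacial"), (-1, 3, "interglacial")]:
    m = (T > lo) & (T < hi)
    print(f"{label:12s}: mean T = {T[m].mean():6.1f}, mean dT/dt = {dTdt[m].mean():+.2e}")
```

Plotting T against dT/dt (e.g. with matplotlib) makes the two clusters, and the asymmetric tracks between them, visible directly.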

Julian Williams in Wales
September 18, 2015 3:23 am

A very well written summary – thank you. I have never seen the models explained in this way, and it gives me a much clearer understanding of how the models have been constructed out of junk conjecture.

M Seward
September 18, 2015 4:01 am

I broadly agree with the author's assessment of the models and the 'science' that is embedded in them. My suspicion, however, is that there is also an issue with the mesh size: it is too coarse to possibly model much of the water-cycle mechanisms, particularly in the tropics. As a consequence, fiddle/fudge factors have to be introduced to guesstimate their quantitative contribution. Basically, if the mechanism takes place at a smaller scale than the mesh size, then the Navier-Stokes equations are no longer governing the maths, but some fudge factor is.
From my work using CFD I came across the mesh-size phenomenon, where a first-cut mesh model, my very first attempt after reading the software manual, converged to a solution that was manifestly wrong (since I had model test data to compare it with). In my case the software was always using the N-S model, but it was the mesh size that was causing poor results. I had gone 'coarse' in order to reduce computation time and got burned for my lack of trouble. It was a salutary lesson and a sharp introduction to the 'uncertainty principle' of mesh models.
Accuracy is inversely related to computation time, and thus to cost and convenience.
On the 'plus' side (for 'climate science', that is), it may also serve a useful marketing purpose, in that it allows junk output to be marketed, and its various financial and reputational rewards reaped, with a viable 'get out of gaol free' excuse kept in the back pocket. And let's face it, the press release does not need to contain such boring detail. 'Oh, we never guessed that our models were too coarse and converging to false solutions. We had no way of verifying them against future field data.' That of course is true, but micro models could be test-run against others with different mesh sizes for solution comparison and iteration-cycle behaviour.
Just thinking out loud folks.
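M Seward's mesh-size point is easy to reproduce on a toy problem. A sketch (1-D steady heat conduction, nothing like real CFD): a second-order scheme should cut the error roughly fourfold each time the spacing is halved, and a too-coarse mesh converges, smoothly and confidently, to a wrong answer:

```python
import numpy as np

# Toy grid-convergence study: solve -u'' = pi^2 sin(pi x) on [0, 1] with
# u(0) = u(1) = 0; exact solution u = sin(pi x).  The discrete error shrinks
# as h^2, so a coarse mesh yields a plausible-looking but wrong answer.
def max_error(n):
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    f = np.pi**2 * np.sin(np.pi * x[1:-1])
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x[1:-1])))

for n in (4, 8, 16, 32):
    print(f"n = {n:2d} cells: max error = {max_error(n):.5f}")
```

The solver "converges" at every resolution; only the comparison across mesh sizes (or against known data, as in the comment) reveals which answers to trust.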

Editor
Reply to  M Seward
September 19, 2015 6:17 am

No mesh size will ever work over decadal+ time scales. See the IPCC quote in footnote 2. But the problems go much much further than just mesh size.

henri Masson
September 18, 2015 4:13 am

“henri Masson wrote, September 18, 2015 at 2:50 am
“If you consider the Vostok data of temperature and submit them to a ā€œphase planā€ analysis, you ā€˜ll find two attractors: the glacial periods and the temperate ones. The system evolutes from one to the other along two tracks : a progressive cooloing and a fast heating one (which can also be seen directly from the time series)”.
.
For those interested, please find below the Dropbox link presenting my story very briefly and graphically, in the form of a few PowerPoint slides:
https://dl.dropboxusercontent.com/u/56918808/climate%20%26%20chaos.pptx

John W. Garrett
September 18, 2015 4:27 am

Bingo.
Excellent piece !!!
Anybody who has any experience of computer models of highly complex, dynamic, multivariate, non-linear systems knows full well that they are largely a waste of time.
"Give me four parameters, and I can fit an elephant. Give me five, and I can wiggle its trunk."
-John von Neumann
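The quip is easy to demonstrate: a four-parameter cubic passes exactly through any four points (with distinct x), however arbitrary, with zero residual and zero predictive content. A toy sketch:

```python
import numpy as np

# Four parameters fit four arbitrary points exactly -- von Neumann's elephant.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([3.1, -0.4, 2.7, 9.9])        # any numbers will do

coeffs = np.polyfit(x, y, deg=3)            # a cubic has exactly 4 parameters
print("max residual:", np.max(np.abs(np.polyval(coeffs, x) - y)))   # ~0
print("'prediction' at x = 4:", np.polyval(coeffs, 4.0))            # meaningless
```

A perfect fit to the calibration data says nothing about skill outside it, which is the models-versus-pause story in miniature.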

September 18, 2015 5:07 am

The IPCC says:
"we are dealing with a coupled nonlinear chaotic system"
Easy to "kill" on its own terms. Ask them: "what are the coupling coefficients used in the models?" Electronics deals with this all the time in transformers; we call it the "coupling coefficient". Climate has far more processes, each with its own coupling. And we haven't even touched "nonlinear".
Defeating them on their own terms is the easiest way.
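For readers outside electronics: the coupling coefficient of two windings is k = M / sqrt(L1 * L2), with M the mutual inductance, and it has to be measured or stated before the coupled equations mean anything. A one-line sketch:

```python
import math

# Transformer coupling coefficient: k = M / sqrt(L1 * L2), with 0 <= k <= 1.
def coupling(M, L1, L2):
    return M / math.sqrt(L1 * L2)

print(coupling(M=0.9e-3, L1=1e-3, L2=1e-3))   # 0.9: tightly coupled windings
```

M Simon's point, as I read it, is that a "coupled nonlinear" climate model owes us the equivalent table of coupling strengths, with uncertainties, for every pair of coupled processes.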

Reply to  M Simon
September 18, 2015 2:16 pm

” Give me six, and I can put Babar to bed without his dinner for breaking his sister’s piano.”

Reply to  Menicholas
September 19, 2015 2:01 am

LOL. I can give you 5 with one hand.

Reply to  Menicholas
September 19, 2015 6:34 pm

🙂

jeanparisot
September 18, 2015 5:18 am

Very nice. I was keeping a list of "what was good" in their AGW world (to paraphrase Gene Kranz), but the blank sheet of paper was misplaced.
I usually evaluate models based on their components' contribution to the stated error, vice the stated product. But in this case, since the product is so divergent from observed reality, maybe their skill is error. To the best of current computational ability, we seem to have identified that CO2 is NOT a significant factor in short-term climate trends. This will have to be verified by decades of observations.
You should consider adding a major source of unquantified error to your discussion: the spatial error introduced by the models' gridding systems is generally not defined or accounted for. Spatial stats is a specialized discipline, and the value of gridded data versus data aggregated by major correlations (land/sea, altitude, latitude, population density, etc.) needs some investment. At a minimum, the grids need to be adjusted for the modern ellipsoidal earth models. We're spinning through space on an egg, not a cue ball.
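On the gridding point: even the first-order correction, weighting equal-angle cells by cos(latitude) on a spherical approximation, changes a global average substantially; the ellipsoidal refinement jeanparisot asks for sits on top of that. A toy sketch:

```python
import numpy as np

# Equal-angle lat/lon cells shrink toward the poles, so an unweighted mean
# over-weights high latitudes.  cos(latitude) weights are the usual first
# (spherical) correction; an ellipsoidal earth would adjust these further.
lats = np.arange(-87.5, 90.0, 5.0)                 # 5-degree cell centres
w = np.cos(np.radians(lats))
w /= w.sum()

field = np.where(np.abs(lats) > 60, 2.0, 0.0)      # toy 2 K polar anomaly
print("unweighted mean:  ", field.mean())          # ~0.67 K
print("cos-lat weighted: ", (w * field).sum())     # ~0.27 K
```

A polar anomaly counted per cell rather than per unit area is inflated by more than a factor of two here, which is the kind of unquantified spatial error the comment describes.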

henri Masson
September 18, 2015 5:28 am

A fable about proxies and other indicators illustrating the importance of systemic thinking
"The Blind Men and the Elephant"
a Hindu fable whose subject originates from Jainism,
retranscribed by the American poet John Godfrey Saxe (1816–1887)

There were six men of Hindustan
To learning much inclined,
Who went to see the Elephant
(Though all of them were blind),
That each by observation
Might satisfy his mind.

The First approached the Elephant,
And happening to fall
Against his broad and sturdy side,
At once began to bawl:
"God bless me! but the Elephant
Is very like a wall!"

The Second, feeling of the tusk,
Cried: "Ho! what have we here
So very round and smooth and sharp?
To me 't is mighty clear
This wonder of an Elephant
Is very like a spear!"

The Third approached the animal,
And happening to take
The squirming trunk within his hands,
Thus boldly up and spoke:
"I see," quoth he, "the Elephant
Is very like a snake!"

The Fourth reached out his eager hand,
And felt about the knee.
"What most this wondrous beast is like
Is mighty plain," quoth he;
"It is clear enough the Elephant
Is very like a tree!"

The Fifth, who chanced to touch the ear,
Said: "E'en the blindest man
Can tell what this resembles most;
Deny the fact who can,
This marvel of an Elephant
Is very like a fan!"

The Sixth no sooner had begun
About the beast to grope,
Than, seizing on the swinging tail
That fell within his scope,
"I see," quoth he, "the Elephant
Is very like a rope!"

And so these men of Hindustan
Disputed loud and long,
Each in his own opinion
Exceeding stiff and strong,
Though each was partly in the right,
And all were in the wrong!

So, oft in theologic wars
The disputants, I ween,
Rail on in utter ignorance
Of what each other mean,
And prate about an Elephant
Not one of them has seen!
And, of course, I personally think that IPCC supporters are involved in a theological war (all they do and say is ultimately aimed at venerating Goddess Gaia, to atone for all the excesses of our industrial, economically developed world, in which they have huge difficulties finding their own place… without grants and subsidies).

pos
September 18, 2015 5:56 am

In light of recent research, should ozone depletion be included in the list?
"Stratospheric Ozone Depletion: The Main Driver of Twentieth-Century Atmospheric Circulation Changes in the Southern Hemisphere", Polvani et al., 2011
http://journals.ametsoc.org/doi/abs/10.1175/2010JCLI3772.1

MarkW
September 18, 2015 6:51 am

While it is true that the cyclic nature of the various oceanic cycles means that, over time, they play little or no role in the climate, the problem comes from the tuning of the models. There was no attempt made to remove the effects of El Nino/PDO/AMO and other cycles from the raw data prior to using that data to tune the models. As a result, short-term warming that was caused by the oceanic cycles was assumed to have been due to CO2, and so they assumed that the warming from the mid-1970s to around 2000 would continue, even accelerate.

herkimer
September 18, 2015 7:02 am

The fact that ocean currents and ENSO contribute 0% to the climate models' predicted future makes the models of 0% value in predicting global temperatures for the near term, the decadal term, or the 60-year climate cycle period. We might as well stop comparing their predictions, which are running 4 times higher than reality, because the models will be out in left field constantly unless they change their methods. In my opinion they will not change, because they are getting too much free money without any accountability for their clearly failed predictions. The models are paraded as PR material to the public and used to exaggerate the non-existent climate threat. It seems to me that the reason they want to stop all future climate debate is to prevent the public from really finding out how uncertain their models are, and how uncertain the science really is which they have been pushing on the public, the politicians and the media as settled. This science and these models are so uncertain that they should never have played a part in public policy. We will pay dearly for this lack of oversight and accountability to the public by these alarmist scientists.

tabnumlock
September 18, 2015 7:15 am

Just flip a coin and be right half the time. Cost to taxpayers: as low as 1 cent (assuming you want to reuse it indefinitely).

September 18, 2015 7:29 am

Having written and designed numerical models for petroleum reservoirs, it was necessary to take the model back and run it to confirm a history match of the actual results before the projections of the model were used to make economic decisions. It appears that the climate models' failure to project even remotely accurately has caused the programmers to abandon attempts to correct the models. Rather than tweak the models to match historical reality, today they seem to be going back and adjusting the temperature history record to match the models.

Curious George
September 18, 2015 7:44 am

Climate models are only as reliable as the people maintaining them. A failure to correct a known flaw – and running known flawed models thousands of times instead – does destroy any confidence.

MarkW
September 18, 2015 7:45 am

Ignoring cyclic events in the models means that the models are only good for predicting steady-state averages of climate, i.e. weather averaged over a long enough period that all the cycles have averaged out.
That means that any claims that we are going to warm X degrees in the next Y years are worthless, since the models simply are not capable of discerning what the temperature is going to be in a particular year.
Beyond that, as long as CO2 levels are changing, we are not in a steady-state condition.
If the models were any good (which they aren't), the only thing they would be useful for is a statement of the form: once CO2 levels have stabilized at a particular level, wait a couple of hundred years for things to settle down, then average the temperatures over 3 or 4 centuries, and this is what the average will be.

Resourceguy
September 18, 2015 8:05 am

Answer: reliable enough to be ignored in policy-messaging overreach efforts.

Tom O
September 18, 2015 8:15 am

How reliable are the models? A very simple fact: you cannot model that which you do not know. You cannot model based on anything but DATA, and proxy information is NOT data. How reliable, then, are climate models? Far less reliable than expecting to make a windfall at any casino.

rgbatduke
September 18, 2015 8:20 am

You make one serious error in the way you lay out your table. GCRs and aerosols belong in completely different boxes, because GCRs are not well understood and have a questionable effect (what we know of them at all has largely come from work done after the GCMs were originally written) and hence are neglected, but aerosols are not. In fact, in the GCMs aerosols are included and produce a strong cooling effect.
This is one of their major problems. One of the ways they balanced the excessive warming in the reference period was by tweaking the aerosol knob, turning it up until it balanced the rapidly increasing CO2-driven effect. Then, as time passed, CO2 was kept going up and aerosols were not across the rapidly warming part of the late 80's through the 90's. In this way, the CO2 sensitivity had to be turned up still more to match the rapid rise against the large background cancellation that allowed the 70's and early 80's to be fit well enough not to laugh at the result.
Of course, then came the 1997-1998 super-ENSO that produced one last glorious burst of warming, and then warming more or less stopped and the infamous "pause" began. CO2 continued increasing, faster than ever; aerosols didn't. The climate models that used this pattern clearly showed the world metaphorically, catastrophically "cooking" from a climate shift (if an increase in average temperature that represents moving from one city in NC to another 40 miles away can be called a climate shift at all), but the world refused to cooperate. Since the late 90's, temperatures have been at the very least close to flat, even as frantic efforts have continued to find some excuse not to fail the models, if necessary by changing the data they do not predict.
The models clearly overestimate the impact of aerosols. This is bad news for the entire hypothesis that radiative chemistry is the dominant forcer in climate change, because if aerosols are not a major cooling factor, then the large CO2 sensitivity needed to fit the rapid increase in temperature in the single 15-year stretch where the planet significantly warmed at all in the last 70 years (back to 1945) is simply wrong, far too high. They have to go back to a sensitivity that is too low to produce a catastrophe, one that fits the other 55 of the last 70 years decently but fails to do a good job on the 15. Which is just as well, because a burst of warming almost identical to the late-20th-century warming occurred over a 15-year period in the early 20th century, and that warming was already being skated right over by the GCMs (although this hindcast error was ignored).
If you reduce the aerosols' contribution to "very little", you have to reduce CO2 sensitivity, including all feedbacks, to ballpark 1 to 1.5 C per doubling in order to come close to fitting the data, or at least not disagreeing with it to the point of obvious failure. But this is still in GCMs that, as you point out, do not credibly include the modulation of cooling efficiencies by things like the multidecadal oscillations and thermohaline transport: self-organized phenomena that obviously have a huge impact on the absorption, transport, and dissipation of heat, and that are clearly correlated with both climate change and weather events, major and minor, across the instrumental and proxy-inferred record. Once those were accurately included, what would sensitivity be? What would the net feedbacks be? Nobody knows.
And does it matter? Research demonstrating fairly convincingly that aerosol cooling is overestimated is now a year or two old, but people are still trying to argue that increased volcanic aerosols are the cause of the pause (when they aren't trying to erase the pause altogether). The only real problem is that they can't find the volcanoes to explain the increase in aerosols, or demonstrate that atmospheric transmittance at the top of Mauna Loa has been modulated, or come up with a credible model for the impact of even large volcanoes on climate, except to note that it is transient and very small until the volcanoes get up to at least VEI 5, if not VEI 6 or higher. And those volcanic eruptions are rare.
So no, it does not matter. What matters is that in the world’s eyes, the energy industry is accurately portrayed in Naked Gun 2 1/2 — a cabal that would willingly use assassination and lies and bribes and corruption to maintain its “iron grip” on society and the wealth of its owners. One isn’t just saving the world ecologically — this may not even be the real point of it all. It is all about the money and power. In the real world, it is a lot simpler to co-opt and subvert a political movement than it is to fight it. And that’s what the power industry has done.
Who makes, and stands to make, the most money out of completely retooling the energy industry to be less efficient? Oh, wait, would that be the energy industry? Do they really give a rat’s ass if they get rich selling power generated by solar panels (if that’s what we are taught to “want”) rather than coal? Not at all. They’ll make even larger profits from the margin on more expensive power. They want power to be as expensive as possible, which means as scarce as possible, especially in a nearly completely inelastic market.
If climate catastrophism didn’t exist, the energy industry would probably invent it. Come to think of it, they probably did. And there is no Leslie Nielsen to come to the rescue.
rgb
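The CO2-versus-aerosol tuning trade-off rgb describes can be illustrated with a toy fit. Everything below is synthetic (made-up forcing series and "observations"); the point is only that when two forcing histories are nearly collinear over the calibration period, very different knob settings fit the record almost equally well:

```python
import numpy as np

# Toy degeneracy: rising CO2 forcing vs. rising-in-magnitude aerosol cooling.
# Nearly collinear regressors => many (a, b) pairs give the same misfit.
rng = np.random.default_rng(1)
t = np.arange(1950, 2000)
f_co2 = 0.02 * (t - 1950)                       # synthetic CO2 forcing proxy
f_aer = -0.015 * (t - 1950)                     # synthetic aerosol forcing proxy
obs = 0.012 * (t - 1950) + rng.normal(0, 0.05, t.size)   # synthetic "record"

for a, b in [(1.0, 0.53), (2.0, 1.87), (3.0, 3.20)]:     # sensitivity-ish knobs
    resid = obs - (a * f_co2 + b * f_aer)
    print(f"a = {a:.1f}, b = {b:.1f}: rms misfit = {np.std(resid):.3f}")
```

All three pairs fit the synthetic record to within the noise, so the calibration period cannot distinguish a low-sensitivity world from a high-sensitivity, heavily aerosol-cooled one; they diverge only in the forecast.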

jeanparisot
Reply to  rgbatduke
September 18, 2015 8:57 am

I have sat in a room with a US Senator and nuclear power executives from all the major US energy companies who, when asked what they wanted, universally answered "a price for carbon". Why? It was explained that the regulated market for power in the US restricted their ability to obtain project financing from today's customers for building new plants without showing a cost to consumers for the alternatives.

Reply to  rgbatduke
September 18, 2015 1:44 pm

RGB, I was sorry to see the last three paragraphs of your comment. Your comments on scientific matters have always been objective, rational and illuminating, but your views on the energy industry are ill informed and seem to adopt the naïve "evil fossil fuel company" paradigm. The "energy" industry is far from monolithic, and while all companies will try to influence policy to their advantage, the interests of the different segments (e.g. wind, solar, nuclear, biomass, fossil fuel), and indeed of the companies within those segments (e.g. international major oil companies, national oil companies, independent operators), are very different.
What is true is that the energy companies at the moment, by and large, blindly go along with the consensus CAGW meme as a basis for looking at the future, and where convenient will use it to influence politicians where it suits their particular ends.
An interesting example of how belief in CAGW influences individual companies is Shell Oil, who are betting billions on their Arctic offshore drilling program while other companies have withdrawn. They must believe that Arctic sea ice is going to decrease in the next 20 years or so. I think the opposite.
For forecasts of the timing and extent of the coming cooling, see
http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
and
http://climatesense-norpag.blogspot.com/2015/08/the-epistemology-of-climate-forecasting.html

MarkW
Reply to  Dr Norman Page
September 18, 2015 2:17 pm

For the most part, the actual companies that are building wind and solar are not the same companies that are building oil, gas and coal.
Conflating all companies that build plants that make electricity into a monolithic "power industry" is the kind of tactic used by one who either has no knowledge of what he is talking about, or who hopes that his listeners don't.

MarkW
Reply to  rgbatduke
September 18, 2015 2:15 pm

Do you have any evidence that the power industry is behind the global warming movement? Or are you just letting your normal paranoia take over again?

Editor
Reply to  rgbatduke
September 19, 2015 7:05 am

Thanks for your comment, rgb. In my article “understood” was in the context of prediction. Aerosols may well have a strong cooling effect, but what matters is how much they contribute (+ or -) to the predicted future warming. Since the modellers don’t know how aerosols will change in future they can’t predict their future impact. Since GCRs are known to create aerosols, and also can’t be predicted, I lumped the two together for simplicity. Maybe I should have just done them separately. They are both “understood : No; contribution : 0%” because no-one knows how they will change in future, and there is no contribution (+ or -) from them to future warming in the models. The three non-zero factors I cited do deliver 100% of the predicted future warming, according to the IPCC.

henri Masson
September 18, 2015 8:37 am

“Tom O said (September 18, 2015 at 8:15 am)
.” You cannot model based on anything but DATA, and proxy information is NOT data”.
And when the system is a dynamical (chaotic) one, it responds so monstrously to the tiniest change in initial conditions that, given the experimental and data-processing errors associated with those conditions, you cannot make any prediction. Also, the data are not spread normally (Gaussian distribution) around a given mean value or trend line, so you cannot define any confidence interval, because the underlying statistical hypotheses are not met.
And, as I recalled and showed earlier in this discussion (henri Masson, September 18, 2015 at 2:50 am and at 4:13 am), the climate system is of a dynamical nature.
This means that speaking about scenarios fed into models for limiting global temperature to 2°C by the end of the century (or for a doubling of CO2) is absolute mathematical and statistical nonsense. (By the way, the "magic limit" is now 1.5°C, because even a large number of the models with their latest settings predict an increase in temperature of less than 2°C.)

Reply to  henri Masson
September 19, 2015 6:39 pm

Even more preposterous is the idea that 2 degrees of warming will somehow be in any way a problem, let alone a catastrophe, on a planet which has large areas perpetually frozen solid, and much of the habitable surface cold enough, during large stretches of any given year, to kill any person who gets caught without sufficient protections and stores of supplies.

herkimer
September 18, 2015 8:54 am

It seems to me that if you ignore ocean currents and ENSO in your models, you end up barking up the wrong tree with your climate modeling. We are all well aware of the background warming of about 0.75 C/decade during the past century. We are also aware that, for reasons we do not yet completely understand, this background temperature has not always warmed but really fluctuates, as we saw with the Little Ice Age, Medieval Warm Period, Middle Ages cold period, Roman Warm Period, etc. Man was not responsible for those fluctuations. Recent evidence shows that El Ninos, especially the strong ones, raise this background global temperature in a series of steps. We are also aware that superimposed on the background temperatures is a 60-year climate cycle, with approximately 30 years of warmer temperatures and 30 years of cooler temperatures. These changes are caused by changes in ocean temperatures and currents. So over a 100-year period the rate of temperature change is greatly modified, and the expected rise is much less than if you ignore them in your models. Two cold troughs, centered around 1910 and 1979, greatly modified our climate during the last 100 years. There will be two during the next 100 years. If you cannot simulate these, your temperature predictions are worthless and all your alarmism is unjustified.
So along come the alarmists, and they first blamed mankind for the warming in the post-1970s, which was really caused by the major oceans when both were in their warm mode, plus 3-4 strong El Ninos. They then changed their tune and blamed mankind for all the warming of the background (0.75 C/decade) after the start of the industrial revolution, going back some 100 years. We know this to be wrong also, since the planet has been naturally warming after the Little Ice Age. Now they have again changed the tune, blaming all weather events and extreme events on mankind and telling us their models tell them so.
If you use models that do not reflect reality, your output is worthless, and the public should be told so.
http://appinsys.com/GlobalWarming/GW_TemperatureProjections.htm#akasofu

herkimer
Reply to  herkimer
September 18, 2015 8:56 am

The background warming should read 0.75 C/CENTURY, not per decade.

September 18, 2015 9:12 am

Your article has much better writing than prior articles — so much better I suspected a ghost writer!
Unfortunately, your science knowledge and logic have not improved, so the greatly improved writing quality now makes you dangerous!
YOU wrote:
“Carbon Dioxide (CO2) : At last we come to something which is quite well understood. The ability of CO2 to absorb and re-emit a specific part of the light spectrum is well understood and well quantified, supported by a multitude of laboratory experiments. [NB. I do not claim that we have perfect understanding, only that we have good understanding]. In summary ā€“ Understood? Yes. Contribution in the models : about 37%”
Your statement is wrong.
CO2 is NOT “quite well understood”.
You seem confident scientists know what CO2 does to Earth’s average temperature.
You are wrong.
If CO2 was really so well understood, this website probably would not exist.
Climate science would be “settled”.
Mr. Watts would probably have a blog where he posted pictures of his family and his vacations. (ha, ha)
Geologists and other non-climate-modeler scientists (i.e., real scientists) report, from climate proxy studies of Earth's climate history, huge past changes in CO2 levels with absolutely no correlation with average temperature.
Do you dismiss all the work of non-climate modeler scientists over the past century — if so, you are practicing “data mining”.
Laboratory experiments prove little about what CO2 actually does to the average temperature.
They suggest warming, but with a rapidly diminishing warming effect as CO2 levels increase.
There is no scientific proof that manmade CO2 is the dominant cause of the minor warming over the past 100 years.
Where is that proof written down for us to see?
No actual proof exists (based on what the word “proof” means in science).
Is there an upper limit of CO2 concentration where there is no more “greenhouse” warming, or too little warming to be measured?
— What is that upper limit?
Does the first 100 ppmv of CO2 cause warming?
— Probably.
How much warming does the next +100 ppmv of CO2 cause?
— No one knows.
Why was there a mild cooling trend from 1940 to 1976?
Why was there no warming from the late 1990s to 2015?
— For both time periods, CO2 was said to be rising rapidly!
There are many theories for the lack of warming in those periods, but no scientific proof.
We can confidently state, based on climate proxy studies, that even very high levels of CO2 do not guarantee Earth will be warm.
We can state with great certainty that the era of rising manmade CO2 from 1940 to 2015, based on smoothed data, has had periods of FALLING or STEADY average temperatures more often than periods of RISING average temperatures.
Would a CO2 increase from 400 to 500 ppmv cause any warming, or at least enough warming to be measurable?
— No one knows.
Are there positive or negative feedbacks, that amplify or buffer, greenhouse gas warming from CO2, assuming CO2 caused at least some of the warming since 1850?
— No one knows.
And since I enjoy giving you a hard time, I will continue:
The chart you selected was very clear and easy to read.
However, it implies the climate models are wrong simply because the short term climate has not matched the predictions (simulations).
That seems like a logical conclusion … but …
I am reminded of investment advice from the great Fidelity Magellan Fund manager, Peter Lynch, who bought what he thought were undervalued stocks.
He has written in his books that quite a few times the stocks he bought fell a lot, sometimes even over 50%, before they turned around and became big winners for him.
His point was that his long-term stock predictions were often right, even when the short-term performance of his new stock purchases initially made him look like a fool.
My point, and I do have one: it is possible today's climate model "predictions" will be right about the climate in 100 years, even if they do not appear accurate for the first decade, or even for the next 50 years.
Of course it’s MORE likely if a 100-year prediction looks foolish in the first 10, 20 or 40 years of observations, the prediction was really nothing more than a wild guess by people who do not understand what causes changes in Earth’s climate.
Based on ice core studies, it is a pretty safe guess that the 1850 Modern Warming would last for hundreds of years … so a wild guess that warming will continue tends to make the “climate astrologists” (modelers) appear to understand CO2 fairly well, when they don’t.
They don’t understand the effects of CO2 with any precision.
Neither do you.
Neither do I.
And if you want to avoid criticism in the future, do not state that the effects of CO2 are "quite well understood".
The THEORY of what CO2 does to Earth’s climate is well known.
Many people think CO2 levels / “greenhouse theory” can be used to predict the future average temperature.
The inaccuracy of climate model “predictions” suggests CO2 is not well understood.
Your article did a pretty good job of “trashing” the models.
In my opinion, the climate modelers are "props" for politicians, on the public dole, not real scientists, and their computer games are political tools for scaring people: climate astrology, not real science.
Models are not data.
With no data, there is no science.
My climate blog for non-scientists:
Free
No ads.
No money for me.
A public service.
http://www.elOnionBloggle.blogspot.com

Editor
Reply to  Richard Greene
September 19, 2015 7:12 am

"If CO2 was really so well understood, this website probably would not exist." Not so. It has been stated and explained time and time again on WUWT that the argument is not about the basic physics of CO2, but about the other factors that are built into the models and which exaggerate the warming. My article explains the origin of that exaggeration.

Reply to  Mike Jonas
September 19, 2015 4:53 pm

Once again, Mike Jonas, your claim that the "basic physics of CO2 (as relates to AGW) is well understood" is easily proven false, for 8-plus reasons, in 4 papers by Japanese physical chemist Kyoji Kimoto:
http://hockeyschtick.blogspot.com/search?q=kimoto
and the comments by KevinK et al above:
http://hockeyschtick.blogspot.com/2015/09/why-greenhouse-gases-dont-trap-heat-in.html

richard verney
Reply to  Mike Jonas
September 20, 2015 5:38 am

The fact that CO2 is a radiative gas, and why it is such, is well understood.
Likewise, the laboratory properties of CO2 are more or less well known and understood.
But that is where it ends. Earth's atmosphere is not laboratory conditions. How CO2 acts in Earth's atmosphere is not known or understood. Some consider that it will behave as it does in a laboratory, with like results and impact. Others consider that Earth's atmosphere is so far divorced from laboratory conditions that the behaviour of CO2 under laboratory conditions tells us little if anything about what impact CO2 has when it is not working under laboratory conditions and is simply a small part of a much larger system.
To some extent this is a question of feedbacks, but it is far more complex than that, since other processes (convection, the water cycle, lapse rate etc.) are in play, and those other processes may wholly swamp the effect of CO2.
The real-world effects of CO2 can only be deduced from studying the real world, not the laboratory, still less from models that are imperfect on almost every level.

Allen
September 18, 2015 10:24 am

Isn’t the attempt to model the climate of the Earth a bit of a throwback to nineteenth century thinking?
I’m reminded of the following quote by Pierre-Simon de Laplace
“Consider an intelligence which, at any instant, could have a knowledge of all the forces controlling nature together with all the momentary conditions of all the entities of which nature consists.
If this intelligence were powerful enough to submit all these data to analysis it would be able to embrace in a single formula the movements of the largest bodies in the universe and those of the lightest atoms; for it nothing would be uncertain the past and future would be equally present for its eyes.”

herkimer
September 18, 2015 10:40 am

Abstract
This article introduces this JBR Special Issue on simple versus complex methods in forecasting. Simplicity in forecasting requires that (1) method, (2) representation of cumulative knowledge, (3) relationships in models, and (4) relationships among models, forecasts, and decisions are all sufficiently uncomplicated as to be easily understood by decision-makers. Our review of studies comparing simple and complex methods – including those in this special issue – found 97 comparisons in 32 papers. None of the papers provide a balance of evidence that complexity improves forecast accuracy. Complexity increases forecast error by 27 percent on average in the 25 papers with quantitative comparisons. The finding is consistent with prior research to identify valid forecasting methods: all 22 previously identified evidence-based forecasting procedures are simple. Nevertheless, complexity remains popular among researchers, forecasters, and clients. Some evidence suggests that the popularity of complexity may be due to incentives: (1) researchers are rewarded for publishing in highly ranked journals, which favor complexity; (2) forecasters can use complex methods to provide forecasts that support decision-makers' plans; and (3) forecasters' clients may be reassured by incomprehensibility. Clients who prefer accuracy should accept forecasts only from simple evidence-based procedures. They can rate the simplicity of forecasters' procedures using the questionnaire at simple-forecasting.com.
Keywords
Analytics;
Big data;
Decision-making;
Decomposition;
Econometrics;
Occam's razor

henri Masson
Reply to  herkimer
September 18, 2015 11:42 am

Answer to herkimer, September 18, 2015 at 10:40 am:
"forecasters' clients may be reassured by incomprehensibility. Clients who prefer accuracy should accept forecasts only from simple evidence-based procedures"
I think you are confusing "transparency", "sophistication of the code of a model" and "complexity of the system description".
Transparency on all hypotheses made and details of methods used is of course a scientific must. Your results must be reproducible.
Sophistication of the code has to do with software skills, the methods used (you can use rather standard subroutines, for transparency and ease of duplication of your results) and the spatio-temporal resolution of your model.
But for the physical system you describe, you must be able to describe absolutely and correctly ALL the phenomena happening in your system AND all the connections and feedback loops (as well as the associated time constants) existing between them, because it is largely the structure of the system that defines its overall behaviour.
Good luck if you try to do this for the climate system, made of a huge number of interconnected phenomena (often far from equilibrium and in dynamic mode, as you are looking at changes over time). And if you look, as the IPCC does, only at the anthropogenic greenhouse gases as a driver of global temperature change (a funny concept from a mathematical point of view, actually), then use different forcings for finally fine-tuning the sensitivity knob, and consider the other factors as noise, you are completely wrong and have no chance of getting any convincing result. This is because the system is highly nonlinear and the effects are anything but additive. The residues of the adjustment of the data by the model are not random noise and cannot be handled as if they were. The system is chaotic (a dynamical system), as I said before in this discussion.
Alternatively, you can consider the (climate) system as a complete black box (or better, a set of interconnected black boxes), and in that case, for each of the boxes, you use the input/output time series to define what is called in automation theory a transfer function (obtained by using Laplace transforms and convolution integrals); a sketch of this idea follows below. Alternatively, you build an artificial-intelligence neuronal system that you tune by trial and error to reproduce the time series of the different proxies taken into consideration. You work exclusively with signal theory and do not try to understand the underlying physics. This is called complex system theory, and it is the future in modelling, also for climate models.
Well, at least this is my opinion, and I share it completely.
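A minimal sketch of the black-box, transfer-function approach in its simplest form: fit the gain K and time constant tau of a first-order lag, tau * dy/dt + y = K * u, to input/output samples by least squares on the discretized recursion. The data are synthetic and purely illustrative:

```python
import numpy as np

# Black-box identification: first-order lag  tau * dy/dt + y = K * u,
# discretized as  y[n+1] = a*y[n] + b*u[n]  with  a = 1 - dt/tau, b = K*dt/tau.
rng = np.random.default_rng(2)
dt, n = 1.0, 500
u = rng.normal(0, 1, n).cumsum() * 0.01        # synthetic "forcing" input
K_true, tau_true = 2.0, 20.0
y = np.zeros(n)
for i in range(n - 1):                         # generate the "response"
    y[i + 1] = y[i] + dt * (K_true * u[i] - y[i]) / tau_true

X = np.column_stack([y[:-1], u[:-1]])          # least-squares fit of (a, b)
a, b = np.linalg.lstsq(X, y[1:], rcond=None)[0]
tau_est = dt / (1 - a)
K_est = b * tau_est / dt
print(f"recovered K = {K_est:.2f}, tau = {tau_est:.1f}")   # ~2.0 and ~20.0
```

No physics is assumed beyond the model order; whether one such box (or a network of them) is an adequate description of the climate is exactly the open question.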

herkimer
Reply to  henri Masson
September 18, 2015 12:44 pm

henri Masson
Thanks for your thoughtful comments. I think you are saying what I believe the authors of the above abstract implied: having more complex models with bad science is worse than having simpler models with more valid science.

Silver ralph
September 18, 2015 10:58 am

Perhaps they should be called the “Old King Cole” models:
Old King Climatologist was a merry old soul,
And a merry old soul was he,
He called for his dataset, and applied for a grant,
And he called for his fiddlers three,
Every fiddler had a fiddle,
And a very fine fiddler was he.
With apologies to the Welsh bards.

knr
September 18, 2015 11:23 am

‘you cannot do that and retain any credibility’
Normally the author would be correct in this idea. However, and this is a big however, you're dealing with climate 'science', and normal is not something that has any real relationship to this area, where what matters is not the validity of your facts but their 'usefulness'.
So the models can be wildly inaccurate and scientifically worthless, but if what they produce is 'useful' in the political or ideological sense, then they are 'perfect'.
Rule one of climate 'science' is, after all: 'if reality and the models differ in value, it is reality which is in error'.

henri Masson
September 18, 2015 11:49 am

Indeed, and this is called "scientism": to justify a political agenda a posteriori, undertake some "oriented and biased" research. This is exactly why the IPCC was created, actually: to try to justify the fight against fossil fuels. They have been busy now for more than 2 decades, without any scientific proof of their claims.

willhaas
September 18, 2015 3:48 pm

In building a GCM climate simulation, I would assume that one would start with an existing and working GCM weather simulation that for the most part seems to work, with a significant track record. To simulate climate in finite time, one has to first increase the spatial and temporal sampling intervals. Such increased intervals may cause the computational process to become at least marginally unstable. A global warming result could be caused entirely by this computational instability. Has any work been done to rule out the possibility of computational instability in these climate simulations?
To simulate climate, it sounds like they hard-coded in the idea of CO2-based warming. If that is the case, then the simulations beg the question and their results are totally useless. They code in that more CO2 causes warming, and that is exactly what the results show, making the effort entirely useless. The theory is that adding more CO2 to the atmosphere causes an increase in the radiant thermal insulation properties of the atmosphere, resulting in restricted heat energy flow, resulting in higher temperatures at the Earth's surface and lower atmosphere but lower temperatures in the upper atmosphere. If it were really true, then one would expect that the increase in CO2 over the past 30 years would have caused a noticeable increase in the natural lapse rate in the troposphere, but that has not happened.
But let us go on and assume that CO2 does actually cause an increase in insulation. The increased temperatures would cause more H2O to enter the atmosphere, which according to the AGW conjecture would cause even more warming, because H2O is also a greenhouse gas and is in fact the primary greenhouse gas. That is the positive feedback they like to talk about. That is where the AGW conjecture ends, but that is not all of what happens. Besides being a greenhouse gas, H2O is a major coolant in the Earth's atmosphere, moving heat energy from the Earth's surface to where clouds form via the heat of vaporization. According to energy balance models, more energy is moved by H2O via the heat of vaporization than by both convection and LWIR absorption-band radiation combined, so that without even considering clouds, H2O provides a negative feedback to changes in CO2, thus mitigating any effect that CO2 might have on climate. The wet lapse rate is smaller than the dry lapse rate, which further shows that more H2O in the atmosphere has a cooling effect. So coding in that H2O amplifies CO2 warming cannot possibly be correct. The feedbacks have to be negative for the climate to be stable, which it has been for at least the past 500 million years, long enough for life to evolve. We are here.
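The computational-instability question has a standard textbook illustration. A toy sketch (an explicit 1-D diffusion scheme, nothing to do with any actual GCM's numerics): the scheme is stable only while r = kappa*dt/dx^2 <= 0.5, and coarsening the time step past that limit turns a smooth solution into exponentially growing noise, i.e. a spurious trend born purely of the numerics:

```python
import numpy as np

# Explicit diffusion: u <- u + r * (u_{k+1} - 2 u_k + u_{k-1}); stable iff
# r = kappa * dt / dx^2 <= 0.5.  Past the limit, a tiny perturbation in the
# highest spatial mode grows exponentially and swamps the solution.
def run(r, steps=200, n=50):
    u = np.sin(np.pi * np.linspace(0.0, 1.0, n))
    u[n // 2] += 1e-8                       # tiny perturbation to seed modes
    for _ in range(steps):
        u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return np.max(np.abs(u))

for r in (0.4, 0.5, 0.6):
    print(f"r = {r}: max |u| after 200 steps = {run(r):.3e}")
```

At r = 0.4 the field decays as the physics says it should; at r = 0.6 the output is astronomically large garbage. Real GCMs use schemes and time steps chosen to avoid this, but the question of what marginal instabilities and drifts survive is, as the comment says, worth asking.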

Reply to  willhaas
September 19, 2015 4:58 pm

“The theory is that adding more CO2 to the atmosphere causes an increase in the radiant thermal insulation properties of the atmosphere resulting in restricted heat energy flow resulting in higher temperatures at the Earthā€™s surface and lower atmosphere but lower temperatures in the upper atmosphere. If it were really true then one would expect that the increase in CO2 over the past 30 years would have caused a noticeable increase in the natural lapse rate in the troposphere but that has not happened. ”
The lapse rate formula is
dT/dh = -g/Cp
CO2 (and H2O) have higher Cps than air, therefore increase of Cp by added CO2 decreases/b> the lapse rate dT/dh, thus COOLING, not insulating or warming, the surface. The models have this bassackwards!
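The formula is simple enough to evaluate. One caution on the claim above: per unit mass, the specific heat of CO2 (~846 J/kg/K) is actually lower than dry air's (~1004 J/kg/K); it is higher only per mole. Here is a sketch that just evaluates Gamma = g/cp for a mass-weighted mixture, mainly showing how vanishingly small the change is at ppm concentrations either way:

```python
# Dry adiabatic lapse rate Gamma = g / cp for a CO2-perturbed mixture.
# Note: per unit mass, cp(CO2) ~ 846 J/kg/K is BELOW dry air's ~1004 J/kg/K
# (it is higher per mole), so the sign of the change depends on the basis;
# either way, the effect is in the fourth decimal place at ppm levels.
g, cp_air, cp_co2 = 9.81, 1004.0, 846.0

for ppmv in (280, 400, 800):
    w = ppmv * 1e-6 * 44.0 / 28.97          # volume fraction -> mass fraction
    cp_mix = (1 - w) * cp_air + w * cp_co2  # mass-weighted mixture cp
    print(f"{ppmv} ppmv CO2: Gamma = {1000 * g / cp_mix:.4f} K/km")
```

Whatever one makes of the sign argument, the direct effect of CO2 on g/cp is orders of magnitude too small to matter; the real dispute is over the environmental lapse rate, which convection and moisture set.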

willhaas
Reply to  hockeyschtick
September 20, 2015 5:42 pm

I agree. The natural lapse rate should include everything that affects the lapse rate. The natural lapse rate is a measure of the insulating properties of the atmosphere. CO2 is not a source of energy, so the only way it can affect climate is by passively changing the radiant thermal insulation properties of the atmosphere. The climate sensitivity of CO2 must be directly proportional to how much an increase in CO2 changes the natural lapse rate. Observations over the past thirty years show that the climate sensitivity of CO2 equals zero. Hence CO2 does not affect climate. There is no evidence in the paleoclimate record that CO2 has any effect on climate either.

Reply to  willhaas
September 20, 2015 6:18 pm

willhaas:
What makes you think that "the climate sensitivity of CO2" is a constant?

Science or Fiction
September 18, 2015 4:52 pm

Personally I just love the following statement by IPCC:
“When initialized with states close to the observations, models ā€˜driftā€™ towards their imperfect climatology (an estimate of the mean climate), leading to biases in the simulations that depend on the forecast time. The time scale of the drift in the atmosphere and upper ocean is, in most cases, a few years. Biases can be largely removed using empirical techniques a posteriori. The bias correction or adjustment linearly corrects for model drift.”
(Ref: Contribution from Working Group I to the fifth assessment report by IPCC; 11.2.3 Prediction Quality; 11.2.3.1 Decadal Prediction Experiments )
Holy Moses!!!!
I am sorry, but some expression seems to be called for, and all the others seem so inadequate.
Models drift towards an "imperfect climatology", and biases can be removed "using empirical techniques a posteriori". How can this possibly pass the extensive scientific and governmental review process without anyone being alarmed? It seems to be honest, however; it seems to be a glimpse of realism which just failed to alert the scientific reviewers, failed to alert the Intergovernmental Panel on Climate Change, and failed to alert the governmental reviewers. How can that possibly happen?
The reviewers of these sections must have slept through their classes on logic and scientific theory, if they ever took such classes. The worst thing is that the IPCC conclusions rely heavily on the results of such models. Such an admission is alone sufficient to blow off the whole thing. I am sorry, but a fundamental flaw has been identified in the works of the IPCC. Consensus or not, I couldn't care less: a fundamental flaw has been identified, even if it was not picked up by the review process. Which is just another reason to suspend further action. Please, United Nations, can you please stop this nonsense and start using all effort to help those who already suffer, from known and real causes.
“The WGI contribution to the IPCC AR5 assesses the current state of the physical sciences with respect to climate change. This report presents an assessment of the current state of research results and is not a discussion of all relevant papers as would be included in a review. It thus seeks to make sure that the range of scientific views, as represented in the peer-reviewed literature, is considered and evaluated in the assessment, and that the state of the science is concisely and accurately presented.”
Yeah – I agree! The state of the climate science is very concisely presented by such nonsense.
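For concreteness, the "empirical techniques a posteriori" in the quoted passage amount to estimating the mean model-minus-observation drift as a function of forecast lead time across many hindcasts, fitting a line, and subtracting it from new forecasts. A toy sketch (synthetic numbers, not the IPCC's actual procedure):

```python
import numpy as np

# A posteriori linear drift correction: average the hindcast error by lead
# time, fit a straight line, subtract it from subsequent forecasts.
rng = np.random.default_rng(3)
lead = np.arange(1, 11)                                   # forecast years 1..10
hindcast_err = 0.08 * lead + rng.normal(0, 0.05, (20, lead.size))  # 20 start dates

drift = hindcast_err.mean(axis=0)                         # mean bias per lead
slope, intercept = np.polyfit(lead, drift, 1)             # the "linear" part

raw_forecast = 14.0 + 0.1 * lead                          # some raw model output
corrected = raw_forecast - (slope * lead + intercept)
print(f"estimated drift: {slope:.3f} per year (true value here: 0.080)")
```

The objection raised in the replies below is the obvious one: nothing guarantees that the drift of a nonlinear, chaotic model is linear in lead time, or that it is bias rather than signal.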

perplexed
Reply to  Science or Fiction
September 18, 2015 5:41 pm

What struck me the most was, again, the assertion of linear adjustments to correct for "drift" in the output of a nonlinear chaotic system. If the system being modeled is nonlinear and chaotic, and it drifts, why on earth (no pun intended) would you think that the drift is linear? That's what sets my alarm bells off. I see it as an implicit admission that the models they use to simulate the earth's climate are too complex for them to understand how the model behaves during the simulation. They just look and see what the output is, without the slightest understanding of how the output resulted from the input. Basically, to the climate scientists and modelers, what goes on inside the black box is just "magic" that is not amenable to any kind of reasoned analysis; at the end of the day, if the model results don't match reality for more than two years out, they just do a linear curve fit that looks good but has no mathematical foundation, other than that it's the only thing they really can do.

Reply to  Science or Fiction
September 19, 2015 2:10 am

How can they be sure “drift” is not signal? Unless of course they already know the answer.
And this is classic:
Biases can be largely removed using empirical techniques a posteriori.
Empirical techniques? Translation – “Whatever we need to make it look good.”

MarkW
Reply to  M Simon
September 19, 2015 6:40 am

When I was in school, we called that the fudge factor.

John Moore
September 18, 2015 8:19 pm

The author’s definition of parameterizations is incorrect. In GCMs, parameterizations are procedures to account for sub-grid scale phenomena – those which cannot be physically simulated using the grid-level finite element modeling. For example, a thunderstorm is small compared to grid sizes, so the model cannot simulate its physics. Instead, it uses another, non-physical model to estimate the thunderstorm and, ultimately, its effect on the grid output.
Simply: these parameterizations are not just variables to be fiddled with to get a desired result. They are, themselves, models and can be extremely complex.
However… the (very necessary) use of them puts the lie to the claim that the models are just simulating the physics of the atmosphere, and since we understand those physics well (we do, for the most part), the model must be right.
There are, of course, knobs to twiddle to adjust the model. If they are adjusted in the ex post facto manner described in the article, that procedure is obviously extremely suspect.
The real problem with climate models isn’t adjusting for backcasting – although that is done. It is the simple fact that the atmosphere is far too complex to accurately simulate over an extended time frame. If you don’t believe that, tell me what the weather will be in 14 days. The weather models, which are very, very close to the climate models, are useless that far out. Now tell me the weather in 10 years. Yep.

Editor
Reply to  John Moore
September 19, 2015 6:39 am

Plain wrong. IPCC AR4 Box TS.8: “parametrizations are still used to represent unresolved physical processes such as the formation of clouds and precipitation”

John Moore
Reply to  Mike Jonas
September 19, 2015 9:21 am

Which is *exactly* what I said. “Unresolved” means sub-grid scale – the processes are below the spatial resolution of the model. “Represent” means a way of coming up with the data other than via the main physics package of the model.

John Moore
Reply to  Mike Jonas
September 19, 2015 9:23 am

I should add – the term “parameterization” is very misleading, so your mistake is understandable. When one hears the term, one imagines parameters that can be twiddled. But, that is not how it is used in this context. However, the parameterizations, which are models, can be tweaked and twiddled for hindcasting, the same way other aspects can.

Editor
Reply to  Mike Jonas
September 19, 2015 9:48 pm

John Moore – “Unresolved” typically means not fully understood, as in:
“Thus, the physical causes of the variability seen in the tide gauge record are uncertain. These unresolved issues relating to sea level change and its decadal variability …” [AR4 TS.4]
“It has been hypothesised that a transient increase in the diffuse fraction of radiation enhanced CO2 uptake by land ecosystems in 1992 to 1993, but the global significance and magnitude of this effect remains unresolved”. [AR4 7.3.2.4.3]
“Whether emissions of soil dust aerosols increase or decrease in response to changes in atmospheric state and circulation is still unresolved” [AR4 10.4.4]
But even if your interpretation is correct in this instance, they would still be guessing. Which is the central thrust of the article. And when you say the parameterisations can be “tweaked and twiddled for hindcasting”, well, that is exactly the problem that I identify: tweaking and twiddling the models to match observation just adds more guesswork to the models. A lot more.

John Moore
Reply to  Mike Jonas
September 19, 2015 10:26 pm

Mike, we are discussing the meaning of parameterization in the context of climate models.
IPCC: “Moreover, many physical processes, such as those related to clouds, also occur at smaller scales and cannot be properly modelled. Instead, their known properties must be averaged over the larger scale in a technique known as parameterization.”
And, as one example trivially found by Googling: “A new parameterization is presented for the shortwave radiative properties of water clouds, which is fast enough to be included in general circulation models (GCMs).”
Another: “The parameterization schemes currently in use range in complexity from simple moist convective adjustment schemes that are similar to that proposed by Manabe et al. (1965) almost three decades ago to complicated mass flux schemes utilizing and elaborating the basic concepts set forth by Arakawa and Schubert (1974).”
here
And here
Parameterization, far from being a pile of numbers to twiddle, is a huge body of complex computer code implementing models of sub-grid processes. In fact, most of a modern GCM is in the parameterization models, not in the main physics package.
There is another thing you need to consider: GCMs are also used in weather modeling, including these complex parameterizations. In the case of weather, the results can be falsified. That process leads to improvements in the sub-grid models (parameterizations) over time, and those improvements also go into the climate models.
As to “unresolved” – yes, it can *also* mean not fully understood. In this context, however, it means “unresolved by the grid resolution.” But… that’s a diversion, since the point is what parameterization means.
Please, before you respond – research parameterization. Google “gcm parameterization” and look through the results. You’ll see what I mean.
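To make the distinction concrete, here is a minimal sketch of one real and deliberately simple parameterization: a Sundqvist-type diagnostic cloud-fraction scheme, which estimates sub-grid cloud cover from grid-mean relative humidity. The critical-humidity value used here is an illustrative guess, not any particular model’s setting – and that tunable constant is precisely the kind of knob under discussion.

import numpy as np

def cloud_fraction(rh, rh_crit=0.8):
    # Sundqvist-type diagnostic cloud fraction: a grid cell whose mean
    # relative humidity exceeds a critical value rh_crit is assumed to be
    # partially cloudy, even though no individual cloud is resolved.
    # rh_crit is a tunable parameter (0.8 is an illustrative guess).
    rh = np.clip(rh, 0.0, 1.0)
    c = 1.0 - np.sqrt((1.0 - rh) / (1.0 - rh_crit))
    return np.clip(c, 0.0, 1.0)

# Grid-mean humidity of 90% implies roughly 29% cloud cover under this scheme.
print(cloud_fraction(0.9))

Operational schemes (mass-flux convection, cloud microphysics) are vastly more elaborate, but the pattern is the same: a sub-grid process represented by a formula with adjustable constants rather than by resolved physics – consistent both with John Moore’s point (a parameterization is a model) and with the article’s point (its constants are tuned).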

Editor
Reply to  Mike Jonas
September 21, 2015 4:26 pm

John Moore – sorry but your comments are just fantasy.
9.2: “parametrizations are still used to represent unresolved physical processes such as the formation of clouds and precipitation”
TS.6.4.2 “Large uncertainties remain about how clouds might respond to global climate change.”

John Moore
Reply to  John Moore
September 21, 2015 4:52 pm

Believe it or not, Mike, I am on your side. I don’t believe the climate model projections. I was just trying to correct an error in your article. I am sorry that you are not interested in learning more about climate models.

perplexed
Reply to  John Moore
September 21, 2015 6:10 pm

Mike Jonas – based on John Moore’s comment, the IPCC is not using the term “unresolved” in its colloquial sense of meaning “not known.” They are instead using it in the technical sense of not being represented on a fine enough scale so that they instead have to simply come up with a parameter as a substitute for having the model physically simulate clouds as they form, or rain as it falls.
Your latter quote of uncertainty as to how clouds respond to increasing temperatures DOES NOT say that everything that is “unknown” is “parameterized” in the models and is then adjusted when tuning the model. That was an assumption on your part, and John Moore is challenging the factual validity of that assumption. My recollection of earlier explanations of climate models by books or articles by climate scientists (I think John Christy, but I’m not sure – Maybe Michaels) tracks with what John Moore claims. That’s not to say that the models are accurate – the thrust of your essay is still true in that the things the models do treat as parameters are arbitrarily adjusted in a glorified and pointless curve-fitting exercise, but your details are not correct.
Also, as noted above, I still think that you are incorrectly conflating the IPCC AR4 description of how they approach the attribution problem by linearly combining the outputs of computer models for individual forcings, with the different issue of how the models themselves are tuned. As I read the quote you gave from the IPCC AR4 report, when they try to determine how much of historical warming to attribute to CO2, they run each forcing individually through a model, linearly combine the results using weights that achieve a “best fit”, and then, with the weights that achieved the best fit, simply apportion how much of the output was due to the CO2 forcing relative to other forcings. This process seems silly to me, not being designed to produce any kind of useful result, but no parameters or any other feature of the models are being adjusted in this procedure.
Certainly, your statement in the essay that climate models don’t account for solar variations is incorrect, and maybe the one about volcanoes too, given that these could be accounted for in the aerosols (although the IPCC discourages this interpretation by referring to aerosols only as a “man-made” forcing).

John Moore
Reply to  John Moore
September 21, 2015 7:31 pm

perplexed – thank you for your observations. I made my comments in the spirit that the worst way to combat the poor science of global warming is by misrepresenting it. We need to be accurate, or we will confirm the incorrect ideas in the public world and especially the scientific world that all criticisms are poorly informed.
One important note: they don’t come up with a “parameter” – they come up with a “parameterization.” It is this messy neologism based on the word “parameter” that leads to a lot of confusion. It is natural, when hearing “parameterization” to imagine varying a few “parameters” – simple numbers. Unfortunately, this natural reading leads one astray. A parameter is a number. A parameterization is a model ranging from trivial to vast complexity.
It is valid to critique models which are tuned to hind-cast, but not to assume that this is done through parameterization. It may be, but often the parameterization, since it is tested against the real world, is less likely to be tweaked. Of course, there is a gotcha – it is hard to test a lot of the parameterizations in a higher CO2 concentration world – so those parameterizations might be wrong.
In my mind, the biggest problems with climate models are not parameter twiddling, they are:
Inability to falsify the model – a violation of Popper’s model of science. Why should we believe a forecast for far in the future from a process which has never had its forecasts tested?
The difficulty (impossibility, I think) of simulating over time the complex non-linear feedback system that is the atmosphere, due to chaos, which itself was discovered with the first atmospheric models by Lorenz. Weather models fail rather quickly. Last night, Tropical Depression 16-E was supposed to be on top of Phoenix right now. I look out the window and it isn’t there. Instead, it changed track, and then dissipated in Mexico. More general weather forecasting loses accuracy over time, and becomes pretty much worthless 5 to 15 days in the future, as chaos dominates. The weather models and the climate models share the same physics modeling and parameterizations. In other words, the climate models are just as vulnerable to chaos, because they are using the same approach and often the same code. Climatologists attempt to compensate for this by using ensembles, but so do meteorologists, and the ensembles don’t add that much time before chaos dominates – days, not years.
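The Lorenz reference can be illustrated directly. The sketch below integrates the classic Lorenz (1963) system twice from initial states differing by one part in a million; the two trajectories track each other for a while and then diverge completely. The parameter values are the standard ones Lorenz used; the integration scheme is deliberately crude and everything else is illustrative.

import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # The three Lorenz (1963) equations - the original demonstration of chaos.
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(state, dt=0.01, steps=3000):
    # Crude fixed-step Euler integration, adequate for illustration only.
    out = [state]
    for _ in range(steps):
        state = state + dt * lorenz63(state)
        out.append(state)
    return np.array(out)

a = integrate(np.array([1.0, 1.0, 1.0]))
b = integrate(np.array([1.0, 1.0, 1.000001]))  # perturbed by one part in a million

# The separation grows roughly exponentially until it saturates at the size
# of the attractor - at which point the two "forecasts" are unrelated.
for step in (0, 500, 1000, 1500, 2000):
    print(step, np.linalg.norm(a[step] - b[step]))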

Reply to  John Moore
September 21, 2015 8:26 pm

John Moore:
To your list of shortcomings today’s climate models can be added that they convey no information to a policy maker about the outcomes from his/her policy decisions thus making control of the climate impossible. These models are worthless for their advertised purpose.

Reply to  John Moore
September 21, 2015 8:29 pm

Oops, I should have typed “shortcomings of” instead of “shortcomings.”

John Moore
Reply to  John Moore
September 21, 2015 8:34 pm

– it’s even worse than that, because they appear to provide information that is useful, when it is not.

Reply to  John Moore
September 21, 2015 10:12 pm

John Moore:
That’s right. The information that is useful for the purpose of controlling the climate (the “mutual information” in information-theoretic terms) is nil. Non-nil mutual information is the product of models that make predictions, but today’s models make projections. An effect of applying Mike Jonas’s “duck test” is to obscure the important difference between a “prediction” and a “projection”, thus covering up the actual incapacity of today’s climate models to support regulation of the climate.

Arnold1
September 19, 2015 12:05 am

Interesting post Mike, but can you please tell me where you sourced your data? You say that the models deduce temperature increase from only 3 factors, namely carbon dioxide, water vapour and clouds, contributing 37%, 22% and 41% respectively. These are very specific numbers; did you deduce them yourself from model performance, or is there some IPCC reference for this? I know catastrophists tell us that CO2 is THE control knob, aided by positive feedback from water vapour, but surely any “sophisticated” computer model would take all of the many factors you mention into account?

Editor
Reply to  Arnold1
September 19, 2015 6:48 am

IPCC AR4 8.6.2.3: “Using feedback parameters from Figure 8.14, it can be estimated that in the presence of water vapour, lapse rate and surface albedo feedbacks, but in the absence of cloud feedbacks, current GCMs would predict a climate sensitivity (±1 standard deviation) of roughly 1.9°C ± 0.15°C (ignoring spread from radiative forcing differences). The mean and standard deviation of climate sensitivity estimates derived from current GCMs are larger (3.2°C ± 0.7°C) essentially because the GCMs all predict a positive cloud feedback (Figure 8.14) but strongly disagree on its magnitude.”

With CO2 alone at 1.2, that’s 37% of 3.2; then water vapour etc adds 0.7, ie. 22%, and clouds 1.3, ie. 41%. I’m working with the model average and trying to keep things simple, so I use just the bare figures without the ±.
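For anyone checking the arithmetic, the split is each increment of sensitivity divided by the model-mean total. A minimal sketch in Python, using only the AR4 numbers quoted above (the labels and rounding are mine):

# Decomposition of model-mean climate sensitivity using the AR4 figures above.
no_feedback = 1.2      # CO2 alone, deg C
without_clouds = 1.9   # with water vapour, lapse rate and surface albedo feedbacks
total = 3.2            # full model-mean sensitivity, clouds included

contributions = {
    "CO2 alone": no_feedback,
    "water vapour etc.": without_clouds - no_feedback,  # 0.7
    "clouds": total - without_clouds,                    # 1.3
}
for name, dt in contributions.items():
    print(f"{name}: {dt:.1f} deg C = {dt / total:.1%} of the total")
# CO2 alone: 37.5%; water vapour etc.: 21.9%; clouds: 40.6% - which round to
# the 37% / 22% / 41% used in the article.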

Alx
September 19, 2015 6:24 am

There is disagreement on whether climate models predict, project, or forecast. Whatever the intent, they in practice have proven incapable of performing any of the above so the question is moot.
They do burn a lot of cpu cycles though. I think there you might find consensus.

Reply to  Alx
September 19, 2015 5:18 pm

No. The models do project.

MarkW
September 19, 2015 6:47 am

In real world usage, the difference between prediction and projection has to do with how much confidence you have in the result.
When I declare that I am making a prediction, I am putting my reputation on the line and declaring that I am confident that this is going to happen.
If I call it a projection instead, I am declaring that something might happen.
If you want to rework the economy of the entire world and transfer trillions of dollars of wealth from one group of people to another, you had darn well better be making predictions, not merely projections.
Until you are willing to put your reputation on the line and start making PREDICTIONS, then go away and stop bothering me.

Reply to  MarkW
September 19, 2015 9:38 am

MarkW
The real world contains numerous people who are susceptible to being duped by applications of the equivocation fallacy in making global warming arguments, and numerous people who enjoy duping them out of their money through applications of this fallacy. Observing that this is happening, some of us try to prevent it by making a distinction between “prediction” and “projection” under which “predictions” are made by a model possessing an underlying statistical population and “projections” are made by a model lacking one. Lacking a statistical population, a model is insusceptible to being validated and conveys no information to a policy maker about the outcomes from his/her policy decisions; these traits mark the model as unscientific and unsuitable for use in controlling the climate.
Others prefer not to distinguish between a model that possesses an underlying statistical population and a model that does not. Some of them argue that “prediction” should not be disambiguated because it is used ambiguously in the English vernacular. If they prevail, this has the effect of making non-scientific research look scientific, and of making models that are unsuited to their intended purpose of controlling the climate seem suited to it. The effect is to put us on track to spend an estimated 100 trillion US$ over the next century on phasing out fossil fuels for no gain.

Reply to  Terry Oldberg
September 19, 2015 9:58 am

You should use shorter sentences, and smaller words.
I have to admit that I’m over 60 years old, have a vocabulary of at least 300 words, yet had never heard or seen these two words used in your comment: “insusceptible” and “disambiguated”
I read your post out loud for friends.
I used it as an example of how some smart people can have difficulty communicating their knowledge.
No one else knew what you were talking about.
I tried to explain your post, but while reading it I ran out of breath, started coughing, and someone had to call 911.
My solution, when I hear anyone predicting the future (climate or anything else, with computer models or not), is to plug my ears with my fingers and loudly hum the US national anthem.
I’m tired of listening to predictions of the future that will be wrong.
Especially predictions of doom used by smarmy religious and political leaders to control the masses and/or take their money … which is why I got interested in global warming in the late 1990s — the “warmists” seemed like a new religion, or cult. … And I also thought Al Gore was a stupid head, so didn’t trust him.

Reply to  Terry Oldberg
September 19, 2015 3:07 pm

Richard Greene September 19, 2015 at 9:58 am
Obviously you need a larger vocabulary. I have seen those words before. I may even have used them.

Reply to  Terry Oldberg
September 19, 2015 6:52 pm

I know all of the words used, and knew what Terry was saying, and may in fact be one of the people he was referring to as responsible for all the wasted money… but still laughed out loud at Richard Greene’s very funny comment.
But since I have argued vociferously against CAGW since the late 1980’s, I know for sure that there aint none of it my dang fault.

Anne Ominous
Reply to  Terry Oldberg
September 19, 2015 8:21 pm

Terry:
Your article here: http://wmbriggs.com/post/7923/ is very interesting, and makes some very good and valid points. However, it is rather technical and trying to explain it here in fewer words is probably not very fruitful. Instead, I would refer people to that link and let it speak for itself.

Reply to  Anne Ominous
September 19, 2015 9:24 pm

Anne Ominous:
Thanks for the feedback. “Scientists” who include President Obama’s science advisor have fallen down on the job by representing a pseudoscience to be a science. Unless I can figure out how to communicate this message to voters, or someone else can figure out how to do so, mankind is screwed. Your continuing support in figuring this out would be much appreciated. You can reach me at terry@knowledgetothemax.com.

Anne Ominous
Reply to  Terry Oldberg
September 19, 2015 10:21 pm

Terry:
I will probably email you but it will likely be a couple of days due to prior obligations.
I have long been interested in the practice of effectively explaining complex ideas to the laity, but that does not imply I am any sort of expert. Still, in my own field I am well regarded for that, if only informally.
I have long felt that many of the logical failures of AGW advocates should be addressed, but many exchanges with others over the years have left me feeling rather isolated in that regard.
So I will email. It may be as late as Wednesday.

Reply to  Terry Oldberg
September 20, 2015 10:57 am

Anne,
I myself have noted that there are pages and pages, amounting to reams of commentary, here at WUWT in which many individuals almost continually point out the various logical fallacies, shortcomings, and failures of the warmista cadre and their long since falsified meme.

September 19, 2015 7:59 am

Thanks, Mike Jonas. Excellent article.
“The climate models’ predictions are very unreliable.” You are too kind.

September 19, 2015 3:37 pm

Several bloggers have joined the authors of the IPCC assessment reports in using the term “signal” inappropriately. For a control system there is no such thing as a signal or noise, as the signal + noise would have to travel at a superluminal speed to carry information to the controller about the outcomes of the events. As the aim of global warming research is control, this “signal” does not exist.

Reply to  Terry Oldberg
September 19, 2015 6:55 pm

Maybe the signal is communicated by means of quantum mechanical entanglement, and thus has the superluminal aspects you scoff at?
Didja ever think a dat?
(I reckon I should speak in common vernacular when addressing Mr. Oldberg, so I does.)

Reply to  Menicholas
September 19, 2015 7:46 pm

Menicholas:
You exhibit difficulty in or unwillingness to communicate in technical English. Are you having a seizure? If so, call 911. Do you mean to mock me? If so, bug off. Otherwise, perhaps you have sufficient training to wrap your head around technical concepts when they are communicated in technical English and to reply in technical English.
In technical English, a signal is not “communicated” but rather is the basis for communication. Communication is conducted by the passage of a signal at or below the speed of light. There is no violation of relativity theory.
It is in control that passage of a signal above the speed of light would be required for the flow of information. It can be concluded that there is no such signal. Climatologists imply that there is. As the issue is control and not communication, communication by quantum mechanical entanglement is irrelevant.

Reply to  Menicholas
September 20, 2015 10:49 am

Terry, I will “bug off”, and not respond to you here again.
Anyone can read the thread and decide if I am out of line for having the temerity to make a joke or mock one such as you, who would never stoop to mocking anyone, least of all those who hold the same general position as you.
Likewise, one can decide if I know how to communicate using English, or if I have any degree of technical knowledge and am able to effectively communicate my thoughts.
As well as whether or not you display any willingness to engage people in the tone of the then ongoing conversation, whatever such tone may be.
I will say that I get the impression that you may be a rather humorless sort, and that you seem to relish displaying a grating tone of belittlement in your expert commentary.

Reply to  Menicholas
September 20, 2015 11:11 am

L. Buckingham,
Since Mr. Oldberg is a published, peer reviewed author, his understanding of peer review is clearly superior to Mr. Buckingham’s – unless Lewis B is also a peer reviewed author?
Also, Terry, if I may play peacemaker here, I think Menicholas was being funny, and probably a little sarcastic. He sometimes comments like that. As a more objective observer, I think you both might be misreading the other, and with that unasked-for comment I will STFU and GTFA. ☺

Reply to  Menicholas
September 20, 2015 11:30 am

The Buckster says:
Without providing a citation for any “peer reviewed” work, all Stealey is doing is blowing smoke.
Whoa there, cowboy! One thing at a time. As always, I can back up my statements.
But you were digging a different hole just 4 minutes ago, erroneously claiming that T. Oldberg didn’t provide a source to a particular paper.
Maybe you’ve found it now, so you’re deflecting to this question?
Ya know, Bucky, you ask dozens and dozens of questions in these threads. But you never really answer any. Why not?
So, if I provide you with the citation you’re demanding here, will you agree to answer ‘Yes’ or ‘No’ to just five (5) questions from me? I’ll make them easy.
You will still be far ahead in the ‘asking questions but not answering questions’ game. But at least we will have narrowed the gap by five. So, are you game?
I have to go out for a while, but as soon as I’m back, I will log on to see if you accept my kind offer. See you then, pardner.

catweazle666
Reply to  dbstealey
September 20, 2015 5:33 pm

dbstealey: “Whoa there, cowboy! One thing at a time. As always, I can back up my statements.”
Yes, you keep saying that.
So go on, do it.
Or back off.

Reply to  Menicholas
September 20, 2015 2:07 pm

Lewis P Buckingham:
I frequently observe you playing the game called “defamation by innuendo.”

Reply to  Menicholas
September 20, 2015 3:17 pm

I’m back! And just as I knew he would, Bucky is still at it, digging his hole deeper:
Terry, it is real easy for both you and Stealey to end this discussion. Simply post the citation of the peer-reviewed journal… blah, blah, etc.
First off, I don’t want to end this discussion. It is highly amusing watching Buckingham squirm. The referenced paper is there, in the posted link. Bucky says, I need to see that the person that makes a big fuss… etc.
But of course, Mr. B is actually the one making the big, constant, unending fuss, whining that others must do what he demands. Bucky, no one else is claiming ignorance like you, by complaining that they can’t find the paper that Terry Oldberg linked to. Only you are. No one else seems to have that problem.
Next:
I donā€™t play games Stealey.
And a good thing, too (if it were true). From the looks of your comments, you would have a difficult time winning a game of tic-tac-toe.
Anyway, I simply pointed out that you incessantly ask questions, but you rarely answer anyone else’s questions. So let me know if you are willing to answer a few, like I proposed.
If so, I’ll post the paper you can’t find. If not, go find it on your own.

Reply to  Menicholas
September 20, 2015 3:40 pm

Oh, goody. I’m being ignored.
Terry Oldberg, I found your link in about fifteen seconds with no trouble. Buckingham is desperate to make you, or me, or someone post it for him.
Don’t do it. It’s there, he can find it himself. Bucky claims he doesn’t play games, but this is a game to him. He wants you or me to give him what he demands.
I told him I would post the link for him if Bucky would agree to answer a few questions. That’s fair, no? So he can get the link, easy-peasy. But as we see, he’s playing a game here. And so far, he’s losing.

catweazle666
Reply to  Menicholas
September 20, 2015 5:35 pm

The best thing to do about blog denizens who clearly live under bridges is to refrain from feeding them.

Reply to  Menicholas
September 20, 2015 7:13 pm

catweazle666 says:
So go on, do it. Or back off.
Why the hostility? Could you not find the referenced paper, either?
Or are you questioning my comment about Terry Oldberg? You’re not clear about which, and WordPress as usual screwed up the placement of comments. It was only by accident that I happened to see yours, and looked at the time stamp.
So, please explain clearly what you need to know. And I can do without the hostility. I have never said anything negative about you, not once, nor have I ever been snarky or unpleasant with you. Same with Buckingham: he inserted himself into a conversation I was having with someone else, and he did it in a hostile, unfriendly and confrontational way. What does he expect now? Kissy-face?
Treat me good, I’ll treat you better
Treat me bad, I’ll treat you worse

We can start over on the right foot. Or not. It’s up to you.

Reply to  dbstealey
September 20, 2015 8:37 pm

dbstealey:
I admire and endorse your firm but fair approach to human relations on scientific issues!

Reply to  Menicholas
September 21, 2015 9:47 am

catweazle666,
Now that Buckingham has stated that he will ignore my comments, I feel I owe you an answer.
I was having fun by not giving Lewis B what he was incessantly ā€“ and impolitely ā€“ demanding. But since you asked, here is a peer-reviewed publication authored by Terry Oldberg:
Oldberg, T. and R. Christensen, 1995, “Erratic Measure” in NDE for the Energy Industry 1995; The American Society of Mechanical Engineers, New York, NY.
There may be more. I cannot vouch as to the accuracy or data contained in anyone’s papers, and you can be sure that detractors can be found for almost any peer reviewed paper. But the specific challenge questioned whether Terry has any peer reviewed publications. This answers Bucky’s challenge, which he has now lost. He doesn’t matter, but I wanted to show you, at least, that I don’t fabricate things.
I can also post the peer reviewed, published paper that got Buckingham so spun up in this thread. It’s right there, and easy to find.
It’s always been there, if anyone had taken the time to look, instead of arguing for about twenty times longer than it would take to find it, demanding that someone else must produce it for him. It took me less than half a minute to find it and start reading it. The link is still there.

Science or Fiction
September 20, 2015 1:31 am

When speaking about how reliable the climate models are, the IPCC makes another error in considering the average of the model results. The average value of an ensemble of climate models is often used as an argument in the debate. What does it mean? The following is a quote from the contribution from Working Group I to the fifth assessment report by the Intergovernmental Panel on Climate Change:
Box 12.1 | Methods to Quantify Model Agreement in Maps
“The climate change projections in this report are based on ensembles of climate models. The ensemble mean is a useful quantity to characterize the average response to external forcings, but does not convey any information on the robustness of this response across models, its uncertainty and/or likelihood or its magnitude relative to unforced climate variability.”
This can be regarded as a glimpse of realism – except for the logical fallacy expressed in the same section. Let us rephrase the section:
The ensemble mean does not convey any information on:
– the robustness of this response across models
– its uncertainty
– likelihood
– its magnitude relative to unforced climate variability
but it is a useful quantity to characterize the average response to external forcing.
That is a quite silly thing to say – isn’t it?
How can it be useful when you do not know
– if it is robust
– its uncertainty
– its likelihood
– its magnitude relative to unforced climate variability?
Exactly what is the ensemble mean then supposed to be useful for?
Later in the same section it is stated:
“There is some debate in the literature on how the multi-model ensembles should be interpreted statistically. This and past IPCC reports treat the model spread as some measure of uncertainty, irrespective of the number of models, which implies an ‘indistinguishable’ interpretation.”
I think this section speaks for itself. What “implies an ‘indistinguishable’ interpretation” is supposed to mean, I have no idea. It seems to me to be totally meaningless.
And this passed the peer review by experts and the governmental review that the IPCC so heavily relies on and trusts. Ref. Article 3 in the “Principles governing IPCC work”.

richard verney
Reply to  Science or Fiction
September 20, 2015 5:02 am

Something that “…does not convey any information on the robustness of this response across models, its uncertainty and/or likelihood or its magnitude relative to unforced climate variability” cannot on any objective criteria be considered useful.
It is only the likes of Mosher who consider climate models useful. They are so bad that they are a hindrance to our understanding, not an aid.
It has been patently obvious for a long time that climate models should be ditched and the money spent on them redirected to more productive science.

Reply to  Science or Fiction
September 20, 2015 8:51 am

Like the projections from which it is formed, the multimodel mean is non-falsifiable and conveys no information to a policy maker about the outcomes from his/her policy decisions. Thus, it is scientifically nonsensical.

perplexed
Reply to  Science or Fiction
September 21, 2015 6:55 pm

The fact that the IPCC thinks that any useful information is conveyed by an average of different climate models, or even different runs of the same climate model, should be a flag to everyone that the IPCC does not know what it is talking about. Assuming that the models used in the IPCC report are a representative sample of the entire universe of climate models, the average would then only give you an expected value of what another climate model would produce as an output. That’s it. It conveys no other information. The average certainly tells you nothing about how the actual climate behaves.
Say I’m forced to bet on a hockey game – a sport I know nothing about. I check out the predicted scores from ESPN, CBS, Fox Sports – as many as I can for the game I have to bet on – and they are all over the place. Being the amateur that I am, what do I do? I just take a meaningless average of the predictions, which again only tells me the expected value of some other person’s prediction that I didn’t consider. If I did know anything about hockey, I could either form my own intelligent prediction or at least have the wherewithal to distinguish whose predictions made sense and whose did not. Taking an average of predictions is a sign of an amateur – not a sign of expertise.
To understand what the average of a series of samples represents, you first have to understand what the samples are. In this case, the samples are of model runs of the Earth’s climate – and are not samples of the climate itself. The average, therefore, does not convey any useful information about how the climate actually behaves EXCEPT under the unproven (delusional) assumption that the climate models accurately simulate the behavior of the real climate.
How many of you think that climate models programmed by people who fervently believe that CO2 plays an important role in the Earth’s average temperature will produce an output that DOES NOT show CO2 having an important effect on the Earth’s average temperature? I think that climate scientists use computer models as a backdoor way of fabricating the data that the real world won’t let them collect scientifically. I think it’s unethical, and I can’t imagine any other scientific discipline in which this kind of nonsense would be tolerated.
There is no way of conducting a controlled experiment of the Earth’s climate system to actually measure the effect of CO2 concentration on temperature (or any other climate state variable). You can’t tell the sun to not change its output, or control the amount of clouds, etc. You can’t tell volcanoes to stop erupting. Neither is there any way of conducting a study, like epidemiological studies, that determine the effect that X has on system Y by examining a large number of instances of the system with control populations, etc. There’s only one Earth. In other words, all the scientific ways of measuring the effect CO2 has on the Earth’s climate are not available to scientists, and since scientists are experts only in the scientific procedure, this leaves them up a creek without a paddle.
So instead what they do is invent the data they so desperately want by programming computers to simulate the way they THINK the climate operates. When the computer runs don’t match actual, measured climate data, they make all kinds of excuses:
“Not enough time has passed, the real-world data is bad and needs adjusting, maybe it’s the aerosols, or the oceans are absorbing the heat – whatever. But don’t you worry, even though we don’t know why temperatures aren’t moving the way our models said they should, our confidence that CO2 contributed to at least half of the observed warming just went up from 90% to 95%. Trust us. What? You want performance before relying on our conclusions? Ha – we don’t need no stinkin’ performance. We’re ‘experts.’ Trust us on that.”
Diatribe over.
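In statistical terms, the point about the ensemble mean is that averaging model outputs estimates the mean of the model population, not the behaviour of the climate. A toy simulation in Python makes this concrete; every number below is invented for illustration.

import numpy as np

rng = np.random.default_rng(1)

truth = 0.10           # hypothetical "real" trend, deg C per decade
model_pop_mean = 0.25  # hypothetical shared bias of the model population

# Averaging more models tightens the estimate of the *model population* mean;
# it does nothing to close the gap between that population and the truth.
for n in (5, 30, 300):
    models = rng.normal(model_pop_mean, 0.05, size=n)  # n model trends
    print(n, "models: ensemble mean =", round(models.mean(), 3),
          " error vs truth =", round(models.mean() - truth, 3))

If the models share a common bias, no amount of averaging removes it, which is why the ensemble mean tells you nothing about the actual climate unless the models are assumed to be unbiased in the first place.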

Solomon Green
September 20, 2015 6:08 am

Thanks for an excellent article which has stimulated a large number of very interesting and thoughtful posts (and, with one possible exception, no denials).
By the way, instead of arguing about “projections” and “predictions” why not use “prognostications” which, for me at least, carries a greater element of guesswork than do the other two words.

Reply to  Solomon Green
September 20, 2015 12:49 pm

dbstealey
By the way, in the literature of global warming climatology, the word “science” is polysemic, thus fostering applications of the equivocation fallacy. It references pseudoscience as well as legitimate science, thus being ideally suited to the task of covering up the pseudoscientific nature of global warming research.

Reply to  Solomon Green
September 20, 2015 1:16 pm

Solomon Green:
To suggest that debaters are “arguing about ‘projections’ and ‘predictions’” muddies waters needing clarification. When described in logical terms, the topic of the debate is the equivocation fallacy in global warming arguments. One subtopic is the role of the polysemic form of “prediction” in applying this fallacy to the task of reaching false or unproved conclusions from global warming arguments. A second subtopic is the availability of means by which applications of the equivocation fallacy can be made impossible. This is by assignment of one of the two definitions of the polysemic form of “prediction” to a monosemic form of “prediction” and the other definition to “projection.” This approach is already in widespread use.
In making a global warming argument, the words “prediction” and “projection” are merely ways of referencing the associated meanings. Thus, they can be replaced by any other pair of words, including made-up words, that reference the same meanings without effect on the logical legitimacy of the argument being made. Thus to suggest that an argument is being conducted over the trivial issue of the semantics of “prediction” and “projection” is inaccurate and misleading.

September 20, 2015 6:39 am

Here’s my issue on this, that nobody seems to discuss here:
The fourth IPCC report [para 9.1.3] says: “Results from forward calculations are used for formal detection and attribution analyses. In such studies, a climate model is used to calculate response patterns (‘fingerprints’) for individual forcings or sets of forcings, which are then combined linearly to provide the best fit to the observations.”
fit to the observations? Which observations?
This can make a huge difference, right?
If you compare the model output to that list of “observations” and tweak it to fit that, you will get different results depending on which observations you start with… for example…
1) RSS
2) GISTemp
3) Hadcrut4
http://www.woodfortrees.org/plot/gistemp/from:1978/mean:12/plot/gistemp/from:1978/mean:12/trend/plot/rss/mean:12/plot/rss/mean:12/trend/plot/hadcrut4gl/from:1978/mean:12/plot/hadcrut4gl/from:1978/mean:12/trend
So the question is, which observation are they using to tweak their model to fit “reality” ?
My guess is they are using the dataset that shows the most warming, which is already adjusted/homogenized to fit CO2… Thus using this to fit your model would explain the large discrepancy between model and RSS, for example, caused by this artificial feedback loop (adjusted dataset fits model).
Also, the models are supposed to model the atmosphere, right? Not the ground temperature influenced by UHI… So comparing to RSS/UAH would make more sense when tweaking them?
Thoughts?
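The point that the tuning target matters reduces to the fact that different observational series have different trends, so a model tuned to one will disagree with another by construction. Here is a minimal sketch of the trend calculation itself; the two anomaly series are invented stand-ins, not the real GISTemp or RSS data.

import numpy as np

def decadal_trend(anomalies, per_year=12):
    # Least-squares linear trend of a monthly anomaly series, deg C per decade.
    t = np.arange(len(anomalies)) / per_year  # time in years
    slope, _intercept = np.polyfit(t, anomalies, 1)
    return slope * 10.0

rng = np.random.default_rng(2)
months = 35 * 12  # roughly the satellite era

# Invented stand-ins: same noise, different underlying trends.
surface_like = 0.016 * (np.arange(months) / 12) + rng.normal(0, 0.1, months)
satellite_like = 0.012 * (np.arange(months) / 12) + rng.normal(0, 0.1, months)

print(decadal_trend(surface_like))    # about 0.16 deg C per decade
print(decadal_trend(satellite_like))  # about 0.12 deg C per decade

Tune a model to the first series and it will “need” more forced warming than if tuned to the second; the choice of observations is itself a free parameter.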

DWR54
Reply to  Simon Filiatrault
September 20, 2015 8:52 pm

The CMIP5 models are specifically predicated on surface temperature data: the average of the main sets currently in use. This is set out in chapter 11 of the 2013 report. See Figure 11.25: http://www.met.reading.ac.uk/~ed/bloguploads/AR5_11_25.png
Ed Hawkins of Reading University in the UK regularly updates this figure with the latest values (to end July 2014 here): http://www.climate-lab-book.ac.uk/wp-content/uploads/fig-nearterm_all_UPDATE_2015b.png
As you can see, observations are well inside the range of projections, but have been consistently on the low side of average. That’s why the IPCC lowered its ‘assessed likely range’ for near term observations in the 2013 report (red chevroned box). 2014/15 have pushed observations much closer to the multi-model average, but still on the low side.

DWR54
Reply to  DWR54
September 20, 2015 9:02 pm

Should be “to end of July 2015”, not 2014. Apologies.

markl
September 20, 2015 3:19 pm

Take it offline or everyone goes to bed early and no jello tonight.

Not Chicken Little
September 20, 2015 5:06 pm

Even though I am not a scientist and don’t even play one on the internet, I always learn something from Watts Up. I never knew that “global warming” causes so many stuffed shirts and windbags…

Reply to  Not Chicken Little
September 20, 2015 5:38 pm

Thanks for the report Chicken. Also reportedly caused by global warming are: acne, alligators in the Thames and beer shortage ( http://whatreallyhappened.com/WRHARTICLES/globalwarming2.html ).

September 21, 2015 3:25 am

For a long time I have followed the main efforts of science and policy to define and determine the cause of climate change and its consequences in terms of global warming. But, unfortunately, I have not seen anything logical in many of the stories, especially those that support the policy rather than a science that studies and respects the laws of nature.
In many places I have called attention to the fact that climate change – not only on our planet – depends on the relationships between the planets and the sun.
In what way can this be proved? It depends on the interests and moods of powerful circles, and on when they realize that progress in this field of science cannot be achieved where profit is the motive.
Today they all run and rush headlong into the unknown, but only where they consider that a personal profit can be realized.
All who read this may ignore it if they wish – nobody can forbid you that – but remember that I have a clear idea of how to solve this enigma successfully!
I offer up my idea, and I stand by this: if NASA and the Government of the United States have an interest, they can come down a little and accept the offer, with a contractual obligation to work it out in detail.
Read this and think; there is no need to make fun of this, but rather to try to solve it.
I cannot wait for the coming fall of the many false theories about climate change.

MG
September 22, 2015 2:18 pm

Mike Jonas
Enjoyed your article. Footnote 3 is the strongest and most eloquent part of your argument.
Thank you for listing out the possible contributors to climate change.
But a few of the items do not contribute enough to be considered:
1) Ocean currents
2) Wind
3) Galactic Cosmic Rays
The water cycle should not be considered either, because its effect is net neutral over any distance in time.
You are correct that these items need to be considered as part of the climate models. These items do not, in and of themselves, have a long-term effect, but they need to be included in the models because the models are using short-term variations to project long-term changes. The fact that the models do not incorporate these short-term variables is a strong indication of intended bias by the modelers:
1) ENSO
2) Ocean oscillations
3) Volcanoes
It is reasonable for the modelers to ignore Milankovitch cycles; they are too long-term to change the projections in the models.
I am uncertain why so many of the factors are listed as not understood. Milankovitch cycles are fairly well understood, and ENSO, ocean oscillations, and volcanoes should be classified as at least partially understood.
We do agree on the conclusions: because the modelers chose to ignore the short-term variables of ENSO, ocean oscillations, volcanoes, and sunspot activity, they used this intentionally created vacuum of data to distort the net effect of water vapor and clouds from a negative feedback (cooling) to a positive feedback (forcing/heating) to achieve their premeditated end goals.

Reply to  MG
September 22, 2015 5:35 pm

MG:
Like the arguments of the IPCC, Mr. Jonas’s argument is divorced from logic by the absence from it of the example of a proposition that some of us call a “prediction.” Though he uses the term “prediction” his “prediction” is not a proposition of a specialized kind. Not being a proposition, his “prediction” lacks either a truth-value or a probability of being true. Thus, though a model can be “evaluated” it cannot be validated for validation implies probabilities or truth-values. Lacking probabilities or truth-values a model cannot supply us with information though information is required if governments are to regulate our climate.

perrymyk
September 24, 2015 5:09 am

Only 33% of US voters still believe in the lie of climate change
and every one of them is a whacked out Liberal who wants to redistribute wealth too…

HenryMiller
September 24, 2015 5:57 am

“…have produced global temperature forecasts that later turned out to be too high. Why?
“The answer is, mathematically speaking, very simple.”
The answer has nothing to do with mathematics, the answer is that those who devised the models were biasing them to show the results the modellers wanted them to show. The East Anglia effect–nothing is too dishonest for the climate change fanatics to try to do.

Reply to  HenryMiller
September 24, 2015 10:13 am

HenryMiller:
This is hard for a lot of people to understand. Nevertheless, today’s climate models do not make forecasts. Models that are scientific and logical make forecasts. Models that support control of systems make forecasts. These models do not make forecasts. They belong to a special class of models that are pseudoscientific and useless for their intended purpose. These models are made to seem scientific and useful through the use of ambiguous language in describing them. When described in a disambiguated language their true nature becomes obvious.

Chris Nelli
September 24, 2015 6:42 am

Great article, but I thought aerosols were another questionable component to the models.

cjones1
September 24, 2015 7:11 am

Geologic factors such as volcanoes were briefly mentioned in the article; major historical events like the Panama hypothesis are great examples (no ice on Iceland or the N. Polar ice caps until the isthmus was formed). Earthquakes can crack open seams of blue ice (methane) and change coastlines. Volcanoes release CO2 and melt ice shelves, as in West Antarctica. You cover known and unknown factors well enough to show that the models used by the IPCC are, for the most part, garbage in, garbage out. They should be trashed when used to determine economic and environmental policy until more reliable models are available.

wil keepers
September 24, 2015 8:28 am

So I am just a high school science teacher and don’t know all the intricacies, but I do know that when you use a model that just assumes a zero or cancelling effect for multiple criteria, it would seemingly be useless because the model no longer fits reality. Even without detailed understanding of the minutiae, this always seemed to me the biggest problem with the concept of a human cause to climate change. Seems to me that if there are too many unaccounted-for variables, the whole thing is kind of a wash, and even if models show something, the cause cannot be defined with any conclusiveness.
I did have a question though about a couple of your zero factors… Don’t we have data about the cooling effect of major volcanic eruptions in the short term? I know it was 200 years ago, but there was much anecdotal evidence pointing to Tambora’s eruption being the cause of the year without a summer. Coming from 1815, that may not be detailed enough data, but more recently, when Pinatubo erupted in 1991, wasn’t there a verified 1-2 degree global cooling for the following year or so, or did I find incorrect data? I would swear that I read that, based on weather readings globally after Pinatubo and other volcanic activity, if a volcanic eruption emits enough ash to high enough altitudes, it affects short-term climate in quantifiable amounts.
Also, going forward, might it make sense to put a weather station or a series of them on Mars, then correlate the data to observed solar activity, so we could determine what role, if any, sunspot and solar radiation changes play on the climate of a planet unaffected by human activity? Then we would at least have some reliable measure of the zero we currently place on solar activity. I was more a biology than physics major, but if my ideas do not make sense, at least tell me why.

CB
September 24, 2015 9:03 am

Thank you, Mr. Jonas, for your truly scientific analysis. As a scientifically minded person I’ve known for years that climate models are sketchy on the details and biased towards a desired result. You demonstrated how utterly insufficient the models are on several key factors.
Such scientific fiddlings would never be allowed in endeavors with immediate dire consequences. Only in areas where there’s no accountability are such bogus manipulations allowed to occur.
Climate alarmists are quick to declare that weather isn’t climate. This assertion is ridiculous as it ignores the inextricable relationship. Even more absurd is the belief that although predicting localized weather conditions a week in advance is nigh impossible, somehow climate models can predict global climate conditions 50-100 years in advance.

Climate Confused
September 24, 2015 9:24 am

Dear Mike,
There have been many reports of late that the purported hiatus of the past couple of decades never actually happened, that reports of a hiatus were based upon incorrect analyses of data. Of course, it has also been claimed that the hiatus is in fact ongoing and that it is due to the ocean absorbing excess heat. Indeed, the failure to anticipate the extent of ocean capture could cause a discrepancy between model and reality. I would very much like to know what you make of these claims.

Reply to  Climate Confused
September 24, 2015 12:47 pm

Climate Confused:
The proposition that “the ‘global warming’ has been nil in the period of the ‘hiatus’” lacks logical significance. This state of affairs is associated with the semantics of the phrase ‘global warming’.
Under these semantics, in the period of time that I’ll call deltaT1, the ‘global warming’ is the change in the global temperature along a straight line when this line is fit by a specified procedure to data belonging to a specified global temperature time series in the period of time that I’ll call deltaT2. When deltaT1 is held constant while deltaT2 varies, the slope of the line varies. It follows that the ‘global warming’ over deltaT1 varies. That it varies negates the law of non-contradiction (LNC). Under negation of the LNC the proposition that “the ‘global warming’ has been nil” is true and false. Thus that this proposition is true lacks logical significance because the same proposition is also false.
Q.E.D.

David Laney
September 24, 2015 10:01 am

Actually we do know a fair amount about what causes sunspots, but predicting sunspot cycles suffers from the same problems with ‘coupled nonlinear chaotic systems’ that plagues attempts to calculate future climate on Earth.

Gerry Houser
September 24, 2015 10:33 am

Re narrow bands of absorption by CO2 – is it possible that ALL of the possible absorption of the infrared radiation is absorbed by the existing CO2? And perhaps has been for years? And therefore, adding more CO2 to the atmosphere won’t matter a bit?

Anne Ominous
Reply to  Gerry Houser
September 26, 2015 3:05 pm

Gerry:
That is a recognized possibility: the “saturation effect”. If a distance X of CO2 absorbs a majority of the radiation at the appropriate infrared wavelength, then the majority of the remainder should be absorbed within a distance of 2X.
I’m not saying it’s strictly linear, just laying out the basic concept.
If indeed the saturation limit has already been reached, then adding more CO2 will make very little difference. Whether this is actually true or not has been a matter of much argument.
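The distance-doubling description above is essentially the Beer–Lambert law, under which transmitted radiation decays exponentially with path length. A minimal sketch follows; the absorption coefficient is an arbitrary illustrative number, not a measured CO2 value, and real greenhouse behaviour also depends on line wings and emission altitude rather than column absorption alone.

import numpy as np

def absorbed_fraction(path_length, k=1.0):
    # Beer-Lambert law: transmission decays as exp(-k * L), so the fraction
    # absorbed over a path of length L is 1 - exp(-k * L).
    # k is an arbitrary illustrative coefficient, not a measured CO2 value.
    return 1.0 - np.exp(-k * path_length)

for L in (1, 2, 4, 8):
    print(L, round(absorbed_fraction(L), 4))
# 1 -> 0.6321, 2 -> 0.8647, 4 -> 0.9817, 8 -> 0.9997: each doubling of the
# path absorbs most of what the previous length let through - the
# "saturation" intuition described in the comment above.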

johnwerneken
September 24, 2015 10:54 am

Climate alarmists start from human political motives: to understand and to control destiny, and to give that control to themselves and to others sharing their ideological and class interests. No surprise that their so-called ‘science’ is total bullshit.

Mirza
September 24, 2015 11:24 am

Models are always wrong and are used more as an aid to illustrate relationships between variables and how things might develop, rather than as predictors of the future. So a model might show that increase in CO2 in atmosphere would probably result in increased global temperatures, but that is not certain because there are other factors that affect temperatures as well.
Let me illustrate with an example. There could be a model that predicts the future salary of a student depending on what university the student is attending. The actual salary would depend on things like grades, major, connections, the economy, etc. But those things would not be in the model. Would that mean that the model is useless? Of course not! That model would still tell you something important about the world, namely that Harvard graduates make more than community college graduates.
There is a trade-off between accuracy and complexity and, as someone said, a model that is 100% accurate is as useful as a map with scale 1:1.

Reply to  Mirza
September 24, 2015 1:07 pm

Mirza:
George Box’s generalization that “all models are wrong” is out of date. These days, through the use of modern information theory, models that are not wrong are routinely constructed.
The generalization that “all climate models are wrong” is accurate, as the methods of their construction violate principles of modern information theory. Fortunately, that they violate these principles is a consequence of ineptitude on the part of the model builders rather than of logical necessity. Unfortunately, control over the study of global warming is currently held by politically powerful incompetents.

Lady Gaiagaia
September 24, 2015 1:23 pm

This blog post was picked up by Realclearpolitics, so might be read by millions.
Well done!

September 28, 2015 7:03 am

Reblogged this on Climate Collections and commented:
Executive Summary: Mike Jonas dissects “how the models have come to predict a high level of future warming, and how they claim that it is all caused by CO2. The reality of course is that two-thirds of the predicted future warming is from guesswork and they don’t even know if the sign of the guesswork is correct. ie, they don’t even know whether the guessed factors actually warm the planet at all. They might even cool it…
One thing, though, is absolutely certain. The climate modelsā€™ predictions are very unreliable.”