# Why there cannot be a global warming consensus

By Christopher Monckton of Brenchley

In a previous post, I explained that many of the climate-extremists’ commonest arguments are instances of logical fallacies codified by Aristotle in his Sophistical Refutations 2300 years ago. Not the least of these is the argumentum ad populum, the consensus or head-count fallacy.

The fallacy of reliance upon consensus, particularly when combined with the argumentum ad verecundiam, the fallacy of appealing to the authority or reputation of presumed experts, is more likely than any other to mislead those who have not been Classically trained in mathematics or in formal logic.

To the Classicist, an argument founded upon any of the Aristotelian logical fallacies is defective a priori. Nothing more need be said about it. However, few these days are Classicists. Accordingly, in this post I propose to explain mathematically why there can be no legitimate consensus about the answer to the central scientific question in the climate debate: how much warming will occur by 2100 as a result of our sins of emission?

There can be no consensus because all of the key parameters in the fundamental equation of climate sensitivity are unknown and unknowable. Not one can be directly measured, indirectly inferred, or determined by any theoretical method to a precision sufficient to give us a reliable answer.

The fundamental equation of climate sensitivity determines how much global warming may be expected to occur once the climate has settled back to a presumed pre-existing state of equilibrium after we have perturbed it by doubling the atmospheric concentration of CO2. The simplifying assumption that temperature feedbacks are linear introduces little error, so I shall adopt it. For clarity, I have colored the equation’s principal terms:

Climate sensitivity at CO2 doubling (blue) equals the product of the CO2 forcing (green), the Planck parameter (purple) and the feedback gain factor (red).
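The equation itself does not reproduce in this text-only version. Reconstructed from the definitions given in the paragraphs that follow (the symbols λ₀ for the Planck parameter and fᵢ for the individual feedbacks are editorial notation, not necessarily the original's), it reads:

```latex
\Delta T_{2\times}
  \;=\;
  \underbrace{\Delta F_{2\times}}_{\text{CO}_2\text{ forcing}}
  \;\times\;
  \underbrace{\lambda_0}_{\text{Planck parameter}}
  \;\times\;
  \underbrace{\frac{1}{1-\lambda_0\sum_i f_i}}_{\text{feedback gain factor}}
```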

The term in green, ΔF2x, is the “radiative forcing” that the IPCC expects to occur in response to a doubling of the concentration of CO2 in the air. Measurement and modeling have established that the relation between a change in CO2 concentration and a corresponding change in the net down-minus-up flux of radiation at the top of the climatically-active region of the atmosphere (the tropopause) is approximately logarithmic. In other words, each additional molecule of CO2 exerts less influence on the net radiative flux, and hence on global temperature, than its predecessors. The returns diminish.

To determine the radiative forcing in response to a CO2 doubling, one multiplies the natural logarithm of 2 by an unknown coefficient. The IPCC’s first and second Assessment Reports set it at 6.3, but the third and fourth reduced it by a hefty 15% to 5.35. The CO2 forcing is now thought to be 5.35 ln 2 = 3.708 Watts per square meter. This value was obtained by inter-comparison between three models: but models cannot reliably determine it. Both of the IPCC’s values for the vital coefficient are guesses.
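As a quick check on the arithmetic (a sketch only; the coefficient 5.35 is the IPCC's TAR/AR4 value, which, as the text notes, is an estimate rather than a measurement):

```python
import math

# IPCC coefficient for the approximately logarithmic CO2-forcing law.
k = 5.35  # W/m^2 per natural-log unit of concentration ratio

# Radiative forcing at a doubling of CO2 concentration.
delta_F_2x = k * math.log(2)  # W/m^2

print(round(delta_F_2x, 3))  # 3.708
```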

The term in purple is the Planck or zero-feedback climate-sensitivity parameter, denominated in Kelvin per Watt per square meter of direct forcing. This is one of the most important quantities in the equation, because both the direct pre-feedback warming and separately the feedback gain factor depend upon it. Yet the literature on it is thin. Recent observations have indicated that the IPCC’s value is a large exaggeration.

The Planck parameter is – in theory – the first differential of the fundamental equation of radiative transfer about 3-5 miles above us, where incoming and outgoing fluxes of radiation are equal by definition. The measured radiative flux is 238 Watts per square meter. The radiative-transfer equation then gives us the theoretical mean atmospheric temperature of 255 Kelvin at that altitude, and its first differential is 255 / (4 x 238), or 0.267 Kelvin per Watt per square meter. This value is increased by a sixth to 0.313 because global temperatures are not uniformly distributed. However, it is also guesswork, and the current Lunar Diviner mission suggests it is a considerable overestimate.
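The arithmetic of the paragraph above can be sketched as follows (a check only; the one-sixth uplift is the stated adjustment, not a derivation):

```python
# Planck (zero-feedback) parameter: dT/dF for the Stefan-Boltzmann law F = sigma*T^4
# evaluated at the emission altitude, i.e. dT/dF = T / (4F).
T_emit = 255.0   # K, theoretical mean atmospheric temperature at the emission altitude
F_emit = 238.0   # W/m^2, measured net radiative flux there

lambda_0 = T_emit / (4 * F_emit)      # ~0.268 K per W/m^2 (the text quotes 0.267)
lambda_0_adj = lambda_0 * (1 + 1/6)   # raised by a sixth for non-uniform temperatures

print(round(lambda_0, 3), round(lambda_0_adj, 4))  # ~0.268 and ~0.3125 (IPCC: 0.313)
```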

Theory predicts that the Moon’s mean surface temperature should be around 270 Kelvin. However, Diviner has now found the mean lunar equatorial temperature to be 206 K, implying that mean lunar surface temperature is little more than 192 K. If so, the theoretical value of 270 K, and thus the lunar Planck parameter, is a 40% exaggeration.

If the terrestrial Planck parameter were similarly exaggerated, even if all other parameters were held constant the climate sensitivity would – on this ground alone – have to be reduced by more than half, from 3.3 K to just 1.5 K per CO2 doubling. There is evidence that the overestimate may be no more than 20%, in which event climate sensitivity would be at least 2.1 K: still below two-thirds of the IPCC’s current central estimate.
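Because the Planck parameter enters both the direct warming and the loop gain, an overestimate of it is amplified in the final sensitivity. A minimal sketch of that double dependence, assuming the IPCC's doubling forcing of ~3.708 W/m² and feedback sum of ~2.06 W/m²/K are held fixed (both values come later in the post):

```python
DELTA_F_2X = 3.708  # W/m^2, CO2-doubling forcing
F_SUM = 2.06        # W/m^2/K, IPCC implicit net feedback sum

def equilibrium_sensitivity(lambda_0):
    # lambda_0 appears twice: in the direct warming and inside the loop gain.
    return DELTA_F_2X * lambda_0 / (1 - lambda_0 * F_SUM)

print(round(equilibrium_sensitivity(0.313), 1))        # 3.3 (IPCC central estimate)
print(round(equilibrium_sensitivity(0.313 / 1.4), 1))  # 1.5 (if a 40% overestimate)
print(round(equilibrium_sensitivity(0.313 / 1.2), 1))  # 2.1 (if a 20% overestimate)
```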

If there were no temperature feedbacks acting to amplify or attenuate the direct warming caused by a CO2 doubling, then the warming would simply be the product of the CO2 radiative forcing and the Planck parameter: thus, using the IPCC’s values, 3.708 x 0.313 = 1.2 K.
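In code, the zero-feedback product just described, using the IPCC's values:

```python
delta_F_2x = 3.708  # W/m^2, CO2-doubling forcing
lambda_0 = 0.313    # K per W/m^2, Planck parameter

direct_warming = delta_F_2x * lambda_0  # warming before any feedbacks operate
print(round(direct_warming, 1))         # 1.2 K
```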

But that is not enough to generate the climate crisis the IPCC’s founding document orders it to demonstrate: so the IPCC assumes the existence of several temperature feedbacks – additional forcings fn demonimated in Watts per square meter per Kelvin of the direct warming that triggered them. The IPCC also imagines that these feedbacks are so strongly net-positive that they very nearly triple the direct warming we cause by adding CO2 to the atmosphere.

The term in red in the climate-sensitivity equation is the overall feedback gain factor, which is unitless. It is the reciprocal of (1 minus the product of the Planck parameter and the sum of all temperature feedbacks), and it multiplies the direct warming from CO2 by a factor of more than 2.8.

Remarkably, the IPCC relies upon a single paper, Soden & Held (2006), to establish its central estimates of the values of the principal temperature feedbacks. It did not publish all of these feedback values until its fourth and most recent Assessment Report in 2007.

The values it gives, all in Watts per square meter per Kelvin of direct warming, are: water-vapor feedback fH2O = 1.80 ± 0.18; lapse-rate feedback flap = –0.84 ± 0.26; surface-albedo feedback falb = 0.26 ± 0.08; and cloud feedback fcld = 0.69 ± 0.38. There is also an implicit allowance of 0.15 Watts per square meter per Kelvin for the CO2 feedback and other small feedbacks, giving a net feedback sum of approximately 2.06 Watts per square meter of additional forcing per Kelvin of direct warming.
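The quoted central values do sum to the stated total, as a quick check shows (values as reported from Soden & Held 2006; the 0.15 residual is the implicit allowance mentioned above):

```python
# Central feedback estimates, all in W/m^2 per K of direct warming.
feedbacks = {
    "water_vapour":   1.80,
    "lapse_rate":    -0.84,
    "surface_albedo": 0.26,
    "cloud":          0.69,
    "co2_and_minor":  0.15,  # implicit allowance for CO2 and other small feedbacks
}

f_sum = sum(feedbacks.values())
print(round(f_sum, 2))  # 2.06
```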

Note how small the error bars are. Yet even the sign of most of these feedbacks is disputed in the literature, and not one of them can be established definitively either by measurement or by theory, nor even distinguished by any observational method from the direct forcings that triggered them. Accordingly, there is no scientific basis for the assumption that any of these feedbacks is anywhere close to the stated values, still less for the notion that in aggregate they have so drastic an effect as almost to triple the forcing that triggered them.

Multiplying the feedback sum by the Planck parameter gives an implicit central estimate of 0.64 for the closed-loop gain in the climate system as imagined by the IPCC. And that, as any process engineer will tell you, is impossible. In electronic circuits intended to remain stable and not to oscillate, the loop gain is designed not to exceed 0.1. Global temperatures have very probably not departed by more than 3% from the long-run mean over the past 64 million years, and perhaps over the past 750 million years, so that a climate system with a loop gain as high as two-thirds of the value at which violent oscillation sets in is impossible, for no such violent oscillation has been observed or inferred.
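The closed-loop gain and the resulting gain factor implied by the figures above can be sketched as follows (the 0.1 design limit is the engineering rule of thumb the text invokes, not a derived quantity):

```python
lambda_0 = 0.313  # K per W/m^2, Planck parameter
f_sum = 2.06      # W/m^2/K, implicit net feedback sum

loop_gain = lambda_0 * f_sum       # dimensionless closed-loop gain (~0.64)
gain_factor = 1 / (1 - loop_gain)  # Bode feedback-amplification factor (~2.8)

print(round(loop_gain, 2), round(gain_factor, 1))  # 0.64 2.8
```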

Multiplying the 1.2 K direct warming from CO2 by the unrealistically overstated overall feedback gain factor of 2.8 gives the IPCC’s implicit central estimate of 3.3 K for the term in blue, which is the quantity we are looking for: the equilibrium warming in Kelvin in response to a doubling of CO2 concentration.

To sum up: the precise values of the CO2 radiative forcing, the Planck parameter, and all five relevant temperature feedbacks are unmeasured and unmeasurable, unknown and unknowable. The feedbacks are particularly uncertain, and may well be somewhat net-negative rather than strongly net-positive: yet the IPCC’s error-bars suggest, quite falsely, that they are known to an extraordinary precision.

It is the imagined influence of feedbacks on climate sensitivity that is the chief bone of contention between the skeptics and the climate extremists. For instance, Paltridge et al. (2009) find that the water-vapor feedback may not be anything like as strongly positive as the IPCC thinks; Lindzen and Choi (2009, 2011) report that satellite measurements of changes in outgoing radiation in response to changes in sea-surface temperature indicate that the feedback sum is net-negative, implying a climate sensitivity of 0.7 K, or less than a quarter of the IPCC’s central estimate; Spencer and Braswell (2010, 2011) agree with this estimate, on the basis that the cloud feedback is as strongly negative as the IPCC imagines it to be positive; etc., etc.

Since all seven of the key parameters in the climate sensitivity equation are unknown and unknowable, the IPCC and its acolytes are manifestly incorrect in stating or implying that there is – or can possibly be – a consensus about how much global warming a doubling of CO2 concentration will cause.

The difficulties are even greater than this. For the equilibrium climate sensitivity to a CO2 doubling is not the only quantity we need to determine. One must also establish four additional quantities, all of then unmeasured and unmeasurable: the negative forcing from anthropogenic non-greenhouse sources (notably particulate aerosols); the warming that will occur this century as a result of our previous enrichment of the atmosphere with greenhouse gases (the IPCC says 0.6 K); the transient-sensitivity parameter for the 21st century (the IPCC implies 0.4 K per Watt per square meter); and the fraction of total anthropogenic forcings represented by non-CO2 greenhouse gases (the IPCC implies 70%).

Accordingly, the IPCC’s implicit estimate of the warming we shall cause by 2100 as a result of the CO2 we add to the atmosphere this century is just 1.5 K. Even if we were to emit no CO2 at all from 2000-2100, the world would be just 1.5 K cooler by 2100 than it would otherwise be. And that is on the assumption that the IPCC has not greatly exaggerated the sensitivity of the global temperature to CO2.

There is a final, insuperable difficulty. The climate is a coupled, non-linear, mathematically-chaotic object, so that even the IPCC admits that the long-term prediction of future climate states is not possible. It attempts to overcome this Lorenz constraint by presenting climate sensitivity as a probability distribution. However, in view of the uncertainty as to the values of any of the relevant parameters, a probability distribution is no less likely to fail than a central estimate flanked by error-bars.

If by this time your head hurts from too much math, consider how much easier it is if one is a Classicist. The Classicist knows that the central argument of the climate extremists – that there is a (carefully-unspecified) consensus among the experts – is an unholy conflation of the argumentum ad populum and the argumentum ad verecundiam. That is enough on its own to demonstrate to him that the climate-extremist argument is unmeritorious. However, you now know the math. The fact that not one of the necessary key parameters can be or has been determined by any method amply confirms that there is no scientific basis for any assumption that climate sensitivity is or will ever be high enough to be dangerous in the least.

April 23, 2012 4:44 pm

I expect most readers of the alarmist variety will be unable to understand the meaning of “cannot” in the title.

Ken Methven
April 23, 2012 4:51 pm

Concise and explicit. I’d like to hear the alarmist argument to knock it down?

Interstellar Bill
April 23, 2012 4:55 pm

We cannot repeat it too much that the null hypothesis of weak CO2 response
is well supported by the entire Cenozoic climate record.
Even during the height of the Eocene Optimum, 12 °C warmer than today,
there was no thermal runaway, no mass extinctions (like those when the optimum ended),
and no ‘Earth crisis’. CO2 was in the thousands, driven as it was out of the warm oceans.
Since the Eocene hot-house, the Earth has been steadily cooling, which should be impossible.
Better yet, when the Earth’s orbital eccentricity goes high (over 5%) every 400 kyr,
the semiannual variation in sunlight is the square of (1+e)/(1-e) or, at e=5%
TWENTY TWO PERCENT! Shouldn’t that light off the greenhouse bomb?
Why didn’t the first such perihelion yield the runaway greenhouse
predicted by the IPCC ‘equation’?
(It’s as much an equation as Hansen is a scientist — not at all.)

The other Phil
April 23, 2012 4:56 pm

Well-done

April 23, 2012 4:57 pm

“demonimated” … surely a typo, but wonderfully evocative.

Chris B
April 23, 2012 5:06 pm

Lord Monckton cracks me up!
“sins of emission?”
Would those be diurnal or nocturnal?

April 23, 2012 5:08 pm

Brilliant!

polistra
April 23, 2012 5:14 pm

You don’t need Classical training. Read the Book of Proverbs, or read any author who wrote before 1960, or listen to your grandmother.

Babsy
April 23, 2012 5:23 pm

I love this stuff!

Mike Smith
April 23, 2012 5:25 pm

“Unknown and unknowable”.
Lord Monckton establishes beyond a shadow of a doubt that climate alarmism is founded on little more than pure dogma — the antithesis of science.
Thank you for a wonderful and brilliantly succinct essay!

mondo
April 23, 2012 5:29 pm

Christopher: You say: “Multiplying the feedback sum by the Planck parameter gives an implicit central estimate of 0.64 for the closed-loop gain in the climate system as imagined by the IPCC. And that, as any process engineer will tell you, is impossible. In electronic circuits intended to remain stable and not to oscillate, the loop gain is designed not to exceed 0.1. Global temperatures have very probably not departed by more than 3% from the long-run mean over the past 64 million years, and perhaps over the past 750 million years, so that a climate system with a loop gain as high as two-thirds of the value at which violent oscillation sets in is impossible, for no such violent oscillation has been observed or inferred”
Can I ask if the value of 0.64 in the second line might be ten times too high? That is, should it perhaps be 0.064? The reason I ask is that you go on to say that “a loop gain as high as two thirds of the value at which violent oscillation sets in is impossible”. But aren’t you saying that the latter value is 0.1, thus a number two thirds of 0.1 would be 0.064, not 0.64.
I would appreciate it if you could clarify this point.

cgh
April 23, 2012 5:44 pm

I have a serious problem with this bit.
“This value was obtained by inter-comparison between three models: but models cannot reliably determine it.”
How can comparing models show anything about deriving a constant such as CO2 forcings? This is backward. A model can only illustrate outcomes after such constants have been determined by real world observation and experiment. I think that Monckton may be pulling his punches a bit here. Models by definition cannot prove anything. The most they can hope to do is provide some verification for things discovered elsewhere.
And we see it in the next sentence:
“Both of the IPCC’s values for the vital coefficient are guesses.”
Meaning that the models are utterly worthless as evidence of anything.
This all reminds me of Drake’s equation for the number of civilizations in the galaxy. While interesting speculation in 1960, it was utterly meaningless because none of the constants were known then and are still not known to this day. Moreover this presumes that Drake had included all of the limiting factors which it’s now apparent he had not.
Feynman was right. This is cargo-cult science at its worst.

Policy Guy
April 23, 2012 5:52 pm

I admit that I have not yet digested this article, but I observe that it appears that the central equation is untenable. That, in itself, should be grounds to walk away. Of course observations that the values predicted have not come to be should also carry some weight.
So this becomes a political equation expressed in terms that scientists might devour for more research money?
I thought the holy grail of research was the cause of cancer; these idiots have turned it into finding the cause of anthropogenic warming when the earth is cooling. Let’s declare victory and go home.

Mindbuilder
April 23, 2012 6:02 pm

You can’t get a reliable answer to the standard of a logically sound argument with regards to the climate. But that doesn’t mean you should ignore the information available or that you should not make a judgement call based on unreliable information. Judgement calls are often PROPERLY based on logical fallacies and generalizations when there is not enough information to know for certain what is the right thing to do. For example, if a coal-fired power plant is only in service for 5 years, the return on investment will not be repaid. Yet despite the fact that we have needed to build many power plants in the past, it is a logical fallacy to conclude that we will certainly still need another one five years from now. A wide variety of energy technology breakthroughs from fusion to super cheap solar and storage could make a coal plant built today obsolete. But that doesn’t mean it is a mistake to build a coal plant today.
It’s a logical fallacy to believe with certainty that it will freeze tonight just because the current weather patterns have often been followed by freezing, but you still might decide to expend resources to bring your potted plants in to protect them from the freezing, even though it might not freeze.
Probabilistic reasoning often properly relies on logical fallacies.

April 23, 2012 6:21 pm

Mondo asks for clarification of how the implicit closed-loop gain of 0.64 in the climate system is determined. I am happy to provide the requested clarification. The loop gain, in the climate, is the product of the feedback sum (the IPCC’s implicit central estimate is approximately 2.06 Watts per square meter per Kelvin) and the Planck parameter (0.313 Kelvin per Watt per square meter: IPCC, 2007, p. 631 fn).
The singularity in the Bode feedback-amplification equation occurs at a loop gain of unity. Process engineers designing electronic circuits that are intended to be stable usually set a design limit of one-tenth of this value – or 0.1 – to make absolutely sure that no operational circumstances sufficient to drive the circuit across the singularity arise. They would certainly not design in a loop gain as high as 0.64. That is far, far too close to the singularity.
In the climate, over at least the past 64 million years, the Earth has not seen the violent oscillations either side of the singularity that would be near-inevitable if the loop gain were anything like as high as 0.64.
Interestingly, the literature (e.g. Roe, 2009) says that at or above the singularity the climate sensitivity becomes undefined. Though the sensitivity is certainly undefined at the singularity itself, it is as certainly defined at loop gains above 1 as at loop gains below 1. In fact, at loop gains above 1 it is as strongly negative as it was strongly positive at loop gains below 1. In effect, the equation has the effect of rotating the climate-sensitivity curve by 180 degrees about the point (1, 0) as the singularity is crossed. It is this reversal that produces the violent oscillations that are sometimes deliberately designed into circuits that are not intended to be stable. Yet global temperature has not fluctuated by more than 3% either side of the long-run mean over the past 64 million years.
The conclusion is that feedbacks are extremely unlikely to be anything like as strongly net-positive as the IPCC’s climate-sensitivity estimates imply. To produce the formidable temperature-stability that the climate has exhibited over the past 64 million years, feedbacks cannot really exceed the process engineers’ limit of 0.1. And that, in turn, implies on this ground alone a maximum global warming of 1.3 Celsius per doubling of CO2 concentration – not the 3.3 Celsius imagined by the IPCC.
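The closing arithmetic of this reply can be sketched as follows (assuming the same forcing and Planck-parameter values as in the head post):

```python
delta_F_2x = 3.708   # W/m^2, CO2-doubling forcing
lambda_0 = 0.313     # K per W/m^2, Planck parameter
max_loop_gain = 0.1  # process engineers' stability design limit

# Equilibrium warming per doubling if the loop gain is capped at 0.1.
max_warming = delta_F_2x * lambda_0 / (1 - max_loop_gain)
print(round(max_warming, 1))  # 1.3 K
```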

April 23, 2012 6:22 pm

Christopher,
Thank you for your post. Speaking as an electrical engineer who spent many years developing communications amplifiers, your assertion that the loop gain needs to be limited to about 0.1 when the feedback is positive to avoid oscillations is consistent with my experience. In EE-speak, we limit the feedback amplification to 1dB. When I give talks, I tell people that the standard climate model has 10dB of feedback amplification in it, and I have yet to find an EE who thinks that is plausible.

John West
April 23, 2012 6:30 pm

Very good! I always enjoy your writings and agree with most everything except the climate being chaotic part. Yes, I realize that’s basically word for word from the IPCC’s description. I, however, see no compelling evidence that the climate behaves chaotically. It’s certainly massively complicated, but that does not equate to chaotic. Climate models, on the other hand, seem highly likely to behave chaotically.

Ethically Civil
April 23, 2012 7:00 pm

I have a small nit with the logic.
You assert that there can be no legitimate consensus. It is tautological that a group sharing a common view has a “consensus.” The legitimacy of the consensus hangs only on the nature of the unanimity of the group on the matter — that agreement was unfeigned and not coerced.
However, you prove very nicely indeed there can be no *rational* consensus. You have thus further proven that the group holding this view is, ipso facto, irrational to the degree the consensus is legitimate.
Civilly and Classically Yours,
EC.

Bobl
April 23, 2012 7:20 pm

@ Monckton,
Being an Electronics and Process Control Engineer I can say you are outstandingly correct: anything with a loop gain of 0.64 is going to be unstable; it will ring in both directions, and we would see wild temperature fluctuations in response to changes in temperature or CO2 from, say, the sunspot cycle or diurnal ranges. Also, temperatures would overshoot, continuing to rise or fall in response to the feedbacks after the stimulus is removed.
I think however you should look a bit deeper. If you consider that the emission (lapse-rate) negative feedback is -0.84, then to achieve a total feedback factor of 1.8 (1 for CO2 and 1.8 for feedbacks = 2.8 overall response) the positive feedback has to first overcome the negatives. If the lapse-rate feedback is -0.84, then that equates to a loop gain of -0.455. The positive feedback contribution is supposed to raise the loop gain to +0.64. It’s been a while, so excuse me if my math is wrong, but doesn’t this imply that the change in gain factor is over unity: 0.64 - (-0.455) = 1.095? If one could exclude emission in an experiment, so that the lapse-rate feedback was not present but the positive feedbacks were, should not the environment within the experiment constitute an oscillator? Clearly the real world doesn’t behave like this!
I also worry about other negative feedbacks that are omitted. For example, as water is evaporated and raised to the troposphere, mgh Joules of energy is absorbed, or about 30 kJ per kg; when this inevitably rains out, it is expended against the earth as kinetic energy, or 1/2 mv^2 at the terminal velocity of the rain. The energy is absorbed by the mass of the earth and expended into the gravitational system. I would see this kinetic energy as being removed from the thermal system, yet it is not represented in your equation. I have not looked at the magnitude, but given the capacity of hydro power plants generating gigawatts of energy using a very small part of this precipitation, one might guess it would be significant. If water vapor represents a 1.8x feedback, then presumably the much larger amount cycling through the climate system must represent a significant negative feedback. This makes our analysis of the feedbacks above much worse.
Do you have any idea what this rain feedback might amount to?

April 23, 2012 7:20 pm

Lord Monckton,
Thank you for far and away the most complete, understandable and convincing refutation of the radiative-forcing equation I have read. You may not be a Climate Scientist, but you are an extraordinary climate scientist.

April 23, 2012 7:38 pm

Reblogged this on Climate Ponderings.

mondo
April 23, 2012 7:46 pm

Christopher,
Thank you for your polite and informative response to my question. I now understand that point. Overall of course, I think you do an outstanding job of providing evidence and facts. I look forward to some of the “real” climate scientists giving us detailed reasoning as to why your conclusions are not valid. In the absence of receiving such response, I think that we can assume that your conclusions are unchallenged.

CRS, DrPH
April 23, 2012 7:55 pm

Thank you, Lord Monckton! And the Hockey Team laugh at this guy??

Ned
April 23, 2012 7:55 pm

I wonder if someone would be kind enough to answer a sort of involved question, one that may be a little stupid. I feel like I must be missing something rather obvious.
In electronics, amplifiers with gain modified by positive feedback are required to have some kind of dividing network which makes the input to the amplifier a different (smaller) value than the output of the amplifier. This is where the idea of “loop gain” or “feedback factor” comes from, and why in positive feedback amplifiers the loop gain is always less than 1: the loop is a physical structure that buffers input from output and cuts the magnitude. On the other hand, if the amplifier has any intrinsic gain at all prior to the loop, even the tiniest amount, and the input and the output are ever allowed to touch each other, to become the same quantity, there’s no chance of avoiding runaway. The output of an amplifier with gain can NEVER be fed directly to its input without disaster.
But, true enough, if you feed only a tiny proportion of the output back to the input, then you can increase gain and keep stability. Climate modellers call on this well known fact to back up their claim that the positive feedback conception of climate can remain stable — they seem to just stipulate that the feedback factor is less than 1. And then, I think, they proceed from there not to actually MODEL the feedback, but rather just assume a particular amplification of the direct forcing; that is, they plug 3 into the forcing instead of 1 (or whatever) and tell themselves it’s okay, they don’t have to model the feedback to know what the amplification is, and they know it can be positive because, after all, the feedback equation allows it if the feedback factor is below unity. But from the feedback equation, in order for the feedback factor to be below unity, the input must be isolated from the output. Output and input are completely different variables with completely different values, and they’re kept apart from each other by a mathematical and, if you want your amplifier to work, an equivalent physical divider.
So what I’d earnestly like to know is: in climate, where the input to the amplifier (evaporation of water, or melting of permafrost, or CO2 coming out of the ocean, or whatever) is apparently always atmospheric temperature, and the output of the amplifier is ALSO atmospheric temperature, what physical mechanism is thought to provide the “divider” which buffers the output temperature from becoming the input temperature, allowing only a tiny portion of the output temperature to return to the input of the process? I feel like the basic structure of the atmosphere, which doesn’t have wires or pipes or walls in it to provide isolation from output to input has no way of having a feedback factor that could be (effectively, after maybe time delays) anything other than 1. That would imply that the atmosphere could never have net “gainy” processes, but clearly I’m not understanding something. Help?

April 23, 2012 8:37 pm

Hi Ned,
“I feel like the basic structure of the atmosphere, which doesn’t have wires or pipes or walls in it to provide isolation from output to input has no way of having a feedback factor that could be (effectively, after maybe time delays) anything other than 1.”
The input is the greenhouse forcing (W/m^2) at the top of the atmosphere and the output is surface temperature (K). Does that help?
Dave

April 23, 2012 9:03 pm

With reference to the equation which he presents, Lord Monckton claims that there can be no consensus regarding the cause of global warming because all of the parameters in this equation are “unknown and unknowable.” This is to provide justification for Monckton’s conclusion.
Monckton describes the blue term on the left side of his equation as the “climate sensitivity at CO2 doubling.” While it is indeed the climate sensitivity at CO2 doubling, a different description is more revealing of logical error. Under this description, the blue term is “the rise in the equilibrium global average surface air temperature at CO2 doubling.” When Monckton’s description is replaced by this description, a feature of Monckton’s equation stands out. Unlike a temperature, an equilibrium temperature is not observable. It follows that Monckton’s equation is not falsifiable thus lying outside science.

Brian H
April 23, 2012 9:11 pm

typo: “all of then unmeasured and unmeasurable” = them.
____
It’s clear that virtually all the factors and variables in that equation have been retro-fitted to generate the desired outcome. It’s a form of the Omitted Variable Frod. Loading all the variance onto a chosen “driver” by excluding or mis-weighting all the others.
(Misspelling above to avoid the verboten word filter; the real word is used in the linked article, another post here on WUWT.)

April 23, 2012 9:14 pm

However, if one realizes that IR from CO2 as back radiation is incapable of heating the Earth’s surface and thus the atmosphere, then we find that the sensitivity is ZERO. NADA, NOTHING!
If the surface radiates IR outward and CO2 absorbs it and radiates it back down, the IR either resonates and is reflected back up, as the existing energy levels equivalent to the IR are full, or it is absorbed while another IR photon is emitted, making the exchange a flat wash. It is counter to the 2nd Law of Thermodynamics for the atmosphere, which is almost always cooler than the Earth’s radiating surface, to warm the surface.
Of course, the situation of the models is that the Earth is flat and the sun shines 24/7, totally unrealistic. But, that makes the above description even more valid. At night, the atmosphere and the surface radiate IR out into space—here CO2 and water vapor simply act as heat to IR converters and aid in the nighttime cooling.
Atmospheric gases of any kind simply cannot warm the surface and cannot cause anything akin to a greenhouse effect—it simply does not exist.
Sensitivity is a cute idea, but it is flogging a non-existent, thus patently dead, horse, particularly as it lends worthless credence to the global warming junk science scam.

Bobl
April 23, 2012 9:15 pm

@Ned
You are almost correct, except that there isn’t a need to physically isolate the output and input. If the gain of the feedback loop, IE the amplification of the product of all the negative and positive feedbacks is less than 1 the system will remain stable (After a fashion) its about the amplification factors, not the isolation.
The IPCC simply says this: if the temperature goes up 1 degree, the feedbacks will cause the temperature to go up another 0.64 degrees, which will cause the temperature to go up 0.64×0.64 degrees, and so on; this geometric progression converges for all values below 1. It’s not about the temperature, it’s about the response to a CHANGE in temperature.
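Bobl's progression is easy to check numerically; a minimal sketch (assuming, as in the comment, a feedback response of 0.64 degrees per degree):

```python
# Sum the feedback series 1 + f + f^2 + ... for f = 0.64.
# For |f| < 1 the partial sums converge to the closed form 1 / (1 - f).
f = 0.64
total, term = 0.0, 1.0
for _ in range(200):
    total += term
    term *= f
closed_form = 1.0 / (1.0 - f)
print(round(total, 4), round(closed_form, 4))  # both 2.7778
```

So an initial 1 °C with a 0.64 feedback fraction settles at about 2.78 °C in total rather than running away.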
Consider that we have a lot of negative feedback to overcome due to the lapse rate before we even get into positive territory (see my comment above). The positive part of the feedback must be at around a loop gain of one before the negative feedback is applied. I don’t think this is physically possible.
I have calculated this two ways and get 0.94 or 1.095 for the gains of the positive feedbacks, depending on how I calculate it, but let’s choose the only reasonable value for stability (0.94). A gain of 0.94 results in about a 15.6-times amplification, which is what it would take to overcome the 5-times reduction due to the lapse-rate factor and get to a three-times multiplication (3/0.2 = 15). But now let’s say something changes the gain by 1% to 0.95: this represents a 19× multiplication, a difference of 3.4/15.6 or 21%, for a 1-in-100 change in the gain factor. The climate would be very, very sensitive to the gain of the positive feedback system, and Earth would be essentially unlivable.
The math and observation clearly say the IPCC is wrong: if the negative feedbacks’ effect is −0.84, then there is no way that the total system gain could get to an overall 3-times amplification and stay in any way stable. Not a chance, zip, none.
Bob
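The extreme sensitivity to loop gain that Bobl describes can be sketched with the standard closed-loop formula A = 1/(1 − g). Note this gives about 16.7× at g = 0.94 and 20× at g = 0.95, close to, though not identical with, the 15.6 and 19 in the comment, which appear to come from a slightly different formulation:

```python
# Closed-loop amplification A = 1 / (1 - g) for loop gains just below unity,
# showing how a ~1% change in g produces a ~20% change in A.
def amplification(g):
    return 1.0 / (1.0 - g)

a1, a2 = amplification(0.94), amplification(0.95)
print(round(a1, 2), round(a2, 2))        # 16.67 20.0
print(round((a2 - a1) / a1 * 100, 1))    # 20.0 (percent)
```

Whatever the exact formulation, the qualitative point survives: near-unity loop gains make the output hypersensitive to tiny changes in the gain.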

bubbagyro
April 23, 2012 9:24 pm

Viscount Monckton:
I find that the warm-earther club is guilty of many more logical fallacies (as you must know). It would be almost comical, were it not so costly!
However, the primary fallacy from which these all emanate is well known and fundamental to the CAGW case:
Onus probandi incumbit ei qui dicit, non ei qui negat (viz., the burden of proof is on the person who makes the claim, not on the person who denies or questions it) is a subsumed fallacy within the argumentum ad ignorantiam. This informal logical fallacy has absolutely no place in science.
It shifts the burden of proof from the positor of the hypothesis. In science, an hypothesis must be proven in all parts, and any objections must be considered (nay, embraced!) by the generator of the hypothesis. Failure of one part is failure of all.

Bobl
April 23, 2012 9:29 pm

@ John West
The climate is chaotic; this is not to say it is unstable, just that you cannot reliably predict the outcome (output) from the inputs. There are non-linear thresholds, for example, that prevent the sea temperature exceeding a limit, because the moment it does, storms arise and extract the heat. It is the non-linearities that make the climate chaotic.
This, for example, is why the weather can’t be accurately predicted beyond four days, and why forecasts have a habit of being defeated by nature when some unknown confluence of factors results in a storm. Perhaps a sea-temperature difference of less than 0.5 degrees might be the difference between a storm and no storm, or a pressure difference of 1 hPa, or a temperature difference of 1 degree.

April 23, 2012 9:36 pm

In my previous post, please replace the 1st paragraph by:
With reference to the equation which he presents, Lord Monckton claims that “…there can be no consensus regarding the cause of global warming…” because all of the parameters in this equation are “…unknown and unknowable.” This is to provide justification for Monckton’s conclusion.

Darren Potter
April 23, 2012 10:16 pm

“… or determined by any theoretical method to a precision sufficient to give us a reliable answer.”
That is all that need be said, when one sees pro-pundits of Global Warming making predictions of 0.039 °C increases per year based on modern temperature readings that have an average error of ±1.96 °C for over 92% of surveyed weather stations. Couple that with the less accurate temperature-reading devices of the past, along with guess-estimates of distant-past temperatures based on wood pulp (RoFL), and you have an utter lack of precision that can only result in bogus predictions (aka Mann’s Hockeystick).

Len
April 23, 2012 10:18 pm

Lord Monckton, it is always a pleasure to read your articles. I consider you a public servant in the original, complimentary sense, as one who dedicates his efforts to serving his fellow man. You serve us well. All the best to you, Sir.

April 23, 2012 10:26 pm

All of Christopher Monckton of Brenchley’s criticisms can be circumvented by simply defining “consensus” as meaning “what the majority of climate scientists” believe, and leave it at that.
The problem with “consensus” in a scientific context is that it has no precise meaning. Yes, there is a dictionary definition, but it is not overly helpful, i.e.:
“a generally accepted opinion or decision among a group of people”
http://dictionary.cambridge.org/dictionary/british/consensus
If Catastrophists fall back on a dictionary definition, then it is hardly possible to argue that held opinions are not in fact held. What is “generally accepted”? 51%?

Bobl
April 23, 2012 11:04 pm

Did a back-of-the-envelope calculation in Excel that was enlightening.
I asked before what effect rain has.
Average rainfall for Earth: 1050 mm per annum
Volume of rain per square m: 1.05 cu m
Density of water: 1000 kg/cu m
Therefore mass of rain per square m: 1050 kg
Terminal velocity of rain: about 7.5 m/s (median value)
Energy per annum per square m = 0.5 × 1050 × 7.5 × 7.5 = 29,531 Joules per annum
Watts per square m = Joules/sec/sq m = 29531/86400 = 0.342 W/sq m
Now, this is direct impact. If we assume that rain doesn’t give back the portion of potential energy lost to air resistance entirely in the form of heat (friction with the air), but that some is expended in the displacement of the air and delivered kinetically to the earth, the effect of rain is greater; we can calculate an upper limit for this energy sink.
As rain clouds occur at a height of between 2600m and 4200m (mean 3300m) we can calculate the potential energy of our mean 1.05 meter annual rainfall before it falls.
m × h × g = 1050 × 3300 × 9.8 = 33,957,000 J
Meaning rain absorbs up to 33957000/86400, or 393 W/sq m, in forming the clouds it rains from, not including the latent heat of evaporation, which we presume it gives back to the atmosphere as it condenses.
Seems to me, therefore, that rain cooling by conversion to kinetic energy is significant, at somewhere between 0.342 W/sq m and 393 W/sq m.
Now, for a 1 degree rise at 25 °C, evaporation increases about 5%, so that would suggest the forcing is somewhere between 0.342/20 (−0.017 W/sq m per degree C) and 393/20 (−19 W/sq m per degree C), well in the range that could completely offset CO2 forcing (1.46 W/sq m).
I’d conclude this is a significant negative feedback factor that deserves research. One wonders how they can use models that don’t consider the hydrological cycle.
Bob
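Bobl's figures can be reproduced in a few lines. The sketch below follows the comment's own arithmetic, including its division of per-annum energy by the 86,400 seconds in a day; dividing instead by the roughly 3.15 × 10^7 seconds in a year would shrink both wattages by a factor of about 365:

```python
# Reproduce the comment's back-of-the-envelope rain-energy figures.
rainfall_m = 1.05            # mean annual rainfall, m (1050 mm)
density = 1000.0             # density of water, kg per cubic metre
mass = rainfall_m * density  # kg of rain per square metre per annum (1050 kg)
v = 7.5                      # assumed median terminal velocity, m/s

kinetic_j = 0.5 * mass * v ** 2   # kinetic energy at impact, ~29,531 J per annum
h, g = 3300.0, 9.8                # assumed mean cloud height (m) and gravity
potential_j = mass * h * g        # potential energy aloft, ~33,957,000 J per annum

divisor = 86400.0                 # divisor used in the comment (one day of seconds)
print(round(kinetic_j / divisor, 3))    # 0.342
print(round(potential_j / divisor, 1))  # 393.0
```

The code confirms the comment's 0.342 and 393 figures follow from its stated inputs; whether the one-day divisor is the right one is a separate question the thread does not resolve.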

major9985
April 23, 2012 11:29 pm

@ Christopher Monckton
The consensus is regarding anthropogenic climate change. Are you basically saying that we are causing climate change, but that the consensus is now focused on how much?

Ned
April 23, 2012 11:56 pm

Dave and Bob,
Thanks for taking the time to reply. I’m sure for most people this stuff is old and uninteresting, and, again, I’m sure I’m just being blind to something. I hope I’m not irritating anyone. But I’d really like to get a better understanding of what’s assumed to be going on, and so far I haven’t been able to find an answer to my specific hangup in anything I’ve been able to find and read. I’ve looked at skeptical science, and their discussion of feedback factor seems to completely ignore the input/output problem that I just don’t get.
“The input is the greenhouse forcing (W/m^2) at the top of the atmosphere and the output is surface temperature (K). Does that help?”
Sadly, not really. I’m still thinking the inputs to the *feedbacks* aren’t forcing at the top of the atmosphere, but temperature in their own vicinity. Is that wrong? If the inputs to the feedbacks are something other than temperature, then what other mechanism is there for CO2 to operate on them with?
Bob,
The point is, I think isolation between output and input does matter. I understand your geometric series thing….1C caused .64C which causes .41 etc. My question is, on the “second iteration” (so to speak) how does the climate know to only respond to the .64 part of the total increase this time around? After the first iteration there’s definitely been 1.64C of total warming…so what’s the mechanism by which the climate parses out which fraction of that warming to respond to next? In an electronic amplifier the mechanism is resistors in the feedback loop. It knows to only respond to the fraction because it only gets a fraction of that fraction *back* — the input is isolated from the output. When the series eventually converges and the output of the amplifier is 3 (or whatever) the summed input to the amplifier is most definitely not 3, it’s about 1.82 or so. Clearly the input is isolated from the output because they’re not the same number. How does this work with global temperature? How does the output temperature not become the input temperature?
To put it a different way, in your series, once the total temperature rise is 1.64C, the next contribution is .41C. What if, instead of CO2 doubling to make a 1C initial forcing, it were increased so much (I guess about 3 times or a little more) that its direct effect already made 1.64C of average warming? Would the next step in the series be .41C, or would it be 1.64 x .64 = 1.05C? I mean, obviously anyone would say the second one, but my question is, why? In terms of average global warming, how does the climate know the difference between warming caused by CO2 and warming caused by something else? I think if someone can say by what means the climate parses out which heat it wants to respond to, that would be a sufficient sort of “dividing network” to fully answer my question. But without that dividing network, I don’t think the climate can have above unity gain and remain stable, yeah? Because with no dividing network the loop gain is 1. I’m just curious what the mainstream explanation for that analogous dividing network might be. Obviously it’s a big complex thing, and maybe I’m asking for too much stuff, and maybe I’m asking the wrong folks. But I’d love to understand this a little. I actually asked at skeptical science a while back and my question never got posted.

Gail Combs
April 24, 2012 12:06 am

CRS, DrPH says:
April 23, 2012 at 7:55 pm
Thank you, Lord Monckton! And the Hockey Team laugh at this guy??
_______________________
If the Hockey Team did not laugh at Lord Monckton they might have to actually debate him. A situation they want to avoid at all costs.
Note how a debate between the Hockey team and any well informed “Denier” is avoided at all costs. Al Gore and Peter Gleick immediately come to mind as does the twisting and turning of Phil Jones and Mike Mann as they try to avoid FOIA requests and dodge people like Steve McIntyre.
Someone who truly believes what they are saying will use logic and data to defend their reasoning. From the Hockey Team we get “There is a Consensus” “Argument from Authority” or as seen here on WUWT, a call for contributors to have their posts placed in a “Peer reviewed” (Pal reviewed) journal before they will consider it.
Every single one of these tactics is used to avoid engaging in actual debate about the science.
The use of phrases such as “This study does not, in any way, undermine the widespread consensus in the scientific community about the reality of global warming” to get a paper into a peer-reviewed journal also shows how downright frightened of true science these posers really are.

April 24, 2012 12:16 am

“The simplifying assumption that temperature feedbacks are linear introduces little error”, what is the basis for this assumption?

April 24, 2012 12:20 am

“This value was obtained by inter-comparison between three models: but models cannot reliably determine it.”
This is incorrect. The models used to estimate the forcing due to doubling (3.7 W) are line-by-line radiative transfer models. These physical models and their lower-resolution companions (band models) are used every day. We use them to create satellite images. We use them to engineer radars and IRSTs. They were used in designing Reagan’s Star Wars. They are reliable. They have been tested against actual observation as well.

robert barclay
April 24, 2012 12:26 am

You’re right my head hurts. Try the proposition that because of surface tension the ocean does not obey the second law of thermodynamics. Don’t dismiss this out of hand, remember that when it comes to unpredictable behaviour water has form. Water cooling below 3.9deg C expands to make ice float. Heating water from above is not as easy as you might think, get a bucket of water and a heat gun and give it a try.

Kelvin Vaughan
April 24, 2012 1:49 am

The troposphere is 8.14 × 10^18 m3. The temperature of every cubic metre depends on the temperature of the 6 adjacent cu. m and many other factors, such as the quantity of greenhouse gases in each cu. m. The number of mathematical formulae required to calculate the temperature of a cu. m is vast, and each adjacent cu. m depends partly on the temperature in the first cu. m.
By the time your computer model has worked out the temperature of the first cu. m, everything has changed.
I used to work with a technician repairing radio equipment. The equipment was highly unreliable and kept going faulty. As the equipment failed, a fault document was written and placed in a clip awaiting repair. This technician had a personal objective to keep the clip empty. As a fault came in, he would grab the document and rush off to repair it. This became so much of an obsession that one day the faults were coming in faster than he could repair the equipment, which eventually ended with him going insane.
Climate scientists beware!

April 24, 2012 1:53 am

When the sun delivers 4000 trillion kWh of energy onto the top of the earth’s atmosphere as sunshine in every 24 hours [Oliver Morton: Eating the Sun], I think we must here be close to resembling the arguments over how many angels could dance on the head of a pin – which is, of course, exactly what the IPCC propaganda would like to achieve. For an eye-opener, look at what IPCC officials actually are reported to have said – see http://cleanenergypundit.blogspot.com/2011/10/west-is-facing-new-severe-recession.html

Joe Born
April 24, 2012 2:43 am

Ned: “what’s the mechanism by which the climate parses out which fraction of that warming to respond to next?”
The climate is responding to all of the warming all the time. That is, the temperature T equals the open-loop gain lambda-zero times the sum of the forcing F and the feedback f times the temperature. Solve that equation for temperature, and you get Lord M’s relationship:
$T=\lambda_0(F+fT)$
$T=\frac{\lambda_0F}{1-f\lambda_0}$
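The closed form can also be reached by iterating the loop directly, which bears on Ned's question about what the climate "responds to": each pass feeds the whole current temperature back in, and the sequence still converges whenever fλ₀ < 1. A minimal sketch with illustrative values (λ₀ = 0.3125 K per W/m², F = 3.708 W/m², and f = 2.0 W/m² per K are assumptions chosen only for the demonstration):

```python
# Iterate T <- lambda0 * (F + f * T); the fixed point is lambda0*F / (1 - f*lambda0).
lam0, F, f = 0.3125, 3.708, 2.0   # illustrative values; f * lam0 = 0.625 < 1
T = 0.0
for _ in range(200):
    T = lam0 * (F + f * T)        # feed the whole current temperature back in
closed = lam0 * F / (1.0 - f * lam0)
print(round(T, 3), round(closed, 3))  # both 3.09
```

No "dividing network" is needed for stability here: the iteration converges because the product fλ₀ is below unity, not because inputs and outputs are physically isolated.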

oMan
April 24, 2012 3:09 am

Lord Monckton: thank you for another excellent article. It’s an education in miniature and a reminder of the value of mathematics to distill (and expose) the ideas and the logic of what we all “think we’re talking about.” So much of the popular debate is just hand-waving and, as you say, appeals to authority, consensus, or both. The value of your presentation is, for me, twofold. First, it is an independent attack on the extremists’ citadel of “science”, and it leaves it in ruins. Second, it confirms the value of a priori dismissal of any argument suffering from a logical flaw on Aristotle’s list, because it saves us all so much time and effort. To engage on the merits with the proponent of a logically defective argument is to dignify the argument, exhaust oneself, and encourage fools.
Independently, I want to thank Bobl for his great comments on feedback and the negative feedback of the hydrological cycle. I have long had an interest in seeing just how big an effect this was, but hadn’t done even the simplest quantitative estimates. From the look of it, the negative-feedback effect of rain could very easily (forgive me) drown out the positive-feedback effect of CO2. Note also that the latter is only asserted to exist as a positive feedback, whereas the former is certainly known to be a negative feedback.
Bobl, a last thought on the heat transport of the hydrologic cycle and the return of latent heat to the atmosphere when water vapor condenses out: yes, the heat goes back into the air, but it does so (1) a mile or three spaceward and (2) as part of an air column that (for cumulonimbus clouds, anyway) is moving upward, thus carrying the heat even farther spaceward. So it’s not just a wash, is it?

Joe Born
April 24, 2012 3:14 am

Mosher: “the models used to estimate the forcing due to doubling ( 3.7w) are line by line radiative transfer models. These physical models and their lower resolution companions ( band models) are used every day. ”
I would be grateful for a link to your support for this proposition. I have no doubt that radiation-physics calculations of the type that would go into such an estimate are indeed used every day, and that in principle they could be used to produce a reasonably accurate estimate. But it strikes me that the calculations would be exceedingly tedious, involving not only a large number of wavelength bands per gas to get the necessary resolution but also a wide variation in atmospheric composition among the various altitudes at various latitudes. Do you have confirmation that someone did indeed subject himself to the necessary tedium? Did he show his work?

Harriet Harridan
April 24, 2012 4:12 am

Dear Lord Monckton,
Beautifully written as always. I’m very glad to see the Diviner measurements getting a wider audience. These measurements must be more widely reported. They are crucial to demonstrating why the IPCC are wrong. Strangely enough, they were predicted by an elegant equation by Nikolov and Zeller. Their equation also predicts the mean temperatures of several other planets, including the earth. Willis made a mistake with his algebra when he dismissed it on WUWT. The kicker is: there is no CO2 term in their equation. Their work deserves your interest.
HH

Brendan H
April 24, 2012 4:13 am

Monckton of Brenchley: “The fallacy of reliance upon consensus, particularly when combined with the argumentum ad verecundiam, the fallacy of appealing to the authority or reputation of presumed experts, is more likely than any other to mislead those who have not been Classically trained in mathematical or in formal logic.”
The claim here is that those who lack a certain type of training in mathematical or formal logic are likely to be misled by logical fallacies, and even more so by the arguments from consensus and authority.
The underlying assumption is that those who have acquired a certain type of training in mathematical or formal logic are less likely to be misled by logical fallacies; that is, they are better placed to identify logical fallacies than those not so blessed.
And this, of course, is an argument from authority, where those who are “Classically trained” in mathematical or formal logic are considered to have a better understanding of the subject than those not so well trained.
Lord Monckton is attempting to use the argument from authority to rebut the argument from authority. Logically, this cannot be done.
The paradox can only be resolved by abandoning either:
1) The claim of superiority of a Classical training; or
2) The claim that argument from genuine authority is a logical fallacy.

April 24, 2012 4:28 am

Most people don’t understand the nuances of the speculative hypothesis of Global Warming. They do, however, understand self-interest. “Experts” in fields where their conclusions directly impact their funding are inclined to conclude they need more funding.
I spent a career as a military officer and national-security-affairs analyst. In my twenty-some years in that business, I never heard a defense department official (expert) suggest the world was not on the brink and that they were best served with less funding. On the contrary, the world was always near Armageddon, and the only way to salvation was a dramatic increase in the defense department’s budget.
The Greens have no problem criticizing defense experts in an extraordinarily complex field but get all uptight when anyone not a certified climatologist makes a disparaging remark concerning their religion.
It is perfectly clear that if the climatology field en masse concluded today that climate change was not an issue, or was a matter of natural variation or solar variability, they would lose their funding, media attention, prospects of fame and fortune, and academic positions in short order.
The Greens should not be overly surprised that the masses who don’t understand the complexities of the science are skeptical. They recognize self interest because they have seen it all before. What field doesn’t overstate the problem to ensure continued funding?

DirkH
April 24, 2012 4:31 am

Mindbuilder says:
April 23, 2012 at 6:02 pm
“Probabalistic reasoning often properly relies on logical falacies.”
No; “probabilistic reasoning” uses a different set of values and operators than boolean logic. Yet the way you combine those operators must follow all the rules of logic or you end up with a meaningless way of combining your measurements to form a decision. In other words, you surely can argue logically WHY you do a certain probabilistic computation. So the logic is there – just not manifested as boolean logic in the dataflow.

April 24, 2012 4:32 am

Who cares that 206K is little more than 192K? Why is 192K important? — John M Reynolds

DirkH
April 24, 2012 4:33 am

Joe Born says:
April 24, 2012 at 3:14 am
“But it strikes me that the calculations would be exceedingly tedious, involving not only a large number of wavelength bands per gas to get the necessary resolution but also a wide variation in atmospheric composition among the various altitudes at various latitudes. Do you have confirmation that someone did indeed subject himself to the necessary tedium? Did he show his work?”
Ferenc Miskolczi did.
http://tallbloke.wordpress.com/2012/02/15/ferenc-miskolczi-short-interview-and-letter-to-epa/#comment-17446

DirkH
April 24, 2012 4:36 am

Brendan H says:
April 24, 2012 at 4:13 am
“The underlying assumption is that those who have acquired a certain type of training in mathematical or formal logic are less likely to be misled by logical fallacies; that is, they are better placed to identify logical fallacies than those not so blessed.
And this, of course, is an argument from authority, where those who are “Classically trained” in mathematical or formal logic are considered to have a better understanding of the subject than those not so well trained. ”
No, Brendan, it is not an argument from authority to assume that someone who knows a topic is more likely to have a better understanding of it than someone who does not.
It WOULD be an argument from authority if Monckton went on to say, “You are not a classicist, so you are wrong.”
But he doesn’t say that.

April 24, 2012 4:53 am

I am particularly grateful to the many commenters who have illuminated the discussion on this thread. Trolls are almost entirely absent, and that is welcome: we can enjoy a proper scientific discussion without vexatious distractions. As usual, I shall respond to the individual points of interest that have been raised.
Mr. Mindbuilder says: “Probabilistic reasoning often properly relies on logical fallacies.” No, it doesn’t: it properly relies upon the laws of probability, which were first devised to illuminate the mathematics of expectation in gambling. No reasoning properly relies on logical fallacies. Mindbuilder’s implication seems to be that even though there is no scientific basis for the IPCC’s assumption that climate sensitivity is high we should act expensively on the assumption that there is.
John West, with great courtesy, disagrees with my assertion (backed by a citation from an admittedly unreliable source, the IPCC) to the effect that the climate, mathematically speaking, behaves as a chaotic object. Mr. West is in respectable company: Professor Lindzen recently took me to task for saying that the climate object behaves chaotically. And so did Fred Singer, until I explained to him that mathematically-chaotic objects, though they exhibit non-periodic behavior, are nevertheless deterministic. Chaos, in the mathematical sense, was first defined, described and demonstrated (with a five-parameter climate model) by the late Edward N. Lorenz, the father of numerical weather forecasting, in his justifiably celebrated 1963 paper Deterministic Non-Periodic Flow, published in the Journal of the Atmospheric Sciences. The climate object that we observe exhibits both periodic and non-periodic behavior and is deterministic. In this formal sense, then, it is in my submission a chaotic object; and, even if it is not, it behaves as though it is, at least to the extent that, though it is deterministic, its non-periodic behaviors, or bifurcations, are not determinable by any method, because we do not and shall never know to a sufficient precision the values of the millions of parameters that define the state of the object at any chosen moment. Even if the climate object is not chaotic, in all relevant respects it behaves as though it were, so the IPCC rightly says that “the long-term prediction of future climate states is not possible.”
Mr. Ethically Civil says: “The legitimacy of the consensus hangs only on the nature of the unanimity of the group on the matter.” It has not been any part of my argument to suggest that no consensus can ever be legitimate. However, no argument from consensus can ever be legitimate, even where the matter upon which the consensus is agreed is true. It cannot be too often stated that the mere existence of a consensus, even among experts, tells us nothing in itself about whether or not the proposition to which subscribers to the consensus are said to assent is true or false.
Mr. Bobl says the total feedback factor is 1.8. No, it is 2.8. Like so many of the key quantities in the discussion of climate sensitivity, its value is not made explicit in the IPCC’s documents. However, it may be derived simply enough as follows: divide the IPCC’s multi-model mean central estimate of 3.26 K for equilibrium warming in response to a CO2 doubling (IPCC, 2007, p. 798, box 10.2) by the radiative forcing of 5.35 ln 2 = 3.708 Watts per square meter at CO2 doubling (Myhre et al., 1998) and also by the Planck parameter of 1/3.2 = 0.3125 Kelvin per Watt per square meter (IPCC, 2007, p. 631 fn). The answer is 2.813.
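That arithmetic is easy to verify; a minimal sketch using the figures cited in the reply:

```python
import math

delta_t = 3.26                  # IPCC multi-model mean equilibrium warming, K
forcing = 5.35 * math.log(2.0)  # CO2-doubling forcing, W/m^2 (Myhre et al., 1998)
planck = 1.0 / 3.2              # Planck parameter, K per W/m^2
feedback_factor = delta_t / forcing / planck
print(round(forcing, 3), round(feedback_factor, 3))  # 3.708 2.813
```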
Mr. Oldberg says that “Monckton’s [climate-sensitivity] equation is not falsifiable, lying outside science.” Well, it is not my equation, but that which the IPCC uses. Its origins lie in quite an interesting paper by James Hansen in 1984, which has a particularly clear account of how the feedback-amplification equation is derived. Mr. Oldberg makes a fair point when he says that, since the equation is looking for equilibrium warming and equilibrium will not be reached for some time [1000-3000 years: Solomon et al., 2009], it is not falsifiable. And that is indeed exactly the point. It might be 130 generations before the answer became apparent, and even then it will prove impossible to distinguish clearly enough between natural and anthropogenic contributions to temperature change.
Mr. Higley7, off topic, says there is no greenhouse effect. There is: get used to it.
Mr. Bubbagyro says “Onus probandi incumbit ei qui dicit, non ei qui negat” – the burden of proof is upon the assertor, not upon the denier. That is not quite right. Science approaches the truth by two methods, one rare, one perforce more common. The rare method is that of absolute mathematical demonstration. It is particularly rare in the physical sciences, and rarer still in the slippery sciences such as climatology. The usual method by which science approaches the truth is known as the “scientific method”. Here, once a hypothesis has been credibly asserted (usually these days by publication in a learned journal after peer review), it is up to the scientific community at large to try to shoot it down in what Popper, in his celebrated paper of 1934 on the scientific method, calls the “error elimination” phase. To the extent that the hypothesis survives and flies on, it gains credibility. However, if it is shot down and shown to be false, that is the end of it. This method, first adumbrated by al-Haytham a thousand years ago, accordingly encourages scientists to come up with hypotheses, even if they are somewhat speculative. The onus is then upon the scientific community to falsify it, thus reversing the usual burden of proof.
Mr. Major9985 asks: “Are you saying that we are causing climate change but the consensus is now focused on how much?” It has long been established by experiment and measurement that adding greenhouse gases to an atmosphere like ours will be likely to cause warming. Accordingly there is no need to pray “consensus” in aid in support of that proposition: it has been sufficiently and repeatedly demonstrated. The true question is and has always been how much warming our doubling of the CO2 concentration in the atmosphere will cause. On that subject, as the head posting here shows, there is insufficient knowledge of the seven key parameters that define equilibrium climate sensitivity to allow a basis for any general agreement on how much warming will occur. Nor do I say that any consensus truly exists on the answer to the “how-much-warming” question, or, therefore, upon the luridly-fancied disasters that would be consequent upon a very large warming. My point is simply that there is no scientific basis for any consensus as to how much warming a doubling of CO2 concentration will cause, because the fundamental equation of climate sensitivity contains too many unknown and unknowable unknowns.
Mr. SonOfMulder asks how I can assert that little error will arise from the simplifying assumption that all feedbacks are linear. Well, my argument is confined to the question whether we know enough about the value of the individual feedbacks (or of their sum) to make reliable estimates of the warming to be expected in response to a CO2 doubling, and it is my contention that, even under the simplifying assumption that the feedbacks are linear, neither equilibrium nor transient climate sensitivity is determinable to any respectable precision. If some or all of the feedbacks are non-linear (see Roe, 2009, for an interesting discussion), then in addition to knowing the initial value of each feedback we must also know the profile of its non-linear evolution over time – and we do not know that either. Accordingly, assuming that some or all feedbacks are non-linear introduces yet another series of unknown and unknowable unknowns into the equation, demonstrating a fortiori my conclusion that there can be no consensus as to the amount of warming we shall see.
Mr. Mosher says we can be sure of the value of the CO2 radiative forcing because it has been established by line-by-line radiative-transfer modeling. If that were so, there would have been no need for the IPCC to reduce the value of the CO2 forcing by a hefty 15% between its 1995 and 2001 reports. Some years ago I consulted one of the modelers who had performed the earliest radiative-transfer calculations to establish the form of the CO2 forcing equation. He was happy to confirm to me that the calculations indicated that the equation was indeed logarithmic: i.e., that each additional molecule of CO2 in the air has less forcing and hence warming effect than any of its predecessors. However, he was unable to warrant that the coefficient in the equation was correct. He thought it was in the right ballpark, but was quite unable to say how big the ballpark was. The assumption that because a model is complex its output must be right is a fallacy that Aristotle would have pounced upon if there had been computers in his day. The greater the complexity of the model, the more likely it is – all other things being equal – to produce incorrect output. That is not to say that no model has value, nor that there is no scientific matter that can be modeled. However, it should by now be obvious to all who have done me the kindness of following this thread that no model – however sophisticated – can possibly give us the answer to the “how-much-warming” question when not just some but all of the parameters that are essential to the calculation are unknown and unknowable.
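The logarithmic, diminishing-returns behaviour discussed here can be illustrated with the standard forcing expression ΔF = 5.35 ln(C/C₀); Monckton's point is that the 5.35 coefficient itself carries unquantified uncertainty, so take it as an assumption in this sketch:

```python
import math

def co2_forcing(c_ppmv, c0_ppmv=280.0):
    """Radiative forcing (W/m^2) relative to a 280 ppmv baseline."""
    return 5.35 * math.log(c_ppmv / c0_ppmv)

# Equal 70 ppmv increments add progressively less forcing.
levels = [280, 350, 420, 490, 560]
for lo, hi in zip(levels, levels[1:]):
    print(lo, "->", hi, round(co2_forcing(hi) - co2_forcing(lo), 3))
```

The successive increments come out near 1.19, 0.98, 0.82, and 0.71 W/m²: the diminishing returns the head post describes, with the total at doubling (3.708 W/m²) scaling linearly with whatever the true coefficient turns out to be.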

Reply to  Monckton of Brenchley
April 24, 2012 12:41 pm

Monckton of Brenchley says “…since the equation is looking for equilibrium warming and equilibrium will not be reached for some time [1000-3000 years: Solomon et al., 2009], it is not falsifiable.” It sounds as though Lord Monckton and I are in substantial agreement but may differ on some details.
As I understand the term, the “equilibrium temperature” is the temperature that is reached after an infinite period of time during which all forcings are held constant. In the real world it is true that: a) forcings are not generally constant and b) scientists cannot wait for infinite periods of time before making their measurements. For both reasons, the equilibrium temperature is not observable. Thus, claims about the magnitude of the equilibrium climate sensitivity (e.g. that it is about 3 Celsius per doubling of the CO2 concentration) are non-falsifiable, and thus lie outside science. However, claims of this kind are at the heart of the IPCC’s argument for CAGW. Hence there is currently no scientific argument for regulation of CO2 emissions. The case for regulation has been made through logical fallacies.
While equilibrium temperatures are not observable, temperatures are. Thus, while a science of global warming cannot be built upon observations of equilibrium temperatures, such a science could be built upon observations of temperatures. The temperatures that would be observed would be the outcomes of statistically independent events. In a breach of professional duty, climatologists have thus far failed to describe these events for us. That they have failed to describe them leaves their claims non-falsifiable and thus pseudo-scientific.

Curiousgeorge
April 24, 2012 4:55 am

Good explanation, but the politicians ain’t listening.
“Obama wants 469 million to fight overseas global warming” http://washingtonexaminer.com/politics/washington-secrets/2012/04/obama-wants-469-million-fight-overseas-global-warming/524076 .

John West
April 24, 2012 5:14 am

Bobl says:
“The Climate is chaotic, this is not to say it is unstable, just that you cannot reliably predict the outcome (output) from the inputs. There are non-linear thresholds for example that prevent the sea temperature exceeding a limit, because the moment they do, storms arise and extract the heat. It is the non-linearities that make the climate chaotic.
This for example is why the weather can’t be accurately predicted beyond 4 days, and forecasts have a habit of being defeated by nature, because some unknown confluence of factors results in a storm. Perhaps a sea temperature difference of less than 0.5 degree might be the difference between a storm and no storm, or a pressure difference of 1 hPa, or a temp of 1 degree.”

You’re right, chaotic does not mean unstable. Storms are weather not climate but I guess it comes down to what one defines as “sensitive”. What’s the difference between a glacial and an interglacial, a change of 10 W/m2, 1 W/m2, or 0.1 W/m2; or more relevantly to the issue at hand 30 W/m2, 3 W/m2, or 0.3 W/m2? I think we’d all agree that a 30 W/m2 change would result in significant climate change, 3 W/m2 probably not much at all, and 0.3 W/m2 would be indiscernible. If a 0.3 W/m2 change resulted in significant climate change, then I would wholeheartedly agree that climate is chaotic. If a 3 W/m2 change results in significant climate change then I might be persuaded into agreeing that it’s at least somewhat chaotic. Otherwise, I’ll continue to consider the climate an amalgamation of coupled damped oscillating systems. It seems to me that the question that remains to be resolved is whether the net damping is critical, over, or under.
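For readers who want to see the difference between the damping regimes named above, here is a minimal numerical sketch. The parameters are hypothetical and dimensionless; nothing climatic is claimed for them.

```python
# Sketch: free response of x'' + 2*z*w*x' + w^2*x = 0 for an underdamped
# (z < 1) versus an overdamped (z > 1) system, integrated with a simple
# semi-implicit Euler scheme.  Purely illustrative parameters.
def simulate(zeta, omega=1.0, x0=1.0, dt=0.001, t_end=20.0):
    x, v, xs = x0, 0.0, []
    for _ in range(int(t_end / dt)):
        v += (-2.0 * zeta * omega * v - omega**2 * x) * dt
        x += v * dt
        xs.append(x)
    return xs

under = simulate(0.2)  # underdamped: oscillates, crossing zero repeatedly
over = simulate(2.0)   # overdamped: decays to zero without overshooting
print(min(under) < 0, min(over) >= 0)
```

An underdamped system rings; an overdamped one creeps back to equilibrium; the critical case sits exactly on the boundary between the two.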

oMan
April 24, 2012 5:15 am

DirkH: thanks for the corrections offered to Mindbuilder on “probabilistic reasoning” and to Brendan H on the “paradox” of claiming that those familiar with a subject are shutting down debate when they ask others to follow the plot. On the latter point we could write volumes; there is a great tendency today to use credentials as trump cards. That is exactly NOT what Lord Monckton does. He shows his work and his sources. He also thinks and writes so clearly that there is almost no excuse for a reader not being able to follow the plot. I would speculate that many of those who argue from authority do so because they are too lazy, sloppy or unsure of their own work to show it. I say nothing of corruption, which usually thrives behind claims of authority.

April 24, 2012 5:32 am

Well, it couldn’t last. Here comes a troll. Brendan H says that the assertion that one trained in logic will be likely to be more skilled in that subject than one who is not thus trained is an instance of the logical fallacy of appeal to authority.
That is to misunderstand what the form of a logical argument is. If the argument is to be an argument at all, it must comprise at least one premise and a conclusion. If it is to be valid, its premises must entail its conclusion. If it is to be sound, its premises must all be true and must validly entail its conclusion, whereupon the conclusion will be necessarily true.
If one were to say, “The IPCC says that those trained in climatology are likely to be more skilled in that subject than those who are not thus trained, and the IPCC is full of experts, and therefore it is true that those trained in climatology are more likely to be skilled in that subject than those who are not thus trained,” then that would indeed be an argument (in that it contains one or more premises and a conclusion); and it would be fallacious, because it relies upon the authority of the IPCC rather than any merit intrinsic to the argument itself. Note that the argument is fallacious even though, in this instance, its conclusion is self-evidently true.
I had made the surely unobjectionable – and again self-evidently true – statement that the fallacious arguments from consensus and from authority or reputation are more likely to mislead those who have not been trained in logic than those who have. A single declarative statement such as this is not an argument: it is merely an assertion, albeit a truthful one. Many hundreds of billions of dollars that have been wasted on various scams and boondoggles to make “global warming” go away could have been spent on something more useful, such as alleviating poverty or eradicating disease, if everyone had been given elementary training in logic, as every educated person was from the Middle Ages until my own generation. With universal Classical training, no one would have been fooled for an instant by the climate extremists.

jaschrumpf
April 24, 2012 5:39 am

Interstellar Bill says:
April 23, 2012 at 4:55 pm
[snip]
Better yet, when the Earth’s orbital eccentricity goes high (over 5%) every 400 kyr,
the semiannual variation in sunlight is the square of (1+e)/(1-e) or, at e=5%
TWENTY TWO PERCENT! Shouldn’t that light off the greenhouse bomb?
Why didn’t the first such perihelion yield the runaway greenhouse
predicted by the IPCC ‘equation’?
(It’s as much an equation as Hansen is a scientist — not at all.)

In the paper that Eric Adler pointed out to me regarding the “cool sun” 534 MY BCE, the figure of 94.5% of current value was used as the starting point. Eric also referred to Milankovitch Cycles, which upon further review, cause fluctuations of solar irradiation up to… 23.5%, right in line with your calculation of 22%. The paper also stated that less than ~500 ppm CO2 would kick off an ice age, with that number up around 1000 ppm during the “cool sun” period. Yet here we are, being shrieked at because CO2 is a paltry 387 ppm — you’d think they’d be praying for more, based on that paper.
One of Carl Sagan’s “baloney detector” precepts was that all of the links in a chain of argument had to work — not just most of them. It seems to me that there are a lot of weak and even missing links in the CAGW argument, when a paper referenced by a CAGW proponent can have so much material refuting their hypothesis.
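The 22% figure quoted above is easy to check: by the inverse-square law, the ratio of perihelion to aphelion insolation is ((1+e)/(1−e))². A quick sketch, with today's eccentricity (about 0.0167) added for comparison:

```python
# Check of the figure quoted above: with orbital eccentricity e, the
# perihelion-to-aphelion insolation ratio is ((1+e)/(1-e))**2, i.e. the
# inverse-square law applied at the two extreme Sun-Earth distances.
def insolation_ratio(e):
    return ((1.0 + e) / (1.0 - e)) ** 2

print(round((insolation_ratio(0.05) - 1.0) * 100, 1))    # ~22.2 percent at e = 5%
print(round((insolation_ratio(0.0167) - 1.0) * 100, 1))  # ~6.9 percent today
```

So the quoted 22% semiannual swing at e = 5% is arithmetically sound; the point in dispute is only what the climate does with a swing of that size.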

Jean Parisot
April 24, 2012 5:52 am

This article just received a genuine seal of approval for me. I copied and pasted it into an email to a green friend who is otherwise a competent scientist; his response: racist.

MartinW
April 24, 2012 6:08 am

For the sake of our well-being and future prosperity, if our British government had had any sense, it would have ‘co-opted’ Christopher Monckton long ago to its inner circle of policy advisors. Instead, all we have is a bunch of wholly blinkered, AGW ‘climate-change’ ministers, advised by the equally blinkered Lord Stern, Sir Paul Nurse, and the like. Thoroughly depressing.

Robbie
April 24, 2012 6:09 am

What: “Trolls are almost entirely absent” Lord Monckton?
They are not allowed to post and are simply filtered out.
Just like me in your previous post about the “blonde with the messy hair”. I tried to post there twice and didn’t succeed.
[The incessant Hadfield comments are tantamount to thread bombing, and violate site Policy. ~dbs, mod.]

Somebody
April 24, 2012 6:11 am

[snip. Provide a legitimate email address in order to have comment privileges. ~dbs, mod.]

Curiousgeorge
April 24, 2012 6:23 am

@ Jean Parisot says:
April 24, 2012 at 5:52 am
This article just received a genuine seal of approval for me. I copied and pasted it into an email to a green friend who is otherwise a competent scientist; his response: racist.
********************************************************************************************
Racist? ! How the hell does one get from logic and math to racist?

Joe Born
April 24, 2012 6:34 am

DirkH: “Ferenc Miskolczi did.”
Thank you for bringing Miskolczi’s theory to my attention.
As to the question of whether he did indeed calculate the open-loop response of temperature to CO2 concentration, however, I have yet to overcome my skepticism. My initial take on his work is that he compared the trend over time of a single “optical depth” value with the corresponding CO2-concentration trend, purporting to show that decreases in other optical-depth contributions canceled the CO2 contribution’s increase. Nothing I’ve seen so far gives me much basis for concluding that he computed what the CO2 contribution to “optical depth” actually was. (I placed the phrase in quotes because, given that intensity decay must differ for different wavelengths, times of day, and latitudes, I don’t personally know exactly what those folks mean when they say “optical depth.”)

tom
April 24, 2012 6:58 am

You cannot find consensus on this because there are many among us who are wedded to excess and to continued denial of the unambiguous evidence that we (all humans) are, by our sheer numbers, creating stresses on our environment and climate. The stresses we are creating are making the climate react in ways that are extreme. One only needs to look at the record numbers of extreme weather events occurring in recent years. Things are getting wilder, just as the people studying the Earth’s climate say they will.
Now who would want to deny this or prevent us from reaching consensus on this process?
It is the people who, for their own personal self-interest and the maintenance of their narrow but tenuous positions, will do most anything to keep the rest of us from responding sanely and rationally to the dangers we all face.

Vince Causey
April 24, 2012 7:13 am

A good article by his Lordship. I must say, as soon as I saw that equation with all those made-up terms, it reminded me of another notorious equation – the Drake Equation. Both, though logically correct, are entirely meaningless and add absolutely no information to that which existed previously.

Jeremy
April 24, 2012 7:26 am

It is interesting, perhaps noteworthy, how much that red factor looks like parts of the Drake Equation.

April 24, 2012 7:31 am

Sorry my dear Lord, but here I have to disagree.
We have weather stations and we have statistics.
If you use these tools correctly, then you can only see that it has been cooling since 1994….
http://www.letterdash.com/henryp/global-cooling-is-here
(it is a lot of work, though…..)

Vince Causey
April 24, 2012 7:33 am

tom says:
April 24, 2012 at 6:58 am
“Now who would want to deny this or prevent us from reaching consensus on this process? It is the people who, for their own personal self-interest and the maintenance of their narrow but tenuous positions, will do most anything to keep the rest of us from responding sanely and rationally to the dangers we all face.”
Ok, now let me ask you a question: Who would want to promote a consensus on this process? It is the people who for their own personal self interest and maintenance of their narrow but tenuous positions who will do most anything to keep the rest of us from acting in a sane and rational way. In other words, rent-seekers looking to enrich themselves from the taxpayer subsidies of various boondoggle schemes to build power sources from wind and sunlight, deft white-collar opportunists who wish to enrich themselves by transacting in emission certificates, grant-seeking individuals and their departments and various eco-zealots who want us to “change our ways” (ie, go medieval).
“You can not find consensus on this because there are many among us who are wedded to excesses and continued denial of the unambiguous evidence that we(all humans) are by our sheer numbers creating stresses on our environment and climate.”
A complete non-sequitur. The article is about the climate sensitivity due to CO2 emissions. It is certainly not a contradiction to insist that such sensitivity is lower than touted, whilst still acknowledging the “stresses” that humans are creating on the planet. Yours is the sort of argument that attempts to lump together individuals who are skeptical of the science emanating from certain quarters with those who condone environmental destruction. There are plenty of scares – overfishing, rain forest destruction – but what has any of that got to do with CO2?
In fact, your argument, taken to its conclusion, is ridiculous, because most of the so-called “mitigation” of CO2 emissions causes worse problems than those that existed before the mitigation. Who is it that is ripping up rain forest to make way for biofuel production? Not the skeptics. Who is it that is harming the environment by covering the land with wind farms? Not the skeptics.
It seems to me that it is those who are pushing this scare that are more harmful to the planet than those against. Ironical, isn’t it?

mkelly
April 24, 2012 7:47 am

Monckton of Brenchley says:”The Planck parameter is – in theory – the first differential of the fundamental equation of radiative transfer about 3-5 miles above us, where incoming and outgoing fluxes of radiation are equal by definition.”
I am familiar with the fundamental equation of radiative HEAT transfer, but not with the fundamental equation of radiative transfer. Could you be so kind as to tell me what that is?
The fundamental radiative heat-transfer equation, q/A = e * SB * (T1^4 – T2^4), tells us that when T1 = T2 the heat transfer is zero. So that could suggest that the Planck parameter is zero and, by extension, that the entire equation leads to zero.

April 24, 2012 7:52 am

Mr. Tom says we are causing stresses not only on the environment (true) but on the climate (largely false), and points to what he says is a recent increase in the frequency of extreme-weather events. But one does not need to be a climatologist to understand that, since there has been no global warming for 15 years, the extreme-weather events of the past year or two cannot legitimately be attributed to global warming.
Nor is there any evidence that the frequency, duration, or intensity of extreme-weather events is increasing. Theory would lead us to suppose that, certainly outside the tropics and arguably within the tropics as well, warmer weather would reduce the temperature differentials that cause extreme weather, reducing the frequency and intensity of extreme-weather events.
The climate extremists may argue till they are blue in the face that each extreme-weather event (now much more widely reported than in previous generations) is attributable to global warming: however, this is merely another instance of the fallacy of argument from ignorance. We do not know why extreme-weather events occur, though we know they have always occurred: so, if we exploit our ignorance by arbitrarily or capriciously attributing extreme-weather events to “global warming” we perpetrate the argumentum ad ignorantiam.

major9985
April 24, 2012 7:53 am

Robbie says:
April 24, 2012 at 6:09 am
Yes, it is sad that real comments which speak the truth are not allowed to be posted on WUWT. Monckton has a tainted past that will never go away.

Jim G
April 24, 2012 7:56 am

It seems that Fermi’s quote of Johnny von Neumann is appropriate:
“…with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.”
With six he could probably make that elephant dance a jig.
Now how many of these forcings and feedbacks are known quantities and not assumed ranges?

April 24, 2012 8:04 am

Curiousgeorge says:
April 24, 2012 at 6:23 am
@ Jean Parisot says:
April 24, 2012 at 5:52 am
Racist? ! How the hell does one get from logic and math to racist?

Racist: n, 1. term used by a member of the progressive political cult to reply to a question for which he has no answer. 2. anyone who is winning an argument with a Liberal…

major9985
April 24, 2012 8:12 am

@ Monckton of Brenchley says:
April 24, 2012 at 4:53 am
You say that “It has long been established by experiment and measurement that adding greenhouse gases to an atmosphere like ours will be likely to cause warming.” The consensus is in regards to anthropogenic climate change, you acknowledge that this consensus is true and backed up by science.
There is also a consensus on climate sensitivity, and if your maths is true, it will be accepted into this consensus. But that is yet to happen because I would think you feel the peer review process is tainted?

G. Karst
April 24, 2012 8:29 am

Monckton of Brenchley – Shine on, you crazy diamond. GK

April 24, 2012 8:41 am

A wonderfully refreshing piece, Lord Monckton! I am in the process of framing the following insightful statement, sourced from J N-G, the esteemed Texas State Climatologist, who reminded me that:
“There are lots of things wrong with existing models, so finding something else wrong with them will not get you very far unless you can demonstrate the importance of the shortcoming.”
Setting aside outright data manipulation, of which I personally know you are aware, my mathematical background compels me to remind people that multiplying one uncertainty by another uncertainty, and then taking their product and multiplying it against another set of uncertainly derived products, cannot provide reliable information.
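The point about multiplied uncertainties can be made concrete. For small, independent relative errors, the relative uncertainty of a product grows roughly in quadrature; the percentages below are illustrative assumptions, not estimates of any particular climate parameter:

```python
import math

# Sketch: for a product y = a * b * c of independent factors, the relative
# uncertainty of y exceeds that of any single factor, adding roughly in
# quadrature for small, independent errors.  Illustrative numbers only.
rel_uncertainties = [0.20, 0.30, 0.40]  # 20%, 30%, 40% on three factors
rel_y = math.sqrt(sum(u**2 for u in rel_uncertainties))
print(round(rel_y, 3))  # ~0.539: the product is ~54% uncertain
```

And quadrature addition is the optimistic case: correlated errors, or errors too large for the linear approximation, compound faster still.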
Narrowing some factors within decent zones of probability gives the appearance of rigor, but “geo-engineering” public policy to account for those multiples of uncertainty taken to many higher powers is the essence of malfeasance, particularly when some of the claimed effects just aren’t there. Now the following isn’t definitive by any means, but should give some pause to policymakers in the process of squandering trillions of our world’s currencies:
http://www.colderside.com/Colderside/Temp_%26_CO2.html
Your willingness to continue public discourse on this topic is totally refreshing, and much appreciated!!!

John West
April 24, 2012 8:56 am

Monckton of Brenchley says:
”Even if the climate object is not chaotic, in all relevant respects it behaves as though it were, so the IPCC rightly says that “the long-term prediction of future climate states is not possible.””
Well, I could agree with that if an accuracy caveat were inserted with respect to the extent to which we cannot determine a future climate state. In other words, I believe we could determine (if not now, then at least in the near future) large-scale future climate states, given the variables we know, could surmise, or could even postulate, such as Milankovitch cycles, the solar constant, atmospheric composition, volcanic activity, etc., to some degree of accuracy: we might be able to predict glacial vs. interglacial, but perhaps not LIA vs. MWP, and certainly not 1988 vs. 1998.

another Mike
April 24, 2012 9:00 am

: “The stresses we are creating are making the climate react in ways that are extreme. One only needs to look at the record numbers of extreme weather events occurring in recent years. Things are getting wilder. ”
This is yet another recitation of the warmist mantra. If it can be shown that there are a record number of extreme events, *and* that they are human-caused, I might be persuaded to join your ranks. Note particularly the second clause.
Meanwhile, I’ll remain skeptical. (Not holding my breath)
Mike

curious george
April 24, 2012 9:00 am

There is already a global warming consensus. It has been achieved by Ben Santer in 1995.

April 24, 2012 9:15 am

cgh & Jeremy: Yes, I thought “Drake” as soon as I read the post. If drake meant a goose, instead of a duck, it would be a canard.
Jean Parisot says: “This article just received a genuine check of approval for me. I…pasted it into an email to [a] green friend who is otherwise a competant scientist, his response: racist.”
Surely your friend was being jocular, playing on the fact that people whose arguments are totally indefensible eventually have to play “the race card.”

rgbatduke
April 24, 2012 9:37 am

Well done, I agree. I would add that if one does consider the climate as a bistable system (or more generally as a multistable system) the actual evidence is that we are already in the warm phase of what is surely the nonlinear feedback process that leads to the local stability of both the warm and the cold phase, where the cold phase is more stable, and clearly dominant, over the entire Pleistocene (extremely so over the last 700 thousand to a million years).
The climate record, OTOH, has no evidence whatsoever of a “still warmer phase” than what was observed some 5 mya, when the global temperature was around 1.5-2 C warmer than it is today. All of the same general sensitivities would have held at that time, one expects. The actual data, then, suggest that negative feedback strictly limits global temperatures to be no more than 1-2 C warmer than they are today even for the extreme values of CO_2 concentration that likely held back in the warm phase (values that are roughly what the IPCC thinks we might achieve by the end of the century).
Long before we get there, though, two things will happen. One is that we will simply stop burning as much carbon-based fossil fuel not to “save the earth” but to “save money”. With or without subsidy or focussed investment, alternative energy sources will inexorably come of age with improvements in technology, with both new fission and possibly fusion-based nuclear power, the latter practically inexhaustible, still metaphorically waiting in the wings as fossil-fuel resources dwindle. The other is that it is highly probable that we will gradually accrue enough data and build enough improved instrumentation to be able to fill in many of the gaps in our knowledge of the climate (once evidence forces the abandonment of the trivial “one parameter” forcing equation like the one above altogether). We’ve only had satellite based measurements of any kind for order of 50 years. We’ve only had “modern physics” for order of 150 years (or less, depending on which lines you draw — perhaps less than 100 years if you include quantum theory, perhaps order of sixty or seventy years if you include a meaningful model for the energy production mechanism used by the Sun).
Let’s be perfectly clear — we had no real idea of how the Sun worked until 1939, when Hans Bethe proposed a theory that later — really rather much later — proved to be the basis for what is correct although we are still working on the details, which is a hard problem because it is difficult to “see” into the sun to where all of the action takes place with real-time probes so much has to be inferred from indirect indicators.
We are therefore trying to use a baseline of perhaps 50 years, perhaps 30-40 years, of reasonably reliable measurements made with reasonably modern instrumentation (much of it in Earth orbit where it can see what is really going on globally instead of trying to make inferences from a handful of local observatories scattered irregularly across a vast surface in a highly biased way) to make inferences stretching back billions of years and predict the future centuries in advance. This is, of course, all but impossible. I do modelling, and you cannot squeeze blood from a turnip or make a reliable inference from inadequate data. Yes, the central limit theorem suggests that eventually you might get things right. However, it doesn’t guarantee that you will get them right soon or from small observational baselines, especially not multivariate statistics with complex multivariate probability distribution functions produced by nonlinear dynamics in coupled open hydrodynamic systems.
It’s a goddamn hard problem. Why is it so difficult for “classicists” to acknowledge that one fact?
rgb

April 24, 2012 10:27 am

All this talk about logic is great, but what about the logic of your own eyes on sea level? My comment on Real Science (the hack is back; today’s postings are at: http://stevengoddard.wordpress.com/):
Now with Envisat down, how are we going to know if sea levels are rising to unsustainable levels? We’ll be flying blind on sea level. /s
Oh yeah, a person said “go to the beach!”
No, that won’t work. Don’t believe your own eyes. “You’ve got lying eyes” when it comes to sea level. We need enhanced satellite imaging and fourth generation data processing, and perhaps also if we can finally get a moon base we should observe sea levels from there also.

Gunga Din
April 24, 2012 10:34 am

Robbie says:
April 24, 2012 at 6:09 am
What: “Trolls are almost entirely absent” Lord Monckton?
They are not allowed to post and are simply filtered out.
Just like me in your previous post about the “blonde with the messy hair”. I tried to post there twice and didn’t succeed.
[The incessant Hadfield comments are tantamount to thread bombing, and violate site Policy. ~dbs, mod.]
In defense of “The Mod Squad”: I made a comment on another thread that never showed up. I took no offense. I thought it might get snipped because it veered off in another direction and wasn’t really a comment that fit the context of the thread. True, I don’t see everything that is being snipped, but from what I have seen here, and the comments the mods have made when something is snipped, they are not engaging in censorship. If your comment had a point that added to the thread, even if the mods disagreed with it, it would have remained … even if they know you are a troll.

Arkay
April 24, 2012 10:35 am

A minor quibble, but facts are important…
While argumentum ad populum and argumentum ad verecundiam are indeed logical fallacies, they are not amongst the 13 logical fallacies described by Aristotle in ‘On Sophistical Refutations’.
“There are two styles of refutation: for some depend on the language used, while some are independent of language. Those ways of producing the false appearance of an argument which depend on language are six in number: they are ambiguity, amphiboly, combination, division of words, accent, form of expression….
Of fallacies, on the other hand, that are independent of language there are seven kinds:
(1) that which depends upon Accident:
(2) the use of an expression absolutely or not absolutely but with some qualification of respect or place, or time, or relation:
(3) that which depends upon ignorance of what ‘refutation’ is:
(4) that which depends upon the consequent:
(5) that which depends upon assuming the original conclusion:
(6) stating as cause what is not the cause:
(7) the making of more than one question into one.”
Aristotle – On Sophistical Refutations (translation by W. A. Pickard-Cambridge)

April 24, 2012 11:25 am

A few more replies to commenters:
Mr. Henry P. says the world has been cooling since 1994. The UK Met Office says there has been no statistically-significant global warming since 1997 – a period of 15 years. Yet NOAA, in its State of the Climate report for 2008, said that a period of 15 years or more without warming would be inconsistent with what the models had predicted. In other words, either the temperature measurements are wrong or the models are exaggerating the influence of CO2 on temperature. I suspect it is more the latter than the former.
Mr. Mkelly asks what is the fundamental equation of radiative transfer. This equation, also known as the Stefan-Boltzmann equation after the Slovene who discovered it empirically and his Austrian pupil who demonstrated it formally, states that the flux of radiation at the characteristic-emission surface of an astronomical body is equal to the product of the emissivity of that surface, and the Stefan-Boltzmann constant, and the fourth power of the temperature at that surface.
Mr. Mkelly also wonders whether the Planck parameter should be zero. No. At the Earth’s characteristic-emission surface, 3-5 miles above us, the radiative flux is 238 Watts per square meter. Therefore, taking emissivity as constant at unity (that is near enough) and the Stefan-Boltzmann constant as 5.67 x 10^-8 Watts per square meter per Kelvin to the fourth power, the equation indicates that the Earth’s emission temperature is 255 Kelvin. The first differential of the equation is simply the emission temperature divided by (four times the radiative flux), i.e. 0.267 Kelvin per Watt per square meter. Increase this by about one-sixth to allow for non-uniformity in the latitudinal distribution of global temperatures, and the Planck parameter’s value as given by the IPCC is 0.313 Kelvin per Watt per square meter, or approximately the reciprocal of 3.2. However, as I have pointed out in my head posting, the lunar Diviner mission has shown that the Moon’s characteristic-emission temperature as determined by the same method is some 35-40% greater than the actual lunar global mean temperature as implied by the satellite’s measurements of lunar equatorial mean temperature. It may be, therefore, that the Earth’s theoretical characteristic-emission temperature – and consequently the theoretical value of the Planck parameter that is derived from it – is also a substantial exaggeration. In that event, though there are constraints on the extent to which the terrestrial Planck parameter may have been exaggerated, on this ground alone the models may be overestimating climate sensitivity by between one-third and one-half.
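The arithmetic in the paragraph above can be reproduced in a few lines (the one-sixth uplift for latitudinal non-uniformity is taken from the comment as stated, not derived here):

```python
# Reproducing the arithmetic in the comment above.  sigma is the
# Stefan-Boltzmann constant; F is the 238 W/m^2 flux at the
# characteristic-emission altitude; emissivity is taken as unity.
sigma = 5.67e-8            # W m^-2 K^-4
F = 238.0                  # W m^-2

T = (F / sigma) ** 0.25    # emission temperature from F = sigma * T^4
planck_0 = T / (4.0 * F)   # first differential dT/dF = T / (4F)
planck_ipcc = planck_0 * 7.0 / 6.0  # ~one-sixth uplift, per the comment

print(round(T, 1), round(planck_0, 3), round(planck_ipcc, 3))
```

This lands at roughly 255 K for the emission temperature, about 0.267 K per W/m² for the bare differential, and about 0.31 K per W/m² after the uplift, matching the figures quoted.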
And so to a troll, skulking under the pseudonym “Major9985”, who unfairly complains that what he calls “comments that speak the truth are not allowed” at WUWT. In fact, Anthony’s policy is to allow climate extremists the freedom to hang themselves by their illogical and often intemperately-expressed arguments, though there are now sensible and proportionate limits on mere invective and on attempts by trolls to introduce red herrings calculated to disrupt the flow of legitimate scientific discussion. The policies of climate-extremist propaganda websites – such as “Real” “Climate” and “Skeptical” “Science” – are typically far less generous in allowing free speech to dissenters.
Major9985, whoever he or she may be, says I have an (unspecified) “tainted past”. That is the tired logical fallacy of the argumentum ad hominem, the attack on the man rather than his argument. It has no place in civilized human discourse, but the hard Left, following Saul Alinsky’s rules for radicals, all too frequently resort to it without the slightest provocation when they realize that they are unable to muster a respectable scientific or economic argument against the hated capitalists.
Major9985 accuses me of acknowledging that “the consensus on anthropogenic climate change is true and backed up by science”, when I had explicitly and repeatedly stated that the fact that adding greenhouse gases such as CO2 to an atmosphere such as ours will cause some warming has been established by oft-repeated experiments and does not require to be sanctified by any headcount among scientists or anyone else. It is the direction of the argument that one must be clear about here. Scientific experiments and measurements have established that there is a greenhouse effect, and that – not the imagined existence of some consensus or another – is why I accept that our adding CO2 to the atmosphere will cause some warming.
Major9985 then tries to maintain that “there is also a consensus on climate sensitivity”. This is simply untrue. Most climate scientists have never studied climate sensitivity to the point of publication in a reviewed journal. According to Professor Lindzen, only a few dozen have done so. Of these, about half are modelers (whose models, using the NOAA 2008 test, are now shown to have exaggerated the influence of CO2 on temperatures). The modelers tend to maintain that climate sensitivity is high – i.e., that there will be as much warming as a result of our adding CO2 to the atmosphere as the IPCC thinks there will be. The empiricists, on the other hand (and I am one of those) mostly find climate sensitivity to be one-third to one-fifth of the IPCC’s central estimate. In short, we may see perhaps 1 Celsius degree of warming this century as a result of our activities, and that is simply not enough to be harmful.
My head posting demonstrates why there can be no consensus as to climate sensitivity. Every single one of the seven crucial quantities on the right-hand side of the fundamental equation of climate sensitivity is unmeasured and unmeasurable, unknown and unknowable. As one or two commenters have rightly pointed out, the uncertainty as to the value of the product of several unknown quantities will be considerably greater than the uncertainty as to the value of each individual quantity. Major9985’s assertion that there is a consensus as to climate sensitivity when there is not in fact and simply cannot in theory be any such thing is not a constructive contribution to this debate.

Murali
April 24, 2012 11:35 am

@mondo: The loop gain required for the onset of violent oscillations is 1, not 0.1. One tenth of that, 0.1, gives sufficient error margin so that even under the worst case scenario, the system won’t oscillate.
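For readers unfamiliar with the control-theory shorthand, the point can be sketched numerically. In the linear-feedback model the closed-loop gain factor is G = 1/(1 − g), where g is the loop gain; the sample values below are illustrative only:

```python
def gain_factor(g: float) -> float:
    """Closed-loop gain factor G = 1/(1 - g) for a linear feedback loop
    with loop gain g; g >= 1 marks the onset of instability."""
    if g >= 1.0:
        raise ValueError("loop gain >= 1: system is unstable")
    return 1.0 / (1.0 - g)

# A loop gain of 0.1 barely amplifies the open-loop response...
print(gain_factor(0.1))   # ~1.11
# ...while gains approaching 1 amplify it without bound.
print(gain_factor(0.9))   # ~10
print(gain_factor(0.99))  # ~100
```

This is why 0.1 is a comfortable margin while values near 1 are not: the amplification diverges as g approaches unity.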

April 24, 2012 11:58 am

Lord Monckton: You are a GEM! Brilliant!

PeterGeorge
April 24, 2012 12:07 pm

I’m sorry. I’m a fan of Lord Monckton sometimes, but this is a bit too much for me. For one thing, since when does the existence or non-existence of a consensus depend on there being good reason for it? There used to be a consensus that the world was flat. Those who held this view didn’t have adequate justification for the belief, but they held it anyway, and at the time almost everyone agreed, so there was a consensus. So the claim of this essay – that there cannot be a consensus on global warming – doesn’t work for me. Of course there can be a consensus. It just can’t be justified.
Secondly, human reasoning is almost always heuristic. The most important logical fallacy that I’m afraid Lord Monckton fails to identify is the fallacy of over-reliance on codified logical fallacies.
Imagine a few years from now astronomers announce the discovery of a small (1 km) rock headed toward Earth. After months of observation astronomers announce that the rock will hit the Earth and appears headed straight for New York City on November 10 at 9:53AM. Around the world astronomers concur. But you don’t know astrophysics. And you don’t have access to a good telescope. You cannot independently confirm the claim of the authorities.
Forget about right or wrong; would it even be RATIONAL to remain in New York on November 10? Can you seriously imagine yourself chiding your friends and family who are packing up to leave, for “falling” for the argumentum ad populum and argumentum ad verecundiam fallacies? No, it is they who would quite properly regard you as insane.
Obviously. But why? Think about it.

April 24, 2012 12:27 pm

Forget the analogy, PeterGeorge! Scientists don’t just present the “…claim of authorities.” They publish material, give justification, make data available (not all do this, I’m afraid), and give enough information, peer reviewed or otherwise, so that the decision to remain in NYC will NOT be made in a vacuum, given the modicum of education available to both of us.

John from CA
April 24, 2012 12:38 pm

Monckton of Brenchley,
You need to take a look at this hatch job. Talk about shooting the messenger — it’s inexcusable.
Capital Weekly
Opinion: The tangled tale of Lord Christopher Monckton
By Tim O’Connor | 03/21/12 12:00 AM PST
source: http://capitolweekly.net/article.php?_c=10isa7mgcq2t933&xid=10fvjrdhae3lve3&done=.10isa84vc3h29ay
“Who do you believe when it comes to climate change? The more than 97% of scientists actively publishing in the field who agree that climate change is real and human caused, or a front man who speaks for oil companies that put profits before people?”

Gunga Din
April 24, 2012 12:59 pm

PeterGeorge says:
April 24, 2012 at 12:07 pm
Imagine a few years from now astronomers announce the discovery of a small (1 km) rock headed toward Earth. After months of observation astronomers announce that the rock will hit the Earth and appears headed straight for New York City on November 10 at 9:53AM. Around the world astronomers concur. But you don’t know astrophysics. And you don’t have access to a good telescope. You cannot independently confirm the claim of the authorities.
Forget about right or wrong; would it even be RATIONAL to remain in New York on November 10? Can you seriously imagine yourself chiding your friends and family who are packing up to leave, for “falling” for the argumentum ad populum and argumentum ad verecundiam fallacies? No, it is they who would quite properly regard you as insane.
——————————————————————————
And when you find out that afterward the astronomers are buying up the real estate in NYC?
There are other threads here that show the motives of those supporting and hyping AGW (or whatever they’re calling it now). It’s being used as a tool to promote an agenda. There was a quote by some Greenpeace (I think) guy who said it didn’t matter if AGW was real or not: they could use it to forward their financial agenda.
Maybe Mann really does believe in his Hockey Stick. Maybe after all the work he put into it, pride keeps him from admitting he was wrong. I don’t know. But those who are swinging it for their own purposes keep shouting, “BUT THERE’S A CONSENSUS!”. If I understood what Monckton was saying (maybe I didn’t), there can’t be.
Does 2 apples + 2 apples = 4 apples or applesauce? What’s the consensus? Depends what you’re selling.

April 24, 2012 1:00 pm

Monckton of Brenchley says:
April 24, 2012 at 4:53 am
“Mr. SonOfMulder asks how I can assert that little error will arise from the simplifying assumption that all feedbacks are linear. Well, my argument is confined to the question whether we know enough about the value of the individual feedbacks (or of their sum) to make reliable estimates of the warming to be expected in response to a CO2 doubling, and it is my contention that, even under the simplifying assumption that the feedbacks are linear, neither equilibrium nor transient climate sensitivity is determinable to any respectable precision. ”
The reason I asked the question is that the part of the negative feedback that we do reasonably understand (I consider the whole picture), i.e. the Stefan-Boltzmann relation dF = 4σT^3·dT, grows faster than linearly with changes in T. Latent heat of evaporation from the surface of the sea may well be more than linear too, as returning radiation from water vapour could preferentially be absorbed in the first micron and dislodge sea-water molecules, which then rise, taking the latent heat with them and cooling the surface. That is, it strikes me that negative feedbacks may well tend to be greater than linear, whereas the logarithmic growth of greenhouse-gas forcing is much less than linear.

John from CA
April 24, 2012 1:04 pm

John from CA says:
April 24, 2012 at 12:38 pm
=====
typo s/b
hatchet job

Tom in Florida
April 24, 2012 1:22 pm

PeterGeorge says:
April 24, 2012 at 12:07 pm
“Imagine a few years from now astronomers announce the discovery of a small (1 km) rock headed toward Earth. After months of observation astronomers announce that the rock will hit the Earth and appears headed straight for New York City on November 10 at 9:53AM. Around the world astronomers concur. But you don’t know astrophysics. And you don’t have access to a good telescope. You cannot independently confirm the claim of the authorities.
Forget about right or wrong; would it even be RATIONAL to remain in New York on November 10? Can you seriously imagine yourself chiding your friends and family who are packing up to leave, for “falling” for the argumentum ad populum and argumentum ad verecundiam fallacies? No, it is they who would quite properly regard you as insane.
Obviously. But why? Think about it.”
Horrible analogy. Your scenario holds a one time event to happen at an expected time and place. The results of the prediction will be well known at that time with no further threat to worry about. Once and done with people going back to their normal lives. NCCDs (natural climate change deniers) want to permanently change the way we live, redistribute wealth and seek the creation of a world government to keep the rest of us in line.

Brendan H
April 24, 2012 1:49 pm

Monckton of Brenchley: “That is to misunderstand what the form of a logical argument is. If the argument is to be an argument at all, it must comprise at least one premise and a conclusion.”
The premises, and indeed the argument, need not be explicit. And in an informal situation, such as a blog, arguments are not often fully and explicitly expressed. For example, in your previous article you quoted the assertion of the “bossy environmentalist”: “But there’s a CONSENSUS!”, confident that your reader would understand this as a reference to the consensus argument.
“I had made the surely unobjectionable – and again self-evidently true – statement that the fallacious arguments from consensus and from authority or reputation are more likely to mislead those who have not been trained in logic than those who have.”
Which is based on the assumption that, all else being equal, expertise trumps non-expertise. It is then possible to argue that on any given matter the expert will be more likely to be correct than the non-expert. This is sufficient to establish a line of reasoning as an argument from [genuine] authority.
And this is just what you do. Take this line of argument: “The Classicist knows that the central argument of the climate extremists…is an unholy conflation of the argumentum ad populum and the argumentum ad verecundiam.”
In this argument, 1) The Classicist is claimed to be an authority; 2) The Classicist makes a claim about the consensus and authority arguments; 3) The conclusion is deemed to be true.
A clear argument from authority.

Vince Causey
April 24, 2012 2:00 pm

Tom in Florida says:
April 24, 2012 at 1:22 pm
“Imagine a few years from now astronomers announce the discovery of a small (1 km) rock headed toward Earth. After months of observation astronomers announce that the rock will hit the Earth and appears headed straight for New York City on November 10 at 9:53AM. etc etc”
I assume you believe in your analogy, and it’s not meant to be a wind-up? Well, I’m not buying it. The measurement of a trajectory is a convergent process – you gather more and more data on the orbit, the error bars become smaller and smaller, and the probability of impact becomes closer and closer to unity – if, indeed, that is what the data show. The science underpinning this is Newtonian physics, which nobody can refute. Indeed, anybody acting in the way you described would be thought insane. That is because, as Lord Monckton has pointed out in a reply, a law of physics does not need a consensus to endorse it. It is a law, and that is the end of it.
The CAGW position is not a law – it does not even rise to the status of a theory. To call it a hypothesis is somewhat over-generous, and some skeptics have argued that it should more accurately be called a conjecture. How, then, can a conjecture be put on the same footing as a law? I just don’t understand your logic.

Greg House
April 24, 2012 2:12 pm

PeterGeorge says:
April 24, 2012 at 12:07 pm
I’m sorry. I’m a fan of Lord Monckton sometimes, …
============================================
A very good clear posting. Might be distorted and misinterpreted soon, I am afraid.

Greg House
April 24, 2012 2:49 pm

Monckton of Brenchley says:
April 24, 2012 at 11:25 am
I had explicitly and repeatedly stated that the fact that adding greenhouse gases such as CO2 to an atmosphere such as ours will cause some warming has been established by oft-repeated experiments and does not require to be sanctified by any headcount among scientists or anyone else.
==================================================
To my knowledge, the only experiment you have ever referred to is the one conducted by Tyndall in the 19th century. Now, this experiment has not established, that “greenhouse gases” cause any warming. Neither has this experiment established, that “greenhouse gases” cause any cooling. This is my understanding of the descriptions of that experiment one can easily find on the internet.
Tyndall’s experiment has only established 1 thing: that “greenhouse gases” absorb and re-emit some IR radiation, that is all. It belongs to common knowledge, that the IR radiation comes from the Sun, too, so from Tyndall’s experiment alone it is impossible to conclude on either net warming or net cooling.
At the moment it looks like you either lack proper knowledge of the issue or are using a false premise to support your claim.
Second, there is a clear indication (though not a proof) that it is probably impossible to prove “greenhouse-gas warming” experimentally. Otherwise Al Gore would not recently have staged a fake demonstration, which was debunked by Anthony Watts; they would have demonstrated something genuine.
Nevertheless, we can clear the matter up once and for all. Please present clear, distinct references and links showing who, when, where and how experimentally and physically measured (not calculated on a statistical basis) how much increase in temperature a given amount of “greenhouse gases” produces. Physics, please, no speculations.

Legatus
April 24, 2012 5:53 pm

The assumption that because a model is complex its output must be right is a fallacy that Aristotle would have pounced upon if there had been computers in his day. The greater the complexity of the model, the more likely it is – all other things being equal – to produce incorrect output.
A simpler way to say this (call it Occam’s razor): computers make very fast, very accurate mistakes.

RoHa
April 24, 2012 5:54 pm

Lord Monckton is going a long way to redeem himself for supporting Thatcher.

major9985
April 24, 2012 7:00 pm

[SNIP: Insulting, abusive and obnoxious in the same old way, get snipped in the same old way. -REP]
[Agree. Further references to Peter Hadfield, who will not engage in a face to face debate in a neutral venue will be deleted. Take the ad hominems elsewhere. ~dbs, mod.]
[I overstepped in this case. Although Peter Hadfield has never challenged Lord Monckton to a debate, that is not saying he will not debate. My apologies for giving that impression. He is still free to issue a debate challenge. ~dbs, mod.]

Legatus
April 24, 2012 7:02 pm

Arkay says: A minor quibble, but facts are important…
While argumentum ad populum and argumentum ad verecundiam are indeed logical fallacies, they are not amongst the 13 logical fallacies described by Aristotle in ‘On Sophistical Refutations’.
(3) that which depends upon ignorance of what ‘refutation’ is:
It would seem to me that #3 presented here fits the above two fallacies quite well. Specifically, the idea of argument by head count or expertise fits. A claim of expertise is an ignorance of what is, or is not, refutation, because experts (and crowds) have often been shown to be wrong, and therefore cannot be relied on as proof or refutation. “Other fallacies occur because the terms ‘proof’ or ‘refutation’ have not been defined, and because something is left out in their definition.” Define for me why your claim of “expertise” means that what you say is always, absolutely true.
There is a way, of course, where it can be true. Simply claim to be god, and provide proof of same. A claim of “expertise” that settles all doubt simply because you say so, is essentially a claim to be god. After all, if you are god, whatever you want to be true, you simply make it true. If you are not god (and I think that can be reliably demonstrated), than a simple claim of expertise is not close enough to godhood to claim infallibility, which is what you must claim under the idea that your expertise proves what you say is true.
Proof that you are not god:
I think I can say that I can find a witness that sometime, somewhere, someone heard you express dislike of something. Subsequent to that, the disliked thing continued to exist. If you were god, you could have changed it, and you would have, based on your expression of dislike. You did not. Therefore you are not god.
*If you were god, you could and would change disliked things into liked things.
*You were heard to express dislike of something, and a wish that it would change.
*It did not.
*Conclusion, you are not god.
Old story.
There once was a man named Phil.
Phil claimed to be god.
Some people believed him.
One day, Phil and his followers were walking down the street, and a lightning bolt struck Phil.
All his followers looked up, and they said…
“There’s something bigger than Phil.”
And, of course, you must believe me because…
I AM GOD
No, really, I am!
Stop looking at me like that!
You’re still looking at me like that!
Drat, I’m not god.
And it was such a nice dream for a second there.
i hate logic 🙁

joeldshore
April 24, 2012 7:25 pm

Monckton of Brenchley says:

Theory predicts that the Moon’s mean surface temperature should be around 270 Kelvin.

No, it doesn’t. The energy-balance arguments referred to constrain the average of T^4, not the average of T, on the surface. The 270 Kelvin number is obtained as the fourth root of the average of T^4 on the surface. By Hölder’s inequality, it will be greater than or equal to the average of T. Only for a fairly uniform temperature distribution does one expect this upper bound to provide a reasonably good estimate of the average temperature. The moon is an airless body with a very broad temperature distribution, and hence it is expected that the average temperature will be considerably less than the constraint provided by this inequality.
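The inequality invoked here is easy to illustrate with a toy two-temperature “moon”; the 100 K and 390 K values below are illustrative extremes, not data:

```python
# Half the surface at a cold night-side temperature, half at a hot day-side one
temps = [100.0, 390.0]  # K, illustrative values only

mean_T = sum(temps) / len(temps)  # simple average of T: 245 K

# Fourth root of the average of T^4: what a global energy balance constrains
effective_T = (sum(t**4 for t in temps) / len(temps)) ** 0.25  # ~328 K

# For a broad temperature distribution the effective (emission) temperature
# substantially exceeds the plain average temperature.
assert effective_T > mean_T
print(mean_T, effective_T)
```

The broader the distribution, the larger the gap, which is the stated reason an airless body’s mean temperature falls well below its theoretical emission temperature.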

Remarkably, the IPCC relies upon a single paper, Soden & Held (2006), to establish its central estimates of the values of the principal temperature feedbacks. It did not publish all of these feedback values until its fourth and most recent Assessment Report in 2007.

That is because most modeling simulations and all empirical estimates of climate sensitivity do not try, or are not able, to divide up the contribution of each feedback. What Soden and Held did was to go through the myriad of models and compute these individual contributions, which are useful for understanding the magnitude of the various feedbacks making up the climate sensitivity and how they differ from one model to another. However, this is basically irrelevant to the issue of estimating the overall climate sensitivity.

Paltridge et al. (2009) find that the water-vapor feedback may not be anything like as strongly positive as the IPCC think

Paltridge et al.’s results for the change in water vapor use data with known artifacts for measuring what they want to measure, and those results contradict nearly all other measurements, whether other reanalyses incorporating radiosonde data, satellite analyses, or shorter-term relationships between temperature and water vapor from data sets not subject to the artifacts that afflict the longer-term trends. The chances of that paper being correct are essentially nil. See, for example, Dessler and Davis (2010): http://geotest.tamu.edu/userfiles/216/Dessler10.pdf

Lindzen and Choi (2009, 2011) report that satellite measurements of changes in outgoing radiation in response to changes in sea-surface temperature indicate that the feedback sum is net-negative, implying a climate sensitivity of 0.7 K, or less than a quarter of the IPCC’s central estimate

The first version of Lindzen and Choi’s paper was so bad that even Roy Spencer pointed out its many flaws. It is doubtful that the 2nd version will fare much better (except that it may not attract quite as much rebuttal this time around just because of the obscurity of where it was published and the fatigue of climate scientists in having to waste time on this stuff).

major9985
April 24, 2012 7:50 pm

Monckton of Brenchley says:
April 24, 2012 at 11:25 am
“And so to a troll, skulking under the pseudonym “Major9985″, who unfairly complains that what he calls “comments that speak the truth are not allowed” at WUWT.”
I tried to post a comment under my name Michael Whittemore [SNIP: Not everything happens in the time frame or at the pace you would like. This topic is exhausted. Drop it. -REP]
You go on to say that the consensus on anthropogenic climate change “does not require to be sanctified by any headcount among scientists”. There are a lot of people that don’t believe in anthropogenic climate change. The “head count” is to try and reassure the public that it is real.
“In short, we may see perhaps 1 Celsius degree of warming this century as a result of our activities”
You get this 1 Celsius warming from “unmeasured and unmeasurable, unknown and unknowable” climate sensitivity?

Tom in Florida
April 24, 2012 7:53 pm

Vince Causey says:
April 24, 2012 at 2:00 pm
“I assume you believe in your analogy”
Vince, it wasn’t my analogy, it was PeterGeorge posting April 24, 2012 at 12:07 pm. I was commenting on how bad an analogy it was, to wit:
“Horrible analogy. Your scenario holds a one time event to happen at an expected time and place. The results of the prediction will be well known at that time with no further threat to worry about. Once and done with people going back to their normal lives. NCCDs (natural climate change deniers) want to permanently change the way we live, redistribute wealth and seek the creation of a world government to keep the rest of us in line.”

PeterGeorge
April 24, 2012 8:10 pm

tomwys says:
April 24, 2012 at 12:27 pm
Forget the analogy, PeterGeorge! Scientists don’t just present the “…claim of authorities.” They publish material, give justification, make data available (not all do this, I’m afraid), and give enough information, peer reviewed or otherwise, so that the decision to remain in NYC will NOT be made in a vacuum, given the modicum of education available to both of us.
—————————-
Climate scientists have done all that. By golly, they’ve even spent billions of our tax dollars doing it. But most of us who frequent WUWT don’t believe them. And we’re probably right and they’re probably wrong.
But it would be really different in the Chicken Little (rock falling from the sky) case, wouldn’t it. We – no I’ll be more modest, I would believe the Chicken Littles. And it DOES have something to do with authority – a legitimate kind of earned authority that comes from being right a lot, being able to reliably demonstrate key facts, knowledgeable people being free and able to disagree and debate until there is convergence, and so on.
But, critically, it is a heuristic authority; few people believe or argue that scientists CANNOT be wrong. And it is only a fallacy to argue from authority in absolute terms. Any statement like “Authority says it’s true, so it MUST be true” is false, just as Lord Monckton reminds us, and for the reason he cites. But a statement like “These people have been right nearly 100% of the time on a huge number of claims, so they’re probably right this time” is not false. It’s a perfectly valid heuristic argument.
People can and do disagree on appropriate heuristics in different cases. Hence, I’m afraid I can’t agree that the argument from consensus used in the climate debate can be simply dismissed on the grounds of argumentum ad verecundiam (argument from authority).
Putting it bluntly, if Drs. Lindzen, Christie, Spencer, Michaels, Lord Monckton, Anthony Watts, and the Idso and Pielke families published a joint paper in which they claimed compelling evidence that the temperature sensitivity to a CO2 doubling was at least 3C, I would believe that they were probably right even before reading the paper. I admit it. And it’s presumptuous, but I think most WUWT readers would too. And like it or not, there is nothing wrong with that heuristic reasoning.

April 24, 2012 8:53 pm

PeterGeorge
There’s something wrong with heuristic reasoning. This is that it is ILLOGICAL.

Gunga Din
April 24, 2012 8:49 pm

major9985 says:
April 24, 2012 at 7:50 pm
You go on to say that the consensus on anthropogenic climate change “does not require to be sanctified by any headcount among scientists”. There are a lot of people that don’t believe in anthropogenic climate change. The “head count” is to try and reassure the public that it is real.
—————————————–
“Reassure”!?! If you want to “reassure” us poor, ignorant folk how about getting Mann to release the info he’s hiding and has been fighting tooth and nail to keep hidden for years. (Thank you, whoever released the Climategate stuff.)
The “head count” among a select few is not to reassure us dolts. It is to dupe the public into agreeing to whatever policy or regulation or installation of a political power to fix the problem until, when we wake up, it will be too late to stop those pushing for the “change you can believe in”.

HankHenry
April 25, 2012 12:13 am

These fallacies don’t make things false. It’s a “take it for what it’s worth” situation. As in: take it for what it’s worth that a senator named Al Gore raises a call to alarm.

Andrew
April 25, 2012 4:50 am

RE
John Coleman says:
@ April 23, 2012 at 7:20 pm
Yes. Agree fully. I have often found myself dumbfounded by the ignorant cultists’ protestations that ‘Monckton isn’t a scientist’. Well, all I can say is, I spent many years training in the company of some publicly celebrated scientists at more than one of the commonly-regarded ‘elite’ universities. And what I can tell you is, Monckton is most certainly a scientist and a highly accomplished thinker. As this article so eloquently demonstrates.

Andrew
April 25, 2012 5:03 am

RE
Greg House says:
@ April 24, 2012 at 2:49 pm
You must be a lawyer. You are splendidly articulate in appearing almost-knowledgeable yet saying practically nothing of any substance. I bet you read a lot of crime novels.

cba
April 25, 2012 5:11 am

Monckton of Brenchley,
This notion of effective radiating altitude is nonphysical and apparently some fabrication of James Hansen from back in the 90s or even 80s. Gases emit and absorb in a characteristic spectrum, while liquids and solids emit and absorb in a continuum. A gas could emit/absorb in a continuum only when it is optically opaque over the whole continuum. The outgoing radiation from Earth comes from many altitudes and from the surface. Probably none of the radiation is actually emitted from the characteristic altitude, as it will have been absorbed and re-radiated many times at those wavelengths where molecules absorb and emit radiation. Where they don’t, it is coming from the surface or from the cloud tops and the solids and liquids present in the atmosphere.
Stefan’s law was originally determined through experiment but was later found to derive from Planck’s law which deals with emission intensities by wavelength or frequency. It turns out Stefan’s law is the integration of Planck’s equation over all frequencies and all angles outward from a surface.
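The last point, that Stefan’s law is the integral of Planck’s law over all frequencies and outward angles, can be verified numerically. A rough sketch (the upper integration limit and step count are arbitrary choices; the hemispheric integral over angles contributes the factor of π):

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K
SIGMA = 5.670374e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_B(nu: float, T: float) -> float:
    """Spectral radiance B(nu, T) of a blackbody, W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu**3 / C**2 / math.expm1(H * nu / (KB * T))

T = 288.0       # K, roughly Earth's mean surface temperature
NU_MAX = 1e15   # Hz: far beyond the thermal peak at this temperature
N = 200_000

# Hemispheric flux = pi * integral of B over frequency (simple Riemann sum,
# starting at i=1 because B(0, T) = 0)
dnu = NU_MAX / N
flux = math.pi * sum(planck_B(i * dnu, T) for i in range(1, N)) * dnu

# The numerical integral should match sigma * T^4 closely
print(flux, SIGMA * T**4)  # both ~390 W/m^2
```

The two printed values agree to well under one percent, which is the relationship the comment describes.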

joeldshore
April 25, 2012 5:20 am

cba says:

This notion of effective radiating altitude is nonphysical and apparently some fabrication of james hansen from back in the 90s or even 80s.

It is not “nonphysical”. It is a simplification, as any such notion must be that takes a complicated process with both wavelength dependence and probability distributions and reduces it to a single number.

cba
April 25, 2012 5:42 am

Monckton of Brenchley,
There is a way to determine an average sensitivity that includes existing feedbacks and doesn’t require lots of modeling or assumptions.
After albedo reflections, Earth receives about 239 W/m^2 averaged over its surface, which is captured by the surface or atmosphere and must be shed for balance. Surface temperature averages around 288.2 K and varies by small amounts, permitting us to approximate differences in power as linear. The average emission according to Stefan’s law is around 391 W/m^2, since at these wavelengths emissivity is almost 1.0. 391 − 239 = ~152 W/m^2, which must be trapped by the atmosphere. Analysis of ghg components indicates all major ghgs trap about 110 W/m^2, with the balance being clouds and particulates. Note that there is both absorption and emission going on, with the absorption not being as temperature-dependent as emission. If we compare a blackbody emitting 239 W/m^2 to Earth, we find that it would have a temperature around 33 deg C less than Earth’s, around 255 K.
Consequently, we can say that ~152 W/m^2 of atmospheric trapping has led to 33 deg C of temperature increase. We can also say that 391 W/m^2 radiated from the surface has led to 239 W/m^2 escaping, around 61%. 33/152 = ~0.22 deg C per W/m^2 sensitivity.
The 61% escape fraction indicates that a 1 W/m^2 increase in atmospheric absorption means, for a simple case, that 1/0.61 = ~1.6 W/m^2 additional power must be radiated from the surface for 1 W/m^2 additional to escape. Stefan’s law shows that for our surface radiator there must be a ~0.29 deg C rise in temperature to emit that additional 1.6 W/m^2. Note that the difference between ~0.29 (no feedbacks) and ~0.22 deg C per W/m^2 (with all feedbacks) shows us a negative net feedback at work.
Also note that this 0.22 deg C/W/m^2 includes all existing feedbacks and it is an average value over all the W/m^2 currently present. It doesn’t include added feedbacks due to the 0.22 deg C rise which contribute additional W/m^2. One must go to the absolute humidity tables and radiation effects of water vapor to gain insight into the most important feedback – which turns out to be miniscule with results that are less than a simple black body example.
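The arithmetic in this comment can be reproduced step by step. A minimal sketch; all the inputs (239 W/m^2 absorbed, 288.2 K surface temperature, unit emissivity) are the figures assumed in the comment, not independent data:

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
ABSORBED = 239.0  # W/m^2 absorbed after albedo, as assumed above
T_SURF = 288.2    # K, assumed mean surface temperature

emitted = SIGMA * T_SURF**4          # ~391 W/m^2 emitted by the surface
trapped = emitted - ABSORBED         # ~152 W/m^2 retained by the atmosphere

t_bare = (ABSORBED / SIGMA) ** 0.25  # ~255 K: blackbody emitting 239 W/m^2
greenhouse_dT = T_SURF - t_bare      # ~33 K greenhouse differential

# Average sensitivity with all existing feedbacks included
sens_with_fb = greenhouse_dT / trapped  # ~0.22 K per W/m^2

# No-feedback sensitivity: 1 W/m^2 extra escaping requires (1 / escape
# fraction) extra W/m^2 from the surface, converted to temperature via
# the linearization dT = dF / (4 * sigma * T^3)
escape = ABSORBED / emitted                              # ~0.61
sens_no_fb = (1.0 / escape) / (4.0 * SIGMA * T_SURF**3)  # ~0.30 K per W/m^2

# sens_with_fb < sens_no_fb is the claimed net-negative feedback
print(sens_with_fb, sens_no_fb)
```

Whether this simple bookkeeping captures the real feedbacks is of course the contested question; the sketch only reproduces the comment’s own arithmetic.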

garymount
April 25, 2012 5:45 am

I have a calculus book. I wasn’t even sure what calculus was a few years ago. With nothing more than this physical manifestation of a person’s thoughts, knowledge and experience, typeset onto 984 pages, I was able to learn what calculus was. I am nearly an expert now, and I intend to become a calculus expert, a mathematician (for my own satisfaction).
Just over 2 years ago I wanted to get up to speed on climate science, to see what I could find out there on the internet, having armed myself with enough math abilities to handle whatever math would be thrown my way, differential equations and all that.
So when I read that one has to listen to a “consensus”, I say B.S. I want to see the math, like what Lord Monckton does here. I don’t need to defer to a higher authority, or any authority. Show me the math.
For others who think that they should be doing something about global warming / climate change, may I suggest that the most effective thing you could do is, study math.

rgbatduke
April 25, 2012 5:59 am

There’s something wrong with heuristic reasoning: it is ILLOGICAL.
And sadly, as Hume showed so very long ago, all reasoning about things that exist is heuristic.
Bummer, eh?
rgb

April 25, 2012 7:44 am

rgbatduke (April 25, 2012 at 5:59 am ):
Thanks for the thoughtful reply and for giving me the opportunity to amplify.
Hume addressed the so-called “problem of induction” – the problem of how to extend logic from the deductive branch that was described by Aristotle to its inductive branch. Hume attempted a proof that this problem has no solution.
In the construction of a model, the builder is repeatedly faced with the necessity of selecting the inferences that the model will make. In each instance in which an inference is to be made there are candidates a, b, … for being made. The builder must select the one correct inference from among the many candidates, but how?
Currently and in virtually every case, model builders make the selection using the intuitive rules of thumb that I call “heuristics.” As you point out, if Hume’s proof is correct we are forced to do so. However, in each case in which a particular heuristic selects a particular candidate for being made as the one correct inference a different heuristic identifies a different candidate as the one correct inference. In this way, the method of heuristics violates Aristotle’s law of non-contradiction. Thus, if Hume’s proof is correct, science lacks a basis in logic.
In the period between 1963 and 1975, the engineer, lawyer and theoretical physicist Ronald Christensen exploited a weakness in Hume’s proof and solved the problem of induction. The weakness was that Hume’s proof did not deal with the possibility of selecting the one correct inference by optimization. In particular, the one correct inference is the optimal inference.
Optimization is facilitated by the fact that the measure of an inference exists and is unique. The unique measure of an inference is the missing information in it for a deductive conclusion, the so-called “entropy.” That an inference has a unique measure and that this measure is its entropy is of colossal philosophical and scientific importance.
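That “missing information” measure can be illustrated with a toy sketch (illustrative only, not Christensen’s method): the Shannon entropy of an inference’s probability distribution is zero for a deductive conclusion and grows with the remaining uncertainty.

```python
import math

def entropy_bits(p):
    """Shannon entropy in bits: the 'missing information' in a
    probabilistic inference relative to a deductive conclusion."""
    return sum(-q * math.log2(q) for q in p if q > 0)

# A deductive conclusion leaves nothing missing: entropy 0.
print(entropy_bits([1.0]))              # 0.0
# A maximally uncertain inference over 4 outcomes is missing 2 bits.
print(entropy_bits([0.25] * 4))         # 2.0
# Partial information lies in between.
print(entropy_bits([0.5, 0.25, 0.25]))  # 1.5
```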
By 1975, the team that I then managed for the Electric Power Research Institute was using Christensen’s ideas, employing logic rather than heuristics in selecting the inferences that would be made by models that were the product of scientific studies. By 1980, Christensen and his colleagues were using these ideas in meteorological studies. Early in the 1980s, Christensen published a voluminous description of his work, the “Entropy Minimax Sourcebook.” Academic philosophers and statisticians ignored it. Thus the situation came to pass in which academia continues to award PhD degrees to illogical scientists while prestigious journals continue to publish their illogical and often pseudo-scientific works.

Greg House
April 25, 2012 8:34 am

Andrew says:
April 25, 2012 at 5:03 am
You must be a lawyer.
=================================
Andrew, the whole AGW thing needs a lawyer’s sort of approach to catch the AGW proponents (both radical and moderate) in their lies. Dealing with them, you should first ask whether their premises are true. Of course, not everyone consciously misleads others; some of them are good people who have themselves been misled by the liars.
In my previous posting on this thread I asked Lord Monckton to present scientific evidence concerning the core issue. It doesn’t look like his scientific answer is going to come soon.
My guess is that warmists produce an enormous number of papers and blog postings to obfuscate the simple truth: they have nothing real in hand on the core issues and are selling speculations built on speculations as scientific facts.
You do not need to wait for Lord Monckton’s answer; you can do some research yourself, and I can predict the outcome: zero evidence. And the whole thing is certainly not about whether I am a lawyer or not.

joeldshore
April 25, 2012 8:39 am

cba says:

Monckton of Brenchley,
There is a way to determine an average sensitivity that includes existing feedbacks and doesn’t require lots of modeling or assumptions.

Also note that this 0.22 deg C/W/m^2 includes all existing feedbacks and it is an average value over all the W/m^2 currently present. It doesn’t include added feedbacks due to the 0.22 deg C rise which contribute additional W/m^2. One must go to the absolute humidity tables and radiation effects of water vapor to gain insight into the most important feedback – which turns out to be miniscule with results that are less than a simple black body example.

You are pretty much just repeating back to Monckton an argument that he loves to make, which is, of course, completely wrong. That number does not include feedbacks, because it considers everything, including changes in water vapor, clouds, etc., to be forcings, not feedbacks. (And it ignores changes in ice albedo, because the 33 C number assumes the albedo doesn’t change.)
As for the strength of the water vapor feedback, I would trust those who actually calculate it over those who “look at absolute humidity tables and radiation effects” and conclude something totally at odds with the calculations.
Oh, and the reason that it is smaller than the no-feedback value is that the one feedback it includes (as you roughly describe) is the one odd-man-out feedback, i.e., the lapse-rate feedback, which is the one feedback known to be negative. (The reason the lapse-rate feedback is included is that it is different in nature from the other feedbacks. It is not really a feedback on the process, but rather a correction to account for the fact that the mid- and upper troposphere, from which most of the radiation escapes to space, warms faster than the surface, so the surface doesn’t have to warm as much to restore radiative balance.)

April 25, 2012 9:58 am

One should not allow some of the comments here to divert attention away from the fact that the arguments from consensus and from appeal to authority or reputation are logical fallacies. Any argument founded upon any of these fallacies is not a valid argument and cannot, therefore, be a sound argument. That is why the assertion that one should believe the climate-extremist notions on the ground that there is a consensus among the experts and that those opposing it are cads and bounders in the pay of Big Oil is altogether without merit, and is intellectually feeble-minded. And that is why, in every opinion poll, members of the public are expressing increasing skepticism. They have seen the savage, persistent and often childish attacks of the climate extremists on those of us who dare to raise legitimate scientific and economic doubts; they have seen the extremists, time and again, take refuge behind their mantra of a consensus of experts, and they don’t buy it any more. They are right not to do so.
In the Aristotelian canon the fallacies of consensus and of appeal to authority or reputation are categorized, along with the argumentum ad hominem, as special instances of the red-herring fallacy, which the medieval schoolmen would later label ignoratio elenchi, the fallacy of fundamental ignorance of the proper method of examining the validity of an argument. The reason why the consensus, reputation, and ad-hom fallacies are fallacies is precisely that each of them introduces to the argument a matter that is extraneous to it and, because it is extraneous, cannot tell us anything either about the logical self-consistency of the argument or about whether the premises are true.
Hank Henry says: “These fallacies don’t make things false.” No, they don’t: but they don’t make things true either. A sound argument is one in which the premises validly entail the conclusion and the premises are true, whereupon the conclusion is necessarily true. A fallacious argument is invalid: therefore, the argument itself tells us nothing about whether its conclusion is true or false.
Peter George says it is reasonable to rely upon a consensus of people who have been right nearly 100% of the time. Yet climate scientists have not been right nearly 100% of the time, or the long-range weather forecasts of no snow by 2010 (followed by the second-coldest, second-snowiest winter since 1659 in central England) or of a “barbecue summer” (followed by the wettest, coldest summer in living memory) would not be so much of a joke that the Met Office has had to abandon its former practice of issuing them. Even if climate scientists had been right nearly 100% of the time, the question whether they were right on other matters is extraneous to the argument that because we are adding CO2 to the atmosphere there will be dangerous warming. Therefore, like it or not, the argument from consensus is a fallacy, and, however many attempts are made to dress it up with sprigs of parsley, it remains a fallacy, and that is that.
Mr. Whittemore, a.k.a. Major9985, says the “headcount” among the supposed experts is “to try to reassure the public that [climate change] is real”. But the attempts to reassure the public of a dubious proposition by the use of an ancient logical fallacy are not, in fact, inspiring in the public a confidence that the supposed experts are correct. And it is intellectual dishonesty of the worst kind to deploy what one knows to be a fallacious argument on the ground that one hopes the public will not realize that it is fallacious.
Mr. Whittemore goes on, in that sneering tone of his, to say, “You get this 1 Celsius of warming [per CO2 doubling] from unmeasured and unmeasurable, unknown and unknowable climate sensitivity?” No, I advanced the estimate of 1 Celsius of warming per CO2 doubling on the basis of the 3.1 Watts per square meter of radiative forcing from all anthropogenic greenhouse gases since 1750, less 1.1 Watts per square meter of negative non-greenhouse forcings, compared with just 0.9 Celsius of warming since 1750, of which perhaps less than half was anthropogenic. In short, rather than trying to model climate sensitivity using the seven unknown and unknowable unknowns, I approximated it by using measurements and observations from the real world. It is the IPCC that attempts to derive its 3.3 Celsius of warming per CO2 doubling from the use of unmeasured and unmeasurable parameters. It is wrong to do so, and still more wrong to claim a high confidence for its prediction.
Joel Shore says theory does not predict that the global mean surface temperature of the Moon is 270 K. Yes it does: which is why he will find this value explicitly stated in NASA’s lunar fact-sheet. The 255 K characteristic-emission temperature of the Earth is calculated by a similar method, and may, therefore, be similarly exaggerated. Mr. Shore makes the mistake of assuming that, thanks to the Hölder inequality as applied to the non-uniform latitudinal distribution of temperatures, the true mean temperature of a planetary body will be less than the theoretically-predicted mean temperature. However, as the IPCC itself makes clear, the terrestrial value of the Planck parameter – and consequently of the characteristic-emission temperature – is approximately one-sixth higher than the theoretically-determined value, precisely to allow for the latitudinal distribution of temperatures. Yet the measured lunar temperature is very considerably lower than that which theory predicts, and the same may well be the case on Earth. Like it or not, the Diviner mission has cast a legitimate doubt upon the terrestrial value of the Planck parameter, which has probably been exaggerated, taking climate sensitivity with it.
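The “approximately one-sixth higher” figure can be checked numerically. This sketch assumes the commonly cited model-mean Planck response of 3.2 W/m^2 per Kelvin and the 255 K characteristic-emission temperature; it is an illustration, not anyone’s official derivation.

```python
SIGMA = 5.670374419e-8  # W m^-2 K^-4, Stefan-Boltzmann constant

T_emission = 255.0  # K, characteristic-emission temperature
# Pure blackbody Planck parameter: invert dF/dT = 4*sigma*T^3.
planck_bb = 1.0 / (4 * SIGMA * T_emission**3)  # K per W/m^2
# Assumed: the conventionally cited model-mean Planck response, 3.2 W/m^2/K.
planck_conventional = 1.0 / 3.2                # K per W/m^2
excess = planck_conventional / planck_bb - 1

print(f"blackbody Planck parameter:    {planck_bb:.4f} K/(W/m^2)")
print(f"conventional Planck parameter: {planck_conventional:.4f} K/(W/m^2)")
print(f"excess over blackbody value:   {excess:.1%}")  # roughly one-sixth
```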
Mr. Shore then makes the bizarre suggestion that the values of individual temperature feedbacks are irrelevant to the issue of climate sensitivity. This, too, is simply wrong. The IPCC assigns almost two-thirds of all CO2-driven warming to temperature feedbacks, so their collective value is unquestionably essential to the determination of climate sensitivity: and that collective value cannot be reliably determined, not least because the true values of the individual feedbacks that constitute the feedback sum cannot be reliably determined, nor readily distinguished from the contributions of other feedbacks, or from the forcings that triggered them. This is a highly relevant consideration.
Mr. Shore also attacks Professor Lindzen’s papers of 2009 and 2011 demonstrating a net-negative feedback and a climate sensitivity of just 0.7 Celsius per CO2 doubling. He says the first version of the paper was “bad”. Yet the statistical infelicities of the paper were not in fact bad enough to change the result. As the ocean surface warms or cools, the satellites can measure the corresponding changes in outgoing radiation from the Earth to see how much of it is being trapped as the official theory predicts. The results seem to show that most of the radiation is escaping to space much as usual, so that climate sensitivity is low. That is, no doubt, an inconvenient result, but – as far as I know – the second version of Professor Lindzen’s paper has not been seriously challenged in the climatological literature. It is one of a growing body of papers that establish, by a variety of empirical rather than numerical methods, that climate sensitivity is low.
Mr. House challenges me to prove the existence of the greenhouse effect. He may care to refer to any elementary textbook of climatology. However, he is – as usual – off topic. For it is my contention in the head posting that there is no scientific basis for the official conclusion that, as a result of our enhancement of the greenhouse effect, catastrophic global warming will be likely. If there were no greenhouse effect, then my conclusion would be true a fortiori.
Brendan H says that, in a logical argument, “the premises and the argument need not be explicit”. In logic it is a requirement that each premise and the conclusion they entail be made explicit. A mere declarative statement, such as “There’s a CONSENSUS!”, may be either a premise or a conclusion, but on its own it is simply a statement and no argument exists.
Son of Mulder says the first differential of the Stefan-Boltzmann equation, from which the Planck parameter is derived, is a non-linear feedback. Some months ago I discussed the Planck Parameter with Professor Roe, who shares my own opinion that the Planck parameter is better understood as an essential ingredient in the reference-frame for determining both zero-feedback direct warming and the post-feedback indirect warming that the direct warming is thought to trigger. If so, it should be expressed in Kelvin per Watt per square meter, rather than in Watts per square meter per Kelvin. Again, however, the question whether feedbacks are linear or non-linear is not relevant to the argument in my head posting. If some of the feedbacks are non-linear, then in addition to the seven unknowable unknowns on the right-hand side of the climate-sensitivity equation there are several additional unknowable unknowns in the shape of the non-linearities in the feedbacks, from which my conclusion that no reliable conclusion as to climate sensitivity can be drawn from an equation in multiple unknowable unknowns manifestly follows a fortiori.
CBA deploys a beautifully simple demonstration of low climate sensitivity based on the observation that the total forcing from the presence as opposed to absence of all greenhouse gases in the atmosphere is about 110 Watts per square meter, and that the corresponding increase in temperature, after very nearly all feedbacks have acted, is only 33 Celsius degrees. However, the usual suspects have seen how dangerous that argument is to their notion that climate sensitivity is high, so they are now maintaining that the 110 Watts per square meter, though plainly described as Watts per square meter (i.e. as forcings) rather than as Watts per square meter per Kelvin (i.e. as feedbacks) nevertheless include feedbacks. They don’t, of course: but this moving of the goalposts on their part, knowing that the average Minister or journalist will not have a clue what they are on about, is exactly how they have kept this long-dead scare going at official level when everyone else has long since seen through the nonsense.
Gary Mount has it right: he has just taught himself calculus (no mean feat, that) and recommends that those who want to Save The Planet should study math first.
In recent weeks, thanks to Anthony’s courtesy, I have now advanced three fatal arguments against taking any action about “global warming”: first, that even if it were going to happen at the officially-predicted rate (which it is not), it would be at least an order of magnitude more cost-effective to do nothing about it now and to adapt to any adverse consequences of warming later; secondly, that most of the arguments most frequently deployed by the climate extremists are instances of logical fallacies that would have been recognized as such at once by an educated man in any previous generation from the Middle Ages until half a century ago; and thirdly, that there are too many unknowable unknowns in the climate-sensitivity equation to allow anyone to maintain that there is a serious likelihood of a future warming rate significantly higher than what we have seen globally over the past 150 years. Two of these arguments rely upon math; the other relies upon logic. All I am asking for is a degree of intellectual honesty, mathematical rigor and logical analysis in addressing the climate question that has been absent among the climate extremists until now. “The road to the truth,” said the father of the scientific method, “is long and hard: but that is the road we must follow.”

Brendan H
April 25, 2012 11:46 am

Monckton of Brenchley: “Brendan H says that, in a logical argument “the premises and the argument need not be explicit”. In logic it is a requirement that each premise and the conclusion they entail be made explicit.”
As I said, in practice, in informal situations, it is unlikely that writers or speakers will fully explicate their argument, and even less so as a syllogism.
“A mere declarative statement, such as “There’s a CONSENSUS!”, may be either a premise or a conclusion, but on its own it is simply a statement and no argument exists.”
In the context of the comment, the claim was almost immediately followed by a reference to the consensus argument. Even the not-so-alert reader would have linked the two. However, I am happy to accept that you do not consider the above-expressed claim of consensus to be a logical fallacy.
Presumably, you are also happy to accept that the claims made about logic by a person who is Classically trained in logic are about as good as the next person’s.

rgbatduke
April 25, 2012 12:02 pm

In the period between 1963 and 1975, the engineer, lawyer and theoretical physicist Ronald Christensen exploited a weakness in Hume’s proof and solved the problem of induction. The weakness was that Hume’s proof did not deal with the possibility of selecting the one correct inference by optimization. In particular, the one correct inference is the optimal inference.
By which time he was already decades late. The problem of induction was solved by the physicist Richard Cox in a set of papers in the 1940s, and solved (indirectly) at a slightly later date by Claude Shannon’s information theory. The two approaches were shown to be more or less equivalent, and to form the “logic” of science, by E. T. Jaynes in “The Mobil Lectures” in the mid-1950s. Those lectures were turned into a book that Jaynes never quite finished but that was published posthumously as “Probability Theory: The Logic of Science”. The draft of this book and the Mobil lectures themselves are both available online, although the book is well worthwhile for anyone who cares to learn the mathematically precise basis of scientific inference. Cox’s work is available in a monograph entitled “The Algebra of Probable Inference”, again a must for the shelves of any thoughtful person interested in this general topic.
Both Cox and Jaynes acknowledge, however, that the true master of this was none other than John Maynard Keynes, who wrote a lovely book on probability theory back in 1921. Keynes introduced the principle of indifference, which eventually became the maximum-entropy principle used by all of the rest (originally championed by Jaynes, IIRC) and was further advanced by Jeffreys in 1939.
This is still hardly complete, but it is close. It is also important to note that “the correct inference” is not the optimal inference. All the correct theory (and science) can do is make things more or less plausible, given the evidence and the rest of the (Bayesian) network of plausible evidence supported belief. In my book Axioms (which I’m still working on, sigh) I try to reduce the idea to a pithy little sound bite: It is best to believe the most that which you can doubt the least given the evidence and the network of best evidence-supported belief.
Even this isn’t quite adequate — in order for science to function as a system of improving knowledge, that is, an optimizing algorithm, it has to be quite tolerant of error. That is one of many reasons why “correct inference” is an ill-defined concept, let alone term. One can argue that x or y is the best inference to make from the data, given this or that, but because Bayesian reasoning depends on the priors, and we don’t KNOW the priors but are trying to optimize on a varying multivariate surface with lots of complexity, in many cases there is no “best” inference.
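Keynes’s principle of indifference and the maximum-entropy principle coincide in the simplest case: with no constraints, the uniform distribution maximizes entropy. A quick numerical check (a toy sketch, not a proof):

```python
import math
import random

def entropy(p):
    """Shannon entropy in nats of a discrete distribution."""
    return sum(-q * math.log(q) for q in p if q > 0)

random.seed(0)
n = 5
uniform = [1.0 / n] * n
h_uniform = entropy(uniform)  # equals log(n), the maximum possible

# Randomly drawn distributions over n outcomes never beat the uniform one.
for _ in range(10_000):
    w = [random.random() for _ in range(n)]
    s = sum(w)
    p = [x / s for x in w]
    assert entropy(p) <= h_uniform + 1e-12

print(f"maximum entropy = log({n}) = {h_uniform:.4f} nats")
```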
If you are interested, you can grab http://www.phy.duke.edu/~rgb/axioms.php to take a look at the first 1/3 or so of the book in draft form, but it includes most of this part of it.
rgb

April 25, 2012 1:51 pm

rgbatduke (April 25 at 12:02 pm):
Thanks for taking the time to reply. It sounds as though you are unaware of some of the history.
Though Shannon contributed to solving the problem of induction, he didn’t complete the job. Among the unfinished business left in Shannon’s wake was a method for the assignment of numerical values to probabilities. Maximum-likelihood estimation doesn’t work, as it overestimates the amount of information in the data, causing the model-building process to blow up. Christensen solved this problem with an application of the principle of entropy maximization which he called “maximum entropy expectation.”
Though Jaynes’s posthumously published book is subtitled “The logic of science,” Jaynes seems to have been unaware of two necessary components of a solution to the problem of induction. One is maximum entropy expectation. The other is entropy minimization. In the absence of entropy minimization, the logic fails to converge to the classical logic in the limit that the missing information is reduced to nil. In his written works, Jaynes seems to have applied only entropy maximization. I’ve read many of these works but not all.
Many people contributed to solving the problem of induction. There is strong evidence that the person who finished the job is Christensen. In one of his published works, Christensen reports that the year of this invention was 1963. Eleven years later, while managing a program of theoretical research for the Electric Power Research Institute, I conducted a worldwide search for persons with the ability to construct a model from the available information using information theory. A single person exhibited this ability. That person was Ron Christensen.
Regarding the question of the identity of the prior probability density function: in the absence of available information, the prior PDF must be uninformative. It is generally true that uninformative prior PDFs are of infinite number. In this circumstance, the model builder has no option but to select one of them by the method of heuristics. Historically, the uniform prior has been popular among model builders. According to IPCC AR4, a flat prior PDF has been popular among climatologists seeking to extract posterior PDFs for the equilibrium climate sensitivity from observational data. The associated procedure is illogical owing to its violation of non-contradiction. No logical procedure for extracting a posterior PDF for the equilibrium climate sensitivity from observational data is possible.
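The non-contradiction problem with “uninformative” priors can be made concrete with a toy Bayesian calculation. This is entirely illustrative: the Gaussian “observation” and the two priors are assumptions, not anyone’s actual climate analysis. A prior flat in a sensitivity-like parameter S and a prior flat in its reciprocal both claim to be uninformative, yet they yield different posteriors from the same data.

```python
import math

# Toy setup (assumed, for illustration only): one noisy Gaussian
# "observation" of a sensitivity-like parameter S.
obs, sigma = 3.0, 1.5
grid = [0.5 + 0.001 * i for i in range(9501)]  # S from 0.5 to 10.0

def posterior_mean(prior):
    """Posterior mean of S on the grid, for a given prior density in S."""
    w = [prior(s) * math.exp(-0.5 * ((s - obs) / sigma) ** 2) for s in grid]
    z = sum(w)
    return sum(s * wi for s, wi in zip(grid, w)) / z

mean_flat_in_S = posterior_mean(lambda s: 1.0)           # flat prior in S
mean_flat_in_lam = posterior_mean(lambda s: 1.0 / s**2)  # flat prior in lambda = 1/S

print(f"posterior mean, prior flat in S:   {mean_flat_in_S:.2f}")
print(f"posterior mean, prior flat in 1/S: {mean_flat_in_lam:.2f}")
# The two 'uninformative' choices disagree: the heuristic selection of a
# prior, not the data alone, is driving part of the answer.
```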
There is an exception to the rule that uninformative prior PDFs are of infinite number. This exception is the basis for maximum entropy expectation. As the associated uninformative prior PDF is unique, violation of non-contradiction is avoided.

Vince Causey
April 25, 2012 12:05 pm

Tom in Florida says:
April 24, 2012 at 7:53 pm
Vince Causey says:
April 24, 2012 at 2:00 pm
“I assume you believe in your analogy”
Vince, it wasn’t my analogy, it was PeterGeorge posting April 24, 2012 at 12:07 pm. I was commenting on how bad an analogy it was, to wit:
“Horrible analogy. Your scenario holds a one time event to happen at an expected time and place. The results of the prediction will be well known at that time with no further threat to worry about. Once and done with people going back to their normal lives. NCCDs (natural climate change deniers) want to permanently change the way we live, redistribute wealth and seek the creation of a world government to keep the rest of us in line.”
Yes, my apologies. I didn’t take time to see that you were in fact quoting PeterGeorge.

joeldshore
April 25, 2012 12:46 pm

Monckton of Brenchley says:

Mr. Shore makes the mistake of assuming that, thanks to the Holder inequality as applied to the non-uniform latitudinal distribution of temperatures, the true mean temperature of a planetary body will be less than the theoretically-predicted mean temperature. However, as the IPCC itself makes clear, the terrestrial value of the Planck parameter – and consequently of the characteristic-emission temperature – is approximately one-sixth higher than the theoretically-determined value, precisely to allow for the latitudinal distribution of temperatures.

Where does the IPCC make this clear?!?! I have no clue what you are talking about here. And the calculation for the moon is so trivial (since there is no complicating factor of where in the atmosphere the emission is coming from) that one can clearly see that the 270 K figure is a constraint on the fourth root of the average of T^4, not on the average of T itself, which will be much lower because of the dramatic temperature variations on the moon.
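This point can be sketched numerically under crude assumptions (a solar constant of 1361 W/m^2, a lunar Bond albedo of 0.11, a slowly rotating moon with each day-side patch in local radiative equilibrium, and a flat 100 K night side standing in for the cold-side measurements): the fourth root of the average of T^4 comes out near 270 K while the average of T itself is far lower.

```python
# Numerical sketch: effective temperature vs. true mean temperature of an
# airless, slowly rotating body. All inputs below are assumed round numbers.
SIGMA = 5.670374419e-8       # W m^-2 K^-4
S, albedo = 1361.0, 0.11     # assumed solar constant and lunar Bond albedo
absorbed = S * (1 - albedo)

# Effective temperature: the fourth root of the *average of T^4*.
T_eff = (absorbed / (4 * SIGMA)) ** 0.25  # ~270 K

# Day side: local radiative equilibrium, T(mu) = (absorbed*mu/sigma)^(1/4),
# mu = cos(solar zenith angle). On a sphere, area is uniform in mu.
N = 100_000
day_temps = [(absorbed * ((i + 0.5) / N) / SIGMA) ** 0.25 for i in range(N)]
T_night = 100.0  # K, assumed flat night-side temperature
T_mean = 0.5 * (sum(day_temps) / N) + 0.5 * T_night

print(f"effective temperature (avg of T^4)^(1/4): {T_eff:.0f} K")
print(f"true mean temperature (avg of T):         {T_mean:.0f} K")
```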

Mr. Shore then makes the bizarre suggestion that the values of individual temperature feedbacks are irrelevant to the issue of climate sensitivity. This, too, is simply wrong. The IPCC assigns almost two-thirds of all CO2-driven warming to temperature feedbacks, so their collective value is unquestionably essential to the determination of climate sensitivity:

Yes, their collective value is unquestionably essential. But, the point is that most calculations and most empirical determinations of the climate sensitivity just obtain this collective value and not the individual values of each feedback. Soden and Held is one of the few papers to study all the models and attempt to break down the contribution from each individual feedback (which involves some estimation since the feedbacks are not simply additive). Hence, saying that only one paper is referenced as giving the individual values does not imply what you seem to think it implies.

but – as far as I know – the second version of Professor Lindzen’s paper has not been seriously challenged in the climatological literature. It is one of a growing body of papers that establish, by a variety of empirical rather than numerical methods, that climate sensitivity is low.

Lindzen and Choi’s second version was published only in the latter half of 2011 – and in a rather obscure journal to boot. There hasn’t been enough time for it to be challenged in the literature. One can almost always argue for one’s pet scientific conclusion in a scientific field of any significant size by using papers so new that nobody has had time to debunk them yet.
As for the “growing body of papers” that you speak of, it is still a rather small body and most of them are known to have serious flaws. And, the many papers that establish that climate sensitivity is in the range of what the IPCC concludes also are based on empirical data.

CBA deploys a beautifully simple demonstration of low climate sensitivity based on the observation that the total forcing from the presence as opposed to absence of all greenhouse gases in the atmosphere is about 110 Watts per square meter, and that the corresponding increase in temperature, after very nearly all feedbacks have acted, is only 33 Celsius degrees. However, the usual suspects have seen how dangerous that argument is to their notion that climate sensitivity is high, so they are now maintaining that the 110 Watts per square meter, though plainly described as Watts per square meter (i.e. as forcings) rather than as Watts per square meter per Kelvin (i.e. as feedbacks) nevertheless include feedbacks.

No, we are saying that because it is obvious to anybody who understands what a forcing and a feedback are, and the contextual nature of those terms, that we are in fact absolutely correct. In particular, it is clear that the 110 W/m^2 includes the radiative effect of the water vapor currently in the air, and the whole point is that, if the water vapor feedback behaves the way all the empirical evidence and theoretical understanding says it does, then most of that water vapor is in the atmosphere because of the feedback on the non-condensable greenhouse gases. In other words, the non-condensable greenhouse gases account for only a small fraction of the 110 W/m^2, but if you remove them from the atmosphere you lose much of the rest of the 110 W/m^2 because, as the atmosphere cools, the water vapor condenses out.

cba
April 25, 2012 1:50 pm

joelshore;
not quite right there. We know that there is 150 W/m^2 blocked from leaving due to all atmospheric effects since the surface is going to emit around 390 and for a rough balance, 239 W/m^2 is all that can escape on average. We know that to emit 239 W/m^2, the Earth without an atmosphere would have to be 33 deg C lower. One must hold variables constant to find effects of other variables – in math, it’s called partial derivatives. That 33 deg C per 150W/m^2 is the real effect averaged which includes all feedbacks to date. THIS is what the Earth has done, not what some video game model says it should do.
For small perturbations to these numbers, the results will change linearly. All feedbacks can only work through changes in temperature. If you add a forcing of 1 W/m^2, temperature only has to increase by 0.22 deg C to compensate; if you remove a forcing of 1 W/m^2, temperature only has to drop 0.22 deg C. If you compare this to a grey body or blackbody with an atmosphere that blocks the same fraction of outgoing power as ours does, you would have to change the temperature by 0.3 deg C to achieve a 1 W/m^2 increase or decrease escaping to space. 0.3 − 0.22 = 0.08 deg C of net negative feedback.
Now, if you want to pretend that this is not the situation, and that the 0.22 deg C/W/m^2 is actually only from the forcing and contains no feedback contributions, then you may go look up the nature of absolute humidity, which (along with temperature) determines the water-vapor absorption and the relative humidity. Note that holding RH constant is a valid and acceptable assumption during temperature changes in climate research. What results is the realization that small changes in temperature produce really small changes in absolute humidity; that water vapor is only two to three times more potent than CO2 per doubling; and that, like CO2, it is a log function over about 10 or 11 doublings. Not even a 5 deg C total temperature change results in more than about a 30% increase in H2O vapor at constant RH, and that is a long way from the 100% increase necessary for a doubling. In short, the IPCC’s professed major feedback can add about 3.1 W/m^2 to the total blocking, as compared with the CO2 forcing of 3.7 W/m^2 per doubling, assuming a 5 deg C total rise. In fact, the total temperature rise due to the forcing plus its major feedback comes to 1.5 deg C, which is 3.5 deg C less than the temperature needed to create the 5 deg C rise and the humidity increase. After your forcing and feedback have been accounted for, you are still short by 16 W/m^2 that must come from the “lesser” feedbacks instead of the major H2O-vapor feedback. No matter what you try, the sensitivity from adding H2O vapor to the forcing yields only a tiny fractional increase in T above the rise due to the forcing. Unlike the CAGW view, this is a stable equilibrium feedback.
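The constant-relative-humidity step in this argument can be sketched with the Magnus approximation for saturation vapor pressure. This is an assumption (other formulae differ by a few percent), and it gives nearer 37% than 30% for a 5 K warming; the point that this is far short of the 100% needed for a doubling stands either way.

```python
import math

# Magnus approximation for saturation vapor pressure over water (assumed
# formula; coefficients vary slightly between references). At constant
# relative humidity, the water-vapor content scales with e_sat(T).
def e_sat(t_celsius):
    """Saturation vapor pressure in hPa."""
    return 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

t0, dt = 15.0, 5.0  # assumed base temperature (deg C) and warming (K)
increase = e_sat(t0 + dt) / e_sat(t0) - 1

print(f"vapor increase for a {dt:.0f} K warming at constant RH: {increase:.0%}")
# Well short of the 100% increase needed to double the water-vapor burden.
```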

April 25, 2012 2:07 pm

Brendan H seems dismayed to find that the key arguments he and other climate extremists have been using are logical fallacies. He has made various attempts to confuse the issue – which was not even raised in my head posting. That posting explicitly concerned itself with demonstrating by mathematics why there is no scientific basis for the existence of any consensus as to how much global warming our enrichment of the atmosphere with greenhouse gases may cause.
He persists in trying to maintain that there is no need for the premises and conclusion of an argument to be made explicit, whereas it is essential to the rigor of logic that they be made explicit even if they were not explicit at the time they were uttered. For instance, when the bossy environmentalist with the messy hair at Union College said, “There’s a CONSENSUS!”, she was making a declarative statement which, on its own, did not and does not constitute an argument. However, in the context, she was attempting to justify the notion that anthropogenic global warming will be serious enough to be catastrophic by asserting that there was a consensus about it. The premise, then, is that there is a consensus as to manmade climate disaster; the conclusion is that the threat of manmade climate disaster is real and credible, not so much in se as because there is a consensus. And that, once the premise and the conclusion have been made explicit, is a logical fallacy – or, as I bluntly put it at the time, intellectual baby-talk.
Brendan H. indulges in further intellectual baby-talk when he writes: “Presumably, you are also happy to accept that the claims made about logic by a person who is Classically trained in logic are about as good as the next person’s.” No, of course not: that is a characteristically sloppy formulation. Are the fallacies that I have said are fallacies fallacies? Yes, they are: one can look them up in any textbook of logic. The fact that they are fallacies does not depend upon my Classical training: but my Classical training allows me to recognize fallacies more quickly and more correctly than someone who has taken no trouble to learn the elements of logic.
Brendan H, in attempting to assert that the consensus fallacy is not really a fallacy, and that no part of an argument need be made explicit before the logician considers whether the argument is valid, departs altogether from the norms of logic. That will not do. It is the obligation of all scientific enquirers to make a genuine attempt to reach the truth rather than attempting to deploy sophistical arguments that are intended at best to confuse the issue and at worst to lead to a prejudiced and false conclusion. It does not matter what one wants the truth to be: it matters what the truth is.

Greg House
April 25, 2012 2:18 pm

Monckton of Brenchley says:
April 25, 2012 at 9:58 am
Mr. House challenges me to prove the existence of the greenhouse effect.
=================================================
This is not true. I did not ask you to prove that.
I specifically asked you to prove your claim you made on this thread, that (your exact words) “the fact that adding greenhouse gases such as CO2 to an atmosphere such as ours will cause some warming has been established by oft-repeated experiments”.
I asked you specifically to (quote) “present clear, distinct references and links to who, when, where and how experimentally physically measured (not calculated on a statistical basis) how much increase in physical temperature “greenhouse gases” produce. Physics, please, no speculations.”
Your answer unfortunately does not contain any “clear, distinct references and links to who, when, where and how experimentally physically measured (not calculated on a statistical basis) how much increase in physical temperature “greenhouse gases” produce.” Hence your claim remains unsupported by evidence.
Maybe you did not understand what you had been asked to present, but I hope we have now cleared that up.
So, Christopher, please present clear, distinct and exact references and links to who, when, where and how experimentally physically measured (not calculated on a statistical basis) how much increase in physical temperature “greenhouse gases” produce.

April 25, 2012 3:23 pm

To Robert G. Brown @ 4/25 – 5:59 a.m.:
Are you claiming that numbers, sets, groups, integrals, conic sections, etc., etc. do not exist?
To Robert and Terry Oldberg:
Could each of you provide a concise statement of the “problem of induction” as you understand it.

Reply to  Leigh B. Kelley
April 25, 2012 5:40 pm

Leigh B. Kelley
“Induction” is the process by which a model is extracted from observational data and other sources of information. The “problem of induction” is to provide a logical basis for induction.

rgbatduke
April 25, 2012 3:27 pm

Though Shannon contributed to solving the problem of induction, he didn’t complete the job. Among the unfinished business that was left in Shannon’s wake was a method for the assignment of numerical values to probabilities. Maximum likelihood estimation doesn’t work as it overestimates the amount of information in the data causing the model building process to blow up. Christensen solved this problem with the application of the principle of entropy maximization which he called “maximum entropy expectation.”
Well, he really didn’t start the job. Information theory wasn’t concerned with induction per se, it was (and continues to be) concerned with communication. It was Jaynes, primarily, who recognized that it also gave e.g. statistical mechanics an a priori basis that didn’t depend on Gibbs.
Cox, on the other hand, actually published something from which Shannon’s result can be trivially derived (slightly before Shannon, I should point out), but was looking directly at probability with the specific intent of putting stat mech on firmer ground using the ideas of Keynes and Jeffreys. He is the person who introduced what Jaynes called “desiderata” (but which are really just axioms of the theory) — the Cox axioms, which are basically a prescription for how to represent and alter plausibilities. From the Cox axioms one can derive the algebra of probability theory (e.g. the probabilistic algebra of Laplace or Boole as laid out in Boole’s monograph) and hence the theory of inference, as is laid out quite concisely in his monograph (or by Jaynes).
I believe it was Jaynes who was almost singlehandedly responsible for the notion of maximum entropy being equivalent to the principle of indifference and for establishing it as a fundamental principle of probability theory. As evidence, I can only offer the Wikipedia page:
http://en.wikipedia.org/wiki/Principle_of_maximum_entropy
and quote from it: “The principle was first expounded by E.T. Jaynes in two papers in 1957 where he emphasized a natural correspondence between statistical mechanics and information theory. In particular, Jaynes offered a new and very general rationale why the Gibbsian method of statistical mechanics works. He argued that the entropy of statistical mechanics and the information entropy of information theory are principally the same thing. Consequently, statistical mechanics should be seen just as a particular application of a general tool of logical inference and information theory.” Note well the date.
Jaynes later corrected his earlier work to give precedence to Cox and to acknowledge the cleanness of Cox’s axioms, as opposed to the fairly convoluted way of proceeding from information theory. Nobody else in the world noticed at that point, of course — Shannon’s theorem was by then established as was the maximum entropy connection proceeding from it.
http://bayes.wustl.edu/etj/articles/mobil.pdf
(the Mobil lecture). Note well this is a republishing of his typed up notes for the lecture he gave in 1957. Note also that although he gives major kudos to Shannon, the three principles he associates with the theory — real number plausibilities, consistency, common sense variation (plus all of the axiomatic baggage of real numbers as an ordinal system, there but not explicitly acknowledged) are all Cox, in a paper published in 1946, solidly before Shannon’s publication. The point is that this is an idea that suddenly “came of age” in the decade between 1946 and 1956, but it was Jaynes who seems to have best understood its immense power (and was its most indefatigable advocate).
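The maximum-entropy principle under discussion admits a compact illustration. The sketch below uses arbitrary numbers of my own (three states and a target mean, nothing from Jaynes's papers): constrained to a fixed mean "energy", the entropy-maximizing distribution over discrete states is the Gibbs/Boltzmann form p_i ∝ exp(-beta*E_i), with beta found from the constraint — exactly the correspondence between statistical mechanics and inference that Jaynes emphasized:

```python
import math

# Maximum entropy subject to a mean-"energy" constraint yields the
# Gibbs/Boltzmann distribution p_i ~ exp(-beta * E_i). We solve for beta
# by bisection so the constrained mean matches a target value.
E = [0.0, 1.0, 2.0]   # energies of three states (illustrative numbers)
target_mean = 0.7     # imposed constraint on the mean energy <E>

def gibbs(beta):
    """Normalized Boltzmann weights for inverse 'temperature' beta."""
    w = [math.exp(-beta * e) for e in E]
    z = sum(w)
    return [x / z for x in w]

def mean_energy(beta):
    return sum(p * e for p, e in zip(gibbs(beta), E))

# mean_energy(beta) decreases monotonically in beta, so bisect:
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = (lo + hi) / 2
    if mean_energy(mid) > target_mean:
        lo = mid
    else:
        hi = mid
beta = (lo + hi) / 2

p = gibbs(beta)
entropy = -sum(pi * math.log(pi) for pi in p)
print(beta, p, entropy)
```

With no constraint at all (beta = 0) the same machinery returns the uniform distribution, which is the principle-of-indifference connection mentioned above.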
As for confounding Hume — not really, Hume still gets the last word even after the methodology of science and the theory of epistemology is put on as sound a footing as it will ever get. Hume was wrong — one can derive the algebra of inference from a small set of axioms, and it turns out to be Bayesian probability theory. Hume was right — one cannot derive the axioms themselves, and one of them is the “axiom of common sense” for all intents and purposes — evidence should increase our degree of belief in consistent propositions and decrease our degree of belief in inconsistent ones — meaning that you still cannot escape the need for heuristics and can never establish any proposition about the real world as certain truth.
I personally like to view the resulting theory of knowledge and empirically founded, mathematically and logically sound science as the ultimate validation of both Hume and Descartes. Descartes invented methodological doubt as a path to truth. Hume realized that “truth” was a chimera, a mirage that fades as we apply Descartes’ skepticism in its fullest form, since everything can be doubted (except, perhaps, the immediate empirical reality of our own existence) if you try hard enough. Cox and Jaynes succeeded in quantifying doubt, so that instead of truth we seek best belief, that which we can doubt the least.
As for Ron Christensen — I have no clear idea who you are talking about. He isn’t even listed as a reference on the maximum entropy page, and the only Ron Christensen I could find who is a professor of statistics got his BA in mathematics in 1974, so he could hardly have done anything fundamental or original with maximum entropy before then, although he is indeed a Bayesian statistician now.
rgb

April 25, 2012 5:31 pm

rgbatduke:
The bibliography at http://www.knowledgetothemax.com provides citations to the work of the right Ronald Christensen. The one who is an academic statistician is the wrong Ronald Christensen. Using ideas that were first developed by the right Ronald Christensen plus others that were developed by predecessors of Christensen that include Cardano, Clausius, Boltzmann, Gibbs, Lebesgue, Shannon and Jaynes, it is possible to build a model without resorting to the method of heuristics. Optimization replaces the method of heuristics. Absent Christensen’s contributions, this would be impossible.

April 25, 2012 3:37 pm

Mr. House, yet again, is sneeringly off topic. My head posting demonstrated by mathematics that there is no scientific basis for assuming a high climate sensitivity by the IPCC’s methods, because all seven of the key parameters in the fundamental equation of climate sensitivity are unmeasured and unmeasurable, unknown and unknowable, and because the climate behaves as a mathematically-chaotic object, so that the reliable, long-term prediction of future climate states is not possible by any method.
Precisely because the relevant quantities are unknown and unknowable, it ought to be blindingly obvious that it is not possible to do as Mr. House asks and demonstrate how much warming greenhouse gases produce: for that is the question that the climate-sensitivity equation attempts to answer but cannot answer. That greenhouse gases produce warming is not in doubt, and has been well established by experiment, over and over and over again. But how much warming they produce is unknown and unknowable, because – it cannot be said often enough – all of the relevant quantities in the fundamental equation of climate sensitivity are unknown and unknowable to anything like a sufficient precision.
It would perhaps have been helpful if Mr. House had read the head posting with due care and attention. Had he done so, he would have grasped my central point, which is surely not expressed in an obscurantist fashion; and, whether or not he agreed with that point, he would have realized that it was not and is not reasonable for him to persist in asking me to provide precisely what that posting says it is impossible to provide. Once again, it seems that Mr. House is not genuinely seeking objective scientific truth, but is trying inexpertly to cloud the issue in an unconstructive manner.

April 25, 2012 4:10 pm

Mr. Shore asks where the IPCC makes it plain that the terrestrial value of the Planck parameter is greater than, not less than, the value of 0.267 Kelvin per Watt per square meter that is derivable from the theoretically-determined 255 K by taking the first differential of the Stefan-Boltzmann equation, on the assumption that the radiative flux at the characteristic-emission altitude is 238 Watts per square meter. He may like to read the footnote on page 631 of the IPCC’s 2007 report, where the value of the Planck parameter is deducible as the reciprocal of 3.2 – i.e. 0.313 Kelvin per Watt per square meter, or approximately one-sixth greater than that given by the first differential of the fundamental equation of radiative transfer; the references on which the IPCC relies make it plain that the reason for this difference is latitudinal temperature variation. The discrepancy between theory and observation on the Moon, by which the true mean lunar temperature is considerably below, rather than somewhat above, the theoretically-determined 270 K given in NASA’s lunar fact sheet, is accordingly significant and has obvious implications for the determination of climate sensitivity on Earth.
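The two Planck-parameter values contrasted in this comment can be checked in a few lines. The sketch below simply evaluates dT/dF = T/(4F), the first differential of the Stefan-Boltzmann relation F = sigma*T^4, at the 255 K and 238 W/m^2 stated above, against the reciprocal-of-3.2 figure attributed to the IPCC:

```python
# Planck parameter from the first differential of F = sigma*T^4:
# dF/dT = 4F/T, hence dT/dF = T/(4F), evaluated at the
# characteristic-emission temperature and flux quoted in the comment.
T_e = 255.0   # characteristic-emission temperature, K
F_e = 238.0   # radiative flux at the characteristic-emission altitude, W/m^2

lambda_sb = T_e / (4 * F_e)      # ~0.268 K per W/m^2 (the "0.267" figure)
lambda_ipcc = 1 / 3.2            # ~0.313 K per W/m^2 (AR4 footnote value)

ratio = lambda_ipcc / lambda_sb  # ~1.17, i.e. roughly one-sixth higher
print(lambda_sb, lambda_ipcc, ratio)
```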
Mr. Shore says that the feedback sum is obtained in the models by methods other than considering the value of each individual feedback. However, the IPCC’s discussion of feedbacks simply does not bear out Mr. Shore’s assertion: each feedback is considered separately. One agrees with Mr. Shore that it is remarkably difficult to determine the overlap between one feedback and another: that is yet another reason why determining climate sensitivity by the numerical methods on which the IPCC relies, rather than by observing what has actually happened in the climate, is doomed to fail – and why there is simply no scientific basis for assuming that climate sensitivity will be anything like as high as the IPCC says it is.
Mr. Shore says that no one has yet responded to Lindzen and Choi’s 2011 paper demonstrating a very low climate sensitivity because the paper was not published until late in 2011. It was not published “late in 2011” but in May of that year. A full year has passed since publication, therefore.
He adds that most of the papers establishing a high climate sensitivity are empirically based. No, most of them are numerically based, relying on modeling and not on real-world results. It is claimed, of course, that the models adequately represent physical realities, but they manifestly do not do so to a sufficient precision, not least because the object they are attempting to model behaves as a chaotic object, and also because the data are insufficiently precise and insufficiently available, and the processes by which the climate evolves are insufficiently understood.
Mr. Shore also makes a further attempt to say that the 110 Watts per square meter of forcing from the presence rather than the absence of all greenhouse gases in the atmosphere is chiefly feedbacks rather than forcings. In that event, the paper in which the 110 Watts per square meter estimate was given could have and should have denominated – but crucially did not denominate – two-thirds of the 110 Watts per square meter not in Watts per square meter but in Watts per square meter per Kelvin.
One appreciates that it is essential to the maintenance of a case for high climate sensitivity to suggest that most of the water vapor in the atmosphere is only there because the non-condensing greenhouse gases are present, and that it would disappear if they did. But it is not clear to me that this point has been sufficiently established by the papers that purport to establish it. Even if the 110 Watts per square meter were to some extent feedbacks, the difficulty of distinguishing the forcings from the feedbacks and the feedbacks from each other would be no less formidable than it is in today’s climate. Once again, unduly precise quantitative claims are being made on the basis of manifestly inadequate evidence: and perhaps this is the main reason why the warming over the past generation has been well below what the IPCC predicted it would be in its 1990 report. The models have been tried and found wanting.

Tomazo
April 25, 2012 4:35 pm

I am a process engineer, and in my field experience in troubleshooting dynamic multi-variate systems [temperature, pressure, flowrate, composition, AND valve position (the controller)], I have generally found that about one-third of my problems have been solved by the correct tuning of control-loop parameters for the system suffering the “trouble”. As a result (along with a focus on process control when obtaining my Masters in Chemical Engineering), I am quite familiar with determining the proper gain, offset, lag, etc., for both feed-forward and feed-back control loops, using proportional-integral-derivative (PID) controllers and the like. For the most part, these systems exhibit first-order responses (rarely second-order – but they can indeed be non-linear), and the appropriate parameters are determined by a proper analysis of the system’s response using pulse and step-change inputs. This can be done by experiment (i.e. “making it happen”), but it is often found by just looking at the historical data, if available. The point is to ensure the system has a robust and controlled response to “upset” input-variable conditions.
As a participant in a large “Denver Climate Study Group” (a sort of email society which includes members from both sides of the debate, but whose total membership is only known by the “monitor”), I was responding to just this question from a participant. In my research, I stumbled upon the Lord’s March 10, 2012 presentation at the University of Minnesota here:

I was delighted also to find this article in my research, and have enjoyed observing the legitimate discussion of the mathematics of control theory, with the philosophy of logic “discussion” thrown in for fun!
As usual, the Viscount is winning on this thread, as he has in all the forums in which I’ve seen him, by using CLASSICAL LOGIC and FACTS and, most importantly, MATHEMATICS (actually, rather simple math to boot!)
To assist the Lord in his efforts, I hope I can provide some clarification of his calculations and units for the readers here:
3.3 K ≈ 3.708 W/m^2 * 0.313 K·m^2/W * [1/(1 - 0.313 K·m^2/W * (1.8 - 0.84 + 0.26 + 0.68 + 0.15) W/(K·m^2))]
Note that 0.313 K·m^2/W * (1.8 - 0.84 + 0.26 + 0.68 + 0.15) W/(K·m^2) = 0.642 (the number which needs to be between 0 and 1), and that both it and the gain factor 1/(1 - 0.642) are dimensionless.
He is absolutely right about the value of the second number being critical. If between 0 and 0.1, it is stable, if between 0.1 and 1, it is highly likely to be unstable, and if greater than 1, most assuredly unstable and most likely runaway. He is also right that electrical engineers allow this number to go to =or>-1 (and back to =or<+1) to create oscillating circuits.
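Tomazo's arithmetic above can be reproduced directly. This is a sketch using only the numbers already given in the comment (forcing, Planck parameter, and the five-term feedback sum):

```python
# Equilibrium sensitivity = forcing * Planck parameter * gain factor,
# where the gain factor is 1/(1 - lambda0 * f) and f is the feedback sum.
dF2x = 3.708      # CO2-doubling forcing, W/m^2
lambda0 = 0.313   # Planck parameter, K per W/m^2
feedbacks = [1.8, -0.84, 0.26, 0.68, 0.15]  # individual feedbacks, W/m^2/K

f = sum(feedbacks)                     # 2.05 W/m^2/K
loop_gain = lambda0 * f                # ~0.642, dimensionless; must stay < 1
dT = dF2x * lambda0 / (1 - loop_gain)  # ~3.24 K, close to the quoted 3.3 K
print(loop_gain, dT)
```

Note the loop gain evaluates to about 0.642 and the sensitivity to about 3.24 K, matching the comment's figures to rounding.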
I count myself honored to participate in this august group of posters, even if some of the members exhibit seriously flawed and "self-immolating", ad hominem, red-herring, and data/logic-bereft arguments for their "cause".

Greg House
April 25, 2012 4:36 pm

Monckton of Brenchley says:
April 25, 2012 at 3:37 pm
That greenhouse gases produce warming is not in doubt, and has been well established by experiment, over and over and over again.
===================================================
I am afraid you did not understand the point again. No problem. Let me first tell you that if, in a scientific debate, someone claims (as you did on this thread) “the fact that adding greenhouse gases such as CO2 to an atmosphere such as ours will cause some warming has been established by oft-repeated experiments” and is then asked to produce evidence (exact references, links etc. to these alleged experiments), then a simple repetition of the same claim is not a scientific answer, Christopher.
Your part about “relevant quantities are unknown and unknowable” can be accepted, however, because you indeed claimed “some warming”. I am not going to distract the readers by analysing how your “some” corresponds to your “unknown and unknowable”; let us put it aside for a while. But, Christopher, your claim unfortunately still remains unsupported by any evidence. So, let me drop the “how much” part.
Referring to the claim you made earlier on this thread (“the fact that adding greenhouse gases such as CO2 to an atmosphere such as ours will cause some warming has been established by oft-repeated experiments”), I am humbly asking you to present clear, distinct and exact references and links to who, when, where and how experimentally physically proved (not calculated on a statistical basis) that “greenhouse gases” produce SOME (as you put it) warming.

Greg House
April 25, 2012 5:25 pm

Tomazo says:
April 25, 2012 at 4:35 pm
To assist the Lord in his efforts, I hope I can provide some clarification of his calculations and units for the readers here:
3.3 K ≈ 3.708 W/m^2 * 0.313 K·m^2/W * [1/(1 - 0.313 K·m^2/W * (1.8 - 0.84 + 0.26 + 0.68 + 0.15) W/(K·m^2))]
============================================
Tomazo, a mathematician must be able to operate with things like cars driving at 99999 miles per hour, although it is a fiction.
Just for the sake of science and common sense, please, ask yourself why you assume or believe that this “3.708W/m^2” is not a fiction.
As you can possibly conclude from Lord Monckton’s zero presented evidence of a physical proof of “CO2 forcing”, this number must have been derived from something else. If you look carefully at the core assertion about 33 K of “greenhouse” warming, you will notice that they simply claim the “greenhouse gases” cause this warming. This is a fiction, Tomazo. Then they derive those “3.708 W/m^2” mathematically from their claim, and then their “climate sensitivity” again from this “3.708 W/m^2”. Unfortunately, it works with a great many people; many of them are well educated, but not trained in finding well-hidden fallacies.

joeldshore
April 25, 2012 6:53 pm

Monckton of Brenchley says:

The discrepancy between theory and observation on the Moon, by which the true mean lunar temperature is considerably below rather than somewhat above the theoretically-determined 270 K given in NASA’s lunar fact sheet, is accordingly significant and has obvious iimplications for the determination of climate sensitivity on Earth.

No…It has no implications whatsoever. The fact that the average temperature is below that of a uniform body emitting the total amount of radiation necessary for radiative balance is a consequence of Hölder’s inequality. On the Earth with its current albedo, the limit set by Hölder’s inequality is 255 K. The fact that the actual surface temperature is 288 K can only be explained by the atmosphere absorbing some of the radiation emitted by the Earth’s surface, i.e., there is a greenhouse effect.
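The Hölder-inequality point admits a quick numerical illustration (the 40 K hot/cold split below is an arbitrary choice of mine, not a figure from the thread): split a body into two equal-area halves that together emit the same mean flux as a uniform ~255 K body, and the mean temperature necessarily comes out lower, because emission goes as T^4:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
F = 240.0          # required mean emitted flux, W/m^2

# Uniform body in radiative balance:
T_uniform = (F / SIGMA) ** 0.25            # ~255 K

# Non-uniform body: two equal-area halves, the hot side 40 K above uniform,
# the cold side chosen so the MEAN of sigma*T^4 over both halves is still F.
T_hot = T_uniform + 40.0
T_cold = (2 * F / SIGMA - T_hot ** 4) ** 0.25
T_mean = (T_hot + T_cold) / 2              # ~234 K, below the uniform 255 K
print(T_uniform, T_mean)
```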

Mr. Shore says that the feedback sum is obtained in the models by methods other than considering the value of each individual feedback. However, the IPCC’s discussion of feedbacks simply does not bear out Mr. Shore’s assertion: each feedback is considered separately.

Just because they are discussed separately does not mean that they can be measured separately in the real world, or that it is even very common to measure them separately in climate models.

Mr. Shore says that no one has yet responded to Lindzen and Choi’s 2011 paper demonstrating a very low climate sensitivity because the paper was not published until late in 2011. It was not published “late in 2011″ but in May of that year. A full year has passed since publication, therefore.

I said it was published in the latter half of 2011. Given that it was not accepted for publication until May 22 and that it did appear in 2011, I was able to narrow down the range of publication date to between May 22 and Dec. 31, which would put it in the latter half of the year unless it was published between May 22 and June 30, which would put it just barely in the first half, but that would be pretty fast turn-around between acceptance and publication. And, less than a year is not a very long time for a scientific paper to have garnered significant response from the scientific community, particularly for a paper that is published in a relatively obscure journal and for which the previous version of the paper was already debunked.

He adds that most of the papers establishing a high climate sensitivity are empirically based. No, most of them are numerically based, relying on modeling and not on real-world results.

The IPCC estimate of climate sensitivity relies primarily on empirical data, including the paleoclimate data (especially the last glacial maximum), the response of the climate to the Mt Pinatubo eruption, the current seasonal cycle, and the historical temperature record, among other things. It is also supported by the numerical models but, no, that is not the only or primary source of these estimates.

Mr. Shore also makes a further attempt to say that the 110 Watts per square meter of forcing from the presence rather than the absence of all greenhouse gases in the atmosphere is chiefly feedbacks rather than forcings. In that event, the paper in which the 110 Watts per square meter estimate was given could have and should have denominated – but crucially did not denominate – two-thirds of the 110 Watts per square meter not in Watts per square meter but in Watts per square meter per Kelvin.

How can you denominate 2/3 of something in different units? Whether something is a forcing or a feedback depends on the particular climate experiment being contemplated, as anyone who actually considers himself at all conversant with the concepts of forcing and feedback would understand. The paper you are referencing wasn’t dealing with the question of how the water vapor observed in our current climate got into the atmosphere. They were just interested in the radiative effect of the greenhouse gases currently in the atmosphere, without regard to how they got there. This is why you can’t find any serious climate science endorsing your nonsensical argument. If it were such a wonderful argument, why do you think Roy Spencer or John Christy or Richard Lindzen don’t make it? They don’t because they know enough to know that it is wrong and that they would lose a lot of credibility if they tried to pass off such silliness as a real argument.

One appreciates that it is essential to the maintenance of a case for high climate sensitivity to suggest that most of the water vapor in the atmosphere is only there because the non-condensing greenhouse gases are present, and that it would disappear if they did. But it is not clear to me that this point has been sufficiently established by the papers that purport to establish it.

I think it has but, furthermore, your opinion on this matter is not relevant. The fact that you believe that the mechanism doesn’t exist does not mean that you can perform a calculation that ignores this mechanism and then claim that it includes this mechanism!!!

Even if the 110 Watts per square meter were to some extent feedbacks, the difficulty of distinguishing the forcings from the feedbacks and the feedbacks from each other would be no less formidable than it is in today’s climate.

…Which is precisely why your and cba’s argument is not made by any serious scientists. So, why don’t you stop using it too?

joeldshore
April 25, 2012 7:32 pm

Monckton of Brenchley says:

However, as the IPCC itself makes clear, the terrestrial value of the Planck parameter – and consequently of the characteristic-emission temperature – is approximately one-sixth higher than the theoretically-determined value, precisely to allow for the latitudinal distribution of temperatures.

This sentence is very confused and, to the extent it has any correct physics in it at all, gets things exactly backwards. First of all, a larger Planck parameter corresponds to a LOWER emission temperature since the Planck parameter tells us how much temperature rise is necessary to produce an additional W/m^2 of emission. And, the amount of temperature rise necessary to do this is higher at lower temperatures than it is at higher temperatures. (This is why people who mistakenly try to calculate the no-feedback climate sensitivity of the Earth using the surface temperature of 288 K rather than the characteristic emission temperature of 255 K get a SMALLER value for that climate sensitivity.)
Second of all, I don’t think the fact that the Planck parameter is higher than estimated assuming the characteristic emission temperature of 255 K means that the characteristic emission temperature itself is higher or lower than the 255 K value. Rather, the Planck parameter is higher because the INCREASE in temperature, and thus the INCREASE in emission, occurs more in colder regions than in warmer regions (polar amplification). In those colder regions, the colder emission temperature leads to a higher value of the Planck parameter, i.e., it takes a larger temperature increase to produce each additional W/m^2 than it would if the warming occurred more uniformly (or predominantly in the warmer regions).
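The sign of the effect described above is easy to verify numerically: the local Planck parameter 1/(4*sigma*T^3) is larger at colder temperatures, so emission changes concentrated in cold regions require a bigger temperature change per W/m^2. This sketch just evaluates the standard blackbody differential at the two temperatures already in play:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def planck_parameter(T):
    """Local blackbody Planck parameter dT/dF = 1/(4*sigma*T^3), K per W/m^2."""
    return 1.0 / (4 * SIGMA * T ** 3)

lam_cold = planck_parameter(255.0)  # ~0.266 at the characteristic-emission T
lam_warm = planck_parameter(288.0)  # ~0.185 at the surface T
print(lam_cold, lam_warm)           # colder temperature -> larger parameter
```

This is also the arithmetic behind the parenthetical remark above: using 288 K instead of 255 K yields the smaller of the two values.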

April 25, 2012 7:39 pm

If CO2 causes warming, it doesn’t cause very much. Most of the rise in CO2 comes from ocean outgassing. But let’s pretend for a moment that all warming since the 1800’s is due entirely to human CO2 emissions [preposterous, I know. But remember that we’re pretending here]. If that were the case, there would still be nothing to worry about. Joel Shore refers to Prof Richard Lindzen, so let’s hear what the esteemed Professor says:
“If one assumes all warming over the past century is due to anthropogenic greenhouse forcing, then the derived sensitivity of the climate to a doubling of CO2 is less than 1ºC.”
Since most of the warming [possibly all] since the LIA is natural [per the un-falsified null hypothesis], then the true effect of CO2 is minuscule. It is too small to measure. And of course the past decade and a half deconstructs the “carbon” scare.
The funniest thing about Joel Shore’s always wrong model-based pontifications is the fact that the planet — the ultimate Authority — is debunking his whole CO2=CAGW belief system. Maybe planet earth is an ‘ideologue’, eh? ☺

joeldshore
April 25, 2012 7:43 pm

cba says:

We know that to emit 239 W/m^2, the Earth without an atmosphere would have to be 33 deg C lower. One must hold variables constant to find effects of other variables – in math, it’s called partial derivatives.

Yeah…One must do that if one is interested in finding the no-feedback value! If you are interested in figuring out what the effect of adding a certain amount of CO2 is when you hold all the other greenhouse gas concentrations fixed, then you have outlined the correct method. However, that is the no-feedback value. (And, it holds not only the concentrations fixed but also the ice-albedo fixed.) To get the value including feedbacks, you must take the equivalent of the total derivative that includes the effects of the change in CO2 causing changes in water vapor and so forth. Your reducing this to mathematical terms is indeed useful in explaining exactly where your argument is wrong!!

That 33 deg C per 150W/m^2 is the real effect averaged which includes all feedbacks to date. THIS is what the Earth has done, not what some video game model says it should do.

No…As you have just explained, what you have calculated is the effect of changing one greenhouse gas concentration while holding the others fixed (and fixing the ice-albedo). To get the effect of the feedbacks, you have to consider the fact that an increase in the concentration of the non-condensable greenhouse gases leads to increases in the water vapor (and some change in clouds that is non-trivial to predict).
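The partial-versus-total-derivative distinction in this exchange can be sketched with a toy energy-balance model. Every coefficient below is an illustrative assumption, not either commenter's figure: an absorbed flux of 239 W/m^2, a doubled-CO2 forcing of 3.7 W/m^2, and a crude linear water-vapor feedback of 2 W/m^2 per kelvin of warming.

```python
# Toy energy balance: sigma*T^4 = absorbed + forcing + feedback*(T - T0).
SIGMA = 5.67e-8                       # Stefan-Boltzmann constant, W/(m^2 K^4)
ABSORBED = 239.0                      # absorbed solar flux, W/m^2
T0 = (ABSORBED / SIGMA) ** 0.25       # baseline emission temperature, ~255 K
F_2X = 3.7                            # doubled-CO2 forcing, W/m^2 (assumed)
WV_FEEDBACK = 2.0                     # extra trapping per kelvin (assumed)

def equilibrium_T(feedback):
    """Solve the balance by bisection on [T0, T0 + 20]."""
    lo, hi = T0, T0 + 20.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        imbalance = SIGMA * mid ** 4 - ABSORBED - F_2X - feedback * (mid - T0)
        if imbalance > 0:
            hi = mid                  # emitting too much: equilibrium is cooler
        else:
            lo = mid
    return 0.5 * (lo + hi)

dT_partial = equilibrium_T(0.0) - T0          # water vapor held fixed: ~1 K
dT_total = equilibrium_T(WV_FEEDBACK) - T0    # feedback included: ~2 K
print(dT_partial, dT_total)
```

Holding the feedback term at zero is the "partial derivative" case (everything but CO2 fixed) and gives roughly 1 K; letting the greenhouse term grow with temperature is the "total derivative" case and roughly doubles the response in this toy.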

joeldshore
April 25, 2012 7:49 pm

Smokey says:

Most of the rise in CO2 comes from ocean outgassing. But let’s pretend for a moment that all warming since the 1800′s is due entirely to human CO2 emissions [preposterous, I know. But remember that we’re pretending here].

Not preposterous at all given that it is known to a high degree of certainty that the oceans have in fact been a net sink of CO2. The oceans are not outgassing in net; they are absorbing. Any “outgassing” due to warmer ocean temperatures is more than offset by absorption due to the fact that the partial pressure of CO2 in the atmosphere above the oceans has been increasing due to anthropogenic emissions.

Joel Shore refers to Prof Richard Lindzen, so let’s hear what the esteemed Professor says:
“If one assumes all warming over the past century is due to anthropogenic greenhouse forcing, then the derived sensitivity of the climate to a doubling of CO2 is less than 1ºC.”

I only referred to Lindzen in noting that his views on global warming have been debunked time and time again. He once at least made interesting hypotheses but as these hypotheses were found to be contradicted by real-world data, he has turned to more and more desperate arguments.

April 25, 2012 8:32 pm

Joel Shore says:
“I only referred to Lindzen in noting that his views on global warming have been debunked time and time again.”
As I’ve often pointed out, if it were not for psychological projection, Joel Shore wouldn’t have much to say. The plain fact is that it is Joel Shore’s nonsense that is being debunked by planet earth, and he is dreaming if he actually believes that Prof Lindzen — his scientific better — has had his work falsified. Only in Joel Shore’s fevered dreams [and I note that Willis decisively b!tch slapped Shore all around the playground last week]. Here, I’ll repeat what I posted because it obviously went right over Joel’s head the first time:
The funniest thing about Joel Shore’s always wrong model-based pontifications is the fact that the planet — the ultimate Authority — is debunking his whole CO2=CAGW belief system.
So who should we believe? The projection afflicted Joel Shore? Or Planet Earth?

Henry Clark
April 25, 2012 8:51 pm

Monckton of Brenchley says:
April 25, 2012 at 4:10 pm
Mr. Shore asks where the IPCC makes it plain that the terrestrial value of the Planck parameter is greater than, not less than, the value 0.267 Kelvin per Watt per square meter that is derivable from the theoretically-determined 255 K by taking the first differential of the Stefan-Boltzmann equation on the assumption that radiative flux at the characteristic-emission altitude is 238 Watts per square meter. He may like to read the footnote on page 631 of the IPCC’s 2007 report, where the value of the Planck parameter is deducible as the reciprocal of 3.2 – i.e. 0.313 Kelvin per Watt per square meter, or greater by approximately one-sixth than that given by the first differential of the fundamental equation of radiative transfer, and the references on which the IPCC relies make it plain that the reason for this difference is latitudinal temperature variation. The discrepancy between theory and observation on the Moon, by which the true mean lunar temperature is considerably below rather than somewhat above the theoretically-determined 270 K given in NASA’s lunar fact sheet, is accordingly significant and has obvious implications for the determination of climate sensitivity on Earth.
Overall I have enjoyed your articles, very much a skeptic myself.
I think, though, there is a weak point in a section of your current argument which unnecessarily opens you up to partially successful attack if someone reads closely.
The Moon’s temperature varies far more from its average than Earth’s does, and that matters greatly to the net result. The same one-sixth increase applied to the Planck parameter can be expected to fall utterly short of accounting for the Moon’s extreme temperature variation, whether or not it would account for Earth’s lesser variation by latitude. The first web page a search turns up says that lunar temperature ranges from -233 degrees Celsius to +123 degrees Celsius, so the Moon varies far more from its mean than Earth does.
Let me illustrate the principle by a simplified analogy. In this example, all of the following three objects have an average temperature of 192 K.
Object A is to represent a body of no temperature variation from its 192 K average. The T^4 term in the formula would equal 1.36 * 10^9 K^4.
Object B is like Object A but varies in temperature moderately from its 192 K average, by plus and minus 30 K. In this simplified analogy, pretend one half is at 162 K and the other half is at 222 K. The respective T^4 terms in the formulas for each half are about 0.689 * 10^9 K^4 and 2.43 * 10^9 K^4 respectively.
Object C is like the other two objects but varies in temperature vastly from its 192 K average, by plus and minus 150 K. In this simplified analogy, pretend one half is at 42 K and the other half is at 342 K. The respective T^4 terms in the formulas for each half are about 0.00311 * 10^9 K^4 and 13.68 * 10^9 K^4 respectively.
If all objects have the same albedo throughout, then, even with all three having 192 K average surface temperature, the net result is:
Object B = 15% more watts of radiative heat emitted from its surface than object A
Object C = 400% more watts of radiative heat emitted from its surface than object A
Now, suppose one makes a constant-multiplier adjustment to account for the mild difference between object A and object B, i.e., for moderate temperature variation. That same multiplier, which worked for moderate variation from the average, would automatically be totally off if applied to a body (object C) with far greater temperature variation.
Obviously the example is exaggerated for the point, but one can see how greater temperature variation just automatically makes the adjustment needed not remotely the same constant.
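The three-object arithmetic above can be checked in a few lines of Python, using the temperatures given in the comment:

```python
# Numerical check of the three-object analogy: all bodies average 192 K,
# but emitted power scales with the mean of T^4, not with the mean of T.
def mean_T4(temps):
    """Mean of T^4 over equal-area halves (or a single uniform body)."""
    return sum(t ** 4 for t in temps) / len(temps)

A = mean_T4([192.0])               # uniform body
B = mean_T4([162.0, 222.0])        # halves at +/- 30 K from the mean
C = mean_T4([42.0, 342.0])         # halves at +/- 150 K from the mean

print(f"B emits {100 * (B / A - 1):.0f}% more than A")   # roughly 15%
print(f"C emits {100 * (C / A - 1):.0f}% more than A")   # roughly 400%
```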
One can’t apply the same adjustment multiplier to the Moon as Earth in any case, because the Moon has more temperature variation.
So finding application of a formula with that adjustment multiplier to be wrong on the Moon doesn’t show anything directly about its validity or not on Earth. Whatever would be the true temperature variation correction for the Moon (which has a 14 day lunar night) would be alien to that for Earth.
I think the NASA lunar fact sheet you are mentioning is http://nssdc.gsfc.nasa.gov/planetary/factsheet/moonfact.html . It is misleading / wrong for implying 271 K equilibrium blackbody temperature. However, such just means some random employee posting that particular web page messed up. The IPCC has inaccuracies in other regards but not necessarily the same ones on this.
I highly doubt I would agree with joeldshore on the accuracy (or not!) of historical IPCC predictions compared to observations, cosmoclimatology in general, whether there has been a remotely honest portrayal of claimed AGW (or as it really boils down to for the activists: CAGW) effects versus observations, what is seen in the paleoclimate record, and much else. But that doesn’t prevent him being right on one aspect, namely:
joeldshore says:
April 24, 2012 at 7:25 pm
>> Monckton of Brenchley says:
>> Theory predicts that the Moon’s mean surface temperature should be around 270 Kelvin.
No it doesn’t. The energy balance arguments referred to constrain the average of T^4, not the average of T, on the surface. The 270 Kelvin number is obtained as the fourth root of the average of T^4 on the surface. By Hölder’s inequality, it will be greater than or equal to the average of T. Only for a fairly uniform temperature distribution does one expect this upper bound to provide a reasonably good estimate of the average temperature. The moon is an airless body with a very broad temperature distribution and hence it is expected that the average temperature will be considerably less than the constraint provided by this inequality.

Calculations for Earth would need a lot different adjustment if Earth had a broader temperature range and a 14-day night like the Moon does.
Anyway, I’d just like to see that weak point closed, as you’re great in arguing in some other ways and one of the top public advocates in the world for our side.
The IPCC’s climate sensitivity itself can be disproved by empirical comparison to past history and observations, including how wrong their predictions a couple decades old have been as you already know.
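The inequality Joel Shore cites is easy to verify numerically; the temperatures below are an arbitrary illustrative distribution, not actual lunar data:

```python
# The "equilibrium blackbody" temperature is the fourth root of the mean
# of T^4; by Hölder's (power-mean) inequality it can only exceed or equal
# the plain mean of T, with equality only for a uniform body.
temps = [100.0, 150.0, 250.0, 350.0]  # arbitrary non-uniform distribution, K

mean_T = sum(temps) / len(temps)
T_eff = (sum(t ** 4 for t in temps) / len(temps)) ** 0.25

print(f"mean T = {mean_T:.1f} K, effective emission T = {T_eff:.1f} K")
# The effective (radiative-balance) temperature exceeds the mean, so a
# body with a broad distribution, like the Moon, must have a mean
# temperature well below the radiative-balance figure.
```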

cba
April 25, 2012 8:58 pm

joel shore,
it would seem you have a real case of fanaticism there. You claim my presentation has one feedback included and no other. Considering it does not specifically include or exclude anything perhaps we can qualify what is included and excluded by the simple criteria of whether a feedback is real or imaginary. Since I’m using real Earth averaged data to reach my result, it contains all of the real feedbacks in operation. Only in your mind can it strip out some feedbacks and retain others.
As for feedbacks versus forcings, that is, I believe, a red herring. CO2 concentration is temperature-dependent because of ocean gas concentrations; hence it, like most everything else involved, has both attributes.
Concerning ice, you’ve got serious problems with that too. Surface albedo contributions are a small part of the overall albedo, with sky and cloud albedo comprising the vast majority of Earth’s albedo. Since 70% of the surface is water, which is under 4% albedo for the sun angles that convey most solar power to the surface, and considering about 62% of the Earth is covered in cloud at any one time, ice and snow have very little effect. You will find if you look that every effort is taken by certain CAGW proponents to push the numbers as far in the other direction as possible.
Your biases and odd ideas are exhibited by your comment concerning absolute humidity tables. You prefer to believe someone who calculates the value rather than measures the value. Why should I mess with the Clausius Clapeyron equation and theory when the actual measured information is available?
Your comment about “serious scientists” not using my simple little explanation is also quite telling. First off, it brings up the question of what you think a serious scientist is. Judging by your comments, I would have to think that you are referring to the likes of James Hansen, Michael Mann, and other political advocates masquerading as real scientists yet acting with an organized and concerted effort to thwart the scientific method and prevent it from working. Assuming you aren’t referring to these, there is still the vested-interest problem of millions of dollars of research grant monies being extorted from funding agencies by claims of catastrophic outcomes. This is crowding out legitimate research funding in other areas, some of which may have legitimate catastrophic concerns.
A real scientist follows the scientific method. A real scientist is skeptical of practically everything and knows that the scientific method does not produce results that are unarguable or that are “settled” – EVER! Peer review is not part of the scientific method. Rather, it is the duplication of experiments by truly independent researchers that either squashes a budding theory or provides support to it. It never proves a theory, and it only takes one correctly done experiment to falsify a theory, no matter how long it has been around. The ultimate arbiter of science is momma nature, not some supposed consensus of a bunch of people with vested interests. Note that I decided long ago not to become a scientist in the field of atmospheric physics, as my interests were much further out. BTW, I am in full agreement with Rutherford that physics is science and all else is stamp collecting, and that specifically includes all these biology graduates who comprise so many of the global warming activists along with filling the ranks of hamburger flippers at the fast food joints.

April 25, 2012 9:17 pm

cba,
Joel Shore is a nutcase. He actually believes that “ideology” motivates scientific skeptics, not understanding that numerous commenters here have explicitly stated that their views are politically Leftist, yet they do not accept the AGW arguments. Like most intelligent folks they are scientific skeptics.
As you know, the alarmist gang totally ignores the Scientific Method, which absolutely requires transparency. But fourteen years after MBH98, Mann still stonewalls. That’s not what honest scientists do.
The planet is confirming that CAGW is bunkum, and that “feedbacks” are feeding back nothing. They are only found in computer models, not in the real world. And Joel Shore can wake me if the climate null hypothesis is ever falsified.

joeldshore
April 26, 2012 5:49 am

cba: There is very little science in your last reply, which I can understand given the fact that you have lost the scientific argument. But, I will try to reply to what little science there is.

it would seem you have a real case of fanaticism there. You claim my presentation has one feedback included and no other. Considering it does not specifically include or exclude anything perhaps we can qualify what is included and excluded by the simple criteria of whether a feedback is real or imaginary. Since I’m using real Earth averaged data to reach my result, it contains all of the real feedbacks in operation. Only in your mind can it strip out some feedbacks and retain others.

I have explained it to you already. In order to get the effect of feedbacks, you can’t treat them as forcings. If you assume that all the water vapor in the atmosphere has to be put in explicitly as a forcing, you are not going to get its effect as a feedback. It comes down exactly to the partial derivative issue that you identified: You correctly noted that you were effectively computing the partial derivative of the temperature with respect to CO2 concentration. However, what we are interested in if we want to include the effect of feedbacks is the TOTAL derivative of the temperature with respect to CO2 concentration.
And, I explained to you why the lapse rate feedback was different: The other feedbacks involve processes that actually change the radiative balance of the Earth, e.g., if more water vapor goes into the atmosphere, this causes more absorption of the longwave radiation emitted by the Earth back into space; if ice melts and changes the albedo, this causes more absorption of incoming solar radiation to occur.
The lapse rate feedback is an odd-man-out in this respect; it simply refers to the fact that it is the temperature at altitude that is relevant in determining the amount the Earth radiates back out into space, but it is the temperature at the surface that we are usually interested in (and that you noted when you talked about the 33 C temperature increase relative to an Earth without a greenhouse effect).
The fact that your argument includes the one known negative feedback and none of the positive ones does not show my fanaticism but yours. This is what makes the argument so tempting to people like you and Lord Monckton who are trying to reason backwards from your desired conclusion to a scientific argument supporting it.

Concerning ice, you’ve got serious problems with that too. surface albedo contributions are a small part of the overall albedo with sky and cloud albedo comprising the vast majority of Earth’s albedo.

I agree that the ice-albedo feedback is not as big a player as some of the other feedbacks like water vapor…and potentially clouds. However, it is still relevant….and it is another example of a feedback that is not included in your calculation.

Your comment about “serious scientists” not using my simple little explanation is also quite telling. First off, it brings up the question of what you think is a serious scientist. Judging by your comments, I would have to think that you are referring to the likes of james hanson, michael mann, and other political advocates masquerading as real scientists yet acting with an organized and concerted effort to thwart the scientific method and prevent it from working.

I’ll leave aside your denigration of respected scientists who don’t happen to want to bend science to conform to the political ideology that you endorse. But, no I wasn’t referring to Hansen and Mann or even scientists supporting the consensus. I was referring to the fact that even the few reputable climate scientists who dispute the consensus, like Roy Spencer, John Christy, and Richard Lindzen don’t make your argument. If there is such a simple argument to support what they so desperately want to believe, why would they avoid making it?

joeldshore
April 26, 2012 5:53 am

Smokey says:

The funniest thing about Joel Shore’s always wrong model-based pontifications is the fact that the planet — the ultimate Authority — is debunking his whole CO2=CAGW belief system. Maybe planet earth is an ‘ideologue’, eh? ☺

This is, of course, another falsehood. It is only the planet, as interpreted by Smokey, that is debunking the science that you are ideologically opposed to. The planet, as interpreted by scientists, is not.

April 26, 2012 7:41 am

joeldshore:
Arguments for regulation of CO2 emissions that are based upon claims regarding the magnitude of the equilibrium climate sensitivity (TECS) have a scientifically fatal flaw: the equilibrium temperature is not observable. It follows that claims regarding the magnitude of TECS are not falsifiable and thus lie outside science.

April 26, 2012 6:42 am

Mr. Shore, as is his wont, argues testily, illogically, and unscientifically. For instance, he attacks Professor Lindzen ad hominem in an unpleasant and unconstructive manner for having allegedly been repeatedly “debunked” in the past, and says that the earlier version of his 2011 paper demonstrating a climate sensitivity of 0.7 Kelvin per CO2 doubling was also “debunked”. In using this sort of childish, yah-boo language, Mr. Shore not only reveals an unbecoming prejudice that casts doubt upon all else that he says, but also a discourtesy to the Professor that indicates just how little Mr. Shore understands about the manner in which rational scientific argument is supposed to be conducted: and that is the fallacy of ignoratio elenchi, of which the ad-hom fallacy is a specific instance.
The earlier version of Professor Lindzen’s paper was not “debunked”. A remarkably impolite and ill-tempered paper co-authored by one of the most prominent Climategate emailers said he had used statistical techniques that were inappropriate. The methodology in the original paper was indeed rough and ready – but that was because greater statistical precision would not have been likely to alter the outcome one jot. Sure enough, the second version of the paper, this time carrying out the unnecessarily pernickety requirements of the Climategate emailer, did not produce so much as a tenth of a Celsius degree of variation in climate sensitivity from the original version that had allegedly been “debunked”. One realizes that, to a fanatic such as Mr. Shore, a paper in which his religion is so thoroughly debunked (to use his own childish language, so that he knows what it must feel like to be on the wrong end of it) appears heretical. But Mr. Shore would have been less monumentally unconvincing if, instead of using intellectual baby-talk, he had actually addressed the interesting arguments made in Professor Lindzen’s paper.
Besides, Mr. Shore’s argument is to the effect that the latest version of Professor Lindzen’s paper is untrue because none of the usual suspects has yet had time to debunk it. Now, the argumentum ad ignorantiam, the fallacy of argument from ignorance, usually asserts that a proposition is false because it has not been proven true, or true because it has not been proven false. Mr. Shore offers a novel and still more absurdly and self-evidently fallacious argument: that Professor Lindzen’s paper is false because it has not yet been proven false. Whatever else Mr. Shore’s argument is, it is certainly not scientific. One suspects it is political: for illogicality on this heroic scale is the province of the lesser sort of political mind.
Mr. Shore goes on to confuse the Earth’s characteristic-emission temperature of 255 K with the terrestrial surface temperature of 288 K. Of course he is right to point out that it is the greenhouse effect fatuously denied by the likes of Mr. House that accounts for the difference: but his mention of the surface temperature when the problem is with the emission temperature seems suspiciously like an attempt at misdirection. He had previously asserted that the Holder inequality requires that the mean emission temperature of an astronomical body (which, on the Moon, is at the surface and, on the Earth, is 3-5 miles above it) should be less than or equal to the mean emission temperature indicated by theory. Yet, as I have demonstrated, the Planck parameter on Earth (in theory 0.267 Kelvin per Watt per square meter) is in fact 0.313 Kelvin per Watt per square meter: i.e., higher by about one-sixth than the theoretically-established value. Since the Planck parameter may be expressed as the characteristic-emission temperature divided by (four times the radiative flux at that altitude, where incoming and outgoing fluxes balance by definition), it ought to be obvious that since the flux is constant at 238 Watts per square meter it is the mean temperature that must differ from the mean temperature that had been assumed – and in an upward direction, not a downward direction.
In this connection, Mr. Shore appears to think that the characteristic-emission temperature drives changes in radiative flux at the characteristic-emission altitude. No: It cannot possibly do so, because at the characteristic-emission altitude the incoming and outgoing fluxes balance by definition, and the incoming flux is one-quarter of the solar constant, so it does not change. It is the altitude that changes. As the atmosphere warms, the characteristic-emission altitude rises a little, and the characteristic-emission temperature consequently falls a little. One hesitates to point this out to Mr. Shore, but the clearest explanation of this phenomenon is given in lectures by Professor Lindzen.
Next, Mr. Shore asks: “How can you denominate two thirds of something in different units?” Well, a scientist who had even the most elementary understanding of the difference between forcings and feedbacks would appreciate that climate scientists denominate forcings in Watts per square meter and feedbacks (which are forcings that arise in consequence of, and in dependence upon, the temperature change caused by an original forcing) in Watts per square meter per Kelvin. Like it or not, that is how it is done, and with good reason. Mr. Shore will find this matter well explained in the pedagogical paper on feedbacks by Roe (2009), and also (though far less clearly, and with many pusillanimous mistakes) in the IPCC’s Fourth Assessment Report. Roe, though one hesitates to point this out to Mr. Shore, was a pupil of Professor Lindzen, who has also been a contributor to various IPCC reports. Indeed, the extraordinarily powerful influence of the Professor at all points in the climate debate indicates that perhaps Mr. Shore should do him the courtesy of taking him seriously, and addressing his arguments rather than attacking him ad hominem. That would be the grown-up approach.
Mr. Shore says that my opinion on whether water vapor would be present in the atmosphere in the absence of the non-condensing greenhouse gases is “irrelevant”: yet, curiously, he goes on to accept my reason for that opinion, which is that one cannot clearly distinguish between forcings and feedbacks by any empirical method, so that one cannot tell that the water-vapor feedback, for instance, is as strongly positive as the IPCC would like us to believe. Having accepted this point, he then tells me I must no longer rely upon it. Yet the basis for my head posting was that feedbacks, in particular, cannot be reliably quantified either individually or collectively by any method and that, therefore, the IPCC’s claim of very high climate sensitivity lacks any real scientific basis. It is guesswork – and, since the warming during the generation that has passed since the IPCC’s first temperature prediction is below the lower bound of that prediction, uneducated guesswork at that.
And it is clear that the bulk of the exaggeration in the IPCC’s estimates of climate sensitivity arises from its assumption that temperature feedbacks are strongly net-positive – an assertion that is inconsistent with the formidable temperature stability exhibited (as best it can be reconstructed) in the past 64 million years. On this topic, I am most grateful to the many process engineers all of whom have been kind enough to confirm in the plainest terms that an implicit feedback loop gain of 0.64 – implicit in the IPCC’s extravagant central estimate of climate sensitivity – is far too close to the singularity in the feedback amplification equation to be in any degree plausible. Bluntly, it is a monstrous exaggeration.
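The loop-gain arithmetic invoked here can be checked directly. The Planck parameter of 0.313 K per W/m^2 is the figure quoted in this thread; the 3.7 W/m^2 doubling forcing is an assumed standard value, since the comment does not state one.

```python
# Feedback amplification: dT = lambda0 * dF / (1 - g), where g is the
# closed-loop feedback gain.
LAMBDA0 = 0.313   # Planck parameter, K/(W/m^2), as quoted in the thread
DF_2X = 3.7       # forcing at CO2 doubling, W/m^2 (assumed standard value)

def sensitivity(loop_gain):
    """Equilibrium warming at CO2 doubling for feedback loop gain g < 1."""
    return LAMBDA0 * DF_2X / (1.0 - loop_gain)

for g in (0.0, 0.5, 0.64, 0.9, 0.99):
    print(f"g = {g:4.2f}: dT = {sensitivity(g):6.1f} K")
# g = 0.64 gives ~3.2 K, roughly the IPCC central estimate cited above;
# as g nears 1 the amplification diverges, which is the "singularity"
# objection raised in the comment.
```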
Mr. Shore goes on to say that feedbacks cannot be measured separately in the real world. Quite right. That is precisely my point. And they cannot be measured collectively either; nor, either individually or collectively, can they be distinguished from forcings. Their values, and even their signs, are guesswork, and the guesses of the modelers and of the IPCC are not proving skilful when compared with what is measured in the real world, whether in the paleoclimate or in the present.
Mr. Shore says that the IPCC determines climate sensitivity not only by modeling but also by looking at the paleoclimate and at the instrumental temperature history. Yet the IPCC itself says it determines its central climate sensitivity estimate of 3.26 Celsius degrees per CO2 doubling as a multi-model mean (it was 3.5 C in the 2001 report and 3.8 C in the 1995 report). It is well known, and obvious from a reading of the IPCC’s reports, that the gravamen of its case for high climate sensitivity lies in the now-discredited models.
As we have already seen, the evidence from the paleoclimate is that notwithstanding the many shocks and forcings over the past 64 million years, the global temperature has remained remarkably stable: and that, to a scientist who properly understood the distinction between forcings and feedbacks, would indicate compellingly, and perhaps decisively, that feedbacks cannot possibly be strongly net-positive.
Likewise, the rate at which the planet has warmed since 1950, when we first began adding enough CO2 to the atmosphere to make a theoretical difference to global temperature, is equivalent to little more than 1 Celsius degree per century: yet the IPCC is predicting that there will be almost 3 Celsius degrees of warming in this century (of which precisely none has occurred so far). To get to 3 Celsius of warming, then, the IPCC is in effect positing that by the end of the century a fourfold or even fivefold increase in the warming rate will have occurred compared with what we have seen over the past 60 years: and there is really no credible scientific basis, in theory or in observation, for any such extreme acceleration in the warming rate. It is mere rodomontade: good for getting grants and headlines and for justifying the establishment of a global “government” to Save The Planet, but most unscientific.
And so to Mr. House, who, having at last accepted how unreasonable it was of him to demand that I should provide references demonstrating what my head posting had very clearly shown could not be demonstrated by any method, now moves the goalposts a few miles and asks instead for references that establish that there is such a thing as the greenhouse effect. This is off topic, as I have said before, and it is high time that the moderators prevented such vexatious attempts at hijacking proper scientific discussion. What is more, I have already answered Mr. House’s question by referring him to any elementary textbook of climatology. Let him do some reading before he does any more screeching. If there were no greenhouse effect, then my conclusion to the effect that there is no scientific basis for a high climate sensitivity to greenhouse-gas enrichment of the atmosphere would be true a fortiori, and nothing more need be said of Mr. House’s undistinguished and irrelevant contributions to this thread than that.

Michael Whittemore
April 26, 2012 7:13 am

The consensus has always been about anthropogenic climate change; it has never been about climate sensitivity. I have always wondered about the claim that 97% of climate scientists think that man is causing global warming. It seems like an amazing feat to be able to achieve that kind of statistic. Looking into the facts, it’s really only based on one study that surveyed 77 climate scientists, all of whom had, in the five years preceding the study, published more than 50% of their work on climate change. They were asked the question “Do you think human activity is a significant contributing factor in changing mean global temperatures?” 97.4% (75 of 77) answered yes. (http://tigger.uic.edu/~pdoran/012009_Doran_final.pdf)
As you can see it’s an interesting study but not the kind of information I would use during a debate. The point is though the consensus statement is not about climate sensitivity.

Greg House
April 26, 2012 7:27 am

Monckton of Brenchley says:
April 26, 2012 at 6:42 am
If there were no greenhouse effect, then my conclusion to the effect that there is no scientific basis for a high climate sensitivity to greenhouse-gas enrichment of the atmosphere would be true a fortiori,
================================================
If CO2 could not cause some warming, then its climate sensitivity (in the sense of an increase in temperature) would be ZERO, Christopher; this is so obvious.
The whole AGW thing, however, is based on the notion of CO2 warming, this is a fundamental core issue.
There is one more important thing. Your concept “there is man made global warming, but it is not that bad” will never have a sufficient positive effect, because your “not that bad” can be easily ignored, but at the same time your “there is man made global warming” effectively supports the fundamental claim of the radical warmists and even reinforces it, because they can say that even “sceptic” Monckton has no doubt.

joeldshore
April 26, 2012 7:43 am

I’ll respond to other part of the good Lord’s diatribe later. But, let me just comment on the most substantive point where he is confused. He says:

Yet, as I have demonstrated, the Planck parameter on Earth (in theory 0.267 Kelvin per Watt per square meter) is in fact 0.313 Kelvin per Watt per square meter: i.e., higher by about one-sixth than the theoretically-established value. Since the Planck parameter may be expressed as the characteristic-emission temperature divided by (four times the radiative flux at that altitude, where incoming and outgoing fluxes balance by definition), it ought to be obvious that since the flux is constant at 238 Watts per square meter it is the mean temperature that must differ from the mean temperature that had been assumed – and in an upward direction, not a downward direction.

It is true that the Planck parameter is expressible as “the characteristic-emission temperature divided by (four times the radiative flux at that altitude…)”. However, by substituting in the expression for the radiative flux in terms of the temperature, it is also expressible as 1/(4*sigma*T^3), where sigma is the Stefan-Boltzmann constant [5.67 x 10^-8 W/(m^2 * K^4)] and T is the absolute temperature. In this form, where we have completely eliminated the radiative flux from the equation and thus are not subject to any confusion over what to hold constant and what to vary, it is obvious that the Planck parameter is a decreasing function of the emission temperature.
Anyone still confused on this issue can simply verify it by playing with the numbers. Consider different emission temperatures and see how much temperature change is needed to produce a fixed (say, 4 W/m^2) increase in the emission. One will find that the higher the emission temperature, the lower the temperature change needed. The Planck parameter, as the units imply, is simply this measure of how large a temperature change is needed to produce a given change in radiative flux in W/m^2.
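Shore’s suggestion to “play with the numbers” can be sketched as a short script. This is an illustrative check only, using the standard Stefan-Boltzmann constant; the three temperatures chosen are arbitrary:

```python
# Numerical check of the Planck parameter lambda = 1/(4*sigma*T^3):
# how much temperature change is needed to add a fixed 4 W/m^2 of emission?
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def emission(T):
    """Blackbody radiative flux (W/m^2) at absolute temperature T (K)."""
    return SIGMA * T**4

def delta_T_for(T, dF=4.0):
    """Exact temperature rise needed to increase emission by dF W/m^2."""
    return ((emission(T) + dF) / SIGMA) ** 0.25 - T

for T in (220.0, 255.0, 290.0):
    lam = 1.0 / (4.0 * SIGMA * T**3)  # Planck parameter, K per W/m^2
    print(T, round(lam, 3), round(delta_T_for(T), 3))
```

The higher the emission temperature, the smaller the temperature change needed for the same 4 W/m^2, exactly as Shore says; at 255 K the parameter comes out near the 0.267 value quoted earlier in the thread.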
At any rate, all this has nothing to do with what the temperature of the moon is. The fact that Hölder’s inequality makes 270 K an upper bound on the average temperature of the moon (and that a very non-uniform temperature distribution satisfying radiative balance can have a considerably lower average temperature) is a rigorously provable mathematical statement. I am not here to debate things that can be rigorously proven mathematically. If Lord Monckton is unfamiliar with the mathematics involved, he would do well to acquaint himself with it.
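The inequality Shore invokes can be illustrated with a hypothetical two-patch surface: hold the area-averaged emitted flux fixed at the value of a uniform 270 K emitter and let the temperature distribution become non-uniform, and the mean temperature falls below 270 K. A minimal sketch, with arbitrary patch temperatures:

```python
# Toy illustration: a surface emitting the same AVERAGE flux as a uniform
# 270 K body, but at non-uniform temperature, has a LOWER mean temperature.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def mean_T_two_patches(T_hot, flux_target):
    """Half the surface sits at T_hot; solve for the cold-patch temperature
    that keeps the area-averaged emitted flux at flux_target, then return
    the area-averaged temperature."""
    F_hot = SIGMA * T_hot**4
    F_cold = 2.0 * flux_target - F_hot   # two equal patches average to flux_target
    T_cold = (F_cold / SIGMA) ** 0.25
    return 0.5 * (T_hot + T_cold)

F = SIGMA * 270.0**4  # flux of a uniform 270 K emitter
for T_hot in (280.0, 300.0, 320.0):
    print(T_hot, round(mean_T_two_patches(T_hot, F), 1))
```

The more lopsided the distribution, the further the mean temperature drops below 270 K, because flux goes as the fourth power of temperature: the uniform case is the upper bound.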

Greg House
April 26, 2012 7:48 am

Monckton of Brenchley says:
April 26, 2012 at 6:42 am
And so to Mr. House, who, …asks instead for references that establish that there is such a thing as the greenhouse effect.
===================================================
Christopher, you made a claim fundamental to the AGW concept on this thread and were asked to provide evidence in the form of “clear, distinct and exact references and links”. This is your third answer, and exactly like the two previous answers it contains ZERO clear, distinct and exact references and links.
Nobody can force you to publicly admit you do not have any, but please at least consider the possibility that your scientifically unsupported claim misleads people.
And, of course, please consider that everything logically derived from a scientifically unsupported claim is pure speculation without scientific basis.

joeldshore
April 26, 2012 7:54 am

By the way, if Lord Monckton still believes that it is mathematically legitimate to note that since the Planck parameter is given by 4*T/P where T is the absolute temperature and P is the radiative flux, then keeping P constant allows one to conclude that the Planck parameter is directly proportional to temperature, I will note the following:
We can also perfectly legitimately use the formula P = sigma*T^4 to rewrite the Planck parameter as P/(4*sigma^2 * T^7). In this form, again taking P constant, one would be forced to conclude that the Planck parameter varies as one over the temperature to the 7th power!
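Numerically, the equivalent forms do coincide once P = sigma*T^4 is substituted (taking the parameter as T/(4P), the form given in the head post); a quick illustrative check:

```python
# The algebraic forms of the Planck parameter coincide once P = sigma*T^4
# is substituted -- the differing "apparent" T-dependence is an artifact of
# pretending P and T can be varied independently.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def planck_forms(T):
    P = SIGMA * T**4                       # radiative flux consistent with T
    return (T / (4.0 * P),                 # T / (4P)
            1.0 / (4.0 * SIGMA * T**3),    # 1 / (4 sigma T^3)
            P / (4.0 * SIGMA**2 * T**7))   # P / (4 sigma^2 T^7)

for T in (220.0, 255.0, 290.0):
    a, b, c = planck_forms(T)
    assert abs(a - b) < 1e-9 and abs(b - c) < 1e-9  # all three agree
    print(T, round(a, 4))
```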

April 26, 2012 7:57 am

Terry Oldberg says:
“…claims regarding the magnitude of TECS are not falsifiable, thus lying outside science.”
Exactly right. Actually, most of what Joel Shore preaches is not falsifiable, including his belief in catastrophic AGW. Thus his beliefs are not science; they are more akin to religion. Certainly the putative climate-sensitivity number for 2xCO2 is highly contentious among actual scientists, with no resolution on the horizon.
It is entirely possible that the sensitivity number is zero [although I suspect it is a minuscule positive number]. Real-world evidence indicates that the IPCC’s 3+ºC number is preposterously huge. But if they don’t frighten the populace, there is no money in it. Therefore official arguments for the regulation of CO2 must be based upon a high WAG of climate sensitivity to CO2.

April 26, 2012 8:06 am

Lord M says:
demonstrating a climate sensitivity of 0.7 Kelvin per CO2 doubling
Henry says:
You honestly believe this?
Obviously I am on the other side of the debate from Joel Shore: I don’t believe there is any warming effect at all from the CO2….
I think I properly proved this as well (at least for myself)
http://www.letterdash.com/henryp/global-cooling-is-here
looking at the spectrum of CO2, we notice absorption in the 2 and 4 um range which implies that sunlight will be re-radiated and thus deflected by the CO2, having a cooling effect…..
hence we can even measure it as it bounces back from the moon to the dark side of earth
http://www.iop.org/EJ/article/0004-637X/644/1/551/64090.web.pdf?request-id=76e1a830-4451-4c80-aa58-4728c1d646ec
the question I have is simple: how do you know that the net effect of more CO2 is that of warming rather than cooling,
or whether the effect is perhaps not zero or close to zero?
Remember that the principal flaw in “An Inconvenient Truth” was that they “correlated” the warming of the planet with the increase in CO2, while no one could see that there was a lag of quite a few hundred years, which can simply be explained by the reaction:
heat + 2HCO3- (dissolved in sea water) =>2CO2(g) + 2OH-

pochas
April 26, 2012 8:32 am

Greg House says:
April 26, 2012 at 7:27 am
“There is one more important thing. Your concept “there is man made global warming, but it is not that bad” will never have a sufficient positive effect, because your “not that bad” can be easily ignored, but at the same time your “there is man made global warming” effectively supports the fundamental claim of the radical warmists and even reinforces it, because they can say that even “sceptic” Monckton has no doubt.”
We do want to be objective, so we must at least admit that CO2 is a “greenhouse gas” and not imitate the warmists, for whom lying is as easy as breathing (they think that successful lying shows great intelligence). We can, however, point out that we don’t allow forest fires to burn nowadays, and we don’t have world wars quite as often and we do attempt to control pollution, and weather changes quite a bit anyway, so that any small warming from CO2 would be lost in the noise. This will not impress warmists who have a vested interest in the global warming racket, but everyone else will at least listen.

April 26, 2012 8:55 am

Mr. Shore seems to be ever more confused. He talks of the Planck parameter being given by (four times the temperature) divided by the radiative flux. In fact, the Planck parameter is the temperature divided by (four times the flux). Perhaps Mr. Shore should refresh his memory from an elementary textbook of calculus before he continues this discussion and becomes still more lost.
As for Mr. House, his retreat continues. Having conceded that he was wrong to demand that I should provide references establishing the climate sensitivity to various greenhouse gases, when the manifest conclusion of my head posting was that there is no scientific basis for thus establishing climate sensitivity because feedbacks and forcings cannot be directly measured, quantified, or distinguished from one another by any empirical or theoretical method, he now additionally concedes that if his contention that there is no greenhouse effect were true then my conclusion that there is no scientific basis for the IPCC’s assumption of a high climate sensitivity would follow a fortiori. It follows that his new demand that I provide references establishing that there is a greenhouse effect is off topic. Nevertheless, I have referred Mr. House to any textbook of climatology, where he will find the greenhouse effect well described and demonstrated. Given his negative, unconstructive and intrusive attitude, which has threatened to derail this thread, I consider it unlikely that he will accept any instruction from me on the subject. So let him do his own homework. I am not a wet-nurse.

Greg House
April 26, 2012 8:59 am

pochas says:
April 26, 2012 at 8:32 am
We do want to be objective, so we must at least admit that CO2 is a “greenhouse gas”.
==============================================
Pochas, let us be objective, I agree. Objectively, both radical and moderate warmists refer to Tyndall’s experiment. Tyndall’s experiment does not prove at all that any “greenhouse gas” causes any warming; it only proves that such gases absorb and re-emit some IR radiation. That is not equivalent to causing warming, because IR radiation comes from the Sun too. It means the “greenhouse gases” also block some incoming IR radiation, so it has to be additionally proven that the net effect is warming.
So Tyndall’s experiment alone does not prove any net warming or net cooling.
Those who claim that net warming is experimentally proven are welcome to provide the evidence. But it does not look as though there is any, judging by Lord Monckton’s answers.

April 26, 2012 9:00 am

Smokey (April 26, 2012 at 7:57 am):
I urge avoidance of the assumption that TECS has a numerical value of which we are merely uncertain, for to adopt this assumption is to fabricate information. TECS is the proportionality constant in an equation that maps increases in the logarithm of the CO2 concentration to increases in the global average surface temperature at equilibrium. As the equilibrium temperature is not observable, the magnitude of the increase in the logarithm of the CO2 concentration contains no information about the magnitude of the increase in the global average surface temperature at equilibrium. This conclusion follows from the definition of “information” as the measure of a relationship between observables. Please note that when the IPCC conveys a range of possible values for TECS to a governmental policy maker, this act seems to the policy maker to provide him/her with perfect information about the outcome of his/her policy decision when in fact it provides no information. 100% of the apparent information has been fabricated.
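The mapping described above, an equilibrium temperature rise proportional to the logarithm of the CO2 concentration, can be sketched as follows. The baseline concentration and the TECS values below are purely illustrative assumptions, not figures from the thread; the point being debated is precisely that no observation can pin TECS down:

```python
import math

# Sketch of the equilibrium-sensitivity mapping under discussion:
# dT_eq = TECS * log2(C / C0), i.e. TECS is the warming per CO2 doubling.
def equilibrium_warming(C, C0=280.0, tecs=3.0):
    """Equilibrium temperature rise (K) for concentration C (ppmv) over baseline C0."""
    return tecs * math.log2(C / C0)

# A doubling (280 -> 560 ppmv) returns TECS itself, whatever TECS is assumed to be.
for tecs in (0.7, 1.5, 3.0):
    print(tecs, round(equilibrium_warming(560.0, tecs=tecs), 2))
```

Because the equilibrium temperature on the left-hand side is never observed, every choice of `tecs` produces an equally self-consistent curve, which is the sense in which the equation conveys no information about the real climate.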

Tomazo
April 26, 2012 9:01 am

Greg House says:
April 25, 2012 at 5:25 pm
“Just for the sake of science and common sense, please, ask yourself why you assume or believe that this “3.708W/m^2″ is not a fiction.”
As you can possibly conclude from Lord Monckton’s zero presented evidences for a physical proof of “CO2 forcing”, this number must have been derived from something else. If you look carefully at the core assertion about 33K “greenhouse” warming, you will notice that they simply claim the “greenhouse gasses” cause this warming. This is a fiction, Tomazo. Then they derive those “3.708W/m^2″ mathematically from their claim and then their “climate sensitivity” again from this “3.708W/m^2″. Unfortunately, it works with so many people, a lot of them are well educated, but not trained in finding well hidden fallacies.”
The beauty of virtually all the arguments put forward by the Viscount (and many other mathematicians and scientists) is using the IPCC’s own evidence, data, and math against themselves. I was simply presenting his calculations in a consolidated form. (If one assumes f(CO2)=0.15, one calculates a dT of 3.267K, which I rounded to 3.3. If one assumes a 3.3K and back-calculates f(CO2)=0.1625, or if 3K, f(CO2)=0.0488, all three in units of W/Km^2).
The point is that the forcing claimed attributable to CO2 by the IPCC is not knowable, given the many other unknowns. Indeed, the aggregate forcings are highly coupled and interactive, and likely vary over time from negative to positive (clouds come to mind as the best example). In short, there are “too many unknowns and not enough equations”: too many degrees of freedom to solve the mathematical problem, which is what the models try to do, necessarily incorporating a vast array of quite tenuous assumptions. This is what negates the value of the models for policy-making.
You can find the coefficients he used in the forcings (except CO2, of course) here:
http://www.gfdl.noaa.gov/bibliography/related_files/bjs0601.pdf
I did not challenge the 3.708 W/m^2 number, but merely assumed we accept it without question from the proponents of AGW to avoid debate over it. You are right, though, as it is the seventh number that is challenged by the Viscount. However, it comes from the IPCC, is used in virtually all their work and reference papers, and may be found here (a simple two-pager, which mentions “climate sensitivity” 13 times):
http://www.skepticalscience.com/climate-change-little-ice-age-medieval-warm-period-intermediate.htm
Notes: 1) they assign temperature units to a dimensionless unit, 2) the obvious skew in most of the probability curves presented (including something called “Expert Elicitation”?), 3) two (three if you include the “Climate Sensitivity” composite distribution figure) statements citing 3C (or K, which I suspect was rounded down from 3.3, and which your response of 33K appears to have left out a key decimal between the 3’s), 4) they don’t cite the assumed solar flux(es) in the derivations, nor sources, and 5) (with apologies to Lord Monckton) two statements including the dreaded “…peer-reviewed…”
It will be worth your while to spend some time on the site in general, as it lists (at this time so far, perhaps indicating how easy it is to challenge alarmism) 173 rebuttals to skeptic arguments. You can take your challenge about the 3.708 number to the “consensus” climate scientist community (note there are climate scientists that don’t agree with the “consensus”.) It also reads much like the Federal Register, and has many internal inconsistencies in its own arguments. As I spent three whole days on it (much is in the comments), I was reminded of the following definitions: “Endless Loop: n., see Loop, Endless” and “Loop, Endless: n., see Endless Loop.”
Finally, I would like to ask you, Mr House, what number should be assigned to the radiative forcing for a doubling of atmospheric CO2 concentration? I’d be happy to use it and see how the system responds. Thanks!

Tomazo
April 26, 2012 9:21 am

Apologies for the 3 spelling and 1 numeration errors in my post above, for which I presume the readers will make allowance.

Greg House
April 26, 2012 9:27 am

Tomazo says:
April 26, 2012 at 9:01 am
Finally, I would like to ask you, Mr House, what number should be assigned to the radiative forcing for a doubling of atmospheric CO2 concentration? I’d be happy to use it and see how the system responds. Thanks!
====================================================
Well, the answer should be “the true number”, which implies “scientifically established”. No such scientifically established number is known to me. If anyone claimed there is one, I would ask him to prove it before I happily use it.

rgbatduke
April 26, 2012 9:32 am

The bibliography at http://www.knowledgetothemax.com provides citations to the work of the right Ronald Christensen. The one who is an academic statistician is the wrong Ronald Christensen. Using ideas that were first developed by the right Ronald Christensen plus others that were developed by predecessors of Christensen that include Cardano, Clausius, Boltzmann, Gibbs, Lebesgue, Shannon and Jaynes, it is possible to build a model without resorting to the method of heuristics. Optimization replaces the method of heuristics. Absent Christensen’s contributions, this would be impossible.
Well, perhaps, although following the link to your website and skimming through the bibliography I note that you are completely ignorant of Cox and don’t cite Jaynes at all (aside from an oblique reference to “1957” in the text body). Since maximum entropy is due to Jaynes, and since the theoretical foundation for it is due to Cox (not Shannon) this seems like a pretty serious omission. Since Jaynes provides explicit examples of reasoning on the basis of maximum entropy that seem to pretty much precisely duplicate the “results” of Christensen (except for being decades earlier) I’m still trying to see what ideas were developed “first” by Christensen.
Jaynes even presented his original results in terms of a robot model, precisely to demonstrate that they were algorithmic in nature and did not rely on heuristics per se, although neither you, Christensen, Jaynes, or anyone else can to the best of my knowledge “truly” eliminate heuristics. You would have to be a computer programmer to understand why, although you have the right idea when you observe that there are an infinite number of ways to parse a model. Correlation is not causality, and one cannot infer causality without a model and one cannot validate a model beyond noting agreement — so far — with observation in some sort of consistent network.
While I absolutely agree that science is an optimization problem, and write about it myself extensively, I do not agree that it went unrecognized by Jaynes and, for that matter, by Cox and Shannon and lots and lots of other people by the mid-60s.
rgb

pochas
April 26, 2012 9:42 am

Greg House says:
April 26, 2012 at 8:59 am
“So, Tyndall’s experiment alone does not prove any net warming or net cooling. ”
My own mental error bar for the effect of CO2 does in fact include the value “zero”. But I recognize that it is commonly accepted by all, including Lord Monckton and virtually every scientist who publishes in this field, that there is a greenhouse effect, and I do understand the theory behind it, so I do not dismiss it. But there are those who believe that CO2 simply displaces water vapor as a greenhouse gas, so that the net effect is minimal. My own notion is that the effect of CO2 is observable as elevated minimum temperatures at high latitudes, which benefits polar bears and Siberian eskimos. Also, increased stratospheric emissivities at low latitudes may mean less violent weather, and of course my lawn needs cutting more often.

April 26, 2012 10:32 am

Brave Sir Hadfield ran away,
Bravely ran away, away!
When danger reared its ugly head,
He bravely﻿ turned his tail and fled.
And gallantly he chickened out.
Bravely taking to his feet
He beat a very brave retreat,
Bravest of the brave, potholer54!
[with apologies to Monty Python]
Why are Joel Shore and Peter Hadfield too chicken to have a face-to-face, moderated debate like the Oxford Union debate? Because taking cowardly potshots from the safety of the internet is their style.

Tomazo
April 26, 2012 10:40 am

Thank you for your prompt research and definitive answer Mr House. At this juncture I see Lord Monckton is entirely correct: you have successfully defeated yourself in a fair and balanced debate, and I refuse to debate philosophy/science with unarmed opponents.

April 26, 2012 11:41 am

Tomazo says
you (Mr. House) have successfully defeated yourself in a fair and balanced debate
Henry says
Actually, totally coincidentally, just a little earlier I made the same argument as Greg House did, here:
http://wattsupwiththat.com/2012/04/23/why-there-cannot-be-a-global-warming-consensus/#comment-968281
to which there is no reply (from the dear Lord, or from anyone else for that matter),
simply because no one has the test results to show us exactly how much warming and how much cooling is caused by the CO2.
Now it seems (from my research) that it is not more warming that is caused by more CO2;
that much, to me, is for sure…
http://www.letterdash.com/henryp/global-cooling-is-here

April 26, 2012 11:53 am

rgbatduke:
I gather that we agree upon the existence of an inductive logic wherein the inferences that are made by a model are selected by optimizations in the entropies of the various inferences rather than by heuristics.

Greg House
April 26, 2012 1:56 pm

pochas says:
April 26, 2012 at 9:42 am
But I recognize that it is commonly accepted by all including Lord Moncton and virtually every scientist that publishes in this field that there is a greenhouse effect and I do understand the theory behind it, so I do not dismiss it.
==================================================
Pochas, as you can see, we have a distinct case on this thread where a scientist who publishes in this field cannot support his claim about an experimental proof that CO2 “causes some warming” with scientific evidence, clear references, links, etc. It indicates a strong possibility that no evidence exists.
If you still rely on head-counting rather than on evidence, that is your right, but then your opinion lies beyond scientific debate, I am afraid.

cba
April 26, 2012 3:48 pm

joel shore,
I went back and looked at some old model runs I did four years ago. The accuracies compare to around 1% against Archer’s modcalc. For your 1 km thickness at the altitude where T is around 255 K, the effective radiating result at that altitude is a net absorption of around 12 W/m^2 of power from the outgoing stream. The surface is emitting about 384 W/m^2, and the outgoing emission from very high up in the atmosphere is around 268 W/m^2 (and this is only for clear skies). As I stated, your characteristic emission altitude is pure BS and is completely nonphysical. At any altitude in clear skies, one has what has been emitted from the surface plus what has been emitted and absorbed at every altitude below the one of interest. In cloudy skies that are completely opaque, one has the cloud-top surface emission at its characteristic temperature. In between, it gets far too messy to discuss here. For you to try to use Stefan’s law inside the atmosphere is a complete and total misuse and misunderstanding of everything physical involved.
I’m still awaiting with abated breath (and starting to feel dizzy, lol; maybe I should switch to baited breath, although that seems a bit fishy) your explanation of how one can look at what the Earth has done on average, with all the forcings and feedbacks that currently exist, and then split some of them out and keep them while tossing out all the others.

Brendan H
April 27, 2012 2:03 am

Monckton of Brenchley: “However, in the context, she was attempting to justify the notion that anthropogenic global warming will be serious enough to be catastrophic by asserting there was a consensus about it.”
Which is in agreement with my claim: “…in an informal situation, such as a blog, arguments are not often fully and explicitly expressed”. So we’re on the same page there.
“Are the fallacies that I have said are fallacies fallacies? Yes, they are: one can look them up in any textbook of logic.”
Having decided that the Classicist lacks the requisite authority, Lord Monckton throws him under the bus and reaches for the big gun: the textbook. But Monckton fails to explain why the textbook has more authority than the Classicist.
“Brendan H, in attempting to assert that the consensus fallacy is not really a fallacy, and that no part of an argument need be made explicit before the logician considers whether the argument is valid…”
Strawmen. I have made no defence of the consensus argument, and nor have I made any link between an implicit argument and what a “logician” might consider to be valid.

cba
April 27, 2012 5:29 am

joel shore,
I guess it’s about time to explain things concerning the sensitivity, since it’s evident you’re not going to figure it out on your own. The sensitivity I presented was a legitimate sensitivity, deg C of rise per W/m^2 of change in atmospheric absorption, as opposed to the scientific abomination of the political IPCC organization’s deg C rise per CO2 doubling, which they call a sensitivity. As I did it, my sensitivity number contains all existing forcings and feedbacks present, not just some of them. What you failed totally to grasp is that when you apply any new absorption value, it too includes both forcing and feedback based upon the total temperature change. I gave you the hint by mentioning the H2O feedback.
Simply multiplying the 3.7 W/m^2 forcing for a CO2 doubling by that sensitivity will give you a 0.8 deg C rise from the CO2, but not the feedbacks due to a delta-T rise of slightly more than 0.8 deg C from all the added W/m^2. I chose H2O vapor, as that is what the IPCC claims is the largest and most important feedback. Were the H2O-vapor feedback due to a full doubling, it would be between 2 and 3 times the CO2-doubling value. However, not even a 5 deg C change in the entire atmospheric column would result in even a 30% change, and that much additional H2O vapor can only provide about 3.1 W/m^2 of increased absorption. You can either run an iterative loop to determine the contributions that balance, or you can assume a value and iterate by comparison to determine how much will occur; even 5 deg C leaves the vast majority of the W/m^2 of absorption missing. If you assume a total rise of 2 deg C, the H2O-vapor absolute-humidity increase becomes 13% at constant RH. The net result of that is around 1 W/m^2 of added feedback, enough to bring the total to about a 1 deg C rise, leaving approximately 1 deg C that must be accounted for by the other, lesser feedbacks.
Hint: if the other lesser feedbacks can provide as much added absorption as the H2O feedback plus the forcing, then they are not lesser feedbacks; and if they could provide that much, the system is totally unstable, as the feedbacks would not need a forcing to drive the system to the rail and keep it there. Example: try balancing a pencil on its tip. Of course, one needs another iteration or more to refine the temperature, since at a 1 deg C total rise in T there will not be a 13% increase in H2O absolute humidity, and so H2O vapor cannot contribute 0.22 deg C to the total.
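The iterative balancing cba describes, looping until the feedback contribution and the temperature rise are mutually consistent, can be sketched as a fixed-point iteration. The no-feedback sensitivity below matches the 0.22 deg C per W/m^2 figure quoted in the comment, but the water-vapor feedback coefficient is an illustrative stand-in, not cba’s number:

```python
# Minimal fixed-point iteration for a linear feedback:
# dT = lam * (forcing + feedback_w_per_K * dT), looped to convergence.
def converge_dT(forcing, lam=0.22, feedback_w_per_K=0.5, tol=1e-6, max_iter=1000):
    """Iterate the temperature rise (K) for a forcing (W/m^2) under a
    feedback that adds feedback_w_per_K W/m^2 per K of warming."""
    dT = lam * forcing                       # zero-feedback starting point
    for _ in range(max_iter):
        dT_new = lam * (forcing + feedback_w_per_K * dT)
        if abs(dT_new - dT) < tol:
            return dT_new
        dT = dT_new
    raise RuntimeError("did not converge: feedback gain too close to 1")

print(round(converge_dT(3.7), 3))  # 3.7 W/m^2 forcing for a CO2 doubling
```

The loop converges because the loop gain (lam times the feedback coefficient) is well below 1; this is also the numerical face of cba’s pencil-on-its-tip instability warning, since a gain at or above 1 would make the iteration run away instead of settling.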

rgbatduke
April 27, 2012 6:43 am

I gather that we agree upon the existence of an inductive logic wherein the inferences that are made by a model are selected by optimizations in the entropies of the various inferences rather than by heuristics.
Absolutely, with a few minor quibbles of a technical nature. And I must say that it is a pleasure to encounter somebody that knows of this — it is far from commonly known outside of a relatively small community of statisticians, computer scientists, and physicists. Also, if I seem overzealous in my insistence in the proper order of precedence forgive me, but as an academic myself I was taught to be very careful about this; there are still other names in the chain of the key idea of inference (Polya, for example, who wrote a marvellous three volume treatise on its common use in mathematics, where it is in a sense highly unexpected).
You would have to be familiar with some of the examples he gives, especially of e.g. unproven “conjectures” (my favorite being the Goldbach conjecture) to see why one cannot completely abandon heuristics even in mathematics, let alone in physical science — inference cannot itself formulate a hypothesis. Or perhaps philosophy: in order to apply a process of inductive reason to (as you put it) “decode” a message from nature, one must make the heuristic assumption that there is a message there to be decoded and the further heuristic assumption that your methodology is capable of successfully decoding it. (Personally, I would have said that we seek to compress the information string presented to us by nature, and seek a peculiarly optimal compression — the sun rising N times may or may not be a message from Nature that it will rise the N+1 time, but we can certainly compress the N times of our past experience into a short rule — the sun rises every day — and over time seek to further compress this rule with many related rules while continuing to test its validity as a consistent (and hence “predictive”) hypothesis.)
Since you clearly like this sort of thing, I have one final work to commend to you. A polymath named David MacKay, who (I believe) teaches this and that at Cambridge, has written an absolutely fantastic book:
http://www.inference.phy.cam.ac.uk/itprnn/book.html
“Information theory, Inference and Learning Algorithms”. MacKay has kindly made the entire text of the book available for free on his website as a PDF, or it is available under Google Books. I found it so very useful that I bought a hard copy, in part to support the author. It is not for the faint of heart — to fully appreciate the content requires moderate competence in at least computer science and statistics — but if you are interested in the “encoding” or “compression” aspect of learning and inference as echoes of algorithmic information-theoretic processes in computer science, you should absolutely love this book.
One of the most interesting conclusions of the book — and one that I believe is quite correct — is that this is how our brains work! Our brains are Bayesian associative inference engines, all the way down to the neural level. MacKay just echoes both Jaynes and Cox here to some extent — but he supports it with the theory of neural networks, which is all by itself rather fascinating.
And now this hijacked thread will be returned to its original owners, although this sort of discussion is ALWAYS apropos in the context of the validity of various scientific conclusions…
rgb

April 27, 2012 9:08 am

Thanks for the interesting citations. The Ron Christensen book that I think you would find most interesting is “Multivariate Statistical Modelling” (ISBN 0-938-87614-7). A number of Ron’s books address the issue of precedence including the role of Jaynes. If you need to reach Ron, you could do so through his company Website ( http://www.entropylimited.com/ ).
My own works in this area include the series of three articles which the climatologist Judith Curry published in her blog a year ago under the title of “The Principles of Reasoning.” The URLs are http://judithcurry.com/2010/11/22/principles-of-reasoning-part-i-abstraction/ , http://judithcurry.com/2010/11/25/the-principles-of-reasoning-part-ii-solving-the-problem-of-induction/ and http://judithcurry.com/2011/02/15/the-principles-of-reasoning-part-iii-logic-and-climatology/ . If you read them and have comments I’d like to see them.
I tried to steer our conversation away from precedence because I had the larger purpose of trying to warn Lord Monckton of an error in an argument that he and a number of others (including IPCC authors) have often made. The error lies in their assumption that the equilibrium climate sensitivity (TECS) has a numerical value even though we don’t exactly know what this value is; warmists argue that the value is high while skeptics argue that it is low.
If TECS has a numerical value, then to know this value is to have perfect information about the rise in the equilibrium temperature at Earth’s surface given the rise in the atmospheric CO2 concentration. Actually, by the definition of “information” as the measure of a relationship between observables, one has no information, for the equilibrium temperature is not an observable. 100% of the information that seems to be created by the equation which maps the rise in the CO2 concentration to the rise in the equilibrium temperature by way of TECS is fabricated. This phenomenon falls in the bailiwick of inductive logic, wherein I suspect that Monckton’s background is weak to nonexistent. If he tunes into our conversation, perhaps he’ll receive and understand my warning and do something constructive with it.

April 27, 2012 7:41 am

I have been following the discussion of the role of induction in the scientific method with some interest. This kind of discussion, though off topic, is at least a genuine attempt on the part of all its contributors to establish the objective truth, unlike the fatuities of the “no-greenhouse-effect” trolls, whose ignorance of everything from common courtesy to the manner of conducting a rational scientific debate is shameful.
It may be worth briefly outlining the formal structure of mathematical induction, now a well-developed and powerful technique. For instance, Andrew Wiles used it to demonstrate Fermat’s Last Theorem.
A demonstration by induction is in two parts: the basis for the induction, and the induction step. An illustration may help. We shall prove that the sum of all the positive integers to any given integer n is always equal to n(n+1)/2.
First, the basis for the induction. For n = 1, it is trivial to calculate that n(n+1)/2 = 1, and it is self-evident that the sum of all integers 1 to n is 1. We have, therefore, demonstrated the basis for the induction.
Next, the induction step. Here, it is necessary to discover whether, if the equation holds good for some integer k, it must also hold good for k+1.
Substituting k for n, the sum of all integers to k is k(k+1)/2 = (k^2+k)/2. To this we add k+1, which is equal to (2k+2)/2. The sum to k+1 is then (k^2+3k+2)/2.
Next, we check whether we get the same result by substituting k+1 for n. Sure enough, in that event the sum of all integers to k+1 is (k+1)(k+2)/2 = (k^2+3k+2)/2, exactly as before. So the induction step is demonstrated, whereupon the original proposition is proven true.
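The two steps of the proof can also be checked numerically. The sketch below is a sanity check, not a substitute for the induction argument itself: it verifies the basis, spot-checks the induction step for a range of k, and compares the closed form against an explicit summation.

```python
# Numerical check (not a proof) of the closed form sum(1..n) = n*(n+1)/2.
def closed_form(n: int) -> int:
    return n * (n + 1) // 2

# Basis: n = 1.
assert closed_form(1) == 1

# Induction step, checked for a range of k: if the formula gives the sum
# to k, then adding k+1 must give the formula's value for k+1.
for k in range(1, 1000):
    assert closed_form(k) + (k + 1) == closed_form(k + 1)

# Direct comparison against an explicit summation for good measure.
for n in range(1, 200):
    assert closed_form(n) == sum(range(1, n + 1))
```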
If this kind of rigor were deployed in climate science, the usual suspects would never have tried to claim “consensus” about how much warming a doubling of atmospheric CO2 concentration may cause.

rgbatduke
April 27, 2012 9:19 am

If this kind of rigor were deployed in climate science, the usual suspects would never have tried to claim “consensus” about how much warming a doubling of atmospheric CO2 concentration may cause.
To put it another way, if this were a different kind of problem it would very likely be analyzed in a very different way. Hypotheses that address most scientific questions are assessed with a neutral payoff. That is, when we consider whether or not Dark Matter has objective existence in our Universe, the expected payoff from the answer is a perturbation of “zero”. A positive perturbation, but one cannot reasonably infer that humans will receive any direct economic benefit from the knowledge one way or another, because there is ample evidence that, as far as we are concerned, Newton’s Law of Gravitation works well enough to describe our local environment, possibly supplemented with some additional knowledge of general relativity. Knowledge always has some non-negative value and hence we pursue it, but the answer does not matter in the sense that it is ever likely to have cash value. Consequently one assesses the probable answers (via an inductive optimization process on a high-dimensional space) “objectively”, on a nearly flat payoff terrain.
However, when we analyze questions whose answers come with some sort of objective value — either positive or negative — the optimization process changes. Instead of simply (metaphorically) fitting the data to the various consistent hypotheses and assigning a weighted degree of belief based on the best, most consistent fits (while leaving plenty of room for our ignorance to alter the profile of assigned degree of belief as it is filled in by further work) we use game theory — or economic theory — to optimize a payoff based on the answers.
A most famous example of how this process perverts reason is Pascal’s Wager:
http://en.wikipedia.org/wiki/Pascal%27s_Wager
Since this is an almost perfect metaphor for the climate debate, permit me to step through the propositions on a comparative basis, one by one. I don’t know how the formatting will work out — I’m not going to build an html table and debug this — but bear with me as I try to make the comparison clear in the context of the argument as understood by the lay person, one who cannot themselves objectively assess such “evidence” as exists.
———Pascal ———————– Climate
1) God exists or he doesn’t —————– CO_2 will cause a catastrophe or it won’t
2) This is a binary game ——————— This is a binary game
3) Reason is insufficient ——————— Reason is insufficient
4) Playing is not optional ——————– Playing is not optional
5) Weigh the gain and loss —————— Weigh the gain and loss
6) Select “Belief in God” ——————— Select “Belief in CAGW”
This is necessarily a slight oversimplification, but it is adequate to understand the “religious” nature of the CAGW argumentation and its connection not to evidence alone but to a payoff (expected gain versus expected loss). Analyzed in still more detail, this suffices to explain many other aspects of the debate, because just as the priesthood derives a considerable immediate benefit from promoting belief in God, so the many advocates of CAGW not infrequently derive a certain immediate benefit from promoting the general public belief in their favored hypothesis. These “vested interests” must be carefully considered (in both directions) when attempting to assess the best strategy for winning the game.
To put the argument in words (for those that may not know of it) — as living beings, we have no choice but to bet on whether or not God exists. Reason alone is insufficient to positively answer the question — arguments can be made for or against the proposition based on evidence, but the evidence at hand can be interpreted in different ways (lacking a comprehensive theory capable of explaining it all).
If we bet yes, worship God properly, and God exists, we gain an “infinite” payoff — eternal life in paradise — that is far greater than any possible cost associated with the belief during life. If we are wrong, our losses are limited to that finite cost of e.g. supporting the priesthood with money and wasting time “worshipping” a nonexistent being. Therefore a rational being should select belief because even a remote chance of an infinite payoff is worth the slow bleed of finite costs over a finite time.
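The asymmetry the wager exploits can be put in a few lines of arithmetic. This is a minimal sketch of the expected-value argument only; the probability, cost, and payoff figures below are illustrative assumptions, not numbers from anywhere in this discussion.

```python
# Expected-payoff sketch of Pascal's Wager with illustrative numbers.
def expected_payoff(p: float, payoff_if_true: float, cost_of_belief: float) -> float:
    """Expected value of choosing belief: p * payoff minus a certain finite cost."""
    return p * payoff_if_true - cost_of_belief

p = 1e-6        # tiny assumed probability that the proposition is true
cost = 1000.0   # finite lifetime cost of belief

# As the claimed payoff grows without bound, the expected value of belief
# eventually turns positive for ANY fixed p > 0; that is why the argument
# is immune to evidence bearing on p.
for payoff in (1e3, 1e6, 1e9, 1e12):
    ev = expected_payoff(p, payoff, cost)
    print(f"claimed payoff {payoff:>10.0e}: EV of belief = {ev:,.2f}")
```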
The analogy to climate science is hopefully quite obvious. We live on the planet, and hence have no choice but to “bet” on whether or not our actions will destroy its ecosystem and hence impact us. Reason alone is insufficient to positively answer the question of whether or not our actions have or will have such an effect — there are hypotheses and there is evidence, but none of the hypotheses are capable of precisely explaining the totality of the evidence. In the case for God, one can always find cases where somebody cursed God and was struck by lightning two years later, but that fails to explain the millions of people who curse God or believe in the wrong God who remain stubbornly unstruck, and thus we have a hard time fully understanding the mechanism by which divine retribution or reward operates. We have similar difficulties in the context of climate science — a violent storm happens and this is attributed to “anthropogenic global warming”, but when we try to explain why similarly violent storms happened in the past when there was presumably less “anthropogenic global warming” with roughly equal frequency we find that we cannot fully explain the mechanism that generates such storms sufficiently precisely to be able to resolve the causal connection one way or another.
But the bottom line — literally — is the analogy in the cost and benefit of belief or disbelief in the hypothesis. The “C” in CAGW is pure Pascal. If you believe in AGW and alter your life accordingly (and force everybody else to alter theirs in accordance with this belief as well, since unlike religion the choice cannot be left up to the individual if the measures justified by your belief are to be universally adopted), paying countless small sums in the form of higher energy costs and reduced quality of life for yourself and everybody else, and you are wrong, you’re out at most some thousands of dollars amortized over many years, and can usually rationalize even that expenditure as an investment of sorts in a “better world”.
On the other hand, if you disbelieve in AGW and fail to alter your life or refuse to go along with everybody else who believes in it, it supposedly will lead to a catastrophe. The catastrophe is open ended and hyperbolic. One of my favorites is:
The analogy of this absurd “prediction” with “going to hell” is quite obvious. There is no evidence for the existence of hell. There is no evidence that it is vaguely, remotely plausible that the Earth’s oceans will ever boil from the Greenhouse effect no matter what we do. But in both cases “hell” assigns an infinite cost to disbelief — eternal torment or the complete extinction of the ecosystem — and even reason has a hard time altering the cost-benefit analysis because one has to be certain that the hell hypothesis is incorrect to regain positive expected payoff against an infinite loss if you are wrong.
Pascal’s Wager is just one aspect of the statistical analysis of extreme and/or unlikely events. Students of decision theory and inference are also well advised to read e.g.
http://en.wikipedia.org/wiki/Black_swan_theory
to discover why inference on the basis of finite experience is often not only wrong but catastrophically, categorically wrong, unless one makes a strictly probabilistic interpretation of “truth”. Low-frequency, high-impact events are often completely unpredictable in the context of ordinary science or reason or probability theory, but can have an enormous impact on human affairs when a high payoff (positive or negative) attends their occurrence. An extremely interesting question might be whether the human race should be more interested in spending money and resources to ensure its probable survival in the event of black-swan “extinction grade” events such as a major asteroid impact or a gamma-ray burst from a nearby exploding star (such as Betelgeuse, due to go supernova any million years now) instead of spending them to prevent Hansen’s “boiling oceans”. There is no evidence that ocean boiling is accessible by any reasonable pathway associated with human endeavor or geological accident (it has never happened in the past, even during times of extreme CO_2 content in the atmosphere). There is direct evidence that asteroid collisions happen and can cause the extinction of 90% or more of all land species, and further that they occur predictably enough, at the rate of one per several hundred million years on precisely that scale (plus far more numerous, less extreme events at a much greater rate).
That’s why I appreciate what you are trying to do, Christopher — there are two issues at stake here. The first is the “flat” unweighted science, which is far from settled, because climate is a very difficult problem with very high dimensionality and reliable evidence available for an appallingly short time scale by any measure (even the entire thermometric record is the blink of an eye in geological time). The second is the cost-benefit analysis, which is horribly distorted and presented to the general public as Pascal’s Wager in egregiously and unmistakably religious terms. The most dire of consequences are confidently predicted, and presented as having costs so enormous that they divert the eye away from the even more enormous costs of the measures being taken to prevent them, in spite of the fact that their own theories conclude that those measures will in the end be utterly inadequate to avoid the catastrophe.
Repent ye sinners, but bear in mind that by now even if you repent — even if we all repent — judgement day cometh, and fire and brimstone shall be rained down on believer and unbeliever alike, for our sins. Only if we commit collective suicide, destroying human civilization itself and the bulk of the human population and returning to life as simple hunter-gatherers who eat our food raw, can catastrophe be averted, if catastrophe is attendant upon a doubling of CO_2 by 2100. Only if we expend tens of trillions of dollars — a significant fraction of the GDP of the entire human race — and damn much of that population to a life of wretched poverty in the process can hell be even ameliorated or delayed.
Thus saith the prophet Hansen and his acolytes, Mann, Jones, Briffa, Bradley, Hughes, and others of the priesthood whose fame, reputation, and livelihood entirely depend on the truth of these extreme predictions. And cursed be all of ye deniers who dare to question the Truth of their vision or to assert that the merest glimpse at the entire paleoclimatological record utterly contradicts their claim of imminent catastrophe brought about by higher CO_2 levels, making it a most implausible conjecture.
rgb

rogerknights
April 27, 2012 1:03 pm

Hi Mindbuilder, rgb (Robert Brown), Oldberg, and Chris:
Here’s something on induction I serendipitously found 15 minutes ago while Googling Images for something quite different. Maybe you’d care to comment?
———–
Philosophical Terms and Methods
Vocabulary Describing Arguments
Contents
Valid Arguments
Sound Arguments
Persuasive Arguments
Conditionals
Necessary and sufficient conditions
Consistency
Most of the arguments philosophers concern themselves with are–or purport to be–deductive arguments. Mathematical proofs are a good example of deductive argument.
Most of the arguments we employ in everyday life are not deductive arguments but rather inductive arguments. Inductive arguments are arguments which do not attempt to establish a thesis conclusively. Rather, they cite evidence which makes the conclusion somewhat reasonable to believe. The methods Sherlock Holmes employed to catch criminals (and which Holmes misleadingly called “deduction”) were examples of inductive argument. Other examples of inductive argument include: concluding that it won’t snow on June 1st this year, because it hasn’t snowed on June 1st for any of the last 100 years; concluding that your friend is jealous because that’s the best explanation you can come up with of his behavior, and so on.
It’s a controversial and difficult question what qualities make an argument a good inductive argument. Fortunately, we don’t need to concern ourselves with that question here. In this class, we’re concerned only with deductive arguments.

rogerknights
April 27, 2012 1:12 pm

PS: In particular, can any of you give me references to places where the issue of the “controversial and difficult question what qualities make an argument a good inductive argument” can be found? TIA.

April 27, 2012 3:36 pm

rogerknights:
A model is a procedure for making inferences. Each time an inference is made, there are many candidate inferences that could be made. Logic contains the rules by which the one candidate that is “correct” may be discriminated from the many that are “incorrect.” The problem of how to make this discrimination is called the “problem of induction.”
A solution to this problem is facilitated by the fact that each inference has a unique measure. The measure of an inference is the missing information in it for a deductive conclusion, the so-called “entropy.” It follows that the problem of induction can be solved by optimization. In particular, the correct inference is the one that maximizes the entropy or, depending upon the type of inference, the one that minimizes the entropy under constraints expressing the available information.
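The entropy-maximization criterion described above can be illustrated with a toy case. This sketch is my own illustrative example, not Oldberg’s: with no information beyond normalization, the maximum-entropy distribution over six outcomes is the uniform one, and any more committal distribution carries less entropy (i.e. it fabricates information the constraints do not supply).

```python
import math

def entropy(p):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# With no constraints beyond normalization, the maximum-entropy
# distribution over six outcomes is uniform, at log2(6) ~ 2.585 bits.
uniform = [1 / 6] * 6
skewed = [0.5, 0.1, 0.1, 0.1, 0.1, 0.1]

assert abs(entropy(uniform) - math.log2(6)) < 1e-12
# Any skew away from uniform lowers the entropy: the skewed distribution
# asserts more than the available information warrants.
assert entropy(skewed) < entropy(uniform)
```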
The IPCC’s climate models were not built under principles of logical reasoning. Instead, intuitive rules of thumb called “heuristics” were used in the discrimination of the one correct inference from the many incorrect ones. A consequence is that the models are riddled with logical errors. Among the most serious is that when a maker of governmental policy on CO2 emissions believes the IPCC has provided him/her with information about the outcomes of his/her policy decisions, he/she in fact has no information at all. While there is no basis for making policy, it seems to the policy maker as though there were a solid one. This error has the capacity to cost us US$100 trillion.

rgbatduke
April 27, 2012 1:27 pm

PS: In particular, can any of you give me references to places where the issue of the “controversial and difficult question what qualities make an argument a good inductive argument” can be found? TIA.
Dear rogerknight, sir,
Please grab Jaynes’ Mobil Lectures. If, at the end of reading them (they are quite readable even for a lay person, not too technical), you do not feel Enlightened, I would be amazed.
As I also noted, you might want to grab my own online draft:
http://www.phy.duke.edu/~rgb/axioms.pdf
which more or less precisely answers your question, with considerable examples and discussion. Polya’s books on inference in mathematics are also highly illuminating — even mathematicians rely almost entirely on heuristics and inference to formulate the theorems they later, sometimes, manage to prove deductively. And sometimes they don’t. The Riemann hypothesis and the Goldbach conjecture appear to be true but (so far at least) have not been proved. And Gödel tells us not to be too surprised if they turn out to be true and not provable, as he proved that true, unprovable propositions exist in any sufficiently complex theory.
Cox’s monograph is also quite excellent, as is Boole’s Investigation into the Laws of Thought. Finally, if you are fairly competent in math, stats, computation or something similarly technical, you might try to tackle MacKay’s online book, but it is not terribly simple and presupposes competence at the level of at least an undergraduate major in physical science, math, stats, or computation. Even so, you can probably appreciate parts of it, though the math is heavy going at times. It was intended as a college text for upper-level majors or lower-level grad students, after all.
Hope this helps. Personally, I think all lawyers and many other kinds of scientist or economist or engineer would benefit from understanding Bayesian probability theory and its connection to plausible reasoning along the lines illustrated by Jaynes and in Axioms. Examples in Axioms make it clear how shifty the sands of argumentation really are.
rgb

Tomazo
April 27, 2012 1:59 pm

rgb – The link below is to another well-written (albeit quite dated) paper that also speaks to your nicely described parallels & rationales between religion and AGW believers.
http://econot.com/page4.html
Among other points, it hits upon the irony of the virtually complete parallel between the GAIA-ists and the Judeo-Christian “Garden of Eden” in the “Mankind is Evil and must be Redeemed” postulate.
As rgb asks, redemption at what price? Belief in God and in the science of an apparently demonstrable, robust aggregate climate sensitivity is clearly cheaper than giving up our lifestyles to live in caves again.
Apologies to the moderator for being off-topic (sorta).

April 27, 2012 2:12 pm

By an unfortunate accident of timing, Mr. Oldberg made the unfounded – and false – assumption that my background in inductive logic is “next to non-existent” shortly after I had posted a comment explaining in some detail precisely how mathematical induction works. Inferentially, he had not seen my comment. Since my previous posting at WUWT had revealed that I have been Classically trained in logic, as would perhaps have been evident to anyone who had done me the kindness of reading that posting, it was unwise of Mr. Oldberg to make (and still more unwise of him to state) an assumption contrary to the evident facts.
However, the present head posting was designed precisely to allow people who are unfamiliar with logic – such as Mr. Oldberg – to appreciate by mathematical reasoning why it is that climate sensitivity to a doubling of CO2 concentration cannot be determined by the methods now used in the climate models.
Notwithstanding my careful explanation of the many unknown and unknowable unknowns that constitute the climate-sensitivity equation, Mr. Oldberg – as though I had not written a word – puzzlingly goes on to accuse me of making what he calls the “error” of assuming that “equilibrium climate sensitivity has a numerical value”.
Of course climate sensitivity has a numerical value: whether Mr. Oldberg likes it or not, it is a real quantity expressed in Kelvin per doubling of CO2 concentration. The point of my head posting, however, was to demonstrate that the IPCC’s methods cannot help us to narrow down the value of that crucial quantity to a small enough interval – whether expressed as a central estimate flanked by error-bars or as a probability distribution – to provide useful information for policymakers. There is no error of logic in my approach.
Mr. Knights makes the puzzling assertion that “most arguments are not deductive but inductive”, though he does not define either term. So, for the sake of dispelling the fog of confusion that seems to surround these elementary concepts in logic: deduction is the inference of a conclusion from stated premises (if any of the premises is unstated, then the logician must be diligent enough to state it). Deductive logic, when done properly, is the appropriate, valid (because internally self-consistent) and sound (because the premises are true) inference of a particular conclusion by reasoning from the general to the particular.
Induction, by contrast, is the valid and sound inference of a general conclusion from particular instances. Mr. Knights is quite wrong to state that “inductive arguments do not attempt to establish a thesis conclusively”. Mathematical induction, an instance of which I demonstrated in an earlier comment, is explicitly designed to establish propositions conclusively: and, for instance, as I pointed out in that comment, Andrew Wiles relied upon mathematical induction in conclusively demonstrating Fermat’s Last Theorem. In all species of formal logic, whether deductive or inductive, the intent is to establish the truth of a conclusion by arguing validly (so that the premises logically entail the conclusion) from premises that are true. An argument in which the premises are true and validly entail the conclusion is known as a sound argument, and its conclusion is necessarily true.
It is a matter of great regret that logic is no longer universally taught to the governing class, as it was until my generation in the United Kingdom. The climate scare would never have gotten off the ground if most politicians had had some formal training in how to think.

Reply to  Monckton of Brenchley
April 27, 2012 4:41 pm

Monckton of Brenchley (April 27, 2012 at 2:12 pm):
You appear to believe that to know about mathematical induction is to know about inductive logic. This is far from the case.
The possibility of an inductive logic was created by the discovery of a mathematical theory of information. Had you mastered this logic you would know that “information” is the measure of the intersection between two state-spaces, both of them observable. As an equilibrium temperature is not an observable, the rise in the CO2 concentration can provide no information about the rise in the equilibrium temperature. If the equilibrium climate sensitivity has a particular value, though, the rise in the CO2 concentration provides perfect information about the rise in the equilibrium temperature. The apparently perfect information is entirely fabricated.

April 27, 2012 2:38 pm

I was taught logic at a very rigorous school, and I well remember inductive vs deductive logic classes.
As Lord Monckton says, inductive logic is from the particular to the general, while deductive logic is from the general to the particular.
An example of inductive logic would be: “I see a white swan. Therefore, all swans are white.” Inductive logic is fine when first formulating a conjecture, but it is deductive logic that separates the wheat from the chaff; true from false.
Anyone can employ inductive logic to support their conjecture. We see it done here all the time: “There were tornadoes, ergo, global warming.” Inductive logic is for lazy minds.

April 27, 2012 3:54 pm

Smokey (April 27, 2012 at 2:38 pm):
You’ve confused induction with inductive logic. Also, your conclusion that all swans are white does not logically follow from your premise of an observed white swan, for information about the colors of the unobserved swans is missing.
Finally, in examining the method of construction of a model for the presence of logical errors, it is necessary to use inductive as well as deductive logic. For example, use of deductive logic alone would not catch your error in concluding that all swans are white.

April 27, 2012 4:01 pm

Terry Oldberg,
I’ve confused nothing; I gave examples. Could my comment [“inductive logic is from the particular to the general, while deductive logic is from the general to the particular”] have been any more simple and straightforward?
I gave two examples of inductive logic, which take a particular fact and extend it in general. Simples. I don’t know why you don’t get it.

rgbatduke
April 27, 2012 4:41 pm

An example of inductive logic would be: “I see a white swan. Therefore, all swans are white.” Inductive logic is fine when first formulating a conjecture, but it is deductive logic that separates the wheat from the chaff; true from false.
Sadly, this is not correct. Inductive logic is not concerned with “true versus false”; it is concerned with “probably true versus probably false”. Deductive logic is altogether about one thing and one thing only — the consistency of a proposition with a set of assumed truths — the axioms of the theory. Deduction cannot give you the axioms.
Again, you can learn all of this by reading Jaynes’ “Probability Theory, The Logic of Science”, or his Mobil lectures, or a perhaps simpler rendition in my axioms draft.
rgb

April 27, 2012 4:50 pm

Mr. Oldberg wrongly asserts that Smokey has “confused induction with inductive logic”. The process by which the truth of a general conclusion is tested against premises that are particular is called “induction”, and the species of logic that is being deployed is “inductive logic”. The two phrases mean much the same: and, even if they did not, it is poor technique to assert that someone has confused two terms without going on to dispel the confusion by defining the two terms.
Mr. Oldberg’s assertion that “it is necessary to use inductive as well as deductive logic” to establish that a fallacy is a fallacy needs qualification and clarification. The fallacy of inappropriate argument from the particular to the general, i.e. a dicto secundum quid ad dictum simpliciter, is a well-known species of fallacy, as is the fallacy of inappropriate argument from the general to the particular, i.e. a dicto simpliciter ad dictum secundum quid. One can identify these fallacies without recalling which of them is a fallacy of deduction and which a fallacy of induction. Two instances will demonstrate this.
“CO2 causes warming. Warming melts ice. The Arctic ice is melting. Therefore CO2 is causing the global warming that melts the Arctic ice.” Here, all three premises are true if one takes the trend since the satellites were first able to measure polar ice extent reliably. Yet the conclusion is false, because the argument from the particular to the general in this instance is inappropriate, for the obvious reason that there are other causes that may have melted the Arctic ice, and for the still more obvious reason that the Antarctic ice has been growing, indicating that the warming (however caused) is not truly global.
“Global warming intensifies hurricanes. Hurricane Katrina was a bad one. Therefore global warming caused Hurricane Katrina.” Here, neither of the premises is true, and nor is the conclusion, and in any event the argument is invalid because it is an inappropriate argument from the general to the particular. According to the 30-year record of hurricane activity maintained by Dr. Ryan Maue at Florida State University, hurricane activity worldwide is at its least since the satellites began watching; and theory suggests that a reduction in temperature differentials that drive hurricanes will occur if there is warming. Secondly, when Hurricane Katrina made landfall it was not a particularly bad one: it was Category 3 (on a scale of 1-5). The reason why it did so much damage was the failure of the [Democrat] administration in New Orleans to make sure that the Corps of Engineers kept the levees up to strength. Thirdly, as the IPCC itself concedes, one cannot ascribe individual extreme-weather events to global warming: nor is it possible to maintain that there have been more extreme-weather events in the last few years: and even if there had been more extreme weather, it cannot have been caused by global warming because there has not been much of it to write home about for 15 (or, according to a recent report) 20 years.
Mr. Oldberg goes on to define a heuristic, with unfortunate imprecision, as an “intuitive rule of thumb”. In mathematics, however, where precision in the definition of terms is just as important as it is in logic generally, a heuristic is a rigorous and disciplined parallel thought-experiment that illuminates or assists in confirming a result by an empirical method. It is neither “intuitive” (indeed, it can sometimes be counter-intuitive) nor a “rule of thumb”. An example will point the moral and adorn the tale.
Twice in the 2007 Fourth Assessment Report, the IPCC publishes a graph of annual global mean surface temperature anomalies since 1850. Each graph is overlaid by four overlapping least-squares linear-regression trend fits, all ending in 2005 but each commencing at a different date: 150, 100, 50, and 25 years before 2005 respectively. Each successive trend-line has a steeper slope. From this, the IPCC draws the fallacious conclusions a) that the rate of global warming is itself accelerating (an intriguing special instance of ignoratio elenchi by which the start-dates for the trend-lines are arbitrarily chosen to generate the desired result, whereas a different choice of start-points or end-points would produce precisely the opposite result); and b) that we are to blame for the imagined (and imaginary) acceleration in the warming rate (a howling non sequitur). How a supposedly trustworthy intergovernmental body can come up with such noisome garbage, and get away with refusing time and again to correct it (but without being able to defend it, for it is indefensible) beats me.
As a heuristic, one can apply the IPCC’s flagrantly bogus technique to a sine-wave. By definition, a sine-wave has a zero trend, which is why it is useful as a heuristic to illuminate the question whether the IPCC’s technique is acceptable. One can readily choose four overlapping trend-lines, each declining more steeply than its predecessor, suggesting that the sine wave exhibits not merely a falling trend but an accelerating falling trend. Shifting the entire sine-wave half a cycle to the left or right allows one to choose four more overlapping trend-lines, each steeper than its predecessor, suggesting that the sine wave exhibits not merely a rising trend but an accelerating rising trend. Each of these results is incompatible with the known zero trend of a sine wave, and the two results also contradict one another. Accordingly, since the data are accurate and each trend-line is calculated correctly, these two contradictions demonstrate rigorously that it is the technique of allowing the statistician to select arbitrary start-points and end-points that is defective. Accordingly, the IPCC ought not to have used the bogus technique, and the conclusions it draws by the use of that technique are inappropriate and unjustifiable. This is a good example of the power – and the rigor – of the heuristic in mathematics.
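The thought-experiment can be reproduced in a few lines of code. The window lengths and common end-point below are my own illustrative choices, not the exact figures used in any presentation of the heuristic: fitting least-squares trends over successively shorter windows that all end at a rising zero-crossing of a sine wave yields successively steeper slopes, and shifting the wave half a cycle flips the sign of every slope.

```python
import math

def lsq_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed against xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

def window_slope(x_end, length, phase=0.0, step=0.001):
    """Trend slope of a (possibly phase-shifted) sine wave over [x_end - length, x_end]."""
    xs = [x_end - length + i * step for i in range(int(length / step) + 1)]
    ys = [math.sin(x + phase) for x in xs]
    return lsq_slope(xs, ys)

x_end = 6 * math.pi  # a rising zero-crossing of sin(x)
lengths = [2 * math.pi, math.pi, math.pi / 2, math.pi / 4]

slopes = [window_slope(x_end, L) for L in lengths]
print("trend slopes, longest window to shortest:", slopes)
# Each shorter trend-line is steeper: an "accelerating rise" conjured
# out of a wave whose true long-run trend is zero.
assert all(a < b for a, b in zip(slopes, slopes[1:]))

# Shift the wave half a cycle and every slope flips sign exactly,
# yielding the contradictory "accelerating fall".
shifted = [window_slope(x_end, L, phase=math.pi) for L in lengths]
assert all(abs(s + t) < 1e-9 for s, t in zip(slopes, shifted))
```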
Intriguingly, I was demonstrating this heuristic in Brisbane, Australia, a few months ago. Professor Bob Carter, one of the sharpest intellects on the skeptical side of the global warming debate, was sitting in the audience. He almost fell off his chair. He, too, uses the sine-wave as a heuristic to demonstrate the shoddy bogosity of the principal conclusion of the IPCC’s 2007 report. Yet neither of us had been aware that the other had independently developed the same thought-experiment. Great minds … [fools] …
If any readers are having some difficulty visualizing this thought-experiment and would be interested, they should add a comment about it and, if there is enough interest, I shall offer Anthony a posting about it (if he has not carried one already).

April 27, 2012 4:51 pm

Smokey (April 27, 2012 at 4:01 pm):
You’ve swapped terms. It is “induction” that proceeds from the particular to the general, not “inductive logic”; inductive logic contains the rules that make an induction valid. Similarly, it is “deduction” that proceeds from the general to the particular, not “deductive logic”; deductive logic contains the rules that make a deduction valid.

April 27, 2012 5:08 pm

Terry Oldberg,
I’m not going to get into a hairsplitting debate over what you believe inductive logic is or isn’t. I’ll just leave you with my handy on-line dictionary definition. The entire definition is:
inductive |inˈdəktiv | adjective
1 characterized by the inference of general laws from particular instances.
I’ll go with Mr. Webster.

April 27, 2012 5:14 pm

Smokey (April 27, 2012 at 5:08 pm):
Webster is right. You were wrong.

April 27, 2012 5:22 pm

Terry Oldberg:
Monckton is right. You were wrong.☺

April 27, 2012 5:48 pm

Smokey (April 27, 2012 at 5:22 pm):

April 27, 2012 5:58 pm

Mr. Oldberg continues to confuse matters: one hopes that this is not deliberate. He writes that I “appear to believe that to know about mathematical induction is to know about inductive logic. This is far from the case”. Yet, whether Mr. Oldberg likes it or not, mathematical induction is a particularly rigorous and demanding species of inductive logic, which nicely illustrates how it is possible to reach rigorous conclusions by inductive methods.
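To make the point about rigor concrete, here is the classic schoolbook instance of mathematical induction (my own illustration, not drawn from the exchange), in which a conclusion about every natural number is established rigorously:

```latex
% Illustrative only: the standard induction proof that
% 1 + 2 + \cdots + n = n(n+1)/2 for every natural number n.
\textbf{Base case} ($n=1$): $1 = \tfrac{1 \cdot 2}{2}$.

\textbf{Inductive step}: assume $\sum_{k=1}^{n} k = \tfrac{n(n+1)}{2}$. Then
\[
\sum_{k=1}^{n+1} k
  \;=\; \frac{n(n+1)}{2} + (n+1)
  \;=\; \frac{(n+1)(n+2)}{2},
\]
so the formula holds for $n+1$ and hence, by induction, for all $n \ge 1$.
```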
Mr. Oldberg then sneers – yet again, without the slightest evidence – to the effect that I have not “mastered this logic”. It is very clear, however, that it is he who has not mastered it. For instance, he asserts that, “as an equilibrium temperature is not an observable, the rise in the CO2 concentration can provide no information about the rise in the equilibrium temperature.” Of course it can. We know by experiment that, at certain characteristic wavelengths in the long wave, radiation interacts with CO2 molecules in the atmosphere, causing warming. From these experiments we know that a rise in CO2 concentration will be likely, all other things being equal, to cause some warming. As my head posting here demonstrates, we do not have enough information to determine how much warming will occur: but we do have enough information from the increase in CO2 concentration to know that we should expect some warming. That, whatever else it is, is not “no information”.
Likewise, until recently, the far side of the Moon was not “observable”: however, it was legitimate to infer from the state of the visible face of the Moon that its invisible face was very likely to be composed of regolith pitted with impact craters, with a thermal conductance, an albedo and a mean surface temperature similar to those of the visible face. When we were finally able to have a look, we found – not greatly to our surprise – that the far side of the Moon was not, after all, made of green cheese. It was indeed made of regolith with characteristics very similar to those of the Moon’s visible face. Once again, what we could legitimately infer from the observable face of the Moon about its unobservable face is not “no information”.
Mr. Oldberg goes on to compound his error: “If the equilibrium climate sensitivity has a particular value, though, the rise in the CO2 concentration provides perfect information about the rise in the equilibrium temperature.” Since the climate system is deterministic, equilibrium climate sensitivity does indeed have a particular value; but since the climate, though deterministic, is non-periodic, and our information about its initial state at any chosen starting moment is insufficiently precise, we do not know what that value is. Notwithstanding what Mr. Oldberg and the IPCC claim, knowing the increase in CO2 concentration tells us little about what that value is, except that it is likely to be positive. But that, though it is not “no information”, is not “perfect information” either.
Mr. Oldberg should appreciate that reciting half-understood definitions in information theory is not at all the same thing as understanding – after due thought – what these definitions mean. In the present instance, Mr. Oldberg has deluded himself by failing to stand back and think about what he is saying.
The bottom line remains the bottom line: my head posting was intended to offer a mathematical alternative to the formal, logical approach with which Mr. Oldberg and one or two others continue to have such difficulty. There are too many unknown and unknowable unknowns in the fundamental equation of climate sensitivity, and particularly in the parameters that define the overall feedback gain factor: therefore, the methods deployed in the models relied upon by the IPCC cannot determine climate sensitivity to a sufficient precision to be policy-relevant, and the interval of estimates presented by the IPCC is accordingly guesswork. It is as simple as that, and no amount of inexpert waffle about the intersection of two observable state-spaces will alter that simple and, in my submission, undeniable conclusion.

Reply to  Monckton of Brenchley
April 27, 2012 6:45 pm

Monckton of Brenchley (April 27, 2012 at 5:58 pm):
The proof of my assertion is by the definition of the term “information.” Your attempted rebuttal of this assertion is by redefinition of the same term. You should understand that in the mathematical theory of information, the term “information” acquires a precise definition that is identical to the definition I have used but different from the definition you have used in your rebuttal.
Also, in the course of our debate you have repeatedly employed the ad hominem fallacy. For the future, please refrain from muddying the waters: address the argument with which you disagree, not the qualities of the person who is making it.

April 27, 2012 6:13 pm

Terry Oldberg says: