By Christopher Monckton of Brenchley
In a previous post, I explained that many of the climate-extremists’ commonest arguments are instances of logical fallacies codified by Aristotle in his Sophistical Refutations 2300 years ago. Not the least of these is the argumentum ad populum, the consensus or head-count fallacy.
The fallacy of reliance upon consensus, particularly when combined with the argumentum ad verecundiam, the fallacy of appealing to the authority or reputation of presumed experts, is more likely than any other to mislead those who have not been Classically trained in mathematics or in formal logic.
To the Classicist, an argument founded upon any of the Aristotelian logical fallacies is defective a priori. Nothing more need be said about it. However, few these days are Classicists. Accordingly, in this post I propose to explain mathematically why there can be no legitimate consensus about the answer to the central scientific question in the climate debate: how much warming will occur by 2100 as a result of our sins of emission?
There can be no consensus because all of the key parameters in the fundamental equation of climate sensitivity are unknown and unknowable. Not one can be directly measured, indirectly inferred, or determined by any theoretical method to a precision sufficient to give us a reliable answer.
The fundamental equation of climate sensitivity determines how much global warming may be expected to occur once the climate has settled back to a presumed pre-existing state of equilibrium after we have perturbed it by doubling the atmospheric concentration of CO2. The simplifying assumption that temperature feedbacks are linear introduces little error, so I shall adopt it. For clarity, I have colored the equation’s principal terms:
Climate sensitivity at CO2 doubling (blue) equals the product of the CO2 forcing (green), the Planck parameter (purple) and the feedback gain factor (red).
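Since the equation image may not reproduce here, its content, written out in the notation used later in this post (λ0 for the Planck parameter, fi for the individual feedbacks), is:

ΔT2x = ΔF2x × λ0 × 1 / (1 − λ0 Σ fi)

The three factors are the green, purple and red terms respectively.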
The term in green, ΔF2x, is the “radiative forcing” that the IPCC expects to occur in response to a doubling of the concentration of CO2 in the air. Measurement and modeling have established that the relation between a change in CO2 concentration and a corresponding change in the net down-minus-up flux of radiation at the top of the climatically-active region of the atmosphere (the tropopause) is approximately logarithmic. In other words, each additional molecule of CO2 exerts less influence on the net radiative flux, and hence on global temperature, than its predecessors. The returns diminish.
To determine the radiative forcing in response to a CO2 doubling, one multiplies the natural logarithm of 2 by an unknown coefficient. The IPCC’s first and second Assessment Reports set it at 6.3, but the third and fourth reduced it by a hefty 15% to 5.35. The CO2 forcing is now thought to be 5.35 ln 2 = 3.708 Watts per square meter. This value was obtained by inter-comparison between three models: but models cannot reliably determine it. Both of the IPCC’s values for the vital coefficient are guesses.
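The arithmetic is easily checked (a sketch only; the coefficients are the two IPCC values quoted above, not measurements of mine):

```python
import math

# CO2 doubling forcing: dF2x = k * ln(2), with k the IPCC coefficient.
# 6.3 is the FAR/SAR value; 5.35 the TAR/AR4 value.
for k in (6.3, 5.35):
    print(f"k = {k}: dF2x = {k * math.log(2):.3f} W/m^2")
```

With k = 5.35 this reproduces the 3.708 Watts per square meter figure in the text.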
The term in purple is the Planck or zero-feedback climate-sensitivity parameter, denominated in Kelvin per Watt per square meter of direct forcing. This is one of the most important quantities in the equation, because both the direct pre-feedback warming and, separately, the feedback gain factor depend upon it. Yet the literature on it is thin. Recent observations have indicated that the IPCC’s value is a large exaggeration.
The Planck parameter is – in theory – the first differential of the fundamental equation of radiative transfer about 3-5 miles above us, where incoming and outgoing fluxes of radiation are equal by definition. The measured radiative flux is 238 Watts per square meter. The radiative-transfer equation then gives us the theoretical mean atmospheric temperature of 255 Kelvin at that altitude, and its first differential is 255 / (4 x 238), or 0.267 Kelvin per Watt per square meter. This value is increased by a sixth to 0.313 because global temperatures are not uniformly distributed. However, it is also guesswork, and the current Lunar Diviner mission suggests it is a considerable overestimate.
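The arithmetic just described can be reproduced in a few lines (a sketch using only the figures quoted above):

```python
T = 255.0   # K, theoretical mean temperature at the emission altitude
F = 238.0   # W/m^2, measured radiative flux there

lambda0 = T / (4 * F)            # first differential of the T^4 law: ~0.268
lambda0_adj = lambda0 * 7 / 6    # "increased by a sixth": 0.3125, i.e. ~0.313
print(f"{lambda0:.3f}  {lambda0_adj:.4f}")
```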
Theory predicts that the Moon’s mean surface temperature should be around 270 Kelvin. However, Diviner has now found the mean lunar equatorial temperature to be 206 K, implying that mean lunar surface temperature is little more than 192 K. If so, the theoretical value of 270 K, and thus the lunar Planck parameter, is a 40% exaggeration.
If the terrestrial Planck parameter were similarly exaggerated, even if all other parameters were held constant the climate sensitivity would – on this ground alone – have to be reduced by more than half, from 3.3 K to just 1.5 K per CO2 doubling. There is evidence that the overestimate may be no more than 20%, in which event climate sensitivity would be at least 2.1 K: still below two-thirds of the IPCC’s current central estimate.
If there were no temperature feedbacks acting to amplify or attenuate the direct warming caused by a CO2 doubling, then the warming would simply be the product of the CO2 radiative forcing and the Planck parameter: thus, using the IPCC’s values, 3.708 x 0.313 = 1.2 K.
But that is not enough to generate the climate crisis the IPCC’s founding document orders it to demonstrate: so the IPCC assumes the existence of several temperature feedbacks – additional forcings fn denominated in Watts per square meter per Kelvin of the direct warming that triggered them. The IPCC also imagines that these feedbacks are so strongly net-positive that they very nearly triple the direct warming we cause by adding CO2 to the atmosphere.
The term in red in the climate-sensitivity equation is the overall feedback gain factor, which is unitless. It is the reciprocal of (1 minus the product of the Planck parameter and the sum of all temperature feedbacks), and it multiplies the direct warming from CO2 more than 2.8 times.
Remarkably, the IPCC relies upon a single paper, Soden & Held (2006), to establish its central estimates of the values of the principal temperature feedbacks. It did not publish all of these feedback values until its fourth and most recent Assessment Report in 2007.
The values it gives are: Water vapor feedback fH2O = 1.80 ± 0.18; lapse-rate feedback flap = –0.84 ± 0.26; surface albedo feedback falb = 0.26 ± 0.08; cloud feedback fcld = 0.69 ± 0.38 Watts per square meter per Kelvin. There is also an implicit allowance of 0.15 Watts per square meter per Kelvin for the CO2 feedback and other small feedbacks, giving a net feedback sum of approximately 2.06 Watts per square meter of additional forcing per Kelvin of direct warming.
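For the record, the quoted central values do sum to the stated figure (a sketch; the 0.15 entry is the implicit allowance just mentioned):

```python
# IPCC central estimates of the feedbacks, W/m^2 per K of direct warming.
feedbacks = {
    "water vapour":  1.80,
    "lapse rate":   -0.84,
    "surface albedo": 0.26,
    "cloud":         0.69,
    "CO2 and other minor (implicit)": 0.15,
}
print(round(sum(feedbacks.values()), 2))   # net feedback sum
```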
Note how small the error bars are. Yet even the sign of most of these feedbacks is disputed in the literature, and not one of them can be established definitively either by measurement or by theory, nor even distinguished by any observational method from the direct forcings that triggered them. Accordingly, there is no scientific basis for the assumption that any of these feedbacks is anywhere close to the stated values, still less for the notion that in aggregate they have so drastic an effect as almost to triple the forcing that triggered them.
Multiplying the feedback sum by the Planck parameter gives an implicit central estimate of 0.64 for the closed-loop gain in the climate system as imagined by the IPCC. And that, as any process engineer will tell you, is impossible. In electronic circuits intended to remain stable and not to oscillate, the loop gain is designed not to exceed 0.1. Global temperatures have very probably not departed by more than 3% from the long-run mean over the past 64 million years, and perhaps over the past 750 million years, so that a climate system with a loop gain as high as two-thirds of the value at which violent oscillation sets in is impossible, for no such violent oscillation has been observed or inferred.
Multiplying the 1.2 K direct warming from CO2 by the unrealistically overstated overall feedback gain factor of 2.8 gives the IPCC’s implicit central estimate of 3.3 K for the term in blue, which is the quantity we are looking for: the equilibrium warming in Kelvin in response to a doubling of CO2 concentration.
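Putting the coloured terms together, the whole chain of arithmetic described above can be reproduced in a few lines (a sketch using the IPCC-derived values quoted in this post):

```python
import math

dF2x  = 5.35 * math.log(2)   # CO2 doubling forcing (green), ~3.708 W/m^2
lam0  = 0.313                # Planck parameter (purple), K per W/m^2
f_sum = 2.06                 # net feedback sum, W/m^2 per K

direct = dF2x * lam0          # pre-feedback warming, ~1.2 K
loop_gain = lam0 * f_sum      # closed-loop gain, ~0.64
G = 1 / (1 - loop_gain)       # feedback gain factor (red), ~2.8
dT2x = direct * G             # equilibrium sensitivity (blue), ~3.3 K
print(f"{direct:.2f} K direct, gain {G:.2f}, {dT2x:.2f} K at equilibrium")
```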
To sum up: the precise values of the CO2 radiative forcing, the Planck parameter, and all five relevant temperature feedbacks are unmeasured and unmeasurable, unknown and unknowable. The feedbacks are particularly uncertain, and may well be somewhat net-negative rather than strongly net-positive: yet the IPCC’s error-bars suggest, quite falsely, that they are known to an extraordinary precision.
It is the imagined influence of feedbacks on climate sensitivity that is the chief bone of contention between the skeptics and the climate extremists. For instance, Paltridge et al. (2009) find that the water-vapor feedback may not be anything like as strongly positive as the IPCC thinks; Lindzen and Choi (2009, 2011) report that satellite measurements of changes in outgoing radiation in response to changes in sea-surface temperature indicate that the feedback sum is net-negative, implying a climate sensitivity of 0.7 K, or less than a quarter of the IPCC’s central estimate; Spencer and Braswell (2010, 2011) agree with this estimate, on the basis that the cloud feedback is as strongly negative as the IPCC imagines it to be positive; etc., etc.
Since all seven of the key parameters in the climate sensitivity equation are unknown and unknowable, the IPCC and its acolytes are manifestly incorrect in stating or implying that there is – or can possibly be – a consensus about how much global warming a doubling of CO2 concentration will cause.
The difficulties are even greater than this. For the equilibrium climate sensitivity to a CO2 doubling is not the only quantity we need to determine. One must also establish four additional quantities, all of them unmeasured and unmeasurable: the negative forcing from anthropogenic non-greenhouse sources (notably particulate aerosols); the warming that will occur this century as a result of our previous enrichment of the atmosphere with greenhouse gases (the IPCC says 0.6 K); the transient-sensitivity parameter for the 21st century (the IPCC implies 0.4 K per Watt per square meter); and the fraction of total anthropogenic forcings represented by non-CO2 greenhouse gases (the IPCC implies 70%).
Accordingly, the IPCC’s implicit estimate of the warming we shall cause by 2100 as a result of the CO2 we add to the atmosphere this century is just 1.5 K. Even if we were to emit no CO2 at all from 2000-2100, the world would be just 1.5 K cooler in 2100 than it will otherwise be. And that is on the assumption that the IPCC has not greatly exaggerated the sensitivity of global temperature to CO2.
There is a final, insuperable difficulty. The climate is a coupled, non-linear, mathematically-chaotic object, so that even the IPCC admits that the long-term prediction of future climate states is not possible. It attempts to overcome this Lorenz constraint by presenting climate sensitivity as a probability distribution. However, in view of the uncertainty as to the values of any of the relevant parameters, a probability distribution is no less likely to fail than a central estimate flanked by error-bars.
If by this time your head hurts from too much math, consider how much easier it is if one is a Classicist. The Classicist knows that the central argument of the climate extremists – that there is a (carefully-unspecified) consensus among the experts – is an unholy conflation of the argumentum ad populum and the argumentum ad verecundiam. That is enough on its own to demonstrate to him that the climate-extremist argument is unmeritorious. However, you now know the math. The fact that not one of the necessary key parameters can be or has been determined by any method amply confirms that there is no scientific basis for any assumption that climate sensitivity is or will ever be high enough to be dangerous in the least.
pochas says:
April 28, 2012 at 7:39 pm
There is not much IR coming from the sun; it’s almost all shortwave.
==============================================
It is not about “much”, it is about “more” or “less” NET blocked/trapped.
I’d like to see the scientific evidence, particularly the experimental evidence, that Lord Monckton claimed to exist. I am still looking forward to his scientific answer.
I’ve looked at and printed out the chapter on inductive logic in the Stanford Encyclopedia of Philosophy that Greg House quoted. Yikes!
I looked further in that encyclopedia and discovered something more to my purpose – its chapter on Informal Logic. Others might be interested too. It’s here:
http://plato.stanford.edu/entries/logic-informal/#Fal
rogerknights (April 29, 2012 at 6:53):
On the topic of inductive logic, the Stanford Encyclopedia of Philosophy is 49 years out of date.
Steven Mosher, Brendan H, tom, joeldshore and rgbatduke:
I add these comments to the answers from Lord Monckton to you.
Steven Mosher:
At April 24, 2012 at 12:20 am you assert:
“the models used to estimate the forcing due to doubling ( 3.7w) are line by line radiative transfer models.”
No, that is incorrect. If you were right, then the models would use the value (or the narrow range of values) indicated by the line-by-line analyses.
In fact, the models are tuned by curve fitting and climate forcing is a fitted parameter. Therefore, each model uses a unique value of climate sensitivity that provides a best fit for that model and is compensated by a unique value of aerosol negative forcing.
(ref. Kiehl JT, Twentieth century climate model response and climate sensitivity, GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007)
Kiehl’s Figure 2 can be seen at
http://img36.imageshack.us/img36/8167/kiehl2007figure2.png
Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:
”Figure 2. Total anthropogenic forcing (W m⁻²) versus aerosol forcing (W m⁻²) from nine fully coupled climate models and two energy balance models used to simulate the 20th century.”
It shows that over the twentieth century
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m² to 2.02 W/m²
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m² to -0.60 W/m².
In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4 for the twentieth century.
But they would all use the same value of “Total anthropogenic forcing” over the twentieth century if they all used the same value of forcing for a doubling of CO2.
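A toy illustration of the compensation Kiehl documents (the numbers here are hypothetical, chosen only to show the mechanism): a model with a larger total anthropogenic forcing can reproduce the same twentieth-century warming by adopting a more strongly negative aerosol forcing, so the net forcings agree even though the components differ widely.

```python
# Hypothetical models (W/m^2): (non-aerosol anthropogenic forcing, aerosol forcing)
models = {
    "model A": (2.6, -0.9),
    "model B": (2.0, -0.3),
}
for name, (f_other, f_aer) in models.items():
    print(name, "net forcing:", round(f_other + f_aer, 2))   # both 1.7 W/m^2
```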
Brendan H:
At April 24, 2012 at 4:13 am you assert:
“The claim here is that those who lack a certain type of training in mathematical or formal logic are likely to be misled by logical fallacies, and even more so by the arguments from consensus and authority.
The underlying assumption is that those who have acquired a certain type of training in mathematical or formal logic are less likely to be misled by logical fallacies; that is, they are better placed to identify logical fallacies than those not so blessed.
And this, of course, is an argument from authority, where those who are “Classically trained” in mathematical or formal logic are considered to have a better understanding of the subject than those not so well trained.”
Nonsense!
There is no argument from authority. There is only the simple truth that people are more likely to recognise logical fallacies when they have learned what logical fallacies are.
Similarly, people who have learned motor mechanics are more likely to recognise a carburettor than those who have not had such training.
There would be an argument from authority if it were claimed that only the classically trained can recognise a logical fallacy or that only trained motor mechanics can recognise a carburettor.
All your other posts are similar to your first one (on which I have commented); i.e. they are illogical. But I do not consider it worth the bother to refute each of them.
tom:
At April 24, 2012 at 6:58 am you assert:
“It is the people who for their own personal self interest and maintenance of their narrow but tenuous positions will do most anything to keep the rest of us from acting in sane and rational to the dangers we all face.”
I agree and I think we should do all we can to oppose them. Therefore, please provide sufficient information to enable me to oppose you.
joeldshore:
At April 26, 2012 at 7:43 am you wrote:
“I’ll respond to other part of the good Lord’s diatribe later. But, let me just comment on the most substantive point where he is confused.”
You, joeldshore, claim Lord Monckton wrote “diatribe” and “is confused”!?
You said that! Now, that is funny; really, really funny. Ploughing through the comments, I needed some comic relief and you provided it. Thank you.
Rgbatduke:
I enjoyed your comments pertaining to Pascal’s Wager, but I prefer the ‘famous last words’ of the great unbeliever who when on his deathbed was asked by the Priest, “Will you now turn to God by renouncing the Devil and all his works?”. He replied, immediately before taking his last breath,
“This is not the time to be making enemies”.
Richard
PS Christopher, over the years since the IPCC made the false argument about “accelerating” global warming, I too have used the sine wave demonstration of the IPCC error. Indeed, I have used it a few times on WUWT.
Pochas says
However, if global warming is supposed to be caused entirely by the increasing CO2 concentration and the warming has stopped while the CO2 continues to increase, then the CO2 is not really causing the warming.
Henry@Pochas
Well, thanks, that was sort of my argument, even if there were only a partial influence of the CO2; namely, most recently, after identifying that nobody really has the right answers on the CO2, see here:
http://www.letterdash.com/HenryP/the-greenhouse-effect-and-the-principle-of-re-radiation-11-Aug-2011
I set out to determine the exact warming rates. My results from a statistical analysis of 44 weather stations show that it has actually been cooling since 1994.
http://www.letterdash.com/henryp/global-cooling-is-here
Others have reported no warming in the last decade or so; I report a cooling rate of 0.2 degrees K since 2000.
You will ask: whom do you believe? I say: rather believe me, because my method is almost independent of calibration, whilst others still have to struggle with that (how do you do your calibration in a satellite?). 0.2 degrees C is not a lot.
Either way, there is no warming,
hence the CO2 increase does not cause any warming whatsoever;
we do not need any other reasoning than the temperature measurements performed, and an analysis of those,
and what we can deduce from them.
In his post of April 27, 2012 at 2:12 pm, Monckton of Brenchley concludes (second paragraph) that I am “unfamiliar with logic.” Monckton has not yet revealed the argument by which he reaches this conclusion. The following facts are inconsistent with it and would seem to falsify it.
In the course of earning three degrees in engineering (Cornell, Michigan, Santa Clara), 10 years of college credits and a professional engineer’s license, I was repeatedly tested on my ability to reach conclusions from facts and logic, and passed. As a practicing engineer, I rose to the level of manager in charge of large, complex scientific research and product development projects. During a stint as a computer programmer, I was promoted to the highest rank based in part on my knack for designing logical algorithms. As a graduate student in electrical engineering, I studied logic and was repeatedly tested on my ability to design logical circuitry. As a project manager, I successfully designed a long succession of scientific studies; the design of a study taxes one’s competency in the inductive as well as the deductive branch of logic.

I’ve studied mathematics and mathematical statistics at the level of courses designed for graduate students; mathematics is based in deductive logic, while mathematical statistics contains inductive as well as deductive elements. In the peer-reviewed literature, I’ve published on theoretical and applied statistics and on advanced information theory. I headed the theoretical research program of the electric utilities of the United States on nuclear reactor core materials, using my extremely rare command of the inductive logic in bringing this program to a successful conclusion. In the peer-reviewed literature of climatology, I’ve published on the topic of “logic and climatology.”

At Stanford University, I’ve lectured on applications of the inductive logic in materials science research. I’ve lectured on the inductive logic to meetings of the American Chemical Society, American Nuclear Society, American Institute of Chemical Engineers and American Society for Quality. To meetings of the American Society for Non-destructive Testing, I’ve lectured on an application of argumentum ad verecundiam in their field.
In the peer reviewed literature, I’ve published on violations of the laws of the excluded middle and non-contradiction in the research program of the U.S. Nuclear Regulatory Commission. Writing in the blogosphere, I’ve exposed numerous logical failings in the IPCC’s argument for CAGW. In a peer reviewed article, I’ve demonstrated that the methodology of the IPCC’s inquiry into global warming was not scientific.
HenryP says:
April 29, 2012 at 11:01 am
Henry@Pochas
Well, thanks, that was sort of my argument, even if there were only a partial influence of the CO2; namely, most recently, after identifying that nobody really has the right answers on the CO2, see here:
http://www.letterdash.com/HenryP/the-greenhouse-effect-and-the-principle-of-re-radiation-11-Aug-2011
=================================================
Very good article, Henry.
Put back the Water Cycle and there is no Greenhouse Effect – the AGWScienceFiction claim of 33°C of warming is a con, A CON; it is missing the middle: the Water Cycle, which cools the Earth by 52°C to bring the temperature down to 15°C.
Rgbatduke:
I enjoyed your comments pertaining to Pascal’s Wager, but I prefer the ‘famous last words’ of the great unbeliever who when on his deathbed was asked by the Priest, “Will you now turn to God by renouncing the Devil and all his works?”. He replied, immediately before taking his last breath,
“This is not the time to be making enemies”.
Famous and apocryphal, like the alleged deathbed conversion of Darwin. But I think Voltaire’s written works, and the words attributed to him beyond question, speak for themselves on this matter. My own personal favorite Voltaire story was told to me by George Roberts (a student/colleague of Russell) way back in 1974, IIRC. Voltaire and the Marquis de Sade were contemporaries, and de Sade was fond of inviting the famous and infamous people of his day to his orgies. One day he invited Voltaire, and to his great delight Voltaire accepted, attended, and disported with the greatest of abandon in every sense of the word. De Sade, overcome with joy at having (he thought) found a kindred spirit, invited him to a second orgy, but Voltaire declined with the immortal words, “Once is philosophy, twice is perversion.”
Although I am struck by certain similarities between de Sade’s Justine and Voltaire’s Candide — they did think very similarly about the evil of the church and priesthood and their stories illustrate that the corruption and pederasty of today are far from a new thing — the end result of their philosophy was very different.
To Christopher Monckton (and Bob Tisdale if he happens to be listening in):
Before you take the time to write your own treatment of the curve fitting implicit in GCMs and trends, you should take the time instead to look at:
http://itia.ntua.gr/en/docinfo/673/
(grab the preprint). Figure 1 in his introduction says it almost as well as it can be said — trying to fit a timeseries (or interpret a timeseries according to some fit function) almost invariably depends on the scale length being fit. Of course, anyone with even a crude understanding of calculus and the Stone–Weierstrass theorem:
http://en.wikipedia.org/wiki/Stone%E2%80%93Weierstrass_theorem
would recognize that fitting a smooth curve with a polynomial function in the vicinity of a point is fraught with peril — yes, you can always do it (with a Taylor series being an example of an analytic expression for the result) but you have no a priori idea of how many terms will be required to fit or what their proper interpretation is, and of course there are in general an infinite number of e.g. interpolating polynomials for any given finite data set.
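That last point is easy to demonstrate concretely (a toy sketch; the data points and polynomials are invented for illustration): two polynomials can agree exactly on a finite data set yet diverge arbitrarily away from it.

```python
pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]

p = lambda x: x                            # the "obvious" linear interpolant
q = lambda x: x + x * (x - 1) * (x - 2)    # add any term that vanishes at the data

# Both interpolate the data exactly...
assert all(abs(p(x) - y) < 1e-12 and abs(q(x) - y) < 1e-12 for x, y in pts)
# ...but are wildly different off it:
print(p(5.0), q(5.0))   # 5.0 vs 65.0
```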
Koutsoyiannis seems to me to be rather brilliant — he is a hydrologist from Athens (AFAICT) and actually understands statistics, which is rather a rarity in climate science. Tisdale in particular should read or look over:
http://www.cwi.colostate.edu/nonstationarityworkshop/SpeakerNotes/Wednesday%20Morning/Koutsoyiannis.pdf
where he shows — convincingly to me at any rate — that a) GCMs ain’t got no skill (but we knew that); b) the reason they ain’t got no skill is that they are interpreting the climate record as a continuous function driven by a single parameter e.g. CO_2 concentration; c) where in actuality, it is a random stationary process — in particular a Hurst-Kolmogorov random process of punctuated stationary behavior.
Having read with interest the many WUWT articles that present SSTs as punctuated stationary processes with discrete and rather sudden temperature shifts often synchronized with ENSO, it might be useful for Tisdale to analyze them as a Hurst-Kolmogorov process, although the time series may be too short to do so as convincingly as Koutsoyiannis does with the Nile river heights (one of the longest-running moderately precise climate series on the planet).
Since random simulations and Markov processes are my personal cup of tea this approach and interpretation rather appeals to me, but like it or not his analysis of the noise and trends in a fairly wide selection of climate time series is both interesting and somewhat compelling.
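For anyone wanting to try this, here is a minimal sketch (the function names are invented here) of the aggregated-variance method of estimating the Hurst exponent: for ordinary uncorrelated noise the variance of block means falls as m⁻¹, giving H ≈ 0.5, whereas Hurst-Kolmogorov behavior shows up as H approaching 1.

```python
import math
import random

random.seed(1)
x = [random.gauss(0.0, 1.0) for _ in range(4096)]   # white noise: expect H ~ 0.5

def block_mean_var(series, m):
    """Sample variance of non-overlapping block means of size m."""
    means = [sum(series[i:i + m]) / m for i in range(0, len(series) - m + 1, m)]
    mu = sum(means) / len(means)
    return sum((v - mu) ** 2 for v in means) / (len(means) - 1)

# Var(block mean) ~ m**(2H - 2), so the log-log slope gives H.
sizes = [4, 8, 16, 32, 64]
lm = [math.log(m) for m in sizes]
lv = [math.log(block_mean_var(x, m)) for m in sizes]
n = len(sizes)
slope = (n * sum(a * b for a, b in zip(lm, lv)) - sum(lm) * sum(lv)) / \
        (n * sum(a * a for a in lm) - sum(lm) ** 2)
H = 1 + slope / 2
print(f"estimated H = {H:.2f}")   # near 0.5 for this uncorrelated series
```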
rgb
Oops, Mr. Referee or Anthony — the link in my previous post to the Koutsoyiannis talk is broken by a space. Sadly, the link address actually HAS a space here — some people do not quite understand how fragile browser and link entry systems are and persist in including non-parsable spaces in their links (although you’d think that talk organizers making the presentations available online would know better, sigh).
I don’t know if this can be fixed in wordpress, but for those interested in following the link you should be able to cut and paste it into your browser directly to access the PDF.
rgb
REPLY: inserted a %20 to code for the space – Anthony
There would be an argument from authority if it were claimed that only the classically trained can recognise a logical fallacy or that only trained motor mechanics can recognise a carburettor.
I should point out that “argument from authority” is not really a logical fallacy, not in the Bayesian logical system developed by Jaynes that actually applies to the real world. In the scientific worldview, “true” does not mean what it means in classical Boolean or Aristotelian logic; it means “probably true given the following evidence and assumptions”, and weighing such an argument is a judgement call on the part of the listener.
That is because a rational person using common sense and possessing incomplete information or personal knowledge or even intelligence sufficient to answer a given question might quite reasonably elect to “trust” an authoritative person presumed to have more complete information (evidence) at hand, a better personal knowledge of the linked Bayesian priors (e.g. laws of nature and/or laws of thought), and possibly even greater raw intelligence. Of course an additional assumption is generally made in extending this sort of trust to authority — that the authority is honest and has no vested interest or personal bias.
Thus when I make certain arguments concerning things that I have studied very extensively and worked on for years and am arguably a “world’s greatest expert” (not asserting to be the best/most knowledgeable but perhaps ONE of the best or most knowledgeable) then most people would be well advised to weight my words and opinions more heavily than, say, my sixteen year old son’s words or your average person off the street’s words on that subject. That isn’t to say that I might not be mistaken; only that I am less likely to be mistaken, and you have to form your own opinion about whether or not I am a liar or a con artist.
Yet you will observe that I defer to the “authority” of Leif Svalgaard on precisely the same grounds — not because he is always right or guaranteed to be right but because he is a lot more likely to be right than I am on matters of solar dynamics or general astrophysics because that is what he does (where I have worked a lot more in condensed matter and quantum physics and computing and statistics) and because I judge him to be honest. I would therefore require a powerful argument and solid evidence to challenge his conclusions in this field; and in the end, if my argument was good enough I would expect him to change his mind, or if it wasn’t good enough I’d change mine, or we’d agree to disagree on insufficient (so far) evidence.
One of the tragedies of the politicization of the climate debate is that it has weakened the reliability of argument by authority in the general political and social sphere. We see climate scientists with a clear vested interest. We see direct evidence of those same scientists using immoral and unethical means to suppress dissent from their personal conclusions in supposedly impartial journals. We see overt lies — the “settled science” lie, the “scientists all agree” lie (what am I, chopped liver? as the saying goes:-), the many lies implicit in cherrypicked data and confirmation bias pervading the unsettled science — and ultimately we are somewhat reluctantly forced to conclude that we cannot rely on authority in climate science, because that authority is being abused to advance a political agenda and to conceal all evidence of doubt in the underlying science used to justify it.
My wife is a physician. When she argues from authority on matters of diagnosis and treatment and health, she may be wrong but you are still well advised to take her advice instead of mine as a physicist — assuming that you like living…;-) That’s because she is an authority. She’s not right because she’s an authority; she’s an authority because she has worked hard enough studying, learning, and practicing that she is usually right.
rgb
rgbatduke (April 30, 2012 at 8:08 am):
Usually, the fallacy called argumentum ad verecundiam is described as an appeal to illegitimate authority. This description raises the issue of what is meant by “illegitimate.”
Courts are faced with this issue when claims made in testimony are represented as “scientific” but what is meant by “scientific”? In the courts of the U.S., this issue is resolved by the Daubert standard. Under this standard, a necessary condition for a claim to be “scientific” is for it to be falsifiable. Falsifiability implies the existence of a statistical population but in climatological arguments one is hardly ever present. Under the Daubert standard, this is an example of argumentum ad verecundiam.
Monckton of Brenchley says:
You are correct that I made a careless arithmetic mistake in where the factor of 4 goes. However, it is also totally irrelevant to the discussion since what we are interested in is the proportionality. As I have shown, your argument that the fact that the Planck parameter is the temperature divided by (four times the flux) means that the Planck parameter is proportional to the radiating temperature is wrong…And, in fact, your claim that you can say this because you can assume that the flux is constant leads to the ridiculous situation where you could also rewrite things in a way where the Planck parameter is proportional to temperature to the negative 7th power…Or really any power you want!
The correct way to determine the dependence is to eliminate the flux from the equation so that the Planck parameter is expressed only in terms of the radiating temperature. Doing so shows that the Planck parameter is proportional to 1/T^3 and hence confirms what I have been saying: that the Planck parameter is a decreasing function of the radiating temperature. Physically, this just states the obvious fact that the temperature rise needed to produce an additional 1 W/m^2 of radiative flux is larger when the temperature is low than when the temperature is high, a simple consequence of the dependence of the radiative flux on the 4th power of the temperature.
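The two-line algebra behind the claim above can be written out explicitly, assuming (as the comment does) that the radiative flux obeys the Stefan–Boltzmann law:

```latex
% Planck parameter \lambda_0, with flux eliminated via F = \sigma T^4:
\lambda_0 \;=\; \frac{T}{4F} \;=\; \frac{T}{4\,\sigma T^{4}} \;=\; \frac{1}{4\,\sigma T^{3}} \;\propto\; T^{-3}
```

Eliminating the flux before reading off the proportionality is the step that removes the ambiguity: with $F$ still present, $T/(4F)$ can be dressed up as “proportional” to any power of $T$ one likes.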
It is not clear what any of this really has to do with the average temperature of the moon. But, as I have discussed, Hölder’s inequality tells us that the 270 K figure, derived as the fourth root of the average of T^4 over the surface of the moon, is an upper bound on the average temperature of the moon. Furthermore, there is no expectation that it should be a particularly tight bound for an airless body like the moon, which has a very non-uniform surface temperature. Hence, the fact that the average temperature of the moon is considerably lower than 270 K is not in fact a mystery: it is a direct consequence of the mathematics!
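The bound described above is easy to check numerically. A minimal sketch, using an invented two-zone “moon” (a hot dayside and a cold nightside; the temperatures are illustrative round numbers, not lunar data):

```python
# The flux-derived "effective" temperature, (mean of T^4)^(1/4), always
# equals or exceeds the plain arithmetic mean of T, and the gap widens
# as the temperature distribution becomes less uniform.

def effective_temperature(temps):
    """Fourth root of the mean of T^4 (what averaging fluxes yields)."""
    return (sum(t**4 for t in temps) / len(temps)) ** 0.25

def mean_temperature(temps):
    """Plain arithmetic mean of T."""
    return sum(temps) / len(temps)

temps = [390.0, 100.0]  # K; crude dayside/nightside values, assumed for illustration
t_eff = effective_temperature(temps)
t_mean = mean_temperature(temps)

# The flux-derived figure substantially exceeds the true mean:
print(t_eff, t_mean, t_eff >= t_mean)
```

For a perfectly uniform surface the two averages coincide, which is why the discrepancy is large for an airless, slowly rotating body and small for one with an atmosphere that evens out the temperature field.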
Henry@Greg House
Thanks! So far it seems very few people agree with you…
Terry Oldberg says:
Under this standard, a necessary condition for a claim to be “scientific” is for it to be falsifiable
Henry says
As I said earlier: get a few people good at stats and check my results
http://www.letterdash.com/henryp/global-cooling-is-here
I don’t want to scare you guys, but if I take my plot of the drop in maxima in its best-case scenario (a linear drop) forward to today, we could already be cooling at about 0.04 degrees C per annum.
In the worst case, if
y = 0.0454 ln(x) − 0.1278 (R² = 0.994) is true,
we could already be cooling at a rate of 0.1 degrees C (or K) per annum.
Adding more CO2 to the atmosphere won’t help much, I am afraid,
as apparently only Greg and myself understand.
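As a purely arithmetical note on the worst-case figure quoted above: a logarithmic fit of the form y = a·ln(x) + b implies an instantaneous rate of change dy/dx = a/x, which falls as x grows rather than staying constant. A sketch using the quoted coefficients (what x measures is defined only in the linked post; here it is assumed to be elapsed time in the commenter’s units):

```python
import math

# The commenter's fitted curve (his coefficients and R^2 claim, reproduced as given):
def y(x):
    return 0.0454 * math.log(x) - 0.1278

# Its derivative: the implied rate of change per unit of x.
def rate(x):
    return 0.0454 / x

# The rate implied by a logarithmic fit decays with x:
print(rate(1.0))   # 0.0454 per unit x
print(rate(10.0))  # ten times smaller
```

So whether the fit implies a rate anywhere near 0.1 per annum depends entirely on the value of x at which it is evaluated, which is why the fit alone cannot settle the claim.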
HenryP:
In drawing a conclusion by regression analysis, one should beware the pitfall of drawing a false conclusion via base rate neglect; there is a tutorial at http://en.wikipedia.org/wiki/Base_rate . The base rate of the outcome of an event is its unconditional probability. In base rate neglect, one ignores or underweights the base rate in assigning a numerical value to the conditional probability of an outcome.
In assigning the proper weight to the base rate, a statistical population and samples drawn from this population are necessities. Currently climatologists do not feel the need for these necessities. That they do not provide for them ensures that climatological arguments err via base rate neglect.
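The pitfall described above can be made concrete with Bayes’ rule. A minimal sketch with invented probabilities, chosen only to show how a low base rate overturns the naive reading of a conditional probability:

```python
# Base rate neglect: a detector that is 99% sensitive and 95% specific,
# applied to an outcome whose base rate (unconditional probability) is 1%.
base_rate = 0.01          # P(condition)
sensitivity = 0.99        # P(positive | condition)
false_positive = 0.05     # P(positive | no condition)

# Total probability of a positive result:
p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)

# Bayes' rule: P(condition | positive)
p_condition_given_positive = sensitivity * base_rate / p_positive

print(round(p_condition_given_positive, 3))  # 0.167
```

Despite the detector being right 99% of the time on true cases, a positive result here indicates the condition with only about 17% probability, because the base rate was so low. Ignoring that base rate is precisely the error the comment warns against.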
HenryP says:
April 30, 2012 at 10:00 am
as apparently only Greg and myself understand.
======================================================
Henry, I am sorry for the misunderstanding, but I praised your other article: http://www.letterdash.com/HenryP/the-greenhouse-effect-and-the-principle-of-re-radiation-11-Aug-2011 .
I have not read this one yet: http://www.letterdash.com/henryp/global-cooling-is-here
You know that the weather in Europe has been acting really strange for about the last three years. There is certainly no warming, but there is a climate change; you can argue about its cause, but it cannot be denied. The last winter in Germany was the coldest in 20 years, and after that winter we had about 20 degrees Celsius on 10.05.2012, 30 degrees Celsius at midday on 11.05.2012, and now, as I write this on the evening of 11.05.2012, we have 20 degrees Celsius. The forecast for tomorrow is a high of 11 degrees Celsius.
Henry@jeremy
http://wattsupwiththat.com/2012/05/10/uah-global-temperature-up-in-april/#comment-982383
Wow… what a bunch of dishonest losers you guys are. You make young earth creationists look good.
REPLY: I think you need to be schooled on honesty first before your opinion has merit.
1. Fake name
2. Fake email “lyingsh..bag.com”
3. Fake argument
4. Policy violation
5. Banned
@Moderator, you wrote: =I overstepped in this case. Although Peter Hadfield has never challenged Lord Monckton to a debate, that is not saying he will not debate. My apologies for giving that impression. He is still free to issue a debate challenge.=
Thanks very much for the correction, and I appreciate and accept that it was inadvertent. We are all guilty of overstepping the mark every now and again, myself included.
Just to clarify: while you correctly point out that I have not issued a challenge to Mr. Monckton, it is important to note that neither has he issued a challenge to me. The reason, at least in my case, is that I have been conducting a very enlightening debate with Mr. Monckton online, and that debate continues as I await his promised response. The debate was begun by Mr. Monckton on your site, not by me, and while I am more than willing to continue it, it seems Mr. Monckton is reluctant, despite promising to respond. For an explanation as to why he has not responded, and why Anthony Watts subsequently decided to no longer host the debate, you will have to ask them. Fortunately a site called the League of Reason has promised to continue hosting the debate, and has written to Mr. Monckton asking for his promised rebuttal. Anything he writes as an explanation of the discrepancies, and any sources he cites, can be checked and verified as the debate continues. If I am wrong about Mr. Monckton’s sources, then it should be very easy for him to show it through this debate, and I look forward to reading it.
Kind regards,
Peter Hadfield
The world political landscape has gone to hell over the past couple of decades due to media bias. We need to rise up for our children and take back our country from Big Pharma, Big Tobacco, Big Insurance, and really just big corporations generally. It is time for our elections to cease being purchased.
Peter Hadfield is as fixated on Lord Monckton as Ahab was on the white whale.