Guest essay by Christopher Monckton of Brenchley
The splendidly-titled Alberto Zaragoza Comendador, commenting on my recent posting taking apart Mr. Mann’s latest fantasia in Scientific American, was startled by my statement that only half of equilibrium global warming would emerge after a couple of hundred years, because –
“Equilibrium climate sensitivity is a measure of the global warming to be expected in 1000-3000 years’ time in response to a doubling of CO2 concentration, regardless of how that doubling came about. It has nothing to do with fossil-fuel emissions scenarios.”
El Comendador wrote:
“Whoa. Whoa whoa whoa. The effects of a CO2 doubling aren’t felt until 1000 years later? So if we hit 560 ppm we’ll in theory get 2.5°C of warming. But only 1.25°C will happen in the first 200 years? Am I getting this right? Can anybody please confirm?”
In fact, he had to write again, because I did not reply at once, for the fascinating answer to his entirely proper question needs a head posting to address it properly. He wrote:
“I have to ask the question again: is the literature certain (or as certain as climate science can be) about the time it takes for warming to kick in? At one point in the article Monckton says only half of the warming happens in the first 200 years. The rest may happen over the following 1000-3000 years. Politicians have set this nonsense 2 Cº limit, which, when compared to pre-industrial times, means we only have 1.1 Cº of warming left before mega-disaster happens. I always knew it was a matter of decades, but now it seems to be a matter of centuries. If true, this takes the absurdity of the whole dangerous-anthropogenic-global-warming bandwagon to another level. And I wonder how many in the public know this: 0.01%, maybe? Of course it’s extremely convenient for the usual suspects that it will take so much time for warming to kick in: they can always claim the thing hasn’t been disproved, therefore the money should keep flowing.”
El Comendador is quite right to press his excellent question, and I must begin by apologizing that I was not able to answer it sooner.
I must also issue an Equation Alert. We’re going to have to review – in the simplest fashion – the fundamental equation of climate sensitivity, and then go deep into the IPCC’s documents to work out what they have hidden by their now-traditional device of not making it explicit what their projections entail. So, hold on to your hats. Here goes.
Climate sensitivity: The global warming ΔTt to be expected in response to a given proportionate increase in CO2 concentration over a specified term of years t is for present purposes sufficiently described by the simplified climate-sensitivity relation (1), ΔTt = q–1 λt ΔFt, where ΔTt, denominated in Kelvin or Celsius degrees, is the product of three quantities: the reciprocal of the fraction q of total anthropogenic forcing that is driven by CO2; a time-dependent climate-sensitivity parameter λt, itself the product of the instantaneous or Planck sensitivity parameter λ0 and a time-dependent temperature-feedback gain factor Gt, so that λt = λ0 Gt; and the CO2 radiative forcing ΔFt. Annex B provides a more detailed discussion of (1), and of the uncertainties to which it gives rise.
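Relation (1) can be checked numerically. A minimal Python sketch, using only values quoted later in this posting (q = 0.7, λ100 = 0.44 K W–1 m2, and the 21st-century CO2 forcing 5.35 ln(713/368) W m–2):

```python
import math

def warming(q, lam_t, delta_F):
    # Relation (1): warming = (1/q) * lambda_t * CO2 forcing
    return (1.0 / q) * lam_t * delta_F

# IPCC mid-range values as read off in this posting:
# CO2 fraction q = 0.7; centennial parameter 0.44 K W^-1 m^2
dF_21st = 5.35 * math.log(713 / 368)   # 21st-century CO2 forcing, W m^-2
print(warming(0.7, 0.44, dF_21st))     # ~2.2 K of new warming by 2100
```

The ~2.2 K output matches the IPCC's mid-range 2.8 K projection less the 0.6 K of pre-committed warming discussed below.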
Global warming ΔTt: On business as usual, without mitigation, global warming of 2.8 K from 2000-2100 is the mid-range projection in IPCC (2007, table SPM.3). Since the Earth has warmed at a rate well below those projected in all five IPCC Assessment Reports and there has been no global warming since 1996 (RSS, 2014), 2.8 K 21st-century warming will be taken as close to the upper bound.
CO2 concentration: On business as usual, unmitigated CO2 concentration over the 21st century will attain the annual values (in μatm) in Table 1, derived from the mid-range estimates in IPCC (2007).
CO2 forcing: According to the IPCC, a radiative forcing is an external perturbation in a presumed pre-existing climatic radiative equilibrium, leading to a transient radiative imbalance that will eventually settle toward a new equilibrium at a different global temperature. Experiment and line-by-line radiative transfer analysis have demonstrated that the CO2 radiative forcing ΔFt is reasonably approximated by the logarithmic relation (2),
ΔFt = k ln(Ct/C0), where Ct/C0 is the proportionate change in CO2 concentration over t years, with C0 the unperturbed value. Myhre et al. (1998), followed by IPCC (2001), give the coefficient k as 5.35, so that, for example, the CO2 forcing that arises from doubled concentration is 5.35 ln 2, or 3.708 W m–2.
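Relation (2) is a one-liner. A sketch:

```python
import math

def co2_forcing(ratio, k=5.35):
    # Relation (2): dF = k * ln(C_t / C_0), in W m^-2
    return k * math.log(ratio)

print(co2_forcing(2))  # forcing from a CO2 doubling: ~3.708 W m^-2
```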
Planck parameter λ0: Immediately after a perturbation by an external radiative forcing such as anthropogenically-increased CO2 concentration, the climate sensitivity parameter by which the forcing is multiplied to yield the global temperature response will take its instantaneous or Planck value λ0 = 0.31 K W–1 m2 (expressed reciprocally as 3.2 W m–2 K–1 in IPCC, 2007, p. 361 fn.).
The sensitivity parameter λn: To allow for the incremental operation of temperature feedbacks, considered by the IPCC to be strongly net-positive, λn is projected to increase over time. The IPCC implicitly takes λn as rising from the instantaneous value λ0 = 0.31 K W–1 m2 via the centennial value λ100 = 0.44 K W–1 m2 and the bicentennial value λ200 = 0.50 K W–1 m2 (derived in Table 2) to the equilibrium value λ∞ = 0.88 K W–1 m2 (derived below). The equilibrium value is not attained for 1000-3000 years (Solomon et al., 2009).
Centennial parameter λ100: This and longer-term values of λn allow for longer-term mitigation benefit-cost appraisals. The IPCC projects CO2 concentration of 713 μatm in 2100 against 368 μatm in 2000, and a mid-range estimate of 2.8 K warming by 2100, of which 0.6 K is pre-committed (IPCC, 2007, table SPM.3), leaving 2.2 K of new warming, of which 70% (derived in Table 2), or 1.54 K, is CO2-driven. Therefore, the IPCC’s implicit centennial climate sensitivity parameter λ100 is 1.54 K divided by 5.35 ln(713/368) W m–2, or 0.44 K W–1 m2, representing an increase of 0.13 K W–1 m2 over a century against the Planck value λ0 = 0.31 K W–1 m2. This value is half of the equilibrium value λ∞, derived below.
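The derivation of λ100 in the paragraph above can be verified step by step (all values as quoted from IPCC, 2007):

```python
import math

total_warming = 2.8   # K, mid-range 2000-2100 projection (IPCC 2007, SPM.3)
committed = 0.6       # K already pre-committed
q = 0.7               # CO2 fraction of total anthropogenic forcing (Table 2)

co2_warming = (total_warming - committed) * q   # 1.54 K CO2-driven warming
forcing = 5.35 * math.log(713 / 368)            # ~3.54 W m^-2 over the century
lam_100 = co2_warming / forcing
print(round(lam_100, 2))  # 0.44 K W^-1 m^2
```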
Bicentennial parameter λ200: Examination of the six SRES emissions scenarios for 1900-2100 (Table 2) demonstrates the IPCC’s implicit bicentennial sensitivity parameter λ200 to be 0.50 K W–1 m2 on each scenario.
Equilibrium parameter λ∞: Dividing the IPCC’s 3.26 K central estimate of climate sensitivity to a CO2 doubling (IPCC, 2007, p. 798, box 10.2) by the 3.71 W m–2 radiative forcing in response to a CO2 doubling gives the implicit equilibrium sensitivity parameter λ∞ = 0.88 K W–1 m2, attained after 1000-3000 years.
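The equilibrium parameter is a single division:

```python
ecs = 3.26      # K, IPCC central estimate of equilibrium sensitivity per doubling
dF_2x = 3.71    # W m^-2, forcing per CO2 doubling
lam_inf = ecs / dF_2x
print(round(lam_inf, 2))  # 0.88 K W^-1 m^2
```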
CO2 fraction: In Table 2, the fraction q = 0.7 of total anthropogenic forcing attributable to CO2 emissions is derived from each of the six SRES standard emissions scenarios.
Plotting the four values λ0 = 0.31 K W–1 m2, λ100 = 0.44 K W–1 m2, λ200 = 0.50 K W–1 m2, and λ∞ = 0.88 K W–1 m2 produces curve A in Fig. 1. As the inset panel A shows, the temperature rises quite sharply in the first century or two.
Figure 1. Two equally plausible evolutions of the climate-sensitivity parameter λn. Version A is implicit in IPCC (2007). However, version B, an epidemic curve, is equally plausible.
Now, the various values of the climate-sensitivity parameter arise over time because temperature feedbacks do not take effect instantaneously, particularly in the IPCC’s very high-sensitivity regime. They unfold on timescales of centuries to millennia.
One example of a millennial-scale feedback is the melting of the land-based ice in Greenland, which the IPCC says will only happen if global temperatures remain 2 Cº higher than today for several millennia. And even this is probably an exaggeration. Most of you are too young to remember, but 8000 years ago the mean temperature at the summit of the Greenland plateau was 2.5 Cº higher than it is today (Fig. 2), but the ice there did not melt. So the most one might expect, even after several millennia, is some further loss of ice around the coastal fringes of Greenland.
In passing, there is a characteristically hysterical recent piece (in The Guardian, inevitably) by the accident-prone Australian professional bed-wetter Graham Redfearn, saying that from 2002-2011 some 260 billion tons of ice a year melted from Greenland. Oo-er! Even if that were the case, sea level would have risen by just 0.7 mm a year, or little more than a quarter of an inch over the decade.
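The sea-level arithmetic checks out. A sketch, assuming an ocean surface area of roughly 3.61 × 10^14 m2 and meltwater density of 1000 kg m–3 (both assumptions, not values from the text):

```python
ice_loss = 260e9 * 1000        # kg per year (260 billion tonnes)
ocean_area = 3.61e14           # m^2, assumed global ocean surface area
water_density = 1000.0         # kg m^-3, assumed

rise_m = (ice_loss / water_density) / ocean_area
print(rise_m * 1000)                # ~0.72 mm sea-level rise per year
print(rise_m * 1000 * 10 / 25.4)    # ~0.28 inches over the decade
```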
Figure 2. Reconstructed temperatures at the summit of the Greenland ice cap, 6000 BC to date.
For reasons such as this, it is no less plausible that feedbacks will come into play slowly to start with, as in inset panel B, than that they will act near-instantaneously in the first century or two, as in the IPCC’s implicit regime (Fig. 1, inset panel A).
The literature is pointing ever more clearly towards only the smallest net-positive feedbacks even at equilibrium. In that event, the global warming from a doubling of CO2 concentration will not much exceed 1 Cº, and that will come about within a century or two rather than several millennia. But even on the IPCC’s high-sensitivity central case, after 100-200 years the warming in response to a CO2 doubling would not have reached much more than 1.5 Cº, because the feedbacks under a high-sensitivity regime take longer to come into full effect.
Under the IPCC’s imagined regime, of course, the warming would continue to increase all the way to equilibrium, though at a slower rate than in the first couple of centuries.
To be fair, one should also bear in mind that CO2 concentration on business as usual will continue to rise even beyond the doubling from the pre-industrial 280 μatm to 560 μatm in around 2080. However, CO2 concentration would have to double again, from 560 to 1120 μatm, to have the same warming effect as that of the previous doubling.
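The logarithmic relation (2) makes this diminishing-returns point directly: each doubling contributes the same forcing, so twice the concentration increase is needed for the same effect. A sketch:

```python
import math

k = 5.35
first_doubling = k * math.log(560 / 280)    # 280 -> 560 uatm
second_doubling = k * math.log(1120 / 560)  # 560 -> 1120 uatm
print(first_doubling, second_doubling)      # each adds the same ~3.708 W m^-2
```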
Finally, it is worth reiterating that there is no, repeat no, consensus in the scientific literature in support of the IPCC’s assertion that recent warming is mostly manmade. Legates et al. (2013) established that only 0.3% of abstracts of 11,944 climate science papers published in the 21 years 1991-2011 explicitly stated that we are responsible for more than half of the 0.69 Cº global warming since we began to have a theoretically-detectable effect on global temperature in 1950.
Suppose that 0.33 Cº – just under half of the observed 0.69 Cº – was our contribution to global warming since 1950. Suppose also that CO2 concentration in that year was 305 ppmv and is now 398 ppmv.
Then the radiative forcing from CO2 that contributed to that warming was 5.35 ln(398/305) = 1.42 Watts per square meter. Assume that the IPCC’s central estimate of 713 ppmv CO2 by 2100 (Table 1) is accurate. Then the CO2 forcing from now to 2100 will be 5.35 ln(713/398), or 3.12 W m–2.
Assuming that the 0.7 ratio of CO2 forcing to that from other greenhouse gases (derived in Table 2) will remain broadly constant, and assuming that by 2100 temperature feedbacks will have exercised 0.44/0.31 of the warming effect seen to date, the manmade warming to be expected by 2100 on the basis of the 0.33 Cº warming since 1950 will be 3.12/1.42 x 0.33 x 0.44/0.31 = 1 Cº.
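That chain of assumptions can be checked end to end (all inputs as stated in the preceding paragraphs):

```python
import math

f_past = 5.35 * math.log(398 / 305)    # ~1.42 W m^-2, CO2 forcing since 1950
f_future = 5.35 * math.log(713 / 398)  # ~3.12 W m^-2, CO2 forcing to 2100
manmade_since_1950 = 0.33              # K, assumed contribution
feedback_scaling = 0.44 / 0.31         # lambda_100 / lambda_0

warming_2100 = (f_future / f_past) * manmade_since_1950 * feedback_scaling
print(round(warming_2100, 1))          # ~1.0 K expected by 2100
```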
Broadly speaking, the IPCC expects this century’s warming to be equivalent to that from a doubling of CO2 concentration. In that event, 1 Cº is indeed all the warming we should expect from a CO2 doubling. And is that going to be a problem?
[No.]
Dr. Alex
“As more than 99% of solar radiation passes through the first 1cm the absorptivity is less than 0.01. The models are wrong in assuming this surface layer receives most of its thermal energy from radiation. It in fact receives over 99% by conduction and convection.”
Wrong. The solar radiation spectrum peaks between wavelengths of 500 and 750 nm. By the Beer-Lambert law, a path length of about 1.5 m of water is needed before transmissivity falls effectively to zero. The emissivity of water is 0.95, hence it absorbs 95% of incident radiation. Water is close to being a blackbody radiator.
AlecM
“My whole point is that there is near zero (0, Nought, Zilch, Nada) NET CO2 15 micron IR energy emitted by the surface to be absorbed by the nearby (‘absorption depth ~10 m) atmosphere… It’s because that band is self-absorbed and mutually annihilates the same wavelength range surface IR emission to the atmosphere. Therefore, there can be no atmospheric warming from this cause”
You are saying no warming occurs after equilibrium is attained. But equilibrium is only attained after the surface temperature has increased to match the back radiation. Convective cooling will only increase after an increase in surface temperature (or else the wind must blow harder in response to back radiation, but the wind does not have a mind of its own).
@Dale Rainwater Strangelove: you are correct. Double pCO2 and the increased atmospheric Radiation Field will cause less surface IR emission, meaning its temperature will increase by ~1.2 K to ensure that the total rate of heat transfer to the atmosphere is 160 W/m^2.
However, another process kicks in and reduces that temperature rise: it’s called the hydrological cycle, plus some other factors.
AlecM
Those other factors are negative feedbacks. They are hard to calculate theoretically. They are empirically estimated. Using satellite data, Spencer found strong negative feedback giving a climate sensitivity of 6 W/m^2/K. If this is correct, TCR is < 1.2 K per doubling of CO2.
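A feedback parameter of 6 W/m^2/K implies a sensitivity per doubling well under the commenter's 1.2 K bound. A quick check (values as quoted; the 3.71 W m–2 doubling forcing is taken from the head posting):

```python
dF_2x = 3.71       # W m^-2 per CO2 doubling (head posting)
feedback = 6.0     # W m^-2 K^-1, Spencer's satellite-derived estimate as quoted
sensitivity = dF_2x / feedback
print(round(sensitivity, 2))  # ~0.62 K per doubling, well under 1.2 K
```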
Dr. Alex
The surface will be heated by conduction if the air is not moving and convection if moving. Since air is always moving, conduction is negligible. The convective heat transfer coefficient of natural air flow is 25 W/m^2/C. The solar and back radiation hitting the surface is 525 W/m^2. For convection to equal radiation, the air must be 21 C hotter than the surface. So it’s not true that 99% of surface heating is by conduction and convection. That would mean air is 2,000 C hotter than surface.
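The 21 C figure follows from Newton's law of cooling, q = h·ΔT, with the values quoted in that comment:

```python
h = 25.0       # W m^-2 K^-1, natural-convection coefficient as quoted
q_rad = 525.0  # W m^-2, solar plus back radiation as quoted

dT = q_rad / h  # air-surface temperature difference for convection = radiation
print(dT)       # 21.0 C
```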
Dr Strangelove would do well to read my “continued” comment after the one he quoted, which referred to just the first 1 cm out of the 150 cm he discusses in the ocean thermocline. Actually, measurements show the solar insolation penetrating more like 10 m, but that is not an important issue. Even if only 95% of the radiation passes through that 1 cm thin transparent layer of water, then all the rest of the energy it receives comes from the thermocline below and some from the air just above – virtually all by non-radiative processes. The 1 cm surface layer is very transparent to solar radiation, as we all know. As such it does not act like a black or gray body by definition.
On Ocean Acidification:
There is very little actual data and global coverage appears to be insufficient. In addition, warming does not seem to affect ocean acidification. According to climate science, wind speed appears to be the principal variable governing dissolution of CO2 in the oceans, and that is modeled. Please see my comments here, here, here and here.
The money quote (Takahashi et al. 2009):
In Chapter 3: Observations: Ocean of the Second Order Draft of AR5, Taro Takahashi and Nicolas Gruber are Contributing Authors. On page 3-35, lines 26-29, they state:
In short, the IPCC is basing its assessment of dissolution of anthropogenic CO2 on Takahashi et al. 2009 and Gruber et al. 2009. The claim that Takahashi et al. 2009 is “independent” from Gruber et al. 2009 is not completely supportable.
References:
Gruber et al. 2009: http://www.up.ethz.ch/publications/documents/Gruber_et_al._2009
Takahashi et al. 2009: http://eprints.uni-kiel.de/2294/1/683_Takahashi_2009_ClimatologicalMeanAndDecadalChange_Artzeit_pubid12055.pdf
Mr Old refers me to what he ambitiously describes as a “peer-reviewed” blog posting, a novelty that may yet, perhaps, prove valuable but has not done so in the present instance. Dr Briggs’ introduction to his posting falls a long way short of being any form of endorsement.
In that posting, Mr Berg quibbles about whether “projection” and “prediction” mean different things. While definition of terms is desirable, pushing it beyond what is necessary is mere futile pedantry and has more than a little childish pettiness in it. And, one suspects, Mr Old’s boring persistence in banging on and on about this and similar useless points is really intended to derail threads such as this. Fortunately, he has left it too late this time.
The facts are that not the least of the purposes of climate models is to attempt to represent the real climate with sufficient plausibility and resolution to run the model forward and predict what will happen next. It matters not one jot whether one calls the predictions of a model “predictions” or “projections” or “potatoes”. Let us call them “potatoes”. The models’ potatoes have constantly proven exaggerated when set alongside the real-world results the potatoes are supposed to represent. No amount of quibbling will alter that fact.
If Mr Berg chooses to use words in a sense distinct from that which ordinary people would use them, that is his privilege: but he must not expect anyone except fellow pedants to take him seriously. I, for one, will call the predictions of a model “predictions”, and, when they fail, I will not attempt to excuse the failure by rebranding them “projections”.
And if Mr Old were really as insistent upon using the correct term, I am not “Mr Monckton” but either “Lord Monckton” or, if Mr Berg prefers not to use the title, “Monckton of Brenchley”. If Mr Old wishes to be pedantic, let him be consistently pedantic.
Mr. Monckton:
I’ve learned that when I attempt logical discourse with you, you are almost certain to drift off topic, thus destroying the thread of the conversation. Often, you drift onto the different topic of alleged flaws in my character. This, of course, is the well-known debating tactic of the ad hominem argument. Experienced debaters use this tactic only when cornered, for members of the audience are apt to know that the argument is fallacious and discount it.
In your latest use of this tactic, you hang the deprecating label of “pedant” around my neck, though whether or not I am a pedant is unrelated to the topic of the equivocation fallacy in global warming arguments. It is not hard to guess why you changed the topic. Were you to disambiguate the terms in the language in which you make your argument, thus avoiding the equivocation fallacy, it would be clear to the readers of your argument that your model is neither validated nor susceptible to validation, because this model lacks an underlying statistical population.
Monckton of Brenchley says: March 26, 2014 at 8:01 am
Mr Kelly, Mr Keohane, and highflight56344 all have difficulty with the notion that on all timescales CO2 concentration change lags temperature change.
Actually I do not have any difficulty whatsoever with a time lag between temperature and CO2. But the proponents of CO2 driving the climate do not acknowledge the same, thus I was arguing from their stance, and concluding that it makes no sense, since from their perspective the same levels of CO2 exist during both warming and cooling times.
Mr Kelly also asks why we get glaciers every time CO2 is at a maximum. We don’t, but it has been known to happen, though not in the past 420,000 years.
I asked the same, and in fact we do re-glaciate at peak and dropping levels of CO2 while we are in this ice age, the Holocene being a brief respite, every 100K years or so.
The selectively pedantic Mr Oldberg – who perhaps deliberately gets my name wrong – is too late to divert this thread with his usual waffle about what, in the present context, is the spectacularly and transparently irrelevant distinction between “prediction” and “projection”. He also repeats, again fortunately too late to derail the thread, his nonsense about “absence of a statistical population”. The matter is very simple, yet even that is probably altogether beyond him. The models predict how much global warming should be occurring. The thermometers measure how much global warming is actually occurring. The models’ predictions are shown to have been relentlessly exaggerated. And that’s that. Even the IPCC, after years of resistance, has begun to notice the elephant in the room. It has accepted that the models have failed in one of their primary purposes, and it has adjusted its predictions accordingly, though they may well still prove excessive.
If Mr Oldberg wishes to witter on about the difference between prediction and projection, let him send his waffle to a reviewed journal and see how far it gets in the direction of publication. Meanwhile, I shall not be deterred by his pompous irrelevances from continuing to provide clear visual representations of the behavior of statistical populations. Everyone but Mr Oldberg would call them graphs.
Mr. Monckton:
In the fields of probability theory and statistics, the mathematical idea known as a “frequency” plays a key role. It is the count of observed events of a particular description. For climate models generally and for your model in particular there are no event descriptions, events, frequencies or relative frequencies. One cannot validate or falsify such a model. However, one can perform an IPCC-style “evaluation.”
A model of this kind conveys no information to a policy maker about the outcomes from his or her policy decisions. By conflating “prediction” and “projection” one creates the appearance of information. Is this what you wish to do?
Monckton of Brenchley says:
March 27, 2014 at 10:02 am
Prediction vs. projection in this case is a semantic distinction without a functional difference, IMO. A hand-waving waffle that can’t obscure the reality of massive model failure on an epic, biblical scale.
Dr Alex Ha’n says:
March 26, 2014 at 3:00 pm
“The models are wrong in assuming this surface layer receives most of its thermal energy from radiation. It in fact receives over 99% by conduction and convection.”
About 70% of solar radiation does reach the surface, and if received by the ocean, is absorbed in the top 100 meters, warming the 100 meter zone.
The argument that IR does not penetrate the surface layer but SW does appears frequently but actually means little, because whatever the modes of heat transfer involved a heat balance at the surface must short-term net to zero. True, a joule absorbed at 100 meters must reach the surface to be radiated (or evaporated or convected away) and it does this by liquid phase convection, aided by turbulent mixing of the surface layer.
Phil at 3:00 a.m.
I concur. I raise the issue of acidification as one key area of ocean study where there are enormous gaps and yet growing sound and fury. We have a generally solid physical and chemical theory of the CO2 uptake and yet few concrete findings and materials, and very few species-specific studies. I am again shocked by the IPCC statements on “global” acidification, as there is not one global ocean, nor one measurement, nor one single species. This all leads more or less to the points Bill makes in his posts about the oceans.
Monckton of Brenchley
May I assume that the delay in your response to my recent comments is because you are indeed seeking professional advice from someone well versed in radiative heat transfer physics and thermodynamics? The key issues are the one raised by BigWaveDave two years ago (linked in a comment above) and the fact that sensitivity calculations are based on applying Stefan-Boltzmann calculations to a surface of which 70% may be considered to be a 1-centimetre-thin layer of transparent water, and thus not a black or grey body. As you will learn, several of us in Australia with a background in physics have been able to expose the errors in the greenhouse concept and explain reality with the science mentioned by BigWaveDave.
In this thread, Mr. Monckton has raised as an issue the quality of the peer review of my article “A Common Fallacy in Global Warming Arguments.” This article presents the results of a study that I conducted at the request of Dr. Judith Curry on the topic of logic and climatology. Curry is a professional climatologist and is chair of the School of Earth Sciences at Georgia Tech. I understand from her that she has a working knowledge of information theory and logic; these two disciplines are central to the argument that I make in “A Common Fallacy…”
When I submitted a report on my study to Curry for publication in the blog which she edits, she played the role of peer-reviewer. The report went through several drafts in response to her comments. Curry published the final version under the title “The Principles of Reasoning: Part III. Logic and Climatology.” The URL is http://judithcurry.com/2011/02/15/the-principles-of-reasoning-part-iii-logic-and-climatology/ .
Subsequent to publishing this article, I learned from an academic philosopher that the fallacy which I had described in the article for Curry was recognized in the philosophical community and had a name. Philosophers called it the “equivocation fallacy.”
I employed this newfound knowledge in a critique that I wrote of the draft report of the United States Global Change Research Program; I submitted this critique to that organization in 2013. Later, I submitted my critique to Dr. William Briggs, asking him if he was interested in publishing it in his blog as an entry in his “Spot the Fallacy Contest.” Briggs holds an MS degree in atmospheric science and a PhD in mathematical statistics. He is an adjunct professor of statistics at Cornell University. Briggs edited my text for brevity and clarity and published it in his blog under the title “A Common Fallacy in Global Warming Arguments.” The URL of this article is http://wmbriggs.com/blog/?p=7923 .
Dr. Vincent Gray has read “A Common Fallacy…” and given me his congratulations on my logical professionalism in writing it. Gray has published articles (“Spinning the Climate” and “The Triumph of Doublespeak”) on the same topic. In fact, it is from one of these articles that I first learned of the existence of this fallacy in climatological arguments. Gray holds the PhD degree in Physical Chemistry from the University of Cambridge and has taught climatology in several universities. He has served as an IPCC expert reviewer for many years.
My articles serve to formalize Gray’s contentions by framing them as a proof. They prove it is through applications of the equivocation fallacy that IPCC-affiliated climatologists reach their major conclusions.
Mr Oldberg’s wafflings about the irrelevant distinction between prediction and projection, and about statistical populations, a concept of which he displays no understanding, were not peer-reviewed. They were published not in any learned journal but on a couple of tolerant blogs. And the equivocation fallacy only applies when the meanings of two words are sufficiently distinct, and when the two meanings both occur in a single syllogism. Neither of these criteria is or has
ever been applicable to any of the postings here that he has tried to derail – in the present instance unsuccessfully. He is not even capable of getting my name right, confirming what all have long suspected – that he is a troll intending merely to cause annoyance by wandering from the topic at hand. A less generous moderation policy would have banned him long ago for wasting everyone’s time – notably including his own. I recommend an elementary textbook of logic – Hodges perhaps.
If one discounts the illogical ad hominem portion of Mr. Monckton’s argument and his modification of the definition of the term “equivocation” from its definition in the philosophical literature, one is left with his claim that there is not a difference in the meanings of “prediction” and “projection.” That there is not a difference is asserted by Monckton but is not proved by him. Lacking proof, he has resorted to the logically illegitimate tactic of argument by assertion. In contrast, the articles that I have cited to Monckton prove there is a difference by logical argument.
By the way, Monckton states a false proposition when he says that “the equivocation fallacy only applies when the meanings of two words are sufficiently distinct, and when the two meanings both occur in a single syllogism.” Swap “argument” for “syllogism” and delete “sufficiently” to get a true proposition. A “syllogism” is an argument whose conclusion is known to be true. An “equivocation” is an argument that appears to be a syllogism but is not one, because a term changes meaning in the midst of this argument. By logical rule, to draw a conclusion from an equivocation is improper. To draw an IMPROPER conclusion from an equivocation is the “equivocation fallacy.”
If the meanings of “prediction” and “projection” differ, then in drawing the conclusion that there is not a difference between the two terms Monckton is guilty of the equivocation fallacy. My articles and those of Vincent Gray prove him to be guilty of it.
If one discounts as illogical the persistent and rather childish ad-hominem misstatement of my name by Mr Oldberg (whatever else I be, I am not “Mr Monckton”), then I have not, as he suggests, altered the definition of the term “equivocation”: I have merely pointed out that unless the definitions of two terms are sufficiently distinct they may serve interchangeably in an argument without diminishing its validity.
Mr Oldberg then breaches the Eschenbach Rule – as he all too often does – by alleging that I had said something I had not said. He says I made a “claim that there is not a difference in the meanings of ‘prediction’ and ‘projection’”. I made no such claim. However, I pointed out – which is no more than common sense – that the definitions are sufficiently close to one another to serve interchangeably in the particular context in which the IPCC, climate modelers, the international scientific community, and I use them.
Whether Mr Oldberg likes it or not, the IPCC, in particular, frequently uses the word “predict” when Mr Oldberg would apparently prefer it to use the word “project”. If he really wishes to make a difference rather than make a nuisance, he would be better off trying to lecture the IPCC about how many angels can dance on his pinhead than trying to lecture me from what appears to be a standpoint of invincible ignorance. I have had a university education in this stuff: he, manifestly, has either had no such education or has made a remarkably poor use of it.
Mr Oldberg says I use the “logically illegitimate tactic of argument by assertion”. If Mr Oldberg knew anything about logic – and almost every time he mentions it he commits an error – he would know that an argument in logic is defined as one or more propositions (i.e., declarative assertions) followed by a conclusion. All arguments in logic are arguments by assertion.
Mr Oldberg, clearly feeling the strain of swimming way out of his depth in the paddling pool of intellectual discourse without either the water-wings of knowledge or the muscles of experience, next asserts that a syllogism is an argument whose conclusion is known to be true. This is a misunderstanding of the very end and object of logic so fundamental that one does not know whether it is really worth trying to educate Mr Oldberg at all. But here goes, just for fun, for my toe is not yet healed, so I am confined to barracks for another few days.
A syllogism is a particular form of argument comprising a triad of connected propositions, so related that one of the three (the conclusion) necessarily follows from the other two (the premisses). The end and object of logic is to determine whether the premisses of any argument, including the premisses of a syllogism, necessarily and validly entail the conclusion. If the premisses necessarily entail the conclusion, the argument is logically sound. But the conclusion may or may not be true. The conclusion of a logically valid argument is true if and only if each of the premisses is true.
For instance, here is a valid syllogism:
All skerfuffles are chunderoids. This glapsplot is a skerfuffle. Therefore this glapsplot is a chunderoid.
Since this syllogism is valid, if it be true that all skerfuffles are chunderoids, and if it be true that this glapsplot is a skerfuffle, then it is ineluctably true that this glapsplot is a chunderoid. But not necessarily if either or both of the premisses are false.
And here is an invalid syllogism:
All skerfuffles are chunderoids. This glapsplot is a chunderoid. Therefore this glapsplot is a skerfuffle.
This syllogism is invalid because it is possible that not all chunderoids are skerfuffles. And yet the conclusion may be true, for it is possible, though not necessary, that this particular glapsplot is a chunderoid that also happens to be a skerfuffle.
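For readers who prefer to see the distinction mechanically, the two syllogisms can be modelled as set membership (a minimal sketch; the example sets and the particular glapsplots are invented purely for illustration):

```python
# "All skerfuffles are chunderoids" is the subset relation: skerfuffles ⊆ chunderoids.
skerfuffles = {"glapsplot_1", "glapsplot_2"}
chunderoids = {"glapsplot_1", "glapsplot_2", "glapsplot_3"}

assert skerfuffles <= chunderoids  # major premiss holds

# Valid form: a member of the subset is necessarily a member of the superset.
assert "glapsplot_1" in skerfuffles
assert "glapsplot_1" in chunderoids  # the conclusion follows necessarily

# Invalid form: a member of the superset need not be a member of the subset.
assert "glapsplot_3" in chunderoids
assert "glapsplot_3" not in skerfuffles  # the conclusion does not follow
```

The invalid form fails precisely because the superset may contain members outside the subset, which is the point made above.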
Bottom line: I shall continue to use the words “prediction” and “projection” as interchangeably as everyone else in the scientific world happily uses them, and without any unacceptable loss of rigor. For as long as the IPCC uses “predict” on the basis of its models’ output, so shall I, whether Mr Oldberg likes it or not, for everyone else but he will understand exactly what I am saying.
One appreciates that the ever-more-visible failure of the models’ predictions is a most inconvenient truth to the likes of Mr Oldberg. No doubt he conceives that trying to make everyone use the word “projection” when “prediction” would be clearer will to some degree soften the humiliating blow. But I, for one, will continue to write plain English, and Mr Oldberg will just have to put up with it.
Observe, finally, that in all of this discussion Mr Oldberg has neither defined “prediction” nor defined “projection”, nor explained why – in the context in which the scientific community interchangeably uses both – they ought not to do so. Indeed, in his previous comment, he says, “IF the definitions be different …”. There could be no clearer demonstration of the pointlessness of his wafflings than that.
According to Monckton, “A syllogism is a particular form of argument comprising a triad of connected propositions, so related that one of the three (the conclusion) necessarily follows from the other two (the premisses).” I agree. An “equivocation” is an argument in which the conclusion does not follow from the premises; thus, it is not a “syllogism” under Monckton’s own definition of the term. An equivocation is an argument that looks like a syllogism but isn’t one. To draw a conclusion from an equivocation is the “equivocation fallacy.” Thus, Monckton errs in stating that “the equivocation fallacy only applies when the meanings of two words are sufficiently distinct, and when the two meanings both occur in a single syllogism.”
By asides such as “If Mr Oldberg knew anything about logic – and almost every time he mentions it he commits an error…” Monckton diverts the topic of the debate from the equivocation fallacy to me. His apparent purpose is to prevail in the minds of the members of our audience (if any) by portraying me as an incompetent on issues of logic who therefore cannot be believed. This is an example of an ad hominem argument. An ad hominem argument is the last refuge of a debater who is out of ammunition. Monckton is unable to defend his position through legitimate argument so defends it by fallacious argument.
According to the author of Wikipedia’s article on “Proof by Assertion” the term references “an informal fallacy in which a proposition is repeatedly restated regardless of contradiction.” By “argument by assertion” I meant “proof by assertion.” I’m sorry for my mistake in terminology.
In stating his “bottom line,” Monckton announces his intention to use the polysemic terms “prediction” and “projection” interchangeably, thus making equivocations of his arguments. There is no offense provided that he refrains from drawing conclusions from these arguments. When he draws conclusions from them though, he is guilty of the equivocation fallacy.
Mr Oldberg wriggles, but nonetheless retreats. He is a born quibbler, but there comes a time when even quibblers must give over. Let us review progress. He has abandoned his attempts to say that the models’ predictions do not concern statistical populations, for they self-evidently do. He has abandoned his erroneous statement that argument by assertion is fallacious. He now accepts that all logical argument proceeds by way of assertions, known as “propositions”. However, he now adds several further logical errors. Let us patiently deal with each in turn, so that the retreat towards wisdom on Mr Oldberg’s part may continue.
He says an equivocation is an argument in which the conclusion does not follow from the premisses. No, it is a particular form of argument in which the conclusion does not follow from the premisses because a polysemic word is used in two distinct meanings, but the logical form of argument proceeds as though the meanings were the same.
He says that the scientific community, which he personifies in me, is guilty of the fallacy of equivocation: but his ground for stating that we equivocate is that we are using two distinct terms interchangeably, so that sometimes we draw conclusions as to the models’ “predictions” and sometimes we draw conclusions as to the models’ “projections”.
But, as I have already attempted to explain to him, the equivocation fallacy only arises in an argument where a single term is used in two sufficiently distinct meanings, not – repeat not – in an argument where two terms whose meanings sufficiently overlap are used interchangeably. All of this is elementary logic, which is why I have recommended that Mr Oldberg should study an elementary textbook before attempting to expatiate inexpertly and inaccurately on a subject upon which, on his showing to date, he knows very little indeed. And this is why I have asked Mr Oldberg to clarify why he thinks there is insufficient overlap between the words “prediction” and “projection” for the scientific community to use them interchangeably.
He makes the elementary logical error of challenging the scientific community’s use of two terms without himself defining those terms and explaining why the manifestly large overlap between their meanings is irrelevant. In logic, a quibbler is one who draws petty and pointless distinctions. A vexatious quibbler is one who refuses to explain why he insists on petty and pointless distinctions. Mr Oldberg is a vexatious quibbler. Usually a university logic 101 course beats this tendency out of those who are small-minded enough to be prone to it.
Next Mr Oldberg makes the mistake of citing Wikipedia as though it were an authority. In most universities, citations from Wikipedia are banned because it is the encyclopedia that any idiot can edit and that, therefore, only a cretin would credit – to quote an eminent professor on the subject.
Mr Oldberg then says I must not draw conclusions from arguments. However, as I have already patiently tried to explain, an argument consists of one or more premisses and a conclusion. If the premisses validly entail the conclusion, the argument is valid. If the premisses validly entail the conclusion and are all true, then the argument is sound and the conclusion is true.
Mr Oldberg makes the fundamental logical error of asserting that the scientific community must not use the terms “prediction” and “projection” interchangeably, but without producing a single instance where I – who am for some reason the particular focus of his petulance – have perpetrated a specific fallacy by a particular argument.
He makes the further fundamental logical error of asserting that I am logically wrong without having identified and specified the premisses and the conclusion of any specific argument that he conceives to be logically fallacious.
Finally, he makes the fundamental logical error of dressing up his prejudice against the interchangeable use of the words “prediction” and “projection” as though it were a failure of logic on the part of the scientific community rather than a mere prejudice on his part.
However, since I have at present the time to offer him an education in these matters, I shall continue to force him backward, step by step, towards an understanding of the truth. For these conversations are all archived, and his wilful persistence in ignorance and error is a paradigm of the misconduct of a certain politicized element in the scientific community that has done grave damage to science by its sullen refusal to adhere to the scientific method, and to logic.
Finally, Mr Oldberg – a whiner by temperament – complains that I have used ad-hominem arguments. No, I have not. If he offends against logic in the manner I have described, then he is a quibbler by definition, so no ad-hom arises.
However, his persistence in not using my full surname, Monckton of Brenchley, is indeed ad hominem. He has persisted in this solecism despite repeated requests that he desist. Accordingly, even if I were to lapse from logical rigor by resorting to ad-hominem arguments, Mr Oldberg is in no more position to complain about that than the pot is to call the kettle black. He is out of his league, and everyone knows it.
Mr. Monckton:
My understanding is that your surname is “Monckton.” Thus, I am perplexed that you so dislike my use of the term “Mr. Monckton” as to bring it up in each of your posts. In American English, “Mr. Monckton” is a respectful and accurate form of address for a person who lacks a PhD.
When you argue that “the models’ predictions do not concern statistical populations” you state an equivocation, for the term “prediction” is polysemic; thus, no conclusion may properly be drawn from your argument. Through a usage in which “prediction” is disambiguated, a similar argument may be constructed from which a conclusion may be properly drawn. Under disambiguation, a “prediction” is an extrapolation to the outcome of an event. The complete set of these events is an example of a statistical population. A “projection” is an issuance of a climate model and is a mathematical function that maps the time to the global temperature. While a “prediction” references a statistical population, a “projection” does not reference one. Thus, one important reason for distinguishing between a “prediction” and a “projection” is to highlight the absence from every global warming model referenced by the IPCC of the statistical population that underlies it. To attempt to conduct scientific research in the absence of the statistical population underlying its models is a fatal blunder that is obscured when “prediction” and “projection” are treated as synonyms.
Mr Oldberg has now retreated on all fronts but two. The first is my surname, which is not “Monckton” but “Monckton of Brenchley”. And I am not “Mr.”
The second is that Mr Oldberg says one cannot use a word with more than one meaning in a logical argument. Actually, this is well-trodden territory: of course one can, as long as the meaning is clear from the context. The models predict, in quite some detail, how various climate variables will change, month by month, for 100 years. Those predictions constitute a statistical population. Or one can call them “projections”: for they, too, both reference and generate what Mr Oldberg pompously calls “statistical populations”, what mathematicians simply call “sets”, and what computer programmers call “datasets”. The terms “prediction” and “projection”, therefore, are interchangeable, without loss of comprehension, as everyone but Mr Oldberg understands full well.
Mr Oldberg has been talking nonsense throughout, tinkering futilely with concepts in logic with which he is plainly unfamiliar. However, his primary purpose, which is to derail threads such as this, has failed, and he has perforce learned something of the elements of logic from me in the process, though he has not yet begun to apply what he has learned.
In future, he should not waste his admittedly not very valuable time making a fool of himself here. He should direct his tedious quibbles to the IPCC, which will continue to pay no more attention than I do to his pathetic, pseudo-academic drivel, and will continue to use “prediction” and “projection” interchangeably. Everyone except Mr Oldberg will understand perfectly what the IPCC means.
Monckton of Brenchley
Do you claim the existence of statistical populations underlying the IPCC climate models? If so, please provide a citation to them.
A statistical population (in mathematics, a “set”) is defined as the total membership or population or universe of a defined class of people, things, or events (in mathematics, an “object”). The “target population” or “scope” (in mathematics the “chosen set”) is the population about which information is sought. The “survey population” or “coverage” (in mathematics the “operative subset”) is that fraction of the target population or chosen set on which the analysis (in mathematics the “operation”) will be performed.
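As a toy rendering of these nested definitions (the class of yearly events and the cut-off years below are invented for illustration):

```python
# Total membership of the defined class: the statistical population (the "set").
population = set(range(1850, 2014))  # e.g. a class of yearly events

# The "target population" or "chosen set": the part about which information is sought.
target_population = {y for y in population if y >= 1900}

# The "survey population" or "operative subset": the part on which the
# analysis (the "operation") will actually be performed.
survey_population = {y for y in target_population if y < 1950}

assert survey_population <= target_population <= population
```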
To take a simple example, one purpose of climate models is to represent the climate object in mathematical terms via a series of (usually thousands of) equations and then to examine both that object and its constituent objects to determine how those objects will evolve over time.
Thus, for instance, from the set or population of all measurements of temperature a subset (consisting, say, of monthly global means from a particular data source over a defined period) is one of many statistical populations examined by the models, which then output a new set, or perhaps an interval of sets, of predicted monthly global means (such as that which will be found in Table 10.26 of AR4, for instance).
This new set (or interval of sets), too, is a statistical population, on which various operations (such as determination of a trend) may be carried out. In Table 10.26, there are predicted intervals for global temperature, for CO2 concentration, for radiative forcing, etc., under six distinct “emissions scenarios”. Each of the postage-stamp-sized graphs in Table 10.26 displays three statistical populations – the max, mid and min predictions.
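The “operation” of fitting a trend to such a population can be sketched in a few lines (the monthly values below are invented for illustration, not taken from AR4):

```python
# Invented monthly global-mean anomalies (°C): one small statistical population.
monthly_means = [0.12, 0.15, 0.11, 0.18, 0.20, 0.17,
                 0.22, 0.19, 0.24, 0.21, 0.26, 0.23]

n = len(monthly_means)
xs = list(range(n))
x_bar = sum(xs) / n
y_bar = sum(monthly_means) / n

# Ordinary least-squares slope: one "operation" carried out on the population.
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, monthly_means))
         / sum((x - x_bar) ** 2 for x in xs))
print(f"trend: {slope:.4f} °C per month")
```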
Now, the IPCC, on page 39 of its Second Assessment Report, says that a “projection” is one that does not favor any particular prediction within the interval of predictions made by the models. From this rather artificial distinction (to which the IPCC itself pays but intermittent subsequent lip-service), it follows that, wherever the IPCC selects a particular value as its best estimate, it is making what it calls a “prediction”, even if it is making that prediction on the basis of an interval of predictions that it calls a “projection”.
In mathematics, the word “projection” has a more specific meaning – it is a “casting forward” or extension of a previously-established set, such as the points on a given straight line. Accordingly, mathematicians, who are particularly precise in their definition of terms, will not be comfortable calling an interval of predictions a “projection”. We will tend to call predictions “predictions”, whether they are single numbers or an interval.
However, little loss of rigor occurs where the scientific community sometimes refers to a specific prediction as a “projection”, for it is no longer using the word in its mathematical, rigorous sense: instead, it is merely using the part for the whole, in a context that admits of little or no loss of comprehension.
Monckton of Brenchley:
Thank you for responding to my request. I’m pleased to see our discussion get down to a purely technical level. If it remains at this level, I think we can resolve the issue in short order.
You say “Thus, for instance, from the set or population of all measurements of temperature a subset (consisting, say, of monthly global means from a particular data source over a defined period) is one of many statistical populations examined by the models, which then output a new set, or perhaps an interval of sets, of predicted monthly global means (such as that which will be found in Table 10.26 of AR4, for instance)”. The syntax of this sentence is incorrect. It sounds as though you mean to contend that an event in the population is a temperature or average temperature (e.g., a monthly average). There are some problems with this contention.
A “prediction” is an extrapolation from an observed state of nature to an unobserved but observable state. The former state is called the “condition.” The latter state is called the “outcome.” Your event has no condition while its outcome is a temperature or average temperature. As an average temperature is a less problematic choice of outcome than a temperature, let’s assume that the outcomes are average temperatures.
A global temperature time series provides a source of temperature values and from these values an average may be formed over a specified time period. The predicted value is a real number. The observed value is a real number. The probability that the two values will coincide is nil. Thus, with the outcome defined in this way, the model is sure to be falsified.
To avoid falsification of his or her model, the builder of this model must define the outcomes more broadly. One possibility is to divide the temperature values into intervals such that the complete set of intervals is a partition of the complete set of values. This could produce, for example, an outcome in which the value of the average temperature lies between 15 and 16 Celsius in a specified time interval.
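The interval construction described above can be sketched as follows (the bin edges and the particular values are invented for illustration):

```python
def outcome_bin(value, edges):
    """Return the half-open interval [lo, hi) containing `value`, or None."""
    for lo, hi in zip(edges, edges[1:]):
        if lo <= value < hi:
            return (lo, hi)
    return None

# A partition of the range of average temperatures (°C), per the example above.
edges = [14.0, 15.0, 16.0, 17.0]
predicted, observed = 15.3, 15.7  # hypothetical point values

# As real numbers the two values differ, so a point prediction fails;
# but both fall in the same outcome interval, so the interval outcome holds.
print(outcome_bin(predicted, edges) == outcome_bin(observed, edges))  # True
```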
Your idea of associating the outcomes of events with time averages of global temperatures is consistent with the traditions of climatology. Traditionally, the average is taken over 30 years. For statistical independence of the events, the periods of the events cannot overlap. This consideration reveals an inherent limitation of global warming climatology: going back to the beginnings of the various global temperature time series in 1850, there are between 5 and 6 events of 30 year duration. Experience with model building by methods that are maximally efficient in their use of information suggests that the minimum number of events for construction of a statistically validated model is about 150. Thus, global warming climatology is short on events of 30 year duration by a factor of around 30. Reduction of the duration of an event to 1 year would produce 163 independent observed events giving climatologists a chance at producing statistically validated models. The word “climate” would have to be redefined and it is unclear whether a basis would then exist for public policy decisions on CO2 emissions.
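A quick check of the arithmetic in the paragraph above, taking the record length of 163 years and the rule-of-thumb minimum of 150 events as stated there:

```python
record_years = 163        # length of the temperature record from 1850, per the text
min_events_needed = 150   # rule-of-thumb minimum for statistical validation, per the text

# Non-overlapping events of the two durations discussed above.
events_30yr = record_years // 30
events_1yr = record_years // 1

print(events_30yr)                       # 5   (the "between 5 and 6" of the text)
print(min_events_needed // events_30yr)  # 30  (the shortfall "factor of around 30")
print(events_1yr >= min_events_needed)   # True: 1-year events reach the threshold
```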
Currently, the following elements in a description of the events of global warming climatology are not identified:
* the start time of each event
* the stop time of each event
* the description of each condition
* the description of each outcome.
While this state of affairs remains, the statistical population underlying each climate model will remain an empty set.
Under this circumstance, it is impossible for a climate model to be statistically validated. Also, it is impossible for such a model to provide information to a policy maker about the outcomes from his or her policy decisions. Policy makers must think they have information but they have none. That they think they have information is a consequence of applications of the equivocation fallacy on the part of climatologists.
Long ago, Vincent Gray spotted the impossibility of validating one of the models. In his paper “Spinning the Climate,” Vincent reports complaining to IPCC management that past assessment reports claimed the models to be validated when they were insusceptible to being validated. IPCC management reacted by changing the word “validated” to the word “evaluated” in subsequent assessment reports. An “evaluation” is what one sees in AR4 and AR5. It remains impossible to validate a model.
This sorry state of affairs is obscured by applications of the equivocation fallacy, features of which are: a) confusion of “evaluation” with “validation,” b) confusion of “projection” with “prediction” and c) confusion of “pseudo-science” with “science.” The power of this fallacy to yield logically illegitimate conclusions is broken through a disambiguation in which “projection” takes on a different meaning than “prediction,” “evaluation” takes on a different meaning than “validation” and “pseudo-science” takes on a different meaning than “science.” As currently structured, global warming climatology is a pseudo-science.