Guest Post by Ira Glickstein.
The graphic from RealClimate asks “How well did Hansen et al (1988) do?” They compare actual temperature measurements through 2012 (GISTEMP and HadCRUT4) with Hansen’s 1988 Scenarios “A”, “B”, and “C”. The answer (see my annotations) is “Are you kidding?”
HANSEN’S SCENARIOS
The three scenarios and their predictions are defined by Hansen 1988 as follows:
“Scenario A assumes continued exponential trace gas growth, …” Hansen’s predicted temperature increase, from 1988 to 2012, is 0.9 ⁰C, OVER FOUR TIMES HIGHER than the actual increase of 0.22 ⁰C.
“scenario B assumes a reduced linear growth of trace gases, …” Hansen’s predicted temperature increase, from 1988 to 2012, is 0.75 ⁰C, OVER THREE TIMES HIGHER than the actual increase of 0.22 ⁰C.
“scenario C assumes a rapid curtailment of trace gas emissions such that the net climate forcing ceases to increase after the year 2000.” Hansen’s predicted temperature increase, from 1988 to 2012, is 0.29 ⁰C, ONLY 31% HIGHER than the actual increase of 0.22 ⁰C.
So, only Scenario C, which “assumes a rapid curtailment of trace gas emissions” comes close to the truth.
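For readers who want to check the arithmetic, here is a minimal Python sketch using only the round figures quoted above (not values recomputed from GISTEMP or HadCRUT4):

```python
# Over-prediction factors for Hansen's 1988 scenarios, using only the round
# figures quoted in this post (not values recomputed from GISTEMP/HadCRUT4).
actual = 0.22                                  # deg C, observed 1988-2012 increase
scenarios = {"A": 0.90, "B": 0.75, "C": 0.29}  # deg C, predicted 1988-2012 increases

for name, predicted in scenarios.items():
    print(f"Scenario {name}: predicted {predicted:.2f} C, "
          f"{predicted / actual:.1f}x the actual, high by {predicted - actual:.2f} C")
# Scenario A: ~4.1x, Scenario B: ~3.4x, Scenario C: ~1.3x
```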
THERE HAS BEEN NO ACTUAL “CURTAILMENT OF TRACE GAS EMISSIONS”
As everyone knows, the Mauna Loa measurements of atmospheric CO2 prove that there has NOT BEEN ANY CURTAILMENT of trace gas emissions. Indeed, the rapid increase of CO2 continues unabated.

What does RealClimate make of this situation?
“… while this simulation was not perfect, it has shown skill in that it has out-performed any reasonable naive hypothesis that people put forward in 1988 (the most obvious being a forecast of no-change). … The conclusion is the same as in each of the past few years; the models are on the low side of some changes, and on the high side of others, but despite short-term ups and downs, global warming continues much as predicted.”
Move along, folks, nothing to see here, everything is OK, “global warming continues much as predicted.”
CONCLUSIONS
Hansen 1988 is the keystone of the entire CAGW Enterprise, the theory that Anthropogenic (human-caused) Global Warming will lead to a near-term Climate Catastrophe. RealClimate, the leading Warmist website, should be congratulated for publishing a graphic that so clearly debunks CAGW and calls into question all the Climate models put forth by the official Climate Team (the “hockey team”).
Hansen’s 1988 models are based on a Climate Sensitivity (predicted temperature increase given a doubling of CO2) of 4.2 ⁰C. The actual CO2 increase since 1988 is somewhere between Hansen’s Scenario A (“continued exponential trace gas growth”) and Scenario B (“reduced linear growth of trace gases”), so, based on the failure of Scenarios A and B, namely their being high by a factor of three or four, it would be reasonable to assume that Climate Sensitivity is closer to 1 ⁰C than 4 ⁰C.
As for RealClimate’s conclusion that Hansen’s simulation “out-performed any reasonable naive hypothesis that people put forward in 1988 (the most obvious being a forecast of no-change)”, they are WRONG. Even a “naive” prediction of no change would have been closer to the truth (low by 0.22 ⁰C) than Hansen’s Scenarios A (high by 0.68 ⁰C) and B (high by 0.53 ⁰C)!
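The same kind of back-of-envelope check covers the “naive forecast” comparison and the rough sensitivity rescaling above. The figures are the ones quoted in this post, and the rescaling is a simple proportionality argument, not a model result:

```python
# Hansen's scenario errors vs. a naive "no change" forecast, plus the crude
# sensitivity rescaling argued above. All figures are the ones quoted in this post.
actual = 0.22                                   # deg C, observed 1988-2012 increase
predicted = {"A": 0.90, "B": 0.75, "C": 0.29}   # deg C, Hansen's predicted increases

print(f"Naive 'no change' forecast: low by {actual:.2f} C")
for name, p in predicted.items():
    print(f"Scenario {name}: high by {p - actual:.2f} C")

# Proportionality argument only: if A and B over-predict by ~3-4x, scale the
# assumed 4.2 C equilibrium sensitivity down by the same factor.
for factor in (3, 4):
    print(f"4.2 / {factor} = {4.2 / factor:.1f} C per CO2 doubling")
```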
The “predictions” which various bloggers attribute to Hansen are not predictions but rather are projections. Though they are often conflated, the term “prediction” and the term “projection” have differing meanings. To conflate the two terms is to foster deceptive arguments on the methodology of the international study of global warming via the equivocation fallacy.
Terry Oldberg:
Though they are often conflated, the term “prediction” and the term “projection” have differing meanings.
>>>>>>>>>>>>>
Darn right. One is based on data with error bars and a defined precision and the other is an excuse for not having either.
davidmhoffer:
Thanks for taking the time to reply. There are interesting differences between models that make predictions and models that make projections. One is that a model of the former type conveys information to a policy maker about the outcomes from his or her policy decisions while a model of the latter type conveys no information. Thus, while a model of the former type is suitable for making policy, a model of the latter type is completely unsuitable. In AR4, the models that are cited by the IPCC as the basis for making policy on CO2 emissions convey no information to policy makers about the outcomes from their policy decisions; thus these models are completely unsuitable for making policy.
Though completely unsuitable, models that make projections are nonetheless what are used in making policy. That this is so is one of many ghastly consequences from the incorporation of the equivocation fallacy into arguments regarding the methodologies of climatological studies.
An “equivocation” is an argument in which a term changes meaning in the middle of the argument. By logical rule, to draw a conclusion from an equivocation is improper. To draw a conclusion from an equivocation is the equivocation fallacy. Under the circumstance that the words “prediction” and “projection” are treated as synonyms, the word pair prediction-projection has dual meanings and is said to be “polysemic.” In making arguments about the methodologies of their studies, climatologists use polysemic terms that include prediction-projection and draw improper conclusions from these arguments. One of the consequences is to move money from the pockets of non-climatologists to the pockets of climatologists. As uses of the equivocation fallacy are hard to spot, few of the non-climatologists are aware of having their pockets picked by the climatologists!
@Terry Oldberg:
I’m so sick of that deceptive canard. It is used as a shield to cover blatant deception. First started by the Club Of Rome propaganda piece “Limits To Growth” by Meadows et al., as near as I can tell.
Folks tell you what they expect to happen in the future. That’s a prediction. Fortune tellers do not say “I will now project your future!”… One doesn’t say “The High Priest will now project the date of the eclipse.” In common usage, saying “this is what will happen” or “this is what I expect to happen” is a prediction.
The rest of the word play is just bafflegab to dodge responsibility for it being a FAILED prediction.
And I predict the Warmers will continue to play the Projection Game as cover for abuse, deception, and error.
E.M.Smith:
The deception being practiced by participating climatologists has been studied by logicians and is called by them the “equivocation fallacy.” Please see my response to davidmhoffer for details.
By the way, skeptics as well as warmists are guilty of incorporating the equivocation fallacy into arguments about methodology. Participating skeptics unwittingly parrot uses of the equivocation fallacy by their opponents the warmists. In doing so, they reduce their argument with the warmists to one that is over the size of the equilibrium climate sensitivity (TECS). TECS does not logically exist but rather is a product of uses of the equivocation fallacy in making methodological arguments.
Ira
I consider that Russ R. (see his comment: Russ R. says: March 20, 2013 at 9:32 am) raises a very good point. Have a look at it.
I think that your article could be improved if you were to detail the CO2 assumptions (in ppm), both natural and man-made, which Hansen made in scenario A and scenario B. If Hansen was basing these assumptions on, say, 30 years’ worth of CO2 emissions between 1958 and 1988, then CO2 from 1988 to date has risen at more than a linear rate. In particular, the man-made component certainly has.
I had always thought that CO2 emissions were above BAU (scenario B) but less than scenario A, i.e., that we were somewhere between the two scenarios – running a little above BAU. However (given what Russ R says), it may be that my understanding of this is incorrect, since although man-made CO2 emissions are above those assumed in BAU (scenario B), this increase has been offset by the growth of the naturally occurring CO2 sinks, which have grown faster than was envisaged in BAU (scenario B).
I think if we are to consider Hansen’s ‘projections’ (some would say ‘predictions’) we need to specifically consider CO2 emissions and the assumptions that Hansen made with respect to these. Your Mauna Loa plot shows what has happened to CO2, but it does not detail precisely what Hansen assumed CO2 emissions would be running at in his various scenarios.
PS The point raised by Old Engineer is very interesting.
Phil. March 21, 2013 at 6:43 pm: Thanks for clarifying that the “Fig 2” you were referring to is from Hansen 1988, available as a .pdf http://pubs.giss.nasa.gov/docs/1988/1988_Hansen_etal.pdf
In your first comment (Phil. March 21, 2013 at 10:46 am) you wrote:
OK, Fig. 2 shows the assumed “Greenhouse forcing for trace gas scenarios A, B, and C as described in the text.” The horizontal axis is years, from 1960 through 2050, and the vertical axis is delta T in ºC. There are three sections, with the upper section showing CO2 forcing. As I quoted from the Abstract in my graphic above, Scenario A assumes an exponential increase in CO2, B assumes a linear increase, and C assumes a linear increase until the year 2000 and then CO2 flatlines. The middle section shows CO2 + trace gases, with forcings over twice that of the upper section. The lower section shows CO2 + trace gases + aerosols, with forcings the same as the upper section for Scenario A, but lower for B and C due to assumed volcanic eruptions.
The text in section 1 of Hansen 1988 says that CO2 “is now about 345 ppmv with current mean annual increments of about 1.5 ppmv”. Given that starting point, if CO2 had increased linearly (Scenario B) it would have increased 36 ppmv in 24 years and be up to 381 ppmv. Actual CO2 in 2012 was about 393 ppmv, so, for CO2, the increase has been exponential, which corresponds to Scenario A.
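A quick sketch of that arithmetic (the 345 ppmv starting point and 1.5 ppmv/yr increment are the Hansen 1988 figures quoted above; 393 ppmv is the approximate Mauna Loa value for 2012):

```python
start_ppmv = 345.0   # Hansen 1988: CO2 "now about 345 ppmv"
increment = 1.5      # ppmv per year, the quoted "current mean annual increment"
years = 24           # 1988 -> 2012
actual_2012 = 393.0  # approximate Mauna Loa value for 2012

linear_2012 = start_ppmv + increment * years
print(f"Linear (Scenario B-style) projection for 2012: {linear_2012:.0f} ppmv")  # ~381

# Implied average compound growth rate if the observed rise were exponential
rate = (actual_2012 / start_ppmv) ** (1.0 / years) - 1.0
print(f"Average compound growth rate 1988-2012: {rate * 100:.2f} %/yr")
```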
The text in Section 2 of Hansen 1988 says the “equilibrium sensitivity” (what is now called “Climate Sensitivity”) to a doubling of CO2 from 315 ppmv to 630 ppmv is 4.2 ºC.
The “trace gases” include CH4 and N2O, and Scenario A assumes a linear increase, while B assumes a moderate decrease, while C assumes a “more drastic curtailment of emissions than has generally been imagined”. Which of these three scenarios best describes the increase in “trace gases”? I think it is A or B, which predict much greater warming than has actually occurred. It is definitely NOT C, yet the prediction of C, while still high, is closest to the actual.
The “aerosols” come from volcanoes, and they provide a negative forcing. Scenario A assumes no volcanoes. B and C assume some volcanoes.
So, Phil., now that we are on the same Fig. 2, please make your point. Tell us about the “more drastic curtailment of emissions [of trace gases] than has generally been imagined”. And, what about aerosols? It is clear that Scenario C, Hansen’s best prediction, is based on counterfactuals with regard to CO2 stabilizing (it has increased exponentially) and “drastic curtailment” of other trace gases.
Please take us through why Scenario B, which Hansen says is the most likely, got warming high by so much.
advTHANKSance
Ira
Ira Glickstein, PhD says:
March 22, 2013 at 8:47 am
“…if CO2 had increased linearly (Scenario B) it would have increased 36 ppmv in 24 years and be up to 381 ppmv. Actual CO2 is 2012 was about 393, so, for CO2, the increase has been exponential which corresponds to Scenario C….”
/////////////////////////////////////////////////////////////////////////////////
Ira
Actual 2012 CO2 is 393 ppm. This is more than linear, which would have been 381 ppm, but is it properly classified as exponential?
You state: “… so, for CO2, the increase has been exponential which corresponds to Scenario C….” but do you mean it corresponds with Scenario A (not C), since Scenario A is the projection of an exponential growth in CO2 emissions, and you are asserting that CO2 has risen exponentially?
Of course, we are not entirely Scenario A because Scenario A assumes no volcanoes and there has been some (claimed) negative forcing due to aerosols.
We are obviously not Scenario C since that assumes no increase in CO2 post 2000. Nor are we Scenario B, which assumes some negative aerosol forcing but a linear rise in CO2 emissions, whereas we have seen more than a linear rise in CO2. Are we therefore not somewhere between the Scenario B and Scenario A projections? Should we not be comparing reality with something lying between Hansen’s Scenario B and A projections?
There is also the spanner in the works pertaining to aerosols from developing nations’ use of coal powered generation. Personally, I consider this suspect. Is it not the position that aerosol emissions today are no greater than they were in the 70s/80s? Perhaps a plot of aerosol emissions from 1980 to date would be a useful addition to your article.
Well, I see nobody knows what to make of my evidence. It shocks me that something so obvious can be so hard for people to grasp. I guess doing this kind of stuff for a living for several decades skews your viewpoint to the point that things which are obvious and basic to you are just not in the realm of experience for most others.
Oh well, so it goes. But, I have made predictions, so you can all watch them unfold and realize I am right.
Bart:
It is logically troublesome that the entity which you call a “prediction” is the entity which I call a “projection” for to treat the two terms as synonyms makes of the word pair “prediction-projection” a polysemic term; a polysemic term is a term with several meanings. That this word pair is polysemic leads arguments about the methodology of global warming research to degenerate into examples of the equivocation fallacy. Thus, it is important for all of us to assign the same distinct meanings to the two terms.
Terry Oldberg says:
March 22, 2013 at 1:41 pm
I think you addressed the wrong guy. I haven’t addressed anything to you in particular, and am off on a completely different topic.
But, I get your argument, a projection is not a prediction. It is a scenario of what would happen under specific conditions. However, when you determine which projection is consonant with observed conditions, then that projection can effectively be considered a prediction. So, in that regard, I see your objection as largely semantic.
Bart:
Though it has a semantic aspect, the problem that interests me is logical. In making arguments about the methodologies of their studies, climatologists habitually draw improper conclusions from equivocations, thus being guilty of the equivocation fallacy. A favored vehicle for this practice is to treat “prediction” and “projection” as synonyms. To do so is to create a polysemic term that switches meaning in the middle of their argument. When a conclusion is drawn from this argument, another example of an equivocation fallacy is born.
Opportunities for shenanigans of this kind can be eliminated by disambiguating terms in the language of methodological arguments thus eliminating the polysemic terms. When this is done, today’s climate models are revealed to have numerous pathological features. Among them is that the models convey no information to policy makers about the outcomes from their policy decisions. Thus, the models are useless for making policy. It is instances of the equivocation fallacy that make them seem useful.
And, my prediction is not a projection.
Bart:
As I use the term “prediction,” it is a product of an inference to the unobserved outcome of an event in a statistical population; for the IPCC climate models there is no statistical population and thus no such thing as a prediction. As I use the term “projection,” it is a mathematical function that maps the time to the global average surface air temperature. Your “prediction” sounds like my “projection” and unlike my “prediction.”
The identity of the term that one assigns to the meaning which I assign to the term “predict” and the identity of the term that one assigns to the meaning which I assign to the term “project” is immaterial to the logic or illogic of the methodologies of climatological studies. Currently these methodologies are illogical but sound to many as though they are logical. This mistake is a consequence from the fallacy of drawing an improper conclusion from an equivocation, the so-called “equivocation fallacy.”
Either way you look at it, the atmosphere allows the Earth’s surface to retain heat (and other) energy. Some calculations say as much as 30C.
The question is whether CO2 has any influence on this. Ira says about 1C per doubling of CO2 concentration. I suspect that the answer is more likely to be 0C, mainly because the atmosphere is a self-balancing mechanism controlled and regulated by pressure differences. CO2 makes no difference (well, extremely small) to the atmospheric pressure or to the rate of transfer of energy within the atmosphere. Any minor warming by so-called back-radiation from CO2 would be instantly balanced out by the atmosphere’s pressure regulating mechanism. The atmosphere acts to COOL the surface if the surface is warmer than it should be, and would also act to cool any part of the atmosphere that gets warmer than the pressure gradient will allow.
Anyways.. it will certainly be fun watching the warmists squirm over the next several years as the global temperature starts to drop. 🙂
AndyG55 I agree – I find it very difficult to conceive that man, such a puny animal, occupying such a small space on this, relatively, gigantic planet with its vastly more gigantic atmosphere, could have any appreciable effect on it. Is this not a hangover from the kind of anthropocentrism (e.g. Man is in God’s image, Earth at the centre of the Universe, etc.) that reflects more on the arrogance of our species than it does on the state of affairs?
Yes, AndyG55, but while we enjoy the discomfort of the warmists, millions are paying the price of this cruel deception; affordable energy has been at the heart of civil refinement (let’s not call it ‘progress’) and this ghastly conspiracy is almost wholly responsible for depriving the burgeoning population of access to it. People are being driven out of work, poverty stalks the land, the lights are going to go out, the poor and elderly will starve and freeze to death, all as a result of the war against humanity being waged by self-appointed, self-important bullies in the name of the “Green Agenda”.
Man under threat, like many other animals, tends to produce more of the species. Given security from war, ample food, clothing and shelter, the reproductive rate falls and, if there really is an overpopulation problem, that problem will diminish without draconian eugenics and the like.
AGW, like WMD, was contrived by the ruthless, selfish and powerful to impose servility on the innocent and ignorant. While we gloat over the warmists’ humiliation, let us also grieve for all those paying the price of their arrogance.
What surprises me even more is that very few seem to express anger now that the deception has been exposed.
Terry Oldberg says:
March 22, 2013 at 5:34 pm
‘Your “prediction” sounds like my “projection” and unlike my “prediction.”’
Not so. It is based on the statistically observed behavior of what I argue is an, at least approximately, ergodic system – the time average of certain measures is, approximately, equal to the distribution expectation.
It is a prediction. It is not just one random possibility, it is the expected outcome.
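For readers unfamiliar with the term, the ergodicity claim here is the textbook one: for a suitable stationary process, a long time average approaches the ensemble expectation. A minimal numerical illustration on a synthetic AR(1) process (not Bart's actual model):

```python
import numpy as np

rng = np.random.default_rng(0)
phi, mu, sigma = 0.9, 2.0, 1.0   # AR(1): x[t] = mu + phi*(x[t-1] - mu) + noise
n = 200_000

x = np.empty(n)
x[0] = mu
for t in range(1, n):
    x[t] = mu + phi * (x[t - 1] - mu) + sigma * rng.standard_normal()

# For an ergodic stationary process the time average approaches the ensemble mean.
print(f"time average = {x.mean():.3f}, ensemble mean = {mu}")
```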
Bart:
If your “prediction” is an Oldbergian prediction, underlying your model is an example of a statistical population. If there is one, kindly describe the independent events in this population. In particular, what is the starting time and ending time of each event, and what is the complete set of possible outcomes?
Terry Oldberg says:
March 22, 2013 at 5:34 pm
I agree with you entirely in this sense, however: What the IPCC has done is assumed a model based on a population of one, and projected that model forward to give a distribution of outcomes. The usefulness of the resulting distribution is not in predicting the future, but in validating the model, i.e., in showing whether it is likely false or indeterminate – it cannot show if it is true.
Unfortunately, the clearly desired implication is that they are predictions, and that the scarier scenarios can happen, when there really is a very tenuous, to the point of negligible, basis to expect that at all.
But, that differs from what I have done. I have looked at the time evolution of the data itself, and matched it to a statistical model which describes the observed dynamics over the timeline. Their process is deductive, based on premises and projected from a sample size of one. Mine is inductive – I start with the data, and infer the statistical model from it.
And, mine has matched the future observables, where theirs has failed, i.e., it is very likely false. For me, the turning point of the ~60 year cycle in globally averaged temperature arrived right on time in about 2005, whereas the first figure in Ira’s post here shows that they are skirting on the extreme lower boundary of the distribution of their projections.
In this reality, systems tend to behave in simple ways. Complex systems tend to regress to a simple systematic mean. Thus, for example, the ungodly complications of quantum theory regress to Newton’s simple F = ma in the large, as demonstrated by Ehrenfest’s Theorem. Complex nonlinear systems tend to behave as simple linear ones near a particular equilibrium. Such regression has been observed ubiquitously. Our entire techno-industrial society depends upon it.
Those who stay locked in ivory towers and have little contact with practical reality tend to get wrapped around the axle, and overwhelmed by complexity. They just cannot conceive that the system could really evolve so simply as the rate of change of atmospheric CO2 being proportional to temperature anomaly, or the evolution of temperature being simply a trend plus a simple cyclic phenomenon. Yet, that is precisely what the data confirm. And, humans clearly have little impact on either.
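Bart's "inductive" recipe, fitting a simple empirical model (linear trend plus a roughly 60-year cycle) directly to the data, can be sketched with ordinary least squares. The sketch below runs on synthetic data and is only an illustration of the kind of fit being described, not a reproduction of his analysis:

```python
import numpy as np

# Synthetic "anomaly" series: linear trend + 60-year cycle + noise (a stand-in
# for a real temperature record; coefficients are arbitrary).
rng = np.random.default_rng(1)
years = np.arange(1880, 2013, dtype=float)
anom = (0.005 * (years - 1880)
        + 0.1 * np.sin(2 * np.pi * (years - 1880) / 60.0)
        + 0.05 * rng.standard_normal(years.size))

# Design matrix: intercept, centred linear trend, and a 60-year sin/cos pair
X = np.column_stack([
    np.ones_like(years),
    years - years.mean(),
    np.sin(2 * np.pi * years / 60.0),
    np.cos(2 * np.pi * years / 60.0),
])
coef, *_ = np.linalg.lstsq(X, anom, rcond=None)
print(f"fitted trend: {coef[1] * 100:.2f} C/century, "
      f"60-yr cycle amplitude: {np.hypot(coef[2], coef[3]):.2f} C")
```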
Terry Oldberg March 22, 2013 at 5:08 pm and March 22, 2013 at 5:34 pm:
I am trying to get my head around the distinction you make between PROJECTION and PREDICTION. In my main Topic above I used “prediction” exclusively. In my previous writing, I do not believe I ever used “projection” in the sense you seem to be using it here as some sort of invalid prediction. The Merriam-Webster dictionary seems to be on my side:
Note that we have to go to the ninth definition to find anything like what we are talking about here.
Indeed, when I first heard the word used by you in association with the work of climatologists, I thought of simply taking a trend line (such as CO2 levels or temperature anomalies, etc.) and projecting it to the future. Thus, if the most recent trend was a linear slope, I would project it in a linear manner, if exponentially increasing slope, I would project it with upward curvature, if decreasing slope, I would project it with downward curvature.
In my research, I found how the IPCC defines the terms: see http://www.ipcc-data.org/ddc_definitions.html
Therefore, we could call Hansen’s 1988 scenarios A, B, and C “projections” in the IPCC sense. Each of Hansen’s scenarios is an attempt to project the curve of the current temperature anomalies into the future according to three different possible sets of actions by human societies (emission of trace gases) and Nature (aerosols from volcanic eruptions). A is “business as usual” with no special action to curb emissions and no volcanic activity. B is moderate curbing of emissions and some volcanic eruptions. C is drastic curbing of emissions and some volcanic eruptions.
Hansen 1988 says Scenario B is the “most likely” and thus, by the IPCC definition, it becomes a forecast or prediction.
As a native speaker of (American – actually Brooklyn :^) English, I would go along with the IPCC and say that Scenario B, which Hansen 1988 called “most likely” is a prediction. Assuming he was sincere when he and his team constructed and ran the models (which assumption I accept) he was predicting two things:
B: It is most likely that his recommendations would be accepted to some extent and thus, a) Society would curb emissions of trace gases in some moderate manner, and b) as a result, the Scenario B curve of future temperature anomalies would be approximated by future measurements.
Now, although he did not say his Scenario A or Scenario C was “most likely”, I also consider them to be predictions. Namely:
A: If, in the less likely case that his recommendations were totally ignored, a) Society would NOT curb emissions of trace gases and they would continue to increase exponentially, and, b) as a result, the Scenario A curve of future temperature anomalies would be approximated by future measurements.
C: If, in the less likely case that his recommendations were totally accepted in a drastic, nearly unimaginable way, a) Society would DRASTICALLY curb emissions of trace gases and they would flatline in the year 2000, and b) as a result, the Scenario C curve of future temperature anomalies would be approximated by future measurements.
The above definitions of prediction seem reasonable to me. A failed prediction (and ALL of Hansen’s 1988 predictions were very wrong, including C which, while close to the actual temperature anomalies, is not matched by any drastic societal action nor any change in the trend of trace gas levels, which was the whole point of C) is, nevertheless, a PREDICTION.
You (Terry Oldberg) seem to be saying that a prediction based on wrong science is not a “prediction” at all, but only a “projection”.
But let us all agree that whatever Hansen 1988 is, it turned out almost totally wrong!
Ira
[” …as a result, the Scenario B curve of future temperature anomalies would be approximated by future measurements.” Should this not be “Scenario C curve” ? Mod] [Yes, fixed, Thanks. Ira]
Ira Glickstein:
I specialize in answering the question of whether methodological arguments are logical. Before examining the methodological arguments of the global warming climatologists in light of logic, it is necessary to rid these arguments of instances of the equivocation fallacy. This can be accomplished through disambiguation of the polysemic terms in the language of these arguments. A polysemic term is a term with more than one meaning.
Among the polysemic terms is the word pair prediction-projection in the circumstance that the two words in this word pair are treated as synonyms. In their study of the methodology of global warming climatology, Green and Armstrong found that most IPCC-affiliated climatologists treated the two words as synonyms at the time at which AR4 was being written.
The polysemic term prediction-projection can be disambiguated by assignment of distinct meanings to “prediction” and “projection.” This raises the issue of what these meanings shall be.
In addressing this task ( http://judithcurry.com/2011/02/15/the-principles-of-reasoning-part-iii-logic-and-climatology/ ), I formed the hypothesis that climatologists had acquired the two words from the meteorological literature. In particular, they had acquired the term “projection” from the literature of ensemble forecasting. In ensemble forecasting, a “projection” is a response function that maps the time to the values of a selected independent variable. Meteorologists seemed to adopt the definition of “prediction” that was standard in mathematical statistics. Under this definition, a prediction was an unconditional predictive inference.
As all of the evidence that I was able to acquire was consistent with this hypothesis, I went with it. By examination of uses of the two words in the literature of meteorology, I formed an impression of what meteorologists meant by each of the words. These are the meanings that I assign to the two words in this thread. While the IPCC claims there to be a circumstance in which a projection becomes a prediction, this is not mathematically possible for a “prediction” is an extrapolation to the outcome of a specified event in a statistical population but for global warming climatology there is no such population.
Bart:
In a search lasting 3+ years, I’ve been unable to find so much as a single event in the statistical population underlying the IPCC climate models. If you are aware of one, please point me in the right direction.
Instead of one or more events, I find multiple examples of the equivocation fallacy that deceive many of us into thinking that: a) predictions have been made when only projections have been made; b) the models convey information to policy makers on CO2 emissions when the models convey no such information; c) global temperatures are controllable by man when they are uncontrollable; d) the models have been validated when they have only been evaluated; and e) the scientific method has been followed in studies of global warming when it has not been followed.
Ira Glickstein, PhD says:
March 23, 2013 at 12:01 pm
ALL of Hansen’s 1988 predictions were very wrong, including C which, while close to the actual temperature anomalies, is not matched with any drastic societal action nor any change in the trend of trace gas levels, which was the whole point of C)
The ‘drastic societal action’ was the Montreal Protocol, as for the change in the trend of trace gas levels, see below.
Ira, I suggest you read this post by McIntyre, it will show the observed reductions in CFCs, CH4 and N2O actually fell below Hansen’s Scenario C:
http://climateaudit.org/2008/01/17/hansen-scenarios-a-and-b/
Phil. says:
March 23, 2013 at 4:54 pm
Ira, I suggest you read this post by McIntyre, it will show the observed reductions in CFCs, CH4 and N2O actually fell below Hansen’s Scenario C
So with the very low rise now, it appears as if CO2 was never a major player all along so we can ignore CO2. Is that correct?
Werner Brozek:
Whether or not CO2 is a major factor cannot be determined until climatologists supply a missing ingredient for doing research that is “scientific.” This ingredient is the statistical population underlying their models.
Suppose this population is finally described and the duration of an event in this population is 30 years, with two possible outcomes. One is that the spatially and temporally averaged global surface air temperature exceeds the long run median. The other is that the spatially and temporally averaged global surface air temperature does not exceed the median. In this case, the recent 16 year period in which the warming has oscillated about zero yields no observed events. Thus, it provides us with no information about the outcomes of the events of the future.
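Terry's hypothetical population can be made concrete in a few lines: slice a long temperature record into non-overlapping 30-year events and record whether each event exceeds the long-run median. The sketch below uses synthetic data; the 30-year duration and the above/below-median outcome are simply the example given in the comment:

```python
import numpy as np

rng = np.random.default_rng(2)
temps = rng.normal(14.0, 0.3, size=150)   # 150 years of synthetic annual means

duration = 30                             # years per event (the example above)
median = np.median(temps)                 # long-run median of the whole record

# Non-overlapping 30-year events; the outcome of each event is whether its
# mean exceeds the long-run median.
events = temps[: (temps.size // duration) * duration].reshape(-1, duration)
outcomes = events.mean(axis=1) > median
print(f"{len(events)} events observed; exceeds long-run median? {outcomes}")
```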
Terry Oldberg says:
March 23, 2013 at 3:18 pm
“If there is one, kindly describe the independent events in this population.”
They are the events which drive the system forward in time. If a whitening filter can be devised which effectively yields an estimate of that population of inputs as stationary broadband noise, then the inverse of that filter provides an effective model for prediction. It is not, of course, guaranteed, but the range of systems for which such an approach has been successfully applied is vast.
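In its simplest form, the whitening-filter idea Bart describes is autoregressive modelling: fit an AR filter whose residuals look like broadband noise, then run the inverse filter (the AR recursion) forward as a predictor. A bare-bones numpy sketch on synthetic data, not his actual procedure:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic AR(2) series standing in for the observed data
n, a1, a2 = 2000, 1.5, -0.7
x = np.zeros(n)
for t in range(2, n):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.standard_normal()

# Fit the whitening (AR) filter by least squares: x[t] ~ c1*x[t-1] + c2*x[t-2]
X = np.column_stack([x[1:-1], x[:-2]])
c, *_ = np.linalg.lstsq(X, x[2:], rcond=None)

# If the filter is adequate, the whitened residuals look like broadband noise
resid = x[2:] - X @ c
print("fitted coefficients:", np.round(c, 3), " residual std:", round(resid.std(), 3))

# Inverse filter as predictor: run the recursion forward with zero future noise
hist = list(x[-2:])
for _ in range(10):
    hist.append(c[0] * hist[-1] + c[1] * hist[-2])
print("10-step forecast:", np.round(hist[2:], 2))
```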
Bart:
Thanks for taking the time to reply. An event has a starting time and stopping time and a set of mutually exclusive collectively exhaustive possible outcomes. I’d like to know the starting time and stopping time for each statistically independent event in your population as well as the set of possible outcomes.
Terry Oldberg says:
March 23, 2013 at 3:18 pm
PS: I believe your criticisms are right on. What is being done in climate science right now is a scattershot approach which has very little likelihood of success. I would advocate a more phenomenological approach, based on how the data has actually been observed to behave, rather than trying to make the observations conform to an underlying theory which might well be (indeed, has been found to be IMHO) false. As Sherlock Holmes was fond of saying: “It is a capital mistake to theorize before one has data.”
Bart:
Thanks for the support. The aspect of global warming research that strikes me as most interesting is that the methodology is unscientific, illogical and unsuitable for the intended purpose but that this state of affairs has successfully been covered up through repeated uses of a deceptive argument. Two hundred billion US$ have been spent on the misguided research and deluded governments are gearing up to spend one hundred trillion US$ or so on implementing the results. An advocate for impoverishing people in this way has received the Nobel Peace Prize. The President of the U.S. and United Nations are on board. This is a great story!
Terry Oldberg says:
March 24, 2013 at 12:07 pm
“I’d like to know the starting time and stopping time for each statistically independent event in your population as well as the set of possible outcomes.”
I will see if a brief synopsis of my viewpoint will help.
We start with a description of the system from a set of linear time invariant stochastic differential equations. There are many powerful tools in existence for the identification of such systems. The best starting point is probably estimating the power spectral density (PSD).
For example, I did a PSD estimate of Sun Spot Number here. From this, I was able to see that the spectrum was dominated by two spectral peaks which modulated against one another to produce four peaks in the spectrum. So, the underlying system has the character of the Hypothetical Resonance PSD I show in the middle. When squared, the signal has the theoretical spectrum shown as the green line in the bottom plot, and that matches up pretty well with the spectrum of the squared process from actual data in blue. Although there are other small peaks evident in the data, it is apparent that the two processes identified above dominate. The others can be added at some point to achieve a better model, but these two main ones should be enough to provide a useful first-cut approximation.
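The "two peaks modulating into four" observation is ordinary trigonometry: squaring a sum of two sinusoids produces spectral lines at 2f1, 2f2, f1+f2 and f1−f2. A small illustration with scipy's periodogram on synthetic data (the frequencies are arbitrary, not the solar periods Bart identified):

```python
import numpy as np
from scipy.signal import periodogram

n = 8192
t = np.arange(n)
f1, f2 = 745 / n, 680 / n                 # two nearby tones on exact FFT bins (arbitrary)
rng = np.random.default_rng(4)
x = (np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
     + 0.05 * rng.standard_normal(n))

f, p = periodogram(x)                      # raw signal: two peaks, at f1 and f2
print("signal peaks:        ", np.sort(np.round(f[np.argsort(p)[-2:]], 4)))

f, p = periodogram(x ** 2)                 # squared signal: four modulation products
print("squared-signal peaks:", np.sort(np.round(f[np.argsort(p)[-4:]], 4)))
# Expect lines near f1-f2, 2*f2, f1+f2 and 2*f1.
```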
I do not know what these processes are but, from a phenomenological viewpoint, I do not need to. Once identified, their source can be tracked down independently, but we can still use the identified structure to predict future evolution.
A model for the dominant processes is shown here. The input PSD of the driving processes is assumed to be wideband and flat, and this idealization works fine if the true stochastic input is simply more or less uniform in the frequency range of each of the two spectral peaks. I show here and here the outputs of the model when simulated, and they are seen to have similar character to the actual observed SSNs.
Using the square of the SSN provides a smooth observable which can be incorporated into an Extended Kalman Filter. Using backwards and forwards propagation and update of the filter, the states can be optimally smoothed and primed at the last value for prediction. Propagating the differential equations forward then provides an approximate expected value, which is therefore a prediction of the future based on all past observables, and the Kalman Filter formalism provides RMS bounds on the error in the expected value. I haven’t gone forward with the project of doing so because it is a big job, and I have many other competing interests, some by which I earn a living, but the procedure is straightforward.
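As a toy version of the filter-then-propagate step described here, the sketch below runs a minimal linear Kalman filter on a noisy oscillator, then propagates the state forward with no further measurements to obtain a prediction with growing error bounds. It is a generic textbook filter, not Bart's EKF on squared sunspot numbers:

```python
import numpy as np

rng = np.random.default_rng(5)

# Truth: a unit-amplitude oscillation with ~60-sample period, observed with noise
dt, omega, n = 1.0, 2 * np.pi / 60.0, 120
t = np.arange(n) * dt
z = np.sin(omega * t) + 0.2 * rng.standard_normal(n)   # noisy measurements

# State [x, xdot]; exact harmonic-oscillator transition; we observe x only
F = np.array([[np.cos(omega * dt), np.sin(omega * dt) / omega],
              [-omega * np.sin(omega * dt), np.cos(omega * dt)]])
H = np.array([[1.0, 0.0]])
Q = 1e-5 * np.eye(2)          # small process noise
R = np.array([[0.2 ** 2]])    # measurement noise variance

xhat, P = np.array([0.0, omega]), np.eye(2)   # crude initial state and covariance
for k in range(n):
    # measurement update at time t[k]
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
    xhat = xhat + K @ (z[k] - H @ xhat)
    P = (np.eye(2) - K @ H) @ P
    # time update to t[k] + dt
    xhat, P = F @ xhat, F @ P @ F.T + Q

# Prediction: keep propagating with no further measurements; P grows accordingly
for k in range(10):
    tk = (n + k) * dt
    print(f"t={tk:5.1f}: predicted x = {xhat[0]:+.3f} "
          f"(1-sigma {np.sqrt(P[0, 0]):.3f}), truth = {np.sin(omega * tk):+.3f}")
    xhat, P = F @ xhat, F @ P @ F.T + Q
```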
Note that this is an entirely phenomenological approach: I do not need to know the actual underlying dynamics, only a reasonable equivalent representation. The observed structure of the dynamics can be expected to continue as it has in the past. So, we have a statistically non-trivial set of observations with which to verify the model.
Now, why are we justified in taking such an approach, and why should we expect it to bear fruit? First and foremost, because it has innumerable times in the past. It is not an overstatement to say that our entire tech-industrial society has been built upon this very foundation. But, it is also reasonable from a first principles point of view.
Most processes, at the most basic level, can be represented to high fidelity as the outcome of a randomly driven set of partial differential equations (PDEs). PDEs can generally be decomposed onto a functional basis, thence expanded into a multi-dimensional set of first order ordinary differential equations. Further simplifications can be achieved by focusing on those states which dictate the long term behavior, the Langevin equations. In the neighborhood of a particular equilibrium state, these equations take the form of a linear time invariant (LTI) system. And, so, we can expect that starting from determination of an LTI system which describes the observations, we can ultimately arrive at a useful predictive model.
That model then provides a standard against which to validate theories on the deeper underlying system dynamics. So, it is essentially the reverse of the procedure the climate establishment is using. As I said before, their approach is essentially deductive – they start developing theory unconstrained by the observables, then they try to force the observations to match the theory. That, as Sherlock would say, is bass-ackwards. You should start with the observations, which then constrain the form of your theoretical models, in an inductive progression.
The deductive approach is somewhat like trying to get a winning Lotto ticket by randomly choosing a number, and seeing if it matches the winning number. The inductive approach is more like observing that the winning numbers in this particular game are always prime and never repeat, and so you dramatically reduce the number of possible winning numbers from which you can choose and, eventually, since the numbers are composed of a finite number of digits, you can zero in on the winning number.
Bart says:
March 24, 2013 at 2:00 pm
“The deductive approach is somewhat like…”
Or, maybe, it is like trying to guess the solution of an equation by generating random numbers until you get an exact match of the equation, versus using a Newton algorithm to converge quadratically on the answer.
Werner Brozek says:
March 23, 2013 at 9:47 pm
Phil. says:
March 23, 2013 at 4:54 pm
Ira, I suggest you read this post by McIntyre, it will show the observed reductions in CFCs, CH4 and N2O actually fell below Hansen’s Scenario C
So with the very low rise now, it appears as if CO2 was never a major player all along so we can ignore CO2. Is that correct?
No, the point of Hansen 88 was that GHGs would have a major effect on climate; most of the short term change would be due to gases other than CO2, while long term the effect of CO2 would be significant. What the actual measurements show is that Hansen’s Scenario C was most representative of reality except for CO2, which was between A and B, so you’d expect the overall result to lie between B and C. Hopefully the ongoing melting of the Arctic sea-ice won’t lead to a recurrence of the growth in CH4, but current measurements in the Arctic suggest otherwise.
Phil.,
What would it take for you to admit that AGW is either non-existent, or so minuscule that it isn’t worth worrying about?
Numbers, please: how many more years of little or no global warming would it take? How much more human-emitted CO2 without runaway global warming would it take?
Or, is your mind made up to the point where nothing can possibly convince you that your “carbon” conjecture was/is wrong? <— [Like Hansen's true belief.]
Planet Earth is not agreeing with you, Phil. Who should we believe, you and Hansen? Or the planet?
Phil. says:
March 24, 2013 at 2:41 pm
“result to lie between B and C. Hopefully the ongoing melting of the Arctic sea-ice won’t lead to a recurrence of the growth in CH4 but current measurements in the Arctic suggest otherwise.”
The greenhouse effect of CH4 competes with H2O and is only measurable in dry winter weather. Are you SURE it’s a problem when it can’t even be measured in warm moist weather?
Terry Oldberg March 23, 2013 at 3:09 pm:
Thanks for the link to your Topic on Judith Curry’s site. I read it through and now I understand the distinction you are making between types of models and whether their outputs are properly called projections or predictions.
The only problem I see is that there are many areas of public policy that are so complex and beset with a lack of reliable data that the best anyone can achieve is what you call a projection. Nevertheless, individuals and public officials must make decisions in the short term despite the uncertainty.
So, what to do? Well, we should take the most conservative course and not act rashly unless it is pretty clear that that is necessary in a given case.
In my field of system engineering, risk is defined as the probability a given bad event will occur multiplied by the cost if that event occurs. Thus, if the cost of that bad event is catastrophic, we need to act to prevent it even if the probability is low. Conversely, if the probability of that bad event occurring is high, we need to act to prevent it even if the cost if it happens is low.
On the other hand, if the probability of a bad event happening is low, and the cost if it does occur is also low, we do not need to act to prevent it. I think that is the case with Global Warming. The probability of it amounting to more than 1 ºC per century appears to be very low, and the cost to society of even as much as 1 ºC per century is negligible, because we can adapt to it, and it may turn out to be of net benefit.
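Ira's risk definition (expected loss = probability × cost) is easy to illustrate; the numbers below are hypothetical placeholders, not estimates of any actual climate outcome:

```python
# Expected loss = probability of the bad event * cost if it occurs.
# The numbers are hypothetical placeholders for illustration only.
cases = [
    ("low probability, catastrophic cost", 0.01, 1_000_000),
    ("high probability, low cost",         0.90,     1_000),
    ("low probability, low cost",          0.01,     1_000),
]
for label, prob, cost in cases:
    print(f"{label}: expected loss = {prob * cost:,.0f}")
# On this reckoning the first two warrant preventive action; the third does not.
```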
Ira
Ira Glickstein:
Thanks for taking the time to read my article. A tricky aspect of the type of model that makes projections is that it conveys no information to policy makers about the outcomes from their policy decisions. Thus, though policy makers think the opposite, the availability of this type of model does not make global temperatures controllable through regulation of CO2 emissions.