CO2 Sensitivity is Multi-Modal – All bets are off

Guest Post by Ira Glickstein

A multi-modal probability distribution, such as the graphic below [from Schmittner 2011], cries out “MULTIPLE POPULATIONS”. Equilibrium Climate Sensitivity (expected temperature increase due to a doubling of CO2 levels, all else being equal) is distinctly different for Land and Ocean, with two peaks for Land (L1 and L2) and five peaks for Ocean (O1, O2, O3, O4, and O5).

When a probability distribution includes more than one population, the mean may, quite literally, have no MEANing! All bets are off.

Example of a Multi-Modal Distribution

According to the basic tenets of System Science (my PhD area), probability distributions that inadvertently mix multiple populations often lead to unreliable conclusions. Here is an easy-to-understand example of how a multi-modal distribution leads to ridiculous results.

Say we graphed the heights of a group of infants and their mothers. We’d get a peak at, say, 25″, representing the average height of the infants, and another at, say, 65″, representing the mothers. The mean of that multi-modal distribution, 45″, would represent neither the mothers nor the infants – not a single baby or mother would be 45″ tall!

If some “alien scientist” re-measured the heights of the cohort of children and their mothers over a decade, the mean would increase rapidly, perhaps from 45″ to 60″. If that “alien scientist” did not understand multi-modal distributions representing different populations, he or she might extrapolate and predict that, a decade hence, the mean would be 75″! Of course, actual measurements over a second decade, as the children reached their adult heights, would have a mean that would stabilize closer to 66″ (assuming about half the children were male). The “alien scientist’s” extrapolation would be as wrong as some IPCC predictions seem to be.
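The mother-and-infant arithmetic is easy to check with a short script. Below is a minimal sketch; the individual heights are made-up illustrative values chosen to average 25″ and 65″, not real measurements:

```python
import statistics

# Made-up heights (inches): infants averaging 25", mothers averaging 65".
infants = [23, 24, 25, 25, 26, 27]
mothers = [64, 64, 65, 65, 66, 66]
combined = infants + mothers

mean_all = statistics.mean(combined)  # 45.0 -- describes nobody
print(f"combined mean: {mean_all:.1f} in")

# No individual is anywhere near the combined mean:
closest = min(combined, key=lambda h: abs(h - mean_all))
print(f"closest individual: {closest} in "
      f"({abs(closest - mean_all):.1f} in away from the mean)")
```

The combined mean (45″) sits a full 18″ from the nearest individual, which is the point: the mean of a mixed population need not describe anyone in it.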

Implications of Multi-Modal CO2 Sensitivity

Schmittner says:

The [graph shown above], considering both land and ocean reconstructions, is multi-modal and displays a broad maximum with a double peak between 2 and 2.6 K [1 K = 1ºC], smaller local maxima around 2.8 K and 1.3 K and vanishing probabilities below 1 K and above 3.2 K. The distribution has its mean and median at 2.2 K and 2.3 K, respectively and its 66% and 90% cumulative probability intervals are 1.7–2.6 K, and 1.4–2.8 K, respectively. [my emphasis]
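For readers unfamiliar with how a mean, median, and cumulative-probability interval are read off a discretized PDF like the one quoted, here is a minimal sketch. The three-bump mixture below is made up (the bump locations loosely echo the quoted peaks); none of it is Schmittner's actual posterior:

```python
import numpy as np

# Toy multi-modal PDF: Gaussian bumps near 1.3, 2.3 and 2.8 K (illustrative).
S = np.linspace(0.0, 5.0, 5001)   # sensitivity grid (K)
dx = S[1] - S[0]

def bump(mu, sigma, amp):
    return amp * np.exp(-0.5 * ((S - mu) / sigma) ** 2)

pdf = bump(1.3, 0.15, 0.10) + bump(2.3, 0.25, 0.75) + bump(2.8, 0.15, 0.15)
pdf /= pdf.sum() * dx             # normalize so the PDF integrates to 1

cdf = np.cumsum(pdf) * dx
mean = (S * pdf).sum() * dx
median = S[np.searchsorted(cdf, 0.5)]
lo66, hi66 = S[np.searchsorted(cdf, 0.17)], S[np.searchsorted(cdf, 0.83)]
print(f"mean={mean:.2f} K  median={median:.2f} K  "
      f"66% interval=({lo66:.2f}, {hi66:.2f}) K")
```

Even with three distinct modes, the summary statistics come out looking like one tidy number, which is exactly the danger described above.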

The caption for the graphic says:

Marginal posterior probability distributions for ECS2xC. Upper: estimated from land and ocean, land only, and ocean only temperature reconstructions using the standard assumptions (1 × dust, 0 × wind stress, 1 × sea level correction of ΔSSTSL = 0.32 K…). Lower: estimated under alternate assumptions about dust forcing, wind stress, and ΔSSTSL using land and ocean data.

So part of the multi-modality is due to different sensitivities to dust, wind, and sea surface temperature in the combined Ocean and Land data, and part is due to differences between Ocean and Land. But that is only part of the story. Please read on for how Geographic Zones seem to have different sensitivities.

Geographic Zones Have Different Sensitivities

Another Schmittner 2011 graphic, shown below, indicates how different the Arctic, North Temperate, Tropics, South Temperate, and Antarctic zones are. Indeed, there is a startling difference between the Arctic and Antarctic.

Zonally averaged surface temperature change between the LGM and modern. The black thick line denotes the climate reconstructions and grey shading the ±1, 2, and 3 K intervals around the observations. Modeled temperatures, averaged using only cells with reconstructions … are shown as colored lines labeled with the corresponding ECS2xC values.

The thick black line represents the “climate reconstruction” (change in temperature in ºC) between current conditions and those of about 20,000 years ago, during the Last Glacial Maximum. The LGM was the coldest period of the past 100,000 years. Note that the Tropics were about 2ºC cooler than they are now, the South Temperate zone was about 3ºC cooler, the North Temperate zone about 4ºC cooler, and the Antarctic about 8ºC cooler. However, according to the climate reconstruction, the Arctic was about 1ºC WARMER than it is today!

The estimated CO2 level during the LGM is 185 ppm, quite a bit below the estimated Pre-Industrial level of about 280 ppm, and about half that of the current measured level of about 390 ppm. Thus, IF CO2 DOUBLING CAUSED ALL of the temperature increase from the LGM to the present, the sensitivity for the geographic zones would range from +8ºC (Antarctic) to +4ºC (North Temperate) to +3ºC (South Temperate) to +2ºC (Tropics) to -1ºC (Arctic).
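The zone-by-zone arithmetic can be sketched as follows. One refinement: 185 → 390 ppm is log2(390/185) ≈ 1.08 doublings, so the per-doubling numbers come out slightly below the raw ΔT values. The whole exercise rests on the deliberately extreme assumption, stated above, that CO2 caused all of the change:

```python
import math

C_LGM, C_NOW = 185.0, 390.0           # ppm, the levels cited in the post
doublings = math.log2(C_NOW / C_LGM)  # ~1.08 doublings since the LGM

# Zonal warming since the LGM, read off the reconstruction (deg C).
# The Arctic entry is negative: the reconstruction has it ~1 C warmer then.
warming = {"Antarctic": 8, "North Temperate": 4, "South Temperate": 3,
           "Tropics": 2, "Arctic": -1}

for zone, dT in warming.items():
    # implied sensitivity per doubling IF CO2 caused all of the change
    print(f"{zone:15s} {dT / doublings:+5.1f} C per doubling")
```
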

Of course, based on the Ice Core temperature records for several ice ages over the past 400,000 years, the warming in the 20,000 years after a Glacial Maximum tends to be significant (several degrees). Thus, while increases in CO2, all else being equal, do cause some increase in mean temperatures, it is clear from the Ice Core record, where temperature changes lead CO2 changes by 800 to 1200 years, that something else causes the temperature to change and then the temperature change causes CO2 to change. Thus, it would be wrong, IMHO, to assign more than some small fraction of the warming since the LGM to CO2 increases.

The colored lines in the above graphic correspond to modeled temperatures based on different assumed CO2 sensitivities, ranging from 0.3ºC to +8.4ºC. The darker blue line, corresponding to a sensitivity of 2.3ºC, is the best match for the thick black climate reconstruction line.

IPCC CO2 Sensitivities are Mono-Modal and have “Fat Tails”

So, how do the IPCC AR4 Figure 9.20 graphs of Equilibrium Climate Sensitivity compare to the Schmittner 2011 results? Not too well, as the graphic below indicates!

...Comparison between different estimates of the PDF (or relative likelihood) for ECS (°C). All PDFs/likelihoods have been scaled to integrate to unity between 0°C and 10°C ECS. ...

First of all, notice that NONE of the individual IPCC graphs are multi-modal! Yet, taken as a group, there are several distinct peaks, indicating that each of the researchers characterized only one of a number of multi-modal peaks, and were inadvertently (or purposely?) blind to the other populations. Thus, the IPCC curves, taken as a group, seem to support Schmittner’s results of multi-modality.

For example, compare the green curve (Andronova 01) to the red curve (Forest 06). They hardly overlap, indicating that they have sampled different populations.
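“Hardly overlap” can be quantified with an overlap coefficient: the integral of the pointwise minimum of the two densities (1 means identical curves, 0 means disjoint). A sketch with two hypothetical Gaussians standing in for the actual AR4 curves, which would need to be digitized from Figure 9.20:

```python
import numpy as np

S = np.linspace(0.0, 10.0, 10001)  # sensitivity axis (C)
dx = S[1] - S[0]

def gauss(mu, sigma):
    p = np.exp(-0.5 * ((S - mu) / sigma) ** 2)
    return p / (p.sum() * dx)      # normalize to integrate to 1

pdf_a = gauss(2.0, 0.5)            # stand-in for a curve peaking low
pdf_b = gauss(4.5, 0.8)            # stand-in for a curve peaking high

# Overlap coefficient: integral of min(p, q)
overlap = np.minimum(pdf_a, pdf_b).sum() * dx
print(f"overlap coefficient: {overlap:.3f}")
```

Two curves sampled from the same underlying population should overlap substantially; an overlap coefficient this small is what “sampling different populations” looks like numerically.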

There is another, less obvious problem with the IPCC curves. Notice that they each have a relatively “normal” tail on the left and what is called a “Fat Tail” on the right. What does that mean? Well, a “normal curve” has a single peak, representing both the mode and the mean, and two “normal” tails that become negligible beyond about ±3σ (the Greek letter sigma, representing standard deviation). A mono-modal curve may skew to the left or right a bit, which would put the mode (peak) to the left or right of the mean.
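For comparison, the tails of a true normal distribution die off very quickly; the standard-normal tail probability can be computed from the complementary error function. A quick check:

```python
import math

def normal_tail(k):
    """P(X > mu + k*sigma) for a normal distribution."""
    return 0.5 * math.erfc(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"P(X > mu + {k} sigma) = {normal_tail(k):.5f}")
# Beyond 3 sigma only ~0.13% of the probability remains; a fat-tailed
# sensitivity curve instead keeps visible probability out at 8-10 C.
```
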

The problem with the IPCC curves is that, in addition to the skew, the right-hand tail extends quite far to the right, out to 10ºC and beyond, before approaching zero. According to Schmittner 2011:

High sensitivity models (ECS2xC > 6.3 K) show a runaway effect resulting in a completely ice-covered planet. Once snow and ice cover reach a critical latitude, the positive ice-albedo feedback is larger than the negative feedback due to reduced longwave radiation (Planck feedback), triggering an irreversible transition … During the LGM Earth was covered by more ice and snow than it is today, but continental ice sheets did not extend equatorward of ~40°N/S, and the tropics and subtropics were ice free except at high altitudes. Our model thus suggests that large climate sensitivities (ECS2xC > 6 K) cannot be reconciled with paleoclimatic and geologic evidence, and hence should be assigned near-zero probability….[my emphasis]

Based on the above argument, I have annotated the IPCC figure to “X-out” the Fat Tails beyond 6°C. I did that because any sensitivity greater than 6°C would retrodict a “total snowball Earth” at the LGM which contradicts clear evidence that the ice sheets did not extend equatorward beyond the middle of the USA or corresponding latitudes in Europe, Asia, South America, or Africa. Indeed, if Schmittner is correct, the tails of the IPCC graphs that extend beyond 5°C (or perhaps even 4°C) should approach zero probability.
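The “X-out” operation amounts to truncating each PDF at 6°C and renormalizing what remains. A sketch with a hypothetical skewed curve (not any specific AR4 study) shows that truncation also pulls the mean down:

```python
import numpy as np

S = np.linspace(0.0, 10.0, 10001)     # sensitivity axis (C)
dx = S[1] - S[0]

pdf = S * np.exp(-S / 1.5)            # hypothetical skewed, fat-ish right tail
pdf /= pdf.sum() * dx

mean_full = (S * pdf).sum() * dx

trunc = np.where(S <= 6.0, pdf, 0.0)  # "X-out" everything beyond 6 C
trunc /= trunc.sum() * dx             # renormalize to integrate to 1
mean_trunc = (S * trunc).sum() * dx

print(f"mean before truncation: {mean_full:.2f} C")
print(f"mean after truncation:  {mean_trunc:.2f} C")
```
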

Conclusions

Schmittner 2011 contradicts the IPCC climate sensitivity estimates and thus brings into question all IPCC temperature predictions due to human-caused CO2 increases.

It is clear from the several, widely-spaced peaks in the IPCC AR4 Figure 9.20 curves that Equilibrium Climate Sensitivity is indeed multi-modal. Yet, ALL the individual curves are mono-modal. Thus, the IPCC figure is, on its face, self-contradictory.

If Schmittner 2011 is correct that sensitivity beyond about 6°C is impossible based on the fact that Tropical and Sub-Tropical zones were not ice-covered during the LGM, the Fat Tails of all the IPCC Equilibrium Climate Sensitivity curves are wrong. That calls into question each and every one of those curves.

The multi-modal nature of CO2 sensitivity indicates that the effects of CO2 levels are quite different between geographic zones as well as between Ocean and Land. Thus, the very concept of a whole-Earth Equilibrium Climate Sensitivity based on a doubling of CO2 levels may be misplaced.

Finally, if CO2 is as strong a driver of surface temperatures as the IPCC would have us believe, how in the world can anyone explain the apparent fact that, given a doubling of CO2 levels, the modern Arctic is about 1°C COLDER than the LGM Arctic?

BOTTOM LINE: The Climate System is multi-faceted and extraordinarily complex. Even the most competent Climate Scientists, with the best and purest of intentions, are rather like the blind men trying to characterize and understand the elephant. (One happens upon the elephant’s leg and proclaims “the elephant is like a tree”. Another happens to grab the tail and says with equal certainty “the elephant is like a snake”. The third bumps into the side of the elephant and confidently shouts “No, the elephant is like a wall!”) Each in his or her way is correct, but none can really understand all the aspects nor characterize or predict the behavior of the actual Climate System. And, sadly, not all Climate Scientists are competent, and some have impure intentions.


153 thoughts on “CO2 Sensitivity is Multi-Modal – All bets are off”

  1. Sorry, THIRD paragraph. Editors need editors too :-)

    [REPLY: Yes, I guess they do. Read the sentence again. -REP]

  2. Took me some time, but finally I understood what is said here.
    It seems so obvious, considering the enormity and unknown sides of the climate system, that it is absolute hubris to try to give predictions on the basis of the current science, let alone dictate actions…
    And, it seems to me there is no “global mean temperature”. It is a fiction.

  3. Ira said:

    “Thus, while increases in CO2, all else being equal, do cause some increase in mean temperatures, it is clear from the Ice Core record, where temperature changes lead CO2 changes by from 800 to 1200 years, that something else causes the temperature to change and then the temperature change causes CO2 to change. Thus, it would be wrong, IMHO, to assign more than some small fraction of the warming since the LGM to CO2 increases.”
    ———–
    Of course, that “something else” that causes temperatures to start to change would be Milankovitch cycles with the follow through warming coming from CO2 released from warming oceans as well as less being absorbed by phytoplankton, leading to the rise in CO2 from the LGM to the Holocene maximum of about 100ppm over a period of approximately 10,000 years. The kick-start is the Milankovitch cycle with the thermostat being CO2. After the Holocene climate optimum, CO2 levels were generally steady to falling slightly, like they do during all interglacials, until of course the modern industrial era when CO2 levels rose as much in a few hundred years as they had in the previous 10,000. The small modulations of this long-term climate pattern that we’ve seen for several million years come from solar fluctuations of various durations and intensities, ocean cycles (which are just modulations of solar activity) and of course volcanic activity. The central question is: what will the effect be of raising CO2 to levels not seen in at least 800,000 years over a time-frame so short so as to not find a parallel in the geologic record?

  4. The fact that different authors get peaks at different points does not necessarily imply a multimodal population. I chalk it up to tremendously large error bars myself (Lindzen convinced me that positive feedback estimates are incredibly uncertain).

    -J

  5. The average human has one testicle and one ovary.

    The issue of climate sensitivity *is* the core issue. I think Ira makes some very substantial arguments here.

  6. R. Gates;
    The central question is: what will the effect be of raising CO2 to levels not seen in at least 800,000 years over a time-frame so short so as to not find a parallel in the geologic record?>>>

    If you knew the slightest thing about physics, which you seem to take a certain amount of time, effort and pride to demonstrate, you would know the answer.

    The long term effects of (for example) a doubling of CO2 via a rise over a long period of time are precisely and exactly the same as the long term effects of a doubling of CO2 over a short period of time. If doubling of CO2 results in +1 degree, then doubling of CO2 results in +1 degree no matter if it takes 100 years or 10,000 to reach double. Since we know from the very geological record you cite that high levels of CO2 did NOT result in significantly higher temperatures, the obvious conclusion is that climate sensitivity to CO2 is so exceedingly low as to be insignificant.

  7. R. Gates says:
    December 18, 2011 at 10:40 am

    “.. The central question is: what will the effect be of raising CO2 to levels not seen in at least 800,000 years over a time-frame so short so as to not find a parallel in the geologic record”

    I think that’s been our sceptical argument all along – it’s an unanswered question.

  8. R. Gates says:
    December 18, 2011 at 10:40 am
    “The kick-start is the Milankovitch cycle with the thermostat being CO2. After the Holocene climate optimum, CO2 levels were generally steady to falling slightly, like they do during all interglacials, until of course the modern industrial era when CO2 levels rose as much in a few hundred years as they had in the previous 10,000.”

    So you’re saying Antarctica is warming like hell?

  9. Very interesting. There seems to be a minor mix-up: At the beginning of the subsection “Geographic Zones Have Different Sensitivities,” the graphic has the North Temperate zone at 4 deg. C cooler at the LGM and the South Temperate zone 3 deg. C cooler; but the author’s text immediately following the graphic has this reversed.

    [Leigh: THANKS, fixed. Ira]

  10. R. Gates says:
    December 18, 2011 at 10:40 am
    “The central question is: what will the effect be of raising CO2 to levels not seen in at least 800,000 years over a time-frame so short so as to not find a parallel in the geologic record?”

    I’ll let others deal with the correctness of this statement (there have been periods of very much higher CO2 and there have been relatively rapid temp changes with out a clear connection to CO2). All I want to say is that “the central question” after more than a century of warming still remains a question – maybe not so central though now. The frightening warming over about 30 years seems to have reached a hiatus since about 1995 (Phil Jones pointed this out and Trenberth bemoaned the travesty of no change) and in a few years this hiatus looks to be lasting at least 20 years. Surely this takes the “C” off of CAGW at least, because now we know that the galloping warming still can’t out shine natural variability. Maybe it even takes the “A” off in terms of strength of the effect when you consider that all the projections, models, paleo alarm gongs that predicted the hurricanes, sea level acceleration (it too has taken a bend for the flats), snow not being a thing of the past and it returning even to Kilimanjaro, Lake Chad filling up again….. So what we may be left with is a GW cycle that may have run its course. You have to be at least disappointed that things haven’t turned out as was supposed to be 95% certain.

  11. Sunrise and sunset are relatively short as compared to night and day, yet sunrise and sunset are the “statistical average”. According to climate science, sunrise and sunset are what we should expect to see most of the time.

    The assumption that there is but a single average temperature is at the heart of climate science. This assumption has never been proven. Indeed there is much evidence that this assumption is wrong.

    When you build a house on a poor foundation, you do not get a good result.

  12. Interesting post. I am dubious of the validity of the Schmittner 2011 sensitivity PDF in view of its abnormal, multi-modal shape. I suspect that some methodological issue with the study may be more responsible than the existence of multiple populations, although you may be right – I haven’t looked into it in any detail.
    However, I have investigated the AR4 WG1 sensitivity studies in some detail, and I disagree with the statement:
    “It is clear from the several, widely-spaced peaks in the IPCC AR4 Figure 9.20 curves that Equilibrium Climate Sensitivity is indeed multi-modal. Yet, ALL the individual curves are mono-modal.”

    I think that the differing peaks in AR4 WG1 Figure 9.20 instead reflect the fact that most of those studies (possibly all) have an incorrect central estimate of climate sensitivity. That is not surprising, if one examines the eight studies in detail and sees the extent of their dependence on simulations by GCMs and/or other pretty complex climate models, substantially differing assumptions as to ocean heat uptake and forcings (such as from aerosols), and differing choices of mathematical/statistical methods (some of which are of questionable validity).
    Only one of the eight IPCC Figure 9.20 studies (Forster & Gregory 2006) is wholly observationally based, and the IPCC distorted its equilibrium climate sensitivity (ECS) PDF by multiplying its height, at each value of ECS, by the square of ECS, greatly increasing the fatness of its upper tail.
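The effect of the reweighting described here is easy to sketch: multiply a PDF by S² (a prior proportional to sensitivity squared) and renormalize, then compare upper percentiles. The Gaussian below is a hypothetical stand-in, not Forster & Gregory's actual curve:

```python
import numpy as np

S = np.linspace(0.01, 10.0, 10000)  # sensitivity axis (C)
dx = S[1] - S[0]

pdf = np.exp(-0.5 * ((S - 1.6) / 0.7) ** 2)
pdf /= pdf.sum() * dx               # stand-in "observational" PDF

reweighted = pdf * S ** 2           # multiply height by ECS squared
reweighted /= reweighted.sum() * dx

def percentile(p, q):
    """Value of S below which a fraction q of the probability lies."""
    return S[np.searchsorted(np.cumsum(p) * dx, q)]

print(f"95th percentile, original:   {percentile(pdf, 0.95):.2f} C")
print(f"95th percentile, reweighted: {percentile(reweighted, 0.95):.2f} C")
```

The reweighted curve's upper percentile moves well to the right, which is the tail-fattening the comment describes.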

  13. Schmittner combines paleo reconstructions with climate models to arrive at his probabilities, so I wouldn’t put much stock in his results. We know that the GCMs are incapable of predicting the current climate (in other words, they are wrong), so his results will be affected by that. When the foundation is bogus, what is built on top of it is likely to be even more wrong.

    Schmittner’s paper says “Significant discrepancies occur over Antarctica, where the model underestimates the observed cooling by almost 4 K, and between 45-50° in both hemispheres, where the model is also too warm. Simulated temperature changes over Antarctica show considerable spatial variations”

    Yeah, they just never get Antarctica right.

  14. So are there any reliable figures for the so-called rise in CO2 that differentiate between man-caused and natural CO2? Seems most everyone yelling CAGW leaves that part out.

  15. Here’s an excerpt from an interview with Nathan Urban, one of the Schmittner et al. co-authors:

    ————
    Q: Does this study overturn the IPCC’s estimate of climate sensitivity?

    No, we haven’t disproven the IPCC or high climate sensitivities. At least, not yet. This comes down to what generalizations can be made from a single, limited study. This is why the IPCC bases its conclusions on a synthesis of many studies, not relying on any particular one.

    While our statistical analysis calculates that high climate sensitivities have very low probabilities, you can see from the caveats in our paper (discussed further below), and my remarks in this interview, that we have not actually claimed to have disproven high climate sensitivities. We do claim that our results imply “lower probability of imminent extreme climatic change than previously thought”, and that “climate sensitivities larger than 6 K are implausible”, which I stand by. I do not claim we have demonstrated that climate sensitivities larger than 3 K are implausible, even though we calculate a low probability for them, because our study has important limitations.
    ——-

    http://newscience.planet3.org/2011/11/24/interview-with-nathan-urban-on-his-new-paper-climate-sensitivity-estimated-from-temperature-reconstructions-of-the-last-glacial-maximum/

    So, definitely somewhat short of a “Nail in the coffin”.

    And you do realise they use models, don’t you?

  16. Gary Pearse said: (to R. Gates)

    “So what we may be left with is a GW cycle that may have run its course. You have to be at least disappointed that things haven’t turned out as was supposed to be 95% certain.”

    ——-
    As I’ve got no personal “horse in this race”, it would be hard for me to be disappointed or encouraged, but I would suggest that this race is far from over, and in fact is probably coming to one of its most interesting points. We seem to have a rather quiet sun period ahead for a few cycles at least, perhaps not dissimilar to that which we saw during the Dalton minimum, or perhaps, as some suggest, even as deep as the Maunder. Either way, it will be a splendid time to see the true effects and forcing due to solar influences stacked up against the ever increasing levels of CO2 and other greenhouse gases, with modulation by aerosols tossed in for good measure. Indeed, with all the many ways we have of measuring the many variables of earth, sun, ocean, atmosphere, etc., this will be a most exciting time to follow the study of climate. Many favorite theories from both sides of the AGW issue will be tossed aside or seriously modified, and by 2030, we’ll all be the wiser for it.

  17. I think the emphasis on temperature is misplaced. First, there’s Yellowstone and other super volcanoes. Yellowstone is perhaps ‘overdue’ and when she blows, the effect on the life that’s left will be so overwhelming that all this discussion will seem quaint.

    Second, I’m fascinated by the reconstructions of CO2 levels over the past 500 million years or so. Basically the CO2 level has gone in one direction–down–while life has proliferated. From the Devonian through the Carboniferous it fell off a cliff as life exploded.

    CO2 levels even now are the lowest they’ve been in half a billion years. It looks to me that life (well plant life which is food) removes CO2 from the atmosphere with amazing efficiency. More life means less CO2. We NEED CO2 and we’re actually doing something about it! We should be patting ourselves on the back.

    Anything else, like temperature or sea level rise we merely adjust to because nothing is as important to life as food.

  18. The next bit from the same interview is also interesting:

    “It is rare that a single paper overturns decades of work, although this is a popular conception of how science works. Many controversial results end up being overturned, because controversial research, almost by definition, contradicts large existing bodies of research. Quite often, it turns out that it’s the controversial paper that is wrong, rather than the research it hopes to overturn. Science is an iterative process. Others have to check our work. We have to continue checking our work, too. Our study comes with a number of important caveats, which highlight simplifying assumptions and possible inconsistencies. These have to be tested further.

    There is a great quote from an article in the Economist that sums up my feelings, as a scientist, about the provisional nature of science.

    “In any complex scientific picture of the world there will be gaps, misperceptions and mistakes. Whether your impression is dominated by the whole or the holes will depend on your attitude to the project at hand. You might say that some see a jigsaw where others see a house of cards. Jigsaw types have in mind an overall picture and are open to bits being taken out, moved around or abandoned should they not fit. Those who see houses of cards think that if any piece is removed, the whole lot falls down.”

    Most scientists I know, including myself, are “jigsaw” types. We have to see how this result fits in with the rest of what we know, and continue testing assumptions, before we can come to a consensus about what’s really going on here.”

    I think I like this guy!

  19. R. Gates says: December 18, 2011 at 10:40 am
    [what will the effect be of raising CO2 to levels not seen in at least 800,000 years over a time-frame so short so as to not find a parallel in the geologic record]

    Simple: Carbon Dioxide goes up and the Temperature remains the same.

    Average Global Temperature Data from the Climate Research Unit at the University of East Anglia below, indicates no change in the last 13 years, while Carbon Dioxide has risen by about 30%.

    Year Deviation from the base period 1961-90, degrees C

    1998 0.529
    1999 0.304
    2000 0.278
    2001 0.407
    2002 0.455
    2003 0.467
    2004 0.444
    2005 0.474
    2006 0.425
    2007 0.397
    2008 0.329
    2009 0.436
    2010 0.470
    2011 0.356

    Source: http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3vgl.txt
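A quick least-squares fit through the fourteen anomalies listed above makes the “no change” claim concrete (assuming the values are transcribed correctly from CRU):

```python
# HadCRUT3 global anomalies, 1998-2011, as listed in the comment (deg C).
anoms = [0.529, 0.304, 0.278, 0.407, 0.455, 0.467, 0.444,
         0.474, 0.425, 0.397, 0.329, 0.436, 0.470, 0.356]
years = list(range(1998, 2012))

n = len(years)
xbar = sum(years) / n
ybar = sum(anoms) / n
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(years, anoms))
         / sum((x - xbar) ** 2 for x in years))
print(f"trend: {slope * 10:+.3f} C per decade")  # essentially flat
```

The fitted trend over this window is effectively zero; note, though, that the choice of 1998 (a strong El Niño year) as the start point matters.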

  20. Thanks Ira, for this very clear post. This strengthens my opinion that the atmosphere is completely indifferent to any change in CO2, due to the fact that:

    A. there is no way to determine how sensitive it is, because that depends on the circumstances as described by you, and
    B. the evidence points to negative feedback in reaction to any change in temperature, as described by Lindzen.

    We have a remarkably stable climate, and earth and atmosphere have survived 4 billion years without burning everything into oblivion. What else can the conclusion be when it’s also known that CO2 levels follow temperature: CO2 levels follow climate conditions. That’s all there is.

  21. Good to see such a study. Next, could someone look at the temporal aspects? Since CO2 releases, human-inspired and otherwise, generally occur close to the ground, and many will have diurnal as well as seasonal variations, it would be unwise to ignore the effects on climate which occur before the assumed high degree of mixing within the local and regional troposphere, and between hemispheres (north and south, Pacific and non-Pacific), has had time to take place. I would imagine that CO2 releases near, for example, the ITCZ will have different effects from those during winter in northern Europe. I wouldn’t, a priori, expect either effect to amount to much, but such a study might contribute to a more conclusive ‘putting CO2 in its rightful place’, which I think we do need. Not least since the corruption of the IPCC has rendered their pronouncements all but worthless.

  22. Manabe and Wetherald (1975), “The Effects of Doubling the CO2 Concentration on the Climate of a General Circulation Model,” cautioned this point as well:
    “The models of Rasool and Schneider (1971) and Manabe and Wetherald (1967) are globally averaged models. However, the climate of the planet Earth is maintained by the nonlinear coupling between various processes, such as the poleward heat transfer by the atmospheric and oceanic circulations, as well as the vertical heat transfer by radiative transfer and convection. Thus it is clear that one cannot obtain a definitive conclusion, using a globally averaged model, concerning the effect of an increase in CO2 upon climate.”
    The title of Manabe and Wetherald’s paper was also a remarkably honest way to present their work as the result of a model, rather than as a claim that it actually captured the complexity of the natural system.

  23. nc says:
    December 18, 2011 at 11:52 am

    So is there any reliable figures with the so called rise in C02 that differentiates between man caused and natural C02. Seems most everyone yelling CAGW leaves that part out.

    —————

    Not at all. We know the rise in CO2 is “man caused” because we are emitting about 30 Gt per year and the atmosphere is retaining about half of that, the rest being taken up by the oceans. There are also isotopic studies that confirm the source of the CO2 as being from fossil fuels. That the rise in CO2 is “man caused” is one of the few things in climate science that can be considered effectively a certainty.

    Need links?
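The arithmetic behind “about half of 30 Gt” can be sketched as follows; the conversion factor is my assumption (≈7.8 Gt CO2 per ppm of atmospheric CO2, i.e. ≈2.13 GtC per ppm), not a figure from the comment:

```python
# Rough mass-balance arithmetic for the comment's figures.
emitted_gt = 30.0        # Gt CO2 emitted per year (figure cited above)
airborne_fraction = 0.5  # roughly half stays in the atmosphere
gt_per_ppm = 7.8         # assumed conversion: Gt CO2 per 1 ppm

rise_ppm = emitted_gt * airborne_fraction / gt_per_ppm
print(f"implied atmospheric rise: ~{rise_ppm:.1f} ppm per year")
```

That ~1.9 ppm/yr is consistent with the observed rate of rise, which is the mass-balance argument the comment is making.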

  24. “Thus, it would be wrong, IMHO, to assign more than some small fraction of the warming since the LGM to CO2 increases.”

    What I find interesting is this sudden notion that CO2 was the cause of the late 20th century warming when it was of nearly the same duration and rate as earlier periods of warming.

    Let me give an example by analogy.

    Imagine that there is a town on a beach someplace and for many years people really didn’t pay much attention to the tides. They knew that the level of the sea varied and simply accepted that … it just varies and it is what it is and sometimes it varies more than at other times and nobody was really much concerned about attempting to control the tides, they simply lived with the hand nature dealt them.

    Now let’s say the town grows and needs more fresh water, and they figure out a way to distill sea water into fresh water. Let’s also suppose there is a guy, let’s call him Jim, who doesn’t much like the enterprising people who are making the water, because they are making a handsome profit from it. Jim also takes notice of the tides. One day he notices that they are having an unusually low tide. He immediately comes to the conclusion that the water plant is removing water from the ocean, and that is the reason the tide is getting lower. He does not draw any attention to the fact that the tides have been this low in the past and that, in fact, they have been varying on a rather regular cycle, where tides at that time of year tend to be more extreme than at other times of year. Jim decides to alarm the population and tells them that if the tides continue to decline at the current rate, the harbor will go dry and the livelihood of all the fishermen will be destroyed, and the water plant MUST be closed down right away, and we must reduce the number of people in the town so that they can survive on the existing sources of water without the operation of the plant.

    Well, a strange thing happens. The change in tides reverses. Now the tides begin to increase. Jim is shown to be wrong. At the same time, the people of the town, their attention drawn to the sea, begin to use it for recreation, and they start swimming and wading in it in droves. Jim doesn’t much care for the businesses that have sprung up to cater to these people, as they are making a handsome profit, especially the swimwear manufacturers. One day, Jim is sitting on his porch, watching with contempt the people playing in the water, when he puts an ice cube in his glass and notices the level of the water rise. He puts in another piece and it rises even more.

    Now he gets an idea. He goes to his buddies Phil and Mike who also hate the swimwear people. He explains that he believes he can convince the town council that all those people swimming in the water is causing the tides to rise. And according to his calculations, the rate of growth of swimmers seems to correlate with the rate of the rise of the tides. And if things continue as they are, left unchecked, the number of swimmers will grow to a number that causes the tides to flood the entire city. So Jim begins to sound the alarm. The city council creates a committee to study the problem which Phil joins and gains great influence over. They produce an assessment that validates Jim’s warning. Then they decide to look for someone to help them implement changes to avert disaster. Mike volunteers to (for a fee) consult with the town to create ways to solve the problem. Regulations are drawn up that would limit the number of people who can swim at any given time. They will be charged a swim tax. In addition, it is decided that the attractive swimwear is one reason for the increase in this recreational activity so a tax is placed on that as well. The mayor of the town then has the families of his council create a new business producing very unattractive swimwear in which Mike and Jim and Phil invest. Nobody likes the new swimwear but the government subsidizes it and combined with the tax on the old swimwear, it does become an affordable alternative. The beach and old swimwear tax pays for the subsidy on the new swimwear, the old swimwear people start to lose money, Mike profits handsomely from his consultation, the politicians kin and cronies profit from the new swimwear, Jim is happy that the evil swimwear company and the businesses catering to the swimmers are not doing so well.

    But the people HATE the taxes and they hate the swimwear, but they go along because they don’t want their town to go under water. Well, just as this is being put into place, something strange happens: the tides reverse and begin to decline again. About that time, an old-timer pipes up and says, “Hey, this happens every year. The tides come in and the tides go out … they have ever since I was born. And the tides are high every year at this time. Remember last spring when we had such high tides?” Now Jim is getting worried. If the townsfolk find out that they have been tricked, he is going to have a very difficult time. He tries to tell the people that the old-timer doesn’t know what he is talking about: he’s just a crazy old man who has never been to school. Jim goes on to explain that the tides are actually still rising, but the rise is “hidden” because the winds have shifted and are now blowing the water out to sea, and that the hidden buildup will return with a vengeance: when the wind turns and all that water comes flooding back in, the town will be in even worse condition. While he can’t exactly be sure where the water is going, it is obvious that the wind has changed, isn’t it? So who are you going to believe, that crazy old-timer who has never been to school, or me, a person who has been to many years of school?

    Since the end of the little ice age, we have seen a rise in temperatures. We have also seen that it seems to rise for about 30 years, take a 30-year hiatus, rise for another 30, drop for 30, and rise again. The HadCRUT3 data clearly show a late 19th century rise, a cooling, an early 20th century rise, a cooling and a late 20th century rise that seems to have stopped about seven years ago.

    http://www.woodfortrees.org/plot/hadcrut3gl/from:2004/plot/hadcrut3gl/from:2004/trend/plot/hadcrut3gl/from:1975/to:2004/plot/hadcrut3gl/from:1975/to:2004/trend/plot/hadcrut3gl/from:1942/to:1975/plot/hadcrut3gl/from:1942/to:1975/trend/plot/hadcrut3gl/from:1911/to:1942/plot/hadcrut3gl/from:1911/to:1942/trend/plot/hadcrut3gl/from:1879/to:1911/plot/hadcrut3gl/from:1879/to:1911/trend/plot/hadcrut3gl/from/to:1879/plot/hadcrut3gl/from/to:1879/trend
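The segmented-trend view in that link can be sketched in a few lines. This is a hypothetical illustration with synthetic data standing in for the real HadCRUT3 series, just to show the piecewise least-squares fits over the intervals named in the URL:

```python
import numpy as np

# Fit a separate OLS trend to each ~30-year segment, as the
# woodfortrees link does. Synthetic saw-tooth data (NOT HadCRUT3):
# cooling on even-numbered segments, warming on odd-numbered ones.
rng = np.random.default_rng(0)
years = np.arange(1879, 2012)
segments = [(1879, 1911), (1911, 1942), (1942, 1975), (1975, 2004), (2004, 2012)]

temps = np.zeros(years.shape)
level = 0.0
for i, (start, end) in enumerate(segments):
    mask = (years >= start) & (years < end)
    true_slope = 0.010 if i % 2 == 1 else -0.003   # deg C per year
    temps[mask] = level + true_slope * (years[mask] - start)
    level = temps[mask][-1]
temps += rng.normal(0.0, 0.05, size=temps.shape)   # measurement noise

# per-segment least-squares slope, reported in deg C per decade
slopes = []
for start, end in segments:
    mask = (years >= start) & (years < end)
    slope_per_year = np.polyfit(years[mask], temps[mask], 1)[0]
    slopes.append(10.0 * slope_per_year)
    print(f"{start}-{end}: {slopes[-1]:+.2f} C/decade")
```

Run against the real series, the same per-segment fits are what produce the alternating rise/pause pattern the comment describes.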

    Only the late 20th century rise is said to be CO2 related. It is claimed that if we don’t do anything, the rise will continue and accelerate and we will all be flooded and our crops burned out. But it stopped in about 2004. Now we are told the heat is “hidden” someplace but will pop back out of its hiding place with a vengeance when whatever is causing the current cooling stops. Nope, do NOT look at the previous cycles of natural variability, we don’t have natural variation anymore, all climate change is human caused (or something). Do not pay any attention to that previous cyclical variation.

    Sorry for such a long comment, I just thought the analogy was a parallel for what is going on with climate change.

  25. The point that Ira makes here is of the most fundamental importance for the claim that climate science is something other than a baby in the birth canal. I would like to make the same point using different terminology. Climate scientists have given no attention to the “systematic” aspects of their so-called science. In particular, when they compare various kinds of temperature measurements they do not have a clue whether or not two measurements should be classified as the same kind of event. The ultimate reason for this inability is that they have no unified set of physical hypotheses that apply to all events of temperature measurement. Actually, the situation is worse than that, because they do not have even a set of physical hypotheses that applies to just one kind of temperature measurement.

    They rely entirely on common sense and treat proxy tree-ring-width measurements as comparable to thermometer measurements of air temperature. In doing this, they are Begging the Question (arguing in a circle) as to what makes one temperature measurement comparable to another. It is their duty to create a scientific system that explains why temperature measurement event X is comparable to temperature measurement event Y, but they have done no work on this.

    Ira, you nailed them. I am so pleased that you made this contribution to our understanding that climate science is a baby in the birth canal and is not a genuine science in any meaningful sense of the phrase. This is a hugely important contribution. You deserve a MacArthur Genius Grant for this one. (Fat chance any skeptic will ever get one.)

  26. Andrew30,

    Your use of short-term temperature series, subject as it is to short-term climatic forcing and variations, is inappropriate when looking at the longer-term effects as from CO2. You’d just as well use your short-term series to prove that Milankovitch cycles have no effect on the climate, as it too, can’t be seen in your short-term series.

  27. I agree Ira that the different modes should give one pause.

    But the real reason there are all these different estimates is because “they are just making up the data and the math that goes into them”.

    I’ve delved into most of those papers and they do not use real data, and new math just appears out of nowhere. For example, does anyone think the high Arctic was +1.0C in the ice ages? Alley 2000 says the Greenland ice cap was -20.0C, the North Atlantic proxies are -5.0C, and the sea floor north of Canada shows glacial grooves which pushed right out to the continental shelves. The 0.15C/W/m2 impact of Mount Pinatubo somehow proves 0.75C/W/m2 when GHGs are involved.

  28. Ira:

    I believe you’ve erred. Contrary to your understanding, the notion of the equilibrium climate sensitivity (TECS) references no statistical population; there can be no population, as TECS is not an observable feature of the real world. Rather than referencing a statistical population, it references a statistical ensemble; the elements of the latter are the projections of various models.

  29. R. Gates says: December 18, 2011 at 10:40 am
    [what will the effect be of raising CO2 to levels not seen in at least 800,000 years over a time-frame so short so as to not find a parallel in the geologic record]

    R. Gates says: December 18, 2011 at 12:50 pm
    [Your use of short-term temperature series, subject as it is to short-term climatic forcing and variations, is inappropriate when looking at the longer-term effects as from CO2. ]

    R. Gates, you did ask about short-term. Perhaps you have a problem with short-term memory, so I have grouped your statements in one place (above) so you can refresh your goal posts.

    Carbon Dioxide goes up and Temperature stays the same, for Natural Reasons.

  30. The above article makes a lot of sense.

    Aggregate statistical properties of multi-modal distributions cannot be reliably interpreted. This neatly complements the objection that average temperature has no physical meaning.

    Arguments constructed on the validity of global mean temperature must therefore justify themselves on both of the above points.

    If peer review does not insist on this, our most prestigious scientific journals could be infected by sub-prime scientific philosophy. (Ho Ho, as they say)
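The post’s infant/mother example makes the same point numerically; a toy sketch (illustrative heights only, mirroring the figures in the post above):

```python
# Mixing two populations gives a mean that describes neither one:
# no infant and no mother is anywhere near the combined "average".
infants = [25, 25, 25, 24, 26]   # heights in inches, mean 25"
mothers = [65, 65, 65, 64, 66]   # heights in inches, mean 65"

combined = infants + mothers
mean_all = sum(combined) / len(combined)
nearest = min(abs(h - mean_all) for h in combined)

print(mean_all)   # 45.0 -- yet nobody in the sample is 45" tall
print(nearest)    # 19.0 -- the closest individual is 19" from the "mean"
```

Any statistic computed on the combined sample (mean, trend, confidence interval) inherits this problem, which is exactly the objection raised above.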

  31. R. Gates says:
    December 18, 2011 at 12:50 pm
    “Andrew30,
    Your use of short-term temperature series, subject as it is to short-term climatic forcing and variations, is inappropriate when looking at the longer-term effects as from CO2. ”

    R. Gates, absorption and re-radiation of LWIR is near-instantaneous, so that’s obviously not a “longer-term effect”. What other longer term effect could there be? Ah, yes, an accumulation of energy, measurable as heat. So, CO2 could theoretically lead to such an accumulation, and we could call that a longer-term effect. But we can’t find the heat. Even Kevin Trenberth can’t find it. And when no energy accumulates, what’s left? What “longer-term effect” do you suspect that we can’t measure?

  32. Question was:
    [what will the “effect be of raising CO2” to levels not seen in at least 800,000 years over a “time-frame so short” so as to not find a parallel in the geologic record]

    Response was:
    Carbon Dioxide goes up and the Temperature stays the same.

    R. Gates says:

    Your use of short-term temperature series [which was the original question], subject as it is to short-term climatic forcing and variations [as asked in the original question], is inappropriate when looking at the longer-term effects as from CO2 [redirect original question from short-term].

    You’d just as well use your short-term series [actually not Andrew30’s but rather CRU Time Series] to prove that [straw-man redirect attempt begins] Milankovitch cycles have no effect on the climate [straw-man redirect attempt ends], as it too [reinforcing the straw man], can’t be seen in your [again not Andrew30’s but rather CRU] short-term series [redirect from original question complete].

  33. John B says:
    December 18, 2011 at 12:31 pm
    “There are also isotopic studies that confirm the source of the CO2 as being from fossil fuels.That the rise in CO2 is “man caused” is one of the few things in climate science that can be considered effectively a certainty.”

    Dr. Murry Salby has a different opinion about the anthropogenic isotope “fingerprint”, namely that it is not distinguishable from natural sources:
    Jo Nova:
    Jo Nova:

    http://joannenova.com.au/2011/08/blockbuster-planetary-temperature-controls-co2-levels-not-humans/

    Podcast:

    http://www.thesydneyinstitute.com.au/podcast/global-emission-of-carbon-dioxide-the-contribution-from-natural-sources/

    A year earlier, Dr. Roy Spencer said:
    “1. The interannual relationship between SST and dCO2/dt is more than enough to explain the long term increase in CO2 since 1958. I’m not claiming that ALL of the Mauna Loa increase is all natural…some of it HAS to be anthropogenic…. but this evidence suggests that SST-related effects could be a big part of the CO2 increase.
    2. NEW RESULTS: I’ve been analyzing the C13/C12 ratio data from Mauna Loa. Just as others have found, the decrease in that ratio with time (over the 1990-2005 period anyway) is almost exactly what is expected from the depleted C13 source of fossil fuels. But guess what? If you detrend the data, then the annual cycle and interannual variability shows the EXACT SAME SIGNATURE. So, how can decreasing C13/C12 ratio be the signal of HUMAN emissions, when the NATURAL emissions have the same signal???
    -Roy”

    http://wattsupwiththat.com/2008/01/25/double-whammy-friday-roy-spencer-on-how-oceans-are-driving-co2/

  34. About your -2C cooling of the tropics during the last glaciation, you might want to have a look at:

    http://www.gfdl.noaa.gov/bibliography/related_files/bush9901.pdf

    1. Introduction
    One of the more perplexing problems confronting scientists attempting to reconstruct the climate of the Last Glacial Maximum (LGM) is the mounting evidence for tropical cooling of ~6C [Guilderson et al., 1994; Stute et al., 1995; Rind and Peteet, 1985; Thompson et al., 1995]. This cooling is significantly larger than the -2C proposed by the Climate: Long-Range Investigation, Mapping, and Prediction (CLIMAP) Project [1981] as well as by other studies [Broecker, 1986; Lyle et al., 1992].

    If that is the case, it puts the tropics closer in line with Antarctica.

  35. John B says:
    December 18, 2011 at 11:59 am
    “[That scientist:] “Those who see houses of cards think that if any piece is removed, the whole lot falls down. Most scientists I know, including myself, are “jigsaw” types. We have to see how this result fits in with the rest of what we know, and continue testing assumptions, before we can come to a consensus about what’s really going on here.”

    I [John:] think I like this guy!”

    I think he’s wrong about the house of cards vs. jigsaw classification with regard to IPCC consensus science.

  36. DirkH says:
    December 18, 2011 at 1:31 pm

    John B says:
    December 18, 2011 at 12:31 pm
    “There are also isotopic studies that confirm the source of the CO2 as being from fossil fuels.That the rise in CO2 is “man caused” is one of the few things in climate science that can be considered effectively a certainty.”

    Dr. Murry Salby has a different opinion about the anthropogenic isotope “fingerprint”, namely that it is not distinguishable from natural sources:
    Jo Nova:
    Jo Nova:

    http://joannenova.com.au/2011/08/blockbuster-planetary-temperature-controls-co2-levels-not-humans/

    Podcast:

    http://www.thesydneyinstitute.com.au/podcast/global-emission-of-carbon-dioxide-the-contribution-from-natural-sources/

    A year earlier, Dr. Roy Spencer said:
    “1. The interannual relationship between SST and dCO2/dt is more than enough to explain the long term increase in CO2 since 1958. I’m not claiming that ALL of the Mauna Loa increase is all natural…some of it HAS to be anthropogenic…. but this evidence suggests that SST-related effects could be a big part of the CO2 increase.
    2. NEW RESULTS: I’ve been analyzing the C13/C12 ratio data from Mauna Loa. Just as others have found, the decrease in that ratio with time (over the 1990-2005 period anyway) is almost exactly what is expected from the depleted C13 source of fossil fuels. But guess what? If you detrend the data, then the annual cycle and interannual variability shows the EXACT SAME SIGNATURE. So, how can decreasing C13/C12 ratio be the signal of HUMAN emissions, when the NATURAL emissions have the same signal???
    -Roy”

    http://wattsupwiththat.com/2008/01/25/double-whammy-friday-roy-spencer-on-how-oceans-are-driving-co2/

    ——————————-

    I trust you are equally skeptical about these guys’ results.

    So, if the rise in CO2 is not due to man:

    1. Where is our 30 Gt / year going?
    2. Why is the ocean a net sink of CO2, not a source (it’s getting more acidic/less alkaline due to CO2 take-up)?
    3. Where is the extra CO2 coming from if it’s not the oceans and it’s not fossil fuel burning?

  37. This just tells me that systems science is not physics, and climate science is so bad that systems scientists think they can improve it. Sorry, but they cannot, because they don’t understand that climate science is a fundamental failure now, and cannot be saved by mere statistical re-interpretations of consensus-approved evidence (I remind you that the consensus disapproves of, and ignores, evidence it doesn’t like). Compare this to my simple Venus/Earth temperature comparison. It not only invalidates the greenhouse effect entirely (CO2 climate sensitivity is zero), reforms the physics of atmospheric warming, and confirms the Standard Atmosphere as the equilibrium state of the atmosphere (predominant over any and all variations from that state); it also invalidates the radiative transfer theory (which believers say “proves” the greenhouse effect) as the ruling physics of atmospheric warming. Very few want to face this fact, that the physics of atmospheric warming has gone wrong, simply because climate scientists chose to forget the Standard Atmosphere and instead embrace the “greenhouse effect” and a “runaway climate” bogeyman.

  38. Another thing that may not be fully appreciated is that the oceans were saltier at the LGM than they are today. This is for two reasons. First was the transfer of water out of the oceans onto land, which left more salt in the oceans; the second was brine rejection due to the increased sea-ice extent.

    This would have resulted in some changes in ocean circulation. For example, today the North Atlantic Deep Water is more saline than the Antarctic Bottom Water; during the LGM, this situation was reversed. The Southern Ocean was apparently much saltier during the LGM than it is today. Basically, there are so many things that affect temperatures in various regions that it is difficult to tell how much change is due to a variation in a single factor. A wind change, for example, can have a huge impact on Greenland or Iceland or Svalbard temperatures. A change in salinity, too, can alter the thermohaline circulation, which today is more “thermo” than during the LGM, when it was more “haline” than today.

  39. DirkH says: December 18, 2011 at 1:22 pm
    What “longer-term effect” do you suspect that we can’t measure?

    It only takes a small amount of mass to hide the missing heat.

    The missing energy has coalesced into Higgs-Bozos in Europe, and the gravitational effect has caused multiple satellites and recent satellite launches to re-enter the atmosphere. This has slowed the post-1950 mass loss of Earth and prevented the removal of vast amounts of energy (E=mc²) from the planet. What is needed for the investigation are more expensive gravity-wave detectors (not located near rail lines) to quantify the catalytic effect of carbon dioxide in the photon-to-Higgs-Bozo conversion.

  40. The inclusion of this kindness was totally unnecessary – this group are hell bent on ruining the world economy to cure a non-existent problem.

    “And, sadly, not all Climate Scientists are competent, and some have impure intentions.”

  41. What a waste of time and effort. Carbon dioxide sensitivity is simply zero, no matter how you slice it. It follows from Ferenc Miskolczi’s work on absorption of infrared radiation by the atmosphere. Using the NOAA database of weather balloon observations, which goes back to 1948, he was able to show that the transmittance of the atmosphere to outgoing IR has been constant for the last 61 years. During that same period the amount of carbon dioxide in the air increased by 21.6 percent. This means that the addition of this amount of carbon dioxide to air had no effect whatsoever on the absorption of IR by the atmosphere. This is an empirical observation, not derived from any theory. Time to put away those nineteenth-century calculations by Arrhenius and Tyndall and listen to what nature tells us about infrared absorption by the atmosphere.

    First of all, with no absorption from added carbon dioxide there can be no enhanced greenhouse effect, hence sensitivity is zero. This result does not mean that there is no theory, however. There is one, and it is this Miskolczi theory that nature follows, not something from the nineteenth century that the IPCC is still pushing. The Miskolczi theory sets a cap on the total absorption of IR by the atmosphere, so that if any greenhouse gas should increase, the increase is compensated for by a reduction of water vapor in the air. This is possible because, unlike the other gases, water vapor has an infinite supply from the ocean. According to this, adding more carbon dioxide simply lowers the amount of water vapor in the air and keeps the total absorption unchanged. Miskolczi’s observations of the NOAA weather balloon data are only possible if this is what actually happens. Note that this is the exact reverse of what IPCC climate models assume. The carbon dioxide that was added to air did not miraculously disappear, and it does absorb, but this absorption is compensated for by the reduction of water vapor in the air that automatically takes place.

    Miskolczi also calculated the theoretical value of the required cap on atmospheric absorptivity and found that it must have an optical thickness of 1.86 in the infrared. This corresponds to a 15 percent transmittance of outgoing infrared radiation by the atmosphere. Next, using seven subsets of the NOAA database to calculate separate values for this optical thickness, he found them all to come very close to the theoretical value of 1.86. This result was reported to the EGU meeting in Vienna last April. All I can say is: why are these guys still babbling about sensitivity?
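Whatever one makes of the theory itself, the two figures quoted above are at least internally consistent: under the Beer–Lambert relation, an optical thickness τ corresponds to a transmittance of e^(−τ). A quick arithmetic check:

```python
import math

# Beer-Lambert consistency check on the quoted figures: optical
# thickness tau = 1.86 implies transmittance exp(-tau).
tau = 1.86
transmittance = math.exp(-tau)
print(f"tau = {tau} -> transmittance = {transmittance:.3f}")  # ~0.156, i.e. ~15%
```

This verifies only that 1.86 and “15 percent” are the same number expressed two ways, not the empirical claim about the NOAA balloon record.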

  42. R. Gates says: “The central question is: what will the effect be of raising CO2 to levels not seen in at least 800,000 years over a time-frame so short so as to not find a parallel in the geologic record?”
    *******************************
    A short answer: the effect will be an explosion of growth in the plant kingdom, followed by an expansion of the animal kingdom, enabled by the increased carrying capacity of the biosphere’s primary energy-storage medium: carbohydrate built from CO2.

  43. R. Gates says: December 18, 2011 at 10:40 am
    Ira said:
    “Thus, while increases in CO2, all else being equal, do cause some increase in mean temperatures, it is clear from the Ice Core record, where temperature changes lead CO2 changes by from 800 to 1200 years, that something else causes the temperature to change and then the temperature change causes CO2 to change. Thus, it would be wrong, IMHO, to assign more than some small fraction of the warming since the LGM to CO2 increases.”
    ———–
    Of course, that “something else” that causes temperatures to start to change would be Milankovitch cycles with the follow through warming coming from CO2 released from warming oceans as well as less being absorbed by phytoplankton, leading to the rise in CO2 from the LGM to the Holocene maximum of about 100ppm over a period of approximately 10,000 years. The kick-start is the Milankovitch cycle with the thermostat being CO2. …

    R. Gates, I agree that the Milankovitch cycles are the best suspects for the “something else” that seems to be responsible for the several Ice Ages of which we are aware. However, I would not call CO2 any kind of “thermostat”. Quite the contrary. Have a close look at the Ice Core graphs and you will see that temperatures begin their rise from the depths of Glacial Maximum at the very moment in time that CO2 levels are at their lowest. Temperatures rise for hundreds of years until the warming oceans outgas more and more CO2 and CO2 levels begin to rise, admittedly causing temperatures to rise a bit further. Then, in each and every cycle, when CO2 levels are at their very highest, temperatures begin a relatively rapid fall. Temperatures fall for hundreds of years until the cooling oceans absorb more and more CO2 and CO2 levels begin to fall, admittedly causing temperatures to fall a bit further. That ain’t any kind of thermostat with which I am familiar!

    After the Holocene climate optimum, CO2 levels were generally steady to falling slightly, as they do during all interglacials, until of course the modern industrial era, when CO2 levels rose as much in a few hundred years as they had in the previous 10,000. The small modulations of this long-term climate pattern that we’ve seen for several million years come from solar fluctuations of various durations and intensities, ocean cycles (which are just modulations of solar activity) and of course volcanic activity. The central question is: what will the effect be of raising CO2 to levels not seen in at least 800,000 years over a time-frame so short so as to not find a parallel in the geologic record?

    Yes, human-caused CO2 is most likely a substantial part of the cause of the recent rise in CO2 levels, along with natural CO2 rise due to continuing warming and outgassing of the oceans. However, comparing the current Ice Age with previous ones, our descendants seem to be due a drop in temperatures if natural patterns repeat. In that case, they may be thankful for whatever recent and future warming may be contributed by high, human-caused CO2 levels.

    Furthermore, while it is probably true that current CO2 levels are higher than they have been in the past several hundred thousand years, they were higher (probably much higher) over the millions of years hominids have trod the Earth, and the hundreds of millions of years since other plant and animal life evolved on Earth. I base that on the fact that most plants thrive best in elevated-CO2 greenhouses, with CO2 levels of 1000 to 2000 ppm, three to five times as high as current levels. In addition, humans and other animals suffer no ill effects at those levels of CO2. To me, that is evidence that CO2 levels were in the range of 2000 ppm, or more, when multicellular life first evolved on Earth and during the half-billion years during which plants and animals evolved.

  44. 1. Where is our 30 Gt / year going?

    One part remains in the atmosphere (in warmer years more), the other part goes into reservoirs (oceans mostly, in colder years more)

    2. Why is the ocean a net sink of CO2, not a source (it’s getting more acidic/less alkaline due to CO2 take-up)?

    Because of the anthropogenic input. Atmospheric CO2 is determined by climatic factors (temperature). So, the anthropogenic emissions will be distributed between the atmosphere and the oceans according to the climatic factors.

    3. Where is the extra CO2 coming from if it’s not the oceans and it’s not fossil fuel burning?

    Extra CO2 in the atmosphere is mostly from the oceans, maybe a small part is from burning carbon.

  45. Apart from all the theories and models, what seems to me to be the basic problem in understanding the relationship between all of the factors (finite?) in climate change is the lack of reliable and replicable data. How long will it take to get ‘reliable’ data? Who knows? I know I won’t be around when that question is answered. The Warmistas call upon the Precautionary Principle and demand that something be done now, in spite of the lack of understanding about climate. On the other hand, there is the age-old advice to physicians: First, do no harm.

  46. R. Gates;
    As I’ve got no personal “horse in this race”, it would be hard for me to be disappointed or encouraged>>>

    Really? Then why did you defend Al Gore’s on-air experiment to the point that you were willing to bet it was accurate, even as clinging to your position became increasingly idiotic? Why did you, over a considerable period of time, claim to be 25% skeptic and 75% warmist, until you were challenged to produce a single instance in which you presented a skeptical point of view?

    Really R. Gates, you must be wondering why it is that I go out of my way to debunk you; perhaps you even feel like I’m picking on you. I am. The notion that you do not have a “horse in this race” doesn’t hold up to even a brief scrutiny of your many comments on this site over a considerable period of time. That, in a nutshell, is the crux of the problem: people claiming to be one thing while saying another.

    Your claim to be neutral is nothing but a smoke screen to try and provide your pro-warmist comments with more credibility than they otherwise deserve. All you need do to get me to stop picking on you is to approach the topic with facts and logic instead of smokescreens painting yourself as something you clearly are not.

  47. Thank you Ira for what I see as a very good post. The “multi-modal probability distribution” you describe is bound to bring about, just as you seem to suspect, wrong answers.

    The “mental leap” needed to see and understand that averaging the Sun’s wattage per square meter, i.e. dividing the “Solar Constant” by 4, is the same as halving the W/m² that actually does constantly fall upon (albeit only 50% of) the Earth’s surface is also bound to bring about wrong answers, even if it makes the numbers on a plan, or graph, crunch up very nicely.

    Oh, and R. Gates, my answer to your question/comment on December 18, 2011 at 10:40 am goes a bit like this: “On any Ice-core Graph” you will find that wheresoever the rising CO2 curve crosses the rising temperature (T) curve, that’s the point at which the CO2 takes over the warming.

    If, however, you cannot find such a junction, then CO2 is probably always a product, or result, of any warming – of any type – anywhere.

  48. Warming is very uneven. That is all this says. I think you are trying to draw rather overblown conclusions from that observation. In particular, these “sensitivities” you are measuring are numbers which mean very little in our current context. The pattern of warming from the LGM to now is not the pattern of warming that would be observed if the world were to warm from its current climate. These local “sensitivities” say nothing about what could happen to climate today.

  49. Edim says:
    December 18, 2011 at 3:08 pm

    1. Where is our 30 Gt / year going?

    One part remains in the atmosphere (in warmer years more), the other part goes into reservoirs (oceans mostly, in colder years more)

    2. Why is the ocean a net sink of CO2, not a source (it’s getting more acidic/less alkaline due to CO2 take-up)?

    Because of the anthropogenic input. Atmospheric CO2 is determined by climatic factors (temperature). So, the anthropogenic emissions will be distributed between the atmosphere and the oceans according to the climatic factors.

    3. Where is the extra CO2 coming from if it’s not the oceans and it’s not fossil fuel burning?

    Extra CO2 in the atmosphere is mostly from the oceans, maybe a small part is from burning carbon.

    ————–

    So, Edim, why is the ocean now raising CO2 from 280ppm to 390ppm when it hasn’t been above 280ppm for the last 800K years, even though temperatures have changed more in that time (though probably not as fast)? We are burning 30 Gt / year, the atmosphere is gaining around 15 Gt / year (the rest going into the ocean) and atmospheric concentrations are higher than they have been for 800K years. It takes some serious mental gymnastics to not join those dots.

    Really, there are lots more reasonable grounds on which to fight AGW than this one.
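The dot-joining in the comment above can be written out as a rough budget. The only number not taken from the comment is the standard conversion of roughly 7.8 Gt of CO2 per ppm of atmospheric concentration; the rest are the comment’s round figures:

```python
# Rough annual CO2 budget behind the comment above (round numbers).
# Assumption: ~7.8 Gt CO2 corresponds to 1 ppm of atmospheric CO2.
emissions_gt = 30.0        # human emissions, Gt CO2 per year
rise_ppm = 2.0             # observed atmospheric growth, ppm per year
gt_per_ppm = 7.8

airborne_gt = rise_ppm * gt_per_ppm        # ~15.6 Gt/yr stays in the air
sink_gt = emissions_gt - airborne_gt       # ~14.4 Gt/yr taken up by sinks
airborne_fraction = airborne_gt / emissions_gt

print(f"airborne: {airborne_gt:.1f} Gt/yr, sinks: {sink_gt:.1f} Gt/yr, "
      f"fraction: {airborne_fraction:.0%}")
```

On these round numbers, roughly half of each year’s emissions shows up as the observed rise and the rest is absorbed, which is the “around 15 Gt / year” figure the comment cites.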

  50. Cooling is very uneven. That is all this says. I think you are trying to draw rather overblown conclusions from that observation. In particular these “sensitivities” you are measuring are numbers which mean very little in our current context. The pattern of cooling from the MWP to now is not the pattern of cooling that would be observed if the world were to cool from its current climate. These local “sensitivities” say nothing about what could happen to climate today.

  51. CynicalScientist says:
    December 18, 2011 at 3:43 pm
    Warming is very uneven. That is all this says. I think you are trying to draw rather overblown conclusions from that observation.>>>

    LOL. The CAGW meme is that all the data show that is the case. They only allow that it will be more pronounced at the poles than at the tropics; other than that, they repeat over and over again the ridiculous notion of CO2 doubling causing an increase in the “average” temperature of the earth. How else are we to interpret their doublespeak? Is there anywhere in the IPCC reports that speaks to the notion of the warming being uneven? Do they explain anywhere that the bulk of the warming will come at night-time lows in winter at high altitudes? Do they explain anywhere that the least warming will come in the tropics, and that it very well may be so small as to be unmeasurable? Do they not point repeatedly at the arctic screaming “it’s happening already” and “polar amplification” when they know very well that in any warming cycle, however small, that is exactly what to expect, and that the absence of corresponding warming in the tropics is just the laws of physics working exactly as expected, so that even if (BIG “IF”) CO2 is driving the warming, it is nearly meaningless in the global picture?

    Of course then there’s their favourite rebuttal when data shows that the earth had been warming since the LIA, that temps were warmer than now all over the world during the MWP, THEN they stick their nose in the air and say…”well, those were just regional”.

    REALLY? When there is no global pattern of warming, it just means the warming is “uneven”. But when data show that warming and cooling happen at the same time in different places, it’s just “regional”. Would you clowns please pick one? Warming is uneven, or it is even. Oddly, no matter which one you pick, your argument is sunk. That’s why you clowns argue one way in one situation, and then use the exact same data showing the exact same thing to argue a different way in a different situation.

    Itz one thing to grasp at straws, another to poke yourself in the eye with them.

  52. R. Gates says:
    December 18, 2011 at 12:50 pm
    Andrew30,

    Your use of short-term temperature series, subject as it is to short-term climatic forcing and variations, is inappropriate when looking at the longer-term effects as from CO2.

    We have no direct evidence CO2 has any longer term effects. The physics tells us CO2 has an immediate effect. What the effect is an hour, a day, or a year later, we simply don’t know, because feedbacks are so poorly understood.

    And as for the multi-modal graph above. If it shows anything, it shows we have no idea what the climate’s sensitivity to 2X CO2 is.

    One thing it does show is that a climate sensitivity of less than 1 is politically unacceptable.

  53. Some things to keep in mind:

    1. The transition from glacial to interglacial conditions is very fast. Probably in less than a century we go from glacial temperatures to interglacial temperatures.

    2. Transition from interglacial to glacial is gradual overall but very unstable. Temperatures of an interglacial period generally decline as it progresses. Temperatures are generally very stable during an interglacial, much more stable than during glacials, but they tend to decline. Toward the end of the interglacial, temperatures go unstable and seem to be characterized by increasingly cold periods separated by jumps back to nearly interglacial temperatures.

    3. Glacial periods have extremely unstable climate. Temperatures can change from extremely cold to nearly interglacial temperatures and go back to deep glacial cold in less than a millennium. These interstadials can see quite warm temperatures but they don’t “stick” and we slip back into cold very quickly.

    4. Glacial periods tend to get colder as they progress.

    5. Glacial periods end just as they reach their coldest time.

    So if solar forcing from orbital changes activates these cycles, and if these orbital changes are gradual and ongoing, why don’t we see the glacial period slowly warm into the interglacial? It is likely that we pop out with an interstadial event when conditions are good for allowing us to stay there for a while. In fact, we very nearly had an interglacial about 34,000 years ago when we had a major interstadial where it got very warm, to temperatures seen during the Holocene (warmer than temperatures during the 8.2ky event), and it stayed relatively warm for a considerable period, gradually cooling, then falling quickly to very cold temperatures, only to nearly pop back to Holocene temperatures again very quickly 30,000 years ago.

    There were 11 such rapid warming/cooling events in just the last 40,000 years. And by rapid, I mean extremely rapid. I am talking global warming/cooling from temperatures of a deep glacial to near modern temperatures and back again to glacial conditions in less than 500 years in some cases. Rapid warming was generally completed in much less than a century. The Younger Dryas ended with roughly 50 years of warming from glacial conditions to pre-Boreal warmth, easily within the span of a single human’s lifetime.

    The point is that there is a very well-documented history on this planet of extreme climate change. Much more extreme climate change than we have experienced this century. None of these extreme climate change events (over a score of them, both warming and cooling, in only the last 40,000 years) can be attributed to atmospheric CO2. These are climatic tsunamis.

    The problem with these “climate scientists” is they depend on the illusion that climate is somehow stable. It isn’t. The 8.2ky event, the Little Ice Age, the Younger Dryas and the warming following these events are nothing. Nature can throw some mighty fastballs. I am seriously of the opinion that these so-called “scientists” don’t really have a clue what they are talking about. Too clever by half, if you ask me.

  54. R. Gates says:
    December 18, 2011 at 10:40 am

    “The central question is: what will the effect be of raising CO2 to levels not seen in at least 800,000 years over a time-frame so short so as to not find a parallel in the geologic record?”

    Well, the effect of NOT raising the CO2 level is a mile-thick sheet of ice covering much of the northern hemisphere wiping out a good fraction of our civilization not to mention rivers and lakes and wetlands and forests and etcetera.

    Personally I’m willing to gamble that raising CO2 level can’t be worse than not raising it.

    With the Holocene interglacial getting statistically long in the tooth, the earth is teetering on the edge of an ice age, and it seems almost inevitable that some perfect storm of volcanoes, or one real doozy, will be the straw that breaks the camel’s back, and ice age here we come. As an engineer I’d just as soon have a bigger safety margin, and if manmade CO2 buys us a few degrees right where we need them, in the high northern latitudes where glacial expansion begins, so much the better. Now throw in the general benefits of higher CO2 levels for plant growth, water usage, and longer growing seasons, and raising the CO2 level is such a good thing it’s kind of nutty to worry about it. Don’t be nutty, Gates.

  55. davidmhoffer says:
    December 18, 2011 at 4:54 pm

    ” Is there anywhere in the IPCC reports that speak to the notion of the warming being uneven?”

    Yes.

    It’s nice that you asked. You should ask more and state less, Mr. Jedi Salesman.

  56. Andrew30 says:
    Cooling is very uneven.

    Exactly so. And furthermore the pattern of cooling will not simply reverse the pattern of warming.

    Some of you seem to think I am defending AGW. You should read more carefully.

  57. “Those who see houses of cards think that if any piece is removed, the whole lot falls down. Most scientists I know, including myself, are “jigsaw” types. We have to see how this result fits in with the rest of what we know, and continue testing assumptions, before we can come to a consensus about what’s really going on here.”

    The reality is that most scientists want to believe they are dealing with a jigsaw puzzle and will strongly reject evidence that what they are in fact dealing with is a house of cards with key supporting cards missing (as shown by the evidence).

    This has been documented by Kuhn and others, although there is little research on it as a socio-psychological phenomenon. IMO it is a characteristic all humans share and isn’t specific to science.

  58. Thanks for that Ira.
    With Moore’s Law holding, I would expect that it will be several decades before any useful predictions from models will exist. Climate is just too complex. Besides, IMHO, science is provisional. Otherwise Newton’s Laws would be universal (or should I say multiversal).

  59. Harry Dale Huffman says:
    December 18, 2011 at 2:03 pm

    Harry, Harry, Harry…

    I’ve told you many times your temperature/density notions about planetary atmospheres so totally misunderstand the gas laws it isn’t even funny.

    Well, actually it IS funny.

    Venus is hot because of its atmosphere. You have that part right. You also have the part right about the high surface temperature not being caused by CO2 retaining solar energy through the so-called greenhouse effect. There’s no sunlight that even gets close to the surface on Venus. It’s pitch black. The surface is so hot because the temperature gradient from the molten core of the planet does not take a radical dip downward close to the surface as it does on the earth. That’s because the 90X thicker atmosphere of Venus insulates the crust a lot better than our wispy atmosphere does, and heat from the mantle rises further because it isn’t being carried off as fast. So you see, it isn’t the sun heating the surface of Venus; it’s the molten interior of the planet that’s heating it. The temperature gradient of the planet from core to surface is different on Venus because the surface is far better insulated.

  60. Well, the effect of NOT raising the CO2 level is a mile-thick sheet of ice covering much of the northern hemisphere wiping out a good fraction of our civilization not to mention rivers and lakes and wetlands and forests and etcetera.

    And from the standpoint of the UN Agenda-21 people, that is a feature, not a bug. Most of what is now Canada being deposited by a glacier in the middle of Indiana is a “sustainable” development as far as they are concerned. There’s no sustainable development like no development at all. A lack of development can be sustained forever.

  61. Ira Glickstein, PhD says:
    December 18, 2011 at 3:03 pm
    Have a close look at the Ice Core graphs and you will see that temperatures begin their rise from the depths of Glacial Maximum at the very moment in time that CO2 levels are at their lowest.

    Ira, there is nothing in this to suggest that CO2 is anything but an inverse forcing for climate. As you state:

    ” at the very moment in time that CO2 levels are at their lowest. Temperatures rise for hundreds of years” “when CO2 levels are at their very highest, temperatures begin a relatively rapid fall”

    Where is the observational evidence that increasing CO2 leads to increasing temperatures? All the ice cores show is that climate science has cause and effect backwards.

  62. John B says:
    December 18, 2011 at 4:31 pm
    So, Edim, why is the ocean now raising CO2 from 280ppm to 390ppm when it hasn’t been above 280ppm for the last 800K years,

    Likely because this interglacial has lasted longer than average, allowing more CO2 to out-gas, or because we are using different techniques to measure modern CO2 levels as compared to 800K years ago.

    What we do know is that 280ppm is very low in terms of CO2 levels over most of the past billion or so years that life has existed on this planet. CO2 levels this low are typically only seen during ice ages, which are not exactly “life friendly”.

  63. Harry Dale Huffman says:
    December 18, 2011 at 2:03 pm

    You won’t believe this but when I read your article linked above and saw the factor 1.176 I could not believe it myself, for I recognized that figure, from doing some analysis on Venus’s atmosphere and temperatures but from a totally different direction. I was tracing energy transfers in relation to the mass attenuation coefficient across Venus’s entire atmospheric column, from surface to TOA, Venus’s atmosphere having 94 times the mass compared to earth.

    The brunt of my analysis was why Venus, with a surface radiative power of 17004 Wm-2 (740K), transfers only 65.3 Wm-2 at the TOA. There is a large attenuation of radiation as you progress upward from the surface to the TOA (and amazingly the same effect occurs in our atmosphere by pure mass, but at the 94th root of that on Venus, due to the 94-times mass).

    Would you like to see that parallel, if you are still monitoring this thread? It really floored me and it would take me a little while to type it up in an understandable form.

  64. Dave Springer says:
    December 18, 2011 at 5:38 pm
    from: Jedi Scientist
    to: Jedi Salesman
    Frequently Asked Question 11.1
    Do Projected Changes in Climate Vary from Region to Region?

    http://www.ipcc.ch/publications_and_data/ar4/wg1/en/faq-11-1.html

    ROFLMAO@u>>>

    Why thank you for that Mr Springer. For once you’ve managed a retort with relevant information, though your perseverance in demonstrating that you are a complete and total jerk is also on full display.

    On the other hand, the link you provide speaks to regional factors that cause idiosyncratic warming. However, the article was in regard to the unevenness of various factors, including geographic differences, just as the link you posted suggests. Despite this, as the article also discusses, the IPCC estimates for CO2 sensitivity are mono-modal while the data clearly shows that sensitivity is multi-modal.

    In other words my dear friend, the article you linked to indeed says that warming will not be uniform. However, the discussion was in regard to sensitivity directly attributed to CO2. In terms of sensitivity to CO2, the IPCC has been quite clear that they consider it uniform. In fact, they calculate an increase in radiance at TOA of 3.7 W/m² on a 24 hour basis. Given that they also claim the source of the radiance is LW absorbed from the earth’s surface and re-radiated back to earth, one can only wonder how the tropics, radiating as much as 450 W/m² or more, and the polar regions, which may be under 200, can possibly result in a uniform TOA increase of 3.7 W/m². Further, since the poles are nearly devoid of water vapour, which dwarfs CO2 as a GHG, the imbalance and lack of uniformity between the tropics and the poles should be larger still. Furthermore, the tropics are net absorbers of energy, while the poles are net losers of energy.

    In other words, in terms of sensitivity to CO2 increases, there is nothing uniform at all. Despite which, the IPCC attempts to paint that picture. Please note that while the link you supplied does in fact explain the variations expected in warming due to regional wind patterns and so on, it says not one damn thing about variations in sensitivity to CO2.
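    The σT⁴ dependence behind those radiance figures is easy to check. A minimal sketch (the 300 K and 243 K surface temperatures are illustrative assumptions of my own, chosen only to show the scale of the tropics-versus-poles contrast):

```python
# Sketch of surface LWIR emission via the Stefan-Boltzmann law, j = sigma*T^4.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_lwir(temp_kelvin):
    """Emitted longwave flux in W/m^2 for an idealized blackbody surface."""
    return SIGMA * temp_kelvin ** 4

tropics = surface_lwir(300.0)  # assumed ~27 C tropical surface
poles = surface_lwir(243.0)    # assumed ~-30 C polar surface
print(f"tropics: {tropics:.0f} W/m^2, poles: {poles:.0f} W/m^2")
# prints: tropics: 459 W/m^2, poles: 198 W/m^2
```

    Roughly 460 versus under 200 W/m², in line with the figures quoted above.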

  65. wayne says:
    December 18, 2011 at 6:53 pm
    Would you like to see that parallel, if you are still monitoring this thread?

    A number of readers that have followed Dale’s work would likely be interested.

  66. The work of Schmittner et. al. is just another piece of computer simulation.

    The post here analysing a multi-modal description is implicitly accepting Schmittner et. al.

    Why should this particular simulation be trusted over any other?

    With the Holocene interglacial getting statistically long in the tooth, the earth is teetering on the edge of an ice age, and it seems almost inevitable that some perfect storm of volcanoes, or one real doozy, will be the straw that breaks the camel’s back, and ice age here we come.

    It will likely be a period such as the Little Ice Age that we simply aren’t able to recover from. If you look back over the past 2000 years, we have had several cool periods. Each one bottoms out a little cooler than the previous did. Each warm period tops out a little cooler than the previous one did. We know that this warm period is colder than the MWP because we can’t yet establish farms in Greenland in places where crops were grown then. The same is true for valleys in Scandinavia and the Alps that are currently too cold, or too wet from retreating ice, to grow a crop but supported crops during the MWP.

    Looking at the various literature in paleoclimatic reconstructions of all sorts, and seeing the various recurring cycles that seem to appear in all of them, it would seem reasonable, simply based on the number of these conducted using different proxies by different people at different times, that we are not likely to get much warming between now and the end of this century.

  68. Ira,

    Thanks for your thoughtful and educated response. I asked an open question and you provided one possible set of answers. Of course the sequestration of CO2 during cooler glacial periods doesn’t just involve the ocean’s uptake, but biological uptake and even increased uptake through rock weathering and the carbon-rock cycle. As CO2 is a non-condensing GH gas, it is not so directly affected by temperature changes as water vapor is, and thus it can maintain temperatures in a range.

    I am still of the opinion that global temps will reach about 3C higher per doubling of CO2 from pre-industrial levels. This matches both the average of many GCMs as well as what has been seen from paleoclimate studies when fast and slow feedbacks are considered.

  69. R. Gates;
    As I’ve got no personal “horse in this race”, it would be hard for me to be disappointed or encouraged>>>

    followed by:

    R. Gates;
    I am still of the opinion that global temps will reach about 3C higher per the doubling of CO2 from pre-industrial levels. >>>>

    Which is it R. Gates? Do you have a horse in the race? Or not?

  70. jimmi_the_dalek says: December 18, 2011 at 7:27 pm
    The work of Schmittner et. al. is just another piece of computer simulation.
    The post here analysing a multi-modal description is implicitly accepting Schmittner et. al.
    Why should this particular simulation be trusted over any other?

    jimmi_the_dalek, I do not accept the numeric ECS(2xCO2) results of Schmittner any more than those of the other computer-based models. Although Schmittner moves the mean value down a bit from the IPCC 3ºC to a slightly lower number, which is at least a step in the right direction, I still believe the correct value is closer to 0.5ºC than to any number in the IPCC range of 2ºC to 4.5ºC.

    The value of Schmittner to me is in three aspects:

    1) Schmittner got the FORM of the ECS correct. It seems blindingly obvious to me that the effect of doubling CO2 levels has got to be very different for each geographical zone, for areas over Land vs over Water, for mountainous vs flat terrain, for areas over warmer water that net outgases CO2 vs cooler water that net absorbs CO2, for cloudy vs clear areas, and for various combinations of aerosols (dust, etc.) and cosmic rays and so on and on.
    2) Schmittner should be congratulated for letting the results show their true multi-modal form. I think the IPCC totally misunderstands the complexity of the Climate System and they totally missed the meaning of the multi-modal aspect of Climate Sensitivity even though it was staring them in the face in the form of those multiple mono-modal curves in figure 9.20 of AR4. I think at least some of the prior modelers got multi-modal results and, thinking they were wrong, diddled their data to get a mono-modal result.
    3) Schmittner also demolishes the IPCC “Fat Tail” fetish. I understand that at least one of the curves in the IPCC ensemble was, in its initial form, devoid of a Fat Tail, and, like the monkeys they are, the IPCC modified the data to pin a Fat Tail on that donkey. Schmittner effectively places an upper bound of 6.3ºC on ECS(2xCO2), which should slim down all future right-hand tails.

  71. Davidmhoffer,

    Apparently your sensibilities are not quite refined enough to differentiate between having an opinion as to an outcome and having a vested interest in it. A pity for you.

    It seems blindingly obvious to me that the effect of doubling CO2 levels has got to be very different for each geographical zone, for areas over Land vs over Water, for mountainous vs flat terrain, and for various combinations of aerosols (dust, etc.) and cosmic rays and so on and on.

    Well, of course it is. The reason is that the 24hr LWIR emitted by these different areas is different, and that is what “greenhouse” warming is acting on. Look at the LWIR radiation from the surface of the ocean off San Francisco an hour after sunset on July 1. Then look at it an hour before dawn. It is practically unchanged. Now check the LWIR emitted from the Black Rock Desert near Gerlach, Nevada an hour after sunset and again an hour before sunrise. WAY different.

    But overall, I believe we are going to find that the climate sensitivity they have claimed for CO2 has been overstated generally. About half of the warming out of the LIA happened before 1940. Most of that was in the 1920’s. We had nearly an identical period of warming in the late 20th century which I believe is now completed. If we are *really* lucky, we might have another such period around 2035 or so but I am beginning to doubt that. The best we might get is a hiatus of cooling for 30 years rather than any warming.

    2) I think the IPCC totally misunderstands the complexity of the Climate System

    I don’t think that is accurate. Their “mission” (The Cause) is to produce a plausible mechanism by which CO2 emissions can be shown to cause “unsustainable” conditions. It is a multi-front effort that includes the IPCC reports in addition to gatekeeping of the published literature in the field of climate science, gatekeeping of research grants, pressure applied to keep the rest of the researchers in the field in line, smearing of any who get out of line, damaging any journal or information outlet that provides a conduit for information counter to the notion that CO2 emissions cause an “unsustainable” future, and coaching of mass media outlets in a coordinated public propaganda campaign. All of this has been revealed in the climategate emails and in the experiences of people in the field of climate research. It really has nothing at all to do with really “understanding” how the climate works, it is about creating a mechanism whereby fossil fuel use can be regulated globally.

    I think at least some of the prior modelers got multi-modal results and, thinking they were wrong, diddled their data to get a mono-modal result.

    Or they were told in the review process or by informal review by peers before submission that the multimodal result was not going to be allowed to be published. If the paper was in any way a “threat” to the cause or something “skeptics” could latch onto, it is going to be shot down.

    3) Schmittner also demolishes the IPCC “Fat Tail” fetish. I understand that at least one of the curves in their ensemble was, in its initial form, devoid of a Fat Tail, and, like the monkeys they are, the IPCC modified the data to pin a Fat Tail on that donkey.

    Of course they did because that makes the case against CO2 more plausible and THAT is the point. This isn’t about real climate change at all, that is orthogonal to the purpose of all of this. The “mission” is to show a reason why CO2 emissions must be centrally regulated by a bunch of unelected, self-appointed, bureaucrats. I’m wondering how they even got it published. I’ll look for the “hockey mask” statement in the document, it must have been in there somewhere.

    Here’s the thing: You can only have a uniform response to increased CO2 if LWIR is uniform. If I have a major La Nina condition and the equatorial Pacific gets much colder than normal, there just isn’t going to be much LWIR. If I get a lot of snow that reflects visible light before it has a chance to be converted to LWIR, all the CO2 in the world isn’t going to make any difference. Heck, if I paint enough roofs white, you could have an atmosphere of pure CO2 and it won’t cause any warming because so much visible light is being reflected.

    The extent to which CO2 is going to cause warming is going to depend on the nature of the LWIR aimed at it.

    Also note that atmospheric CO2 acts both ways, it also blocks LWIR from the sun. So the more CO2 you get, the less IR you get to the surface (that is absorbed and re-radiated). But even that doesn’t matter, either, because the atmosphere is already opaque at the CO2 absorption wavelengths. ALL of the LWIR being emitted is already being absorbed by the atmosphere. Adding some more CO2 is like taking a solid wall, applying a coat of paint, and claiming it is now more “light proof”.
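    The “coat of paint” argument can be put in Beer-Lambert terms. A deliberately simplified sketch, with assumed optical depths, that ignores band wings and re-emission from higher layers:

```python
import math

# Simplified sketch of the band-saturation ("coat of paint") argument using
# Beer-Lambert transmission, T = exp(-tau). The optical depths are
# illustrative assumptions only.
def transmitted_fraction(optical_depth):
    """Fraction of radiation passing straight through an absorbing column."""
    return math.exp(-optical_depth)

tau = 5.0  # assumed band-centre optical depth at present CO2
once = transmitted_fraction(tau)         # already nearly opaque
doubled = transmitted_fraction(2 * tau)  # barely distinguishable from opaque
print(f"1x CO2: {once:.4f}, 2x CO2: {doubled:.6f}")
# prints: 1x CO2: 0.0067, 2x CO2: 0.000045
```

    With the band centre already nearly opaque, doubling the absorber changes the directly transmitted fraction from well under 1% to essentially nothing, which is the saturation point being made above.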

    The entire conversation is silly. It isn’t about CO2 and it isn’t about climate change. It is about an enabling mechanism for “sustainable development” which is the real underlying UN mission. Google it.

  73. David Springer says:

    “Now throw the general benefits of higher CO2 levels for plant growth and water usage and longer growing seasons and raising CO2 level is such a good thing it’s kind of nutty to worry about it. Don’t be nutty, Gates.”
    —–
    Seems a bit of a gamble. To raise CO2 to levels not seen in 800,000 years to prevent a return to glacial conditions? It might even be a gamble worth taking were it not for the side effects, such as the acidification of the oceans that seems to be accompanying the rapid buildup of CO2. I suppose I’d like to be more assured that we’ll be able to grow enough grains and that the ocean food chain will remain healthy with such geologically rapid changes.

  74. And once you have Googled “sustainable development” move on to “education for sustainable development”. It is a HUGE multinational UN project that practically every nation, state/province, district, county, and town has bought into, and CAGW is the enabling mechanism for the entire thing. Without CAGW, they would have to come up with an entirely new foundation for what amounts to probably THE primary focus of the United Nations and many local governments.

  75. R. Gates

    When modern species first appeared, CO2 levels were much higher than today. Today’s CO2 levels are DANGEROUSLY LOW for most plant species now living on this planet. People do not realize how quickly CO2 is scrubbed out of the atmosphere (and ocean). The PETM saw an absolutely ferocious release of CO2 into the atmosphere. Within the period of one single modern glacial cycle (about 150,000 years) it was all gone and back to previous levels. During that period, mammals thrived, spread out, became dominant, and primates appeared and began to diversify.

  76. R. Gates says:
    December 18, 2011 at 9:24 pm
    Davidmhoffer,
    Apparently your sensibilities are not quite refined enough to differentiate the difference between having an opinion as to an outcome and having a vested interest in such. A pity for you>>>

    Wow. First you said you had no “horse in the race” regarding what sensitivity actually is, and then you claim to believe that sensitivity is 3 degrees C per CO2 doubling. On the one hand, you proclaim to be a neutral observer interested in the data and what it shows, and on the other hand you state clearly that you are not a neutral observer: you have already drawn your conclusions before the data to do so is actually in.

    You defended Al Gore’s experiment as being an accurate illustration, but it was demonstrated to be a fake. You bet with me that it would show the warming effects of CO2 if it was done as illustrated, but it did the opposite. I’ve challenged you multiple times to produce a single statement made by you that was supportive of the skeptic position, which you cannot do, though this blog is laden with your pro warmist remarks and defense of various members of the “the team” with some truly lame excuses, yet you continue to pretend you are neutral.

    You also appear to be a formula writer. Each time you engage with someone, you begin with a compliment like “Thanks for your thoughtful and educated response.” followed by a tangential argument that is mostly true, partly wrong, but cleverly worded to draw the conversation in a different direction, like “Of course the sequestration of CO2 during cooler glacial periods doesn’t just involve the ocean’s uptake, but biological uptake and even increased uptake through rock weathering and the carbon- rock cycle. As CO2 is a non-condensing GH gas, it is not so directly affected by temperature changes as water vapor is, and thus, it can maintain temp in a range.”

    Then you follow that with a conclusion based on an out of context remark that draws as itz evidence the not quite relevant “facts” you’ve tabled under the guise of being a nice guy complimenting the original author. Then you use your nice-guy persona to throw in some more “facts” and spin a whole picture around them to support the conclusions that you then announce.

    On the one hand, I’m sure it is easier for you to write by following this pattern, but it is also easier to debunk since the pattern is so obvious.

  77. Under the “sustainable development” framework of the UN which the US has signed onto, there comes into play a concept called the “precautionary principle”, where something doesn’t have to be proved to be acted upon; it simply has to be deemed plausible and a potential danger to “sustainable development” of the planet. So the mission of the IPCC is to create a plausible threat that can be acted upon under the concept of the “precautionary principle”.

    In short, to justify spending hundreds of billions of dollars, something has to simply be deemed plausible by “consensus” (consensus is a huge deal in sustainable development concepts). For it to be no longer considered, it has to be absolutely proved implausible, and it is impossible to prove a negative. In this way, the UN can justify whatever it wants by simply cooking up a plausible scenario, gaining consensus that it is indeed plausible, and then acting in accordance with the “precautionary principle”.

    George Orwell would be proud. George HW Bush signed the US up to it and Clinton confirmed it.

    http://en.wikipedia.org/wiki/Precautionary_principle

    One of the primary foundations of the precautionary principle, and globally accepted definitions, results from the work of the Rio Conference, or “Earth Summit” in 1992. Principle #15 of the Rio Declaration notes:

    “In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.”

    This definition is important for several reasons. First, it explains the idea that scientific uncertainty should not preclude preventative measures to protect the environment. Second, the use of “cost-effective” measures indicates that costs can be considered. This is different from a “no-regrets” approach, which ignores the costs of preventative action.

    It doesn’t matter if CAGW is actually happening or not. It only matters that it is PLAUSIBLE.

  78. ferd berple, I may have jumped the gun… again. That seems to be just a coincidence for those two factors don’t seem related physically. Will get back if I can find a tie.

  79. The proposed mechanism for global warming is that long wave IR emitted at the Earth’s surface is absorbed by CO2 in the atmosphere and almost instantly re-radiated, with 50% back toward the surface. This slows the rate of radiative cooling of the Earth’s surface, and thereby leads to conductive warming of the atmosphere. However not all of the Earth’s surface responds equally to incident LWIR. As can be seen from this simple experiment –

    http://tallbloke.wordpress.com/2011/08/25/konrad-empirical-test-of-ocean-cooling-and-back-radiation-theory/

    – water that is free to evaporatively cool is not greatly affected by LWIR. This is significant as 71% of the Earth’s surface is ocean.

    I believe the multi-modal sensitivity proposed in Schmittner 2011 is likely correct in form, but the values shown for sensitivity are still way too high. However I hope that this new discussion of “multi-modal sensitivity” may lead to Willis and Joel revising their previous positions on the issue.

    Taking the claimed basic “black body” sensitivity figure of around 1.2 degrees for a doubling of CO2, reducing that by 70% over the oceans and allowing for negative water vapour feedback of 0.5, my crude calc for “multi-modal” sensitivity is 0.3 degrees. Which is neither dangerous nor catastrophic, especially when the pre-industrial figure for CO2 of 280ppm is most likely far too low.
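    For anyone who wants to check the arithmetic, the crude calc can be reproduced in a few lines. Every input is one of the figures quoted above (none of them measured values), so this is a back-of-envelope sketch only:

```python
# Reproducing the "crude calc" above: 1.2 C no-feedback sensitivity,
# a 70% reduction over the 71% of the surface that is ocean, and an
# assumed net negative water-vapour feedback factor of 0.5.
BASE_SENSITIVITY = 1.2   # C per CO2 doubling, the "black body" figure
OCEAN_FRACTION = 0.71
OCEAN_REDUCTION = 0.70   # evaporative cooling suppresses the LWIR response
FEEDBACK_FACTOR = 0.5    # assumed net negative water-vapour feedback

land_term = (1 - OCEAN_FRACTION) * BASE_SENSITIVITY
ocean_term = OCEAN_FRACTION * BASE_SENSITIVITY * (1 - OCEAN_REDUCTION)
sensitivity = FEEDBACK_FACTOR * (land_term + ocean_term)
print(f"crude multi-modal sensitivity: {sensitivity:.2f} C")
# prints: crude multi-modal sensitivity: 0.30 C
```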

  80. The change in the amount of CO2 in Earth’s atmosphere is (the emission rate) – (the scrubbing rate).

    The thing is that the scrubbing rate varies. One thing it varies with is the amount of CO2 itself. If you put more CO2 into the atmosphere, the scrubbing rate of CO2 goes up. This is because of things like CO2 fertilization of plants (mostly phytoplankton), which respond with higher growth rates, causing an increase in the rate of biomass production. Another increase in scrubbing will come from increased carbonic acid in rainfall causing more reaction with rock during erosional processes. Scrubbing rates can also change due to uplifting of mountain ranges exposing more rock to be eroded, removal of surface soil exposing rock, or glacial retreat exposing rock to erosion.

    The greatest natural sources of CO2 emissions are volcanic activity and natural gas seepage. As Earth cools, the amount of CO2 produced from volcanism declines. This results in a reduction of CO2 emissions into the atmosphere and a decline in atmospheric CO2 as it is scrubbed out. Now, to some extent this is self-regulating at the low end, because as atmospheric CO2 declines, its scrubbing rate slows. Rain water is not so reactive and plants don’t put on as much biomass, so the scrubbing rate reaches rough equilibrium, though it does continue to slowly fall as insoluble carbonates are deposited on the ocean floor over time.

    A mass release of CO2 simply means that biological and geological scrubbing processes speed up and remove the CO2 from the air. A new uplift of a mountain range can suddenly (in relative geological terms) increase the geological scrubbing rate and reduce atmospheric CO2 to a new equilibrium level by causing an offsetting reduction in biological scrubbing (plants don’t grow as well causing them to reduce the amount of biomass they put on and thereby reduce the CO2 they remove from the atmosphere).

    At the current atmospheric levels of CO2 (record low levels on a geological scale), adding CO2 is more beneficial than removing it. As many plant species are near the lower edge of their ability to survive, adding CO2 will provide a growth benefit to plant species globally. This additional CO2 will be scrubbed out very quickly, probably within a couple of centuries, once fossil fuels have become too scarce to burn and power generation is finally switched to nuclear.
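    A minimal one-box sketch of the balance described above, with scrubbing proportional to the amount of CO2 in the air. The rate constant and emission figure are hypothetical round numbers chosen only to show the self-regulating behaviour, not measured values:

```python
# One-box CO2 model: change per year = emissions - k * concentration.
# The box relaxes toward the equilibrium E / k from either direction.
def run_box(c0_ppm, emissions_ppm_per_yr, k_per_yr, years, dt=1.0):
    c = c0_ppm
    for _ in range(int(years / dt)):
        c += (emissions_ppm_per_yr - k_per_yr * c) * dt
    return c

# Hypothetical numbers: E = 2.8 ppm/yr and k = 0.01 /yr give E/k = 280 ppm.
print(round(run_box(400.0, 2.8, 0.01, 1000)))  # pulled down toward 280
print(round(run_box(200.0, 2.8, 0.01, 1000)))  # pulled up toward 280
```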

  81. crosspatch says:
    December 18, 2011 at 9:26 pm

    “Now check the LWIR emitted from the Black Rock Desert near Gerlach Nevada an hour after sunset and again an hour before sunrise. WAY different.”

    This raises a further interesting question. As materials cool, their emission spectra should shift downward. For a rapidly cooling desert this could mean that the spectral peak of outgoing long wave IR could sweep across the 15 micron band. That is to say, 15 micron LWIR emitted from the desert surface may not be a constant percentage of total emitted IR. LWIR back-radiated to the surface would be re-emitted at a longer wavelength, bypassing CO2. It may be that, in areas of the Earth’s surface that can cool rapidly, CO2 causes a short period in which cooling is reduced, but not one long enough to trap heat over a diurnal cycle.

  82. R. Gates;
    “As I’ve got no personal “horse in this race”, it would be hard for me to be disappointed or encouraged”

    Aye, ‘tis no horse, but a male donkey. Are you no longer 75% warmist? And what were you before that? Your dissembling brand of sophistry fools no one.

  83. Bah! Humbug! All this talk about CO2 and temperature and not a single valid explanation as to how CO2 in the atmosphere warms the surface more than the sun did when it was shining. Or even when the sun is not shining.

  84. This R Gates person seems to be particularly concerned about the rising levels of CO2 and our doom by fire.

    Think, sir, what the records tell us: around 800 years ago the world suffered thermageddon, and life and humans prospered. Strangely, this 800-year number rears its head in many ways; one is the increase in CO2 after such a warming event.

    The inertia in all things planetary is huge in both time and scope; temperature and CO2 are not immune, and many lifetimes must be lived to see both cause and effect. The time has come for scientists involved in AGW to walk away some distance, then turn around and look back at what they have done.

    Mr Gates you are supporting a fallacy, a fantasy even, CO2 in our atmosphere in larger quantities is a bonus for the entire planet.

  85. The calculated sensitivity appears to be a balance between the positive water vapour feedback and the negative convective feedback. Because one effect would send temperatures hundreds of degrees kelvin higher, and the other would bring them back to just a few tens of degrees above blackbody, calculations based on these “large signal” results will generally have the positive just shading the negative, leading to feedback amplification.

    However, compared to the large-signal results, the positive water vapor feedback is greatly saturated and the negative convective feedback is massively amplified, so in my view negative feedback predominates in the “small signal” real world.

    It is very common to lump all these global effects together to understand a physical system, so I have no problem with anyone doing that.

  86. davidmhoffer says:
    December 18, 2011 at 6:58 pm

    “For once you’ve managed a retort with relevant information, though your perseverance in demonstrating that you are a complete and total jerk is also on full display.”

    Morons like you bring out the jerk in me.

  87. “Finally, if CO2 is as strong a driver of surface temperatures as the IPCC would have us believe, how in the world can anyone explain the apparent fact that, given a doubling of CO2 levels, the modern Arctic is about 1°C COLDER than the LGM Arctic?”

    Lots of ways? Even warmists don’t deny things like continental drift, geological time changes in oceanic circulation patterns, geological time changes in orbital eccentricity and so on. 20,000 years ago, the pole star wasn’t even Polaris — the Earth’s axis wasn’t even close to Polaris (just as in a couple hundred more years, it will stop being particularly close to being Polaris for us, as well).

    I appreciate your argument about multimodality, and I think that the argument concerning sensitivity is dead on the money — there are various sanity checks, and the more extreme values of climate sensitivity fail them. (Roy Spencer, in his book, points out that even LOCAL measurements of temperature susceptibility — how fluctuations in temperature respond to solar variability — don’t agree with large positive sensitivity and may actually be consistent with negative feedback that STABILIZES temperatures rather than AMPLIFIES them, which of course makes a lot of sense in an open system that hasn’t been driven to either “snowball Earth” or “fireball Earth” extremes by natural variation.)

    However, this argument is ill-placed and if anything distracts from your otherwise powerful conclusions. The IPCC is asserting that “geological time variability being more or less constant on a timescale of a few centuries, CO_2 is the important driver”. That is, given more or less constant patterns of oceanic circulation, steady patterns of decadal oscillations, irrelevant fluctuations in solar insolation, no major variations due to volcanic or other aerosols, and a complete lack of coupling between the magnetic state of the Sun or solar wind and the climate of the Earth, CO_2 is the major driver.

    They don’t deny that all of those things vary on millennial time scales, only that they matter over decades or centuries, although in their sillier papers they try to extend the hockey stick back a thousand years or more. What they are ignoring is the fact that all of those things vary on decadal time scales too, and that there is evidence that all of them are indeed important contributors to climate. But citing a thermal record from 20,000 years ago isn’t evidence against this, as even they know that the last ice age happened, right?

    rgb

  88. crosspatch says:
    December 18, 2011 at 10:16 pm

    “If you put more CO2 into the atmosphere, the scrubbing rate of CO2 goes up.”

    Interestingly this is a typical reaction for an equilibrium system being forced out of equilibrium i.e. the more out of equilibrium it becomes the harder it “tries” to regain equilibrium. The transfer of heat between two bodies of different temperatures is one example.

    Here’s my take. The ocean contains far more CO2 than the atmosphere and its average surface temperature sets the equilibrium point with the atmosphere. At the interglacial average temperature of 16C that equilibrium point is 280ppm. Trying to reduce or increase atmospheric CO2 without changing average SST of the ocean will cause the ocean to take up the excess or emit the shortfall. The biosphere I think is probably pretty close to neutral as plants taking up CO2 are offset by more organisms which eat plants and emit CO2. That won’t be instantaneous but pretty fast. Probably the same for weathering if for no other reason than most of the surface is ocean where acid rain won’t have any rocks to fall upon.

    So my hypothesis is that if anthropogenic CO2 emission ceases then the PPM in the atmosphere will decrease at the same rate it increased – quickly at first then slower and slower as the natural equilibrium point of 280ppm is approached.

    The equilibrium point in relation to temperature can be gauged in the ice core data. Average temperature during an interglacial is about 3C higher than during a glacial period and atmospheric CO2 is about 80ppm higher during the interglacial. That works out to about 25ppm per degree C. If average SST is half a degree warmer now than in 1950 that might account for 10ppm of the higher CO2 level today while the other 80ppm is due to something else. I suspect that “something else” is anthropogenic.
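    The ice-core arithmetic above in a few lines, using this comment’s own round figures (about 80 ppm and about 3C between glacial and interglacial; these are the commenter’s estimates, not precise reconstructions):

```python
# ppm of CO2 per degree C implied by the glacial/interglacial differences.
delta_ppm = 80.0    # interglacial CO2 minus glacial CO2, in ppm
delta_temp = 3.0    # interglacial temperature minus glacial, in C
ppm_per_degC = delta_ppm / delta_temp
print(round(ppm_per_degC, 1))  # 26.7, i.e. roughly the "about 25" quoted

# If SST is ~0.5C warmer than in 1950, the temperature-driven share of the
# modern CO2 rise would be on the order of:
print(round(0.5 * ppm_per_degC, 1))  # 13.3 ppm, near the ~10 ppm figure
```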

  89. Konrad says:
    December 18, 2011 at 10:01 pm

    “water that is free to evaporatively cool is not greatly affected by LWIR. This is significant as 71% of the Earth’s surface is ocean.”

    Yes. In working through the physics this is what you discover. It is borne out by ocean heat budget studies, which find that on average 70% of ocean heat loss is by evaporation, 25% by radiation, and the remainder by conduction. Land surfaces, on the other hand, lose most of their heat via radiation. The key to the whole thing is that downwelling LWIR is absorbed by the first couple of microns of the water surface, which instantly raises the evaporation rate, and the energy is removed as soon as it arrives. Water vapor being lighter than air, the energy is carried aloft insensibly and released at the cloud layer, where it becomes sensible again.

    Once the physics involved is understood and accepted, all the data falls neatly into place. The greenhouse effect is a land-based phenomenon. The so-called sensitivity that the IPCC says is between 1C and 3C is actually 1C, because two thirds of the earth’s surface is unaffected by greenhouse gases. The northern hemisphere gets twice as much greenhouse warming because there’s twice as much land surface in the northern hemisphere. All the temperature measurements are in agreement with this. There is no Trenberthian missing heat hiding in the ocean, and it can’t be found because it never entered the ocean in the first place. It was instantly rejected as latent heat of vaporization and carried up to the cloud layer, where it then made its way out of the atmosphere into space. The “missing heat” is in a spherical volume of space about 100 light years in diameter surrounding the earth. Trenberth is looking in the wrong direction for it! :-)

  90. Dave Springer says:
    Your comment is awaiting moderation.

    December 19, 2011 at 3:02 am

    davidmhoffer says:
    December 18, 2011 at 6:58 pm

    “For once you’ve managed a retort with relevant information, though your perseverance in demonstrating that you are a complete and total jerk is also on full display.”

    Morons like you bring out the jerk in me.

    ——————————————————————–

    I see the above was selectively skipped over and left in the moderation queue since subsequent comments of mine have appeared. All I ask is that moderators be fair about this. Hoffer called me a jerk and the comment was approved. In a fair world that entitles me to respond in kind. Tit for tat, so to speak. Moron for jerk.

    I have no problem with being snipped but I do have a problem when there is a double standard employed in the process.

  91. All the Last Glacial Maximum climate model studies are tuned so that they get 3.0C per doubling (or 2.3C per doubling as in this case). They are done by pro-AGW’ers after all.

    They are tuned because they use artificially low Albedo (the amount of sunshine reflected back to space) values. This provides almost no change at all in the Solar Forcing and lets the GHG Forcing take more of the credit for the temperature decline.

    Here is the current Solar Forcing and Albedo by 10 degree Latitude bands and what this study (and all LGM climate model studies) use as the change at the LGM.

    The study values are NOT what the Earth surface conditions at the LGM would have produced.

    Take another -15 W/m2 off the Solar Forcing (to reflect realistic conditions) and redo the math – then you are down to 1.0C to 1.5C per doubling.

  92. Konrad says:
    December 18, 2011 at 10:25 pm

    “This raises a further interesting question. As materials cool, their emission spectra should shift downward. For a rapidly cooling desert this could mean that the spectral peak of outgoing long wave IR could sweep across the 15 micron band. That is to say, 15 micron LWIR emitted from the desert surface may not be a constant percentage of total emitted IR. LWIR back-radiated to the surface would be re-emitted at a longer wavelength, bypassing CO2. It may be that, in areas of the Earth’s surface that can cool rapidly, CO2 causes a short period in which cooling is reduced, but not one long enough to trap heat over a diurnal cycle.”

    If I’m not mistaken GCMs take this into account. Surface temperature doesn’t “sweep through” a 15um blackbody peak. Surface temperatures don’t vary through the right range to do that.

    http://www.spectralcalc.com/blackbody_calculator/blackbody.php

    At 100F peak emission is about 9.3um. At -20F peak emission is about 11.9um.

    However, that “sweep” does move the peak emission wavelength closer to or further from the CO2 sweet spot at 15um, so there is definitely a relationship between surface temperature and CO2’s effectiveness as a greenhouse gas. The colder it is (within the normal range of earth surface temperatures) the greater the greenhouse effect. This handily explains why higher latitudes get more AGW than lower latitudes.

    Follow the physics if you want the truth about climate. Follow the money if you want the truth about climate scientists. ;-)
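    The peak wavelengths in this comment come straight from Wien’s displacement law, the same calculation the spectralcalc link performs:

```python
# Wien's displacement law: peak wavelength (um) = b / T, b ~= 2897.8 um*K.
B_WIEN_UM_K = 2897.8

def peak_emission_um(temp_f):
    """Blackbody peak emission wavelength in microns for temp_f in F."""
    temp_k = (temp_f - 32.0) * 5.0 / 9.0 + 273.15
    return B_WIEN_UM_K / temp_k

print(round(peak_emission_um(100), 1))  # 9.3 um at 100F
print(round(peak_emission_um(-20), 1))  # 11.9 um at -20F
```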

  93. Robert Brown says:

    December 19, 2011 at 3:18 am

    “Finally, if CO2 is as strong a driver of surface temperatures as the IPCC would have us believe, how in the world can anyone explain the apparent fact that, given a doubling of CO2 levels, the modern Arctic is about 1°C COLDER than the LGM Arctic?”

    Lots of ways? Even warmists don’t deny things like continental drift

    ———————————————————————————————–

    Continental drift is 1-10cm per year. Since the LGM the maximum drift would be 2 kilometers. That isn’t enough to significantly change climatic zone even if the drift were entirely latitudinal.

    Brown, you may do decent work when you slow down to think about it and seek peer review before publishing, but your off-the-cuff comments here leave a lot to be desired. I suggest you think more and write less in this forum in the future. Or at least think more.

  94. All very interesting, but the fundamental problem is that the climate is not in thermal equilibrium; thus the application of equilibrium thermodynamics to the problem of climate sensitivity is incorrect unless you can demonstrate that equilibrium thermodynamic equations are a reasonable approximation. Otherwise the results have little value and you are wasting your time. I don’t see any of this! Non-equilibrium thermodynamic analyses are notoriously difficult, if not impossible, to do, but that’s what makes them fascinating.

  95. My personal experiences at Real Climate as the butt of insulting and snarky rebuffs, as well as the editing, snipping, and blocking of my comments (see Slipping Some Past the Goalie at RC), have sensitized me to the value of due courtesy and diversity of opinion.

    Yes, the online world is something like the rough and tumble of the old wild west. Some of that is inevitable due to the lack of face-to-face contact, and the tendency of some commenters to anonymously hide behind screen names.

    But, I hope WUWT can be better than that. Let us try to avoid questioning the motives of others and stick to science-based arguments.

    WUWT is far more open to contrary comments, both from warmists and those I call “disbelievers”. I fully support that policy. I would rather have the moderators err on the side of passing snarky comments so long as those comments do not cross the line into libel of individuals or groups.

    So, go ahead and make those kinds of comments if you must. On the other hand, I personally wish that more of us (and I include myself because I have been a bit snarky in the past) would be more courteous and try to stick to science-based arguments rather than personal attacks.

    advTHANKSance

  96. wayne Job says:
    December 19, 2011 at 12:31 am
    This R Gates person seems to be particularly concerned about the rising levels of CO2 and our doom by fire.
    ——–
    Nope, this R. Gates person has never spoken of any such thing as “doom by fire”. A runaway greenhouse effect is highly unlikely.

  97. ****
    Dave Springer says:
    December 18, 2011 at 6:12 pm

    So you see it isn’t the sun heating the surface of Venus it’s the molten interior of the planet that’s heating it. The temperature gradient of the planet from core to surface is different on Venus because the surface is far better insulated.
    ****

    I considered this too. But if true, measurements would show that Venus was emitting more energy than it receives from the sun, like Jupiter. Someone may correct me, but I think that is not true — Venus is like the Earth in that it radiates outward the same energy that it receives (minus reflection of course).

  98. mods ~ if Mr. Springer wishes to call me a moron while in the same sentence stipulating to being a jerk, I see no reason for suppressing his remarks. I’m certain that your readership is quite capable of determining for themselves if, in fact, I am a moron or not. As my expectation is that any who have followed my participation in this forum for any length of time will conclude otherwise, I can only leave it up to others to speculate as to what the root cause of Mr. Springer’s behaviour pattern actually is.

    REPLY: I’ve grown tired of Mr. Springer’s behavior here – he’s on notice – Anthony

  99. henry@allofyouseekingthetruth

    I am surely puzzled by my latest results
    which show that
    at Easter Island, -27.16 degrees latitude (Chile):
    Maxima have risen at 0.035 degrees C per annum
    Means have risen at 0.014 degrees C per annum
    Minima have fallen at 0.007 degrees C per annum

    whereas at Marcus Island (MinamiToriShima), +24.28 degrees latitude (Japan):
    Maxima have risen at 0.012 degrees C per annum
    Means have risen at 0.012 degrees C per annum
    Minima have risen at 0.021 degrees C per annum

    This difference in warming between the SH and the NH seems to be quite significant. What is it caused by?

    http://www.letterdash.com/HenryP/henrys-pool-table-on-global-warming

  100. Just to be sure that everyone understands
    exactly what puzzles me (about the results reported in my previous post),
    I am hoping Dave Springer will help me:

    it seems that in the NH, temps are pushed up by minima,
    whereas in the SH, temps are clearly pushed up by maxima.

    But now, how can that be? There must be a rational explanation somewhere?

  101. Snarky comments re R Gates, and others, really do not belong here. If we skeptics are only preaching to the choir, what is the point? We need to encourage those on the AGW side who are respectful of logic and science to make their arguments here, not to stay away.

    Another point: Even Dr Glickstein, whose post here is wonderful, shows a hunger for certainty in his comments re the anthropogenic source of the recently rising CO2. The isotopic ratio arguments for the rising CO2 being anthropogenic may be strong, but never will they or any other scientific argument based on such indirect proxies be “certain”. One of the John B entries above references an alternative hypothesis. Embracing any certainty in Science closes the mind when it is best to keep the mind a little open.

  102. Ira reckons
    When a probability distribution includes more than one population, the mean may, quite literally, have no MEANing! All bets are off.
    ———-
    The important word here is “may”.

    The climate system is one system whose parts can be considered individually. A population of mothers and children is a bogus analogy for that system.

  103. Ira says
    First of all, notice that NONE of the individual IPCC graphs are multi-modal
    ———-
    My lying eyes say that 2 of those curves are multimodal.

  104. Ira says
    Thus, it would be wrong, IMHO, to assign more than some small fraction of the warming since the LGM to CO2 increases.
    ——–
    So what is a sensible way of finding this fraction when 2 causes are related by a feedback loop?

    Consider an old-fashioned regenerative radio receiver. Let’s say its open-loop gain is 1. We add 0.1 positive feedback and assume we get a gain of 10. If I want to claim that regenerative receivers don’t work, I point at the 0.1 number. If I want to claim they do work, I point at the 10 number. Who is right?

    P.S. The example is only illustrative. I have forgotten how regenerative receivers work.
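    For what it’s worth, the standard closed-loop gain formula that the regenerative-receiver analogy is reaching for is G = A / (1 - A*beta). With the numbers above (A = 1, feedback 0.1) it actually gives about 1.11, not 10; a gain of 10 from A = 1 needs beta = 0.9. The illustrative point survives either way: the same loop can be described by a small feedback number or a large gain number.

```python
# Closed-loop gain with positive feedback, valid while A * beta < 1.
def closed_loop_gain(a, beta):
    return a / (1.0 - a * beta)

print(round(closed_loop_gain(1.0, 0.1), 2))  # 1.11
print(round(closed_loop_gain(1.0, 0.9), 2))  # 10.0
```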

  105. Ira says
    Thus, the IPCC curves, taken as a group, seem to support Schmittner’s results of multi-modality.
    ———
    It is pointless claiming that without a detailed examination of what each of the individual “IPCC” curves represent.

  106. Ira says

    Finally, if CO2 is as strong a driver of surface temperatures as the IPCC would have us believe, how in the world can anyone explain the apparent fact that, given a doubling of CO2 levels, the modern Arctic is about 1°C COLDER than the LGM Arctic?
    ———-
    That’s an easy question to answer!

    Let’s consider the astonishing fact that climate is affected by more than one thing.

    And that the relationship between temperature and those other things is not linear.

    There are two factors of interest here: the amount of sunlight and the amount of CO2.

    Let’s assume the amount of sunlight is different in the 2 hemispheres because of the Milankovitch cycles. That would account for the different temperatures in the northern and southern hemispheres.

    Changes in the CO2 amount simply change how effective the changes in insolation are at producing changes in temperature. This effect is likely the same in both hemispheres.

    So we have a plausible explanation. Don’t know if it’s the correct explanation. But Ira has not made a good argument against CO2.

  107. Ira says
    That calls into question each and every one of those curves
    ———
    Well, the question is: to what degree are they wrong, and should the curves be rejected altogether? This appears to be what Ira is trying to sneak by.

    Seems like a good opportunity for that well worn logical fallacy: if something is a little bit wrong it must be totally wrong.

  108. LazyTeenager says: December 19, 2011 at 1:45 pm
    Ira says
    First of all, notice that NONE of the individual IPCC graphs are multi-modal
    ———-
    My lying eyes say that 2 of those curves are multimodal.

    Lazy Teenager, I guess a lawyer might say that Knutti 02 is multi-modal and, if you include some of the lumps on the fat tail of Andronova 01, that one could be as well. However, I think you have to agree that neither of these comes close to being as multi-modal as Schmittner 2011, nor as multi-modal as the combination of the eight IPCC graphs.

    An oval has no corners, but a lawyer might say it has two more corners than a circle!

  109. LazyTeenager;
    Let’s consider the astonishing fact that climate is affected by more than one thing.>>>

    Yes! Let’s!

    LazyTeenager;
    Let’s assume the amount of sunlight is different in the 2 hemispheres because of the milankovich cycles. That would account for the different temperatures in the northern and southern hemisphere>>>

    Gasp! So the fact that the earth’s land mass is predominantly NH, that the NH has an ocean at the pole surrounded by land, and that the SH has a land mass at the pole surrounded by ocean: these things have nothing to do with it? That’s an astonishing number of major factors to dismiss in order for your assumption that Milankovitch cycles account for the difference to make sense. Gee, and right after you were on the right track about climate being affected by more than one thing!

    LazyTeenager;
    There are two factors of interest here. The amount of sunlight and the amount of CO2.>>>>

    Whoa! Just two? But you said….

    LazyTeenager;
    Changes in the CO2 amount simply change how effective the changes in insolation are at producing changes in temperature.>>>>

    So…there’s no feedback after all? How interesting….

    LazyTeenager;
    So we have a plausible explanation.>>>

    No we don’t. By your own assertions above, we do not.

    LazyTeenager;
    Don’t know if it’s the correct explanation. >>>

    As Pauli once quipped: that’s not right; it’s not even wrong.

  110. Konrad says on December 18, 2011 at 10:01 pm and on December 18, 2011 at 10:25 pm:

    “++++++++++”

    First of all, if you have not read any of the comments which I have made on various articles posted here on WUWT, and on the various responses made by the many knowledgeable readers, then you may not be aware of my standpoint, or position, on the CAGW hypothesis.

    Well, I believe in making things easy; therefore my “stance” is that as long as I see no “Empirical Proof” for the theory of warming of any sort by CO2 (Carbon Dioxide) – or any other gas for that matter, say H2O – I shall agree with no-one who proposes such a scenario.

    In the case of H2O, – yes – Water Vapor (WV) has the property of retaining Thermal Energy (TE) for longer than any other Atmospheric gas component, – but then again WV is an intricate and undeniable link between the (liquid) surface and the atmosphere.

    I do like, and whole-heartedly approve of, the fact that you do experiments. I too do experiments, and so did Fourier, the Frenchman who is “mistakenly” heralded as the “Father of the Greenhouse Theory”. And so did John Tyndall. Tyndall is the man they will tell you “proved” the existence of the “Greenhouse Effect”. He, of course, did no such thing, as he failed to distinguish between absorption and reflection.
    Fourier wrote in his 1824 paper: “The heat of the sun, coming in the form of light, possesses the property of penetrating transparent solids or liquids, and loses this property entirely, when by communication with terrestrial bodies, it is turned into heat radiating without light”.

    Arrhenius in 1896 catches on to the “loses this property of penetrating transparent solids entirely” bit and proposes that “hotboxes”, or the later “greenhouses”, derive their increased heat from “trapped heat radiation” – hehehe – but omits completely that “dark radiation” cannot penetrate water (and therefore its vapor).

    I do not normally refer people to the various “who said what” on various blogs or web-sites, but this time I’ll just say: enter “Fourier 1824″ into your search engine and you will be educated. Well, if you don’t want to be, then OK, but it is interesting.

    -Meet Timothy Casey B.Sc (hons.)

  111. R. Gates in response to:

    “.. The central question is: what will the effect be of raising CO2 to levels not seen in at least 800,000 years over a time-frame so short so as to not find a parallel in the geologic record”

    says:

    “I think that’s been our sceptical argument all along – it’s an unanswered question.”

    So what are we disagreeing about then? Do you think the fact that the question is unanswered is reassuring?

    If you don’t know whether I will suffer permanent brain damage from you bashing me on the head with a baseball bat, does that mean you get to do it?

  112. LazyTeenager says:
    December 19, 2011 at 2:25 pm
    Let’s assume the amount of sunlight is different in the 2 hemispheres because of the milankovich cycles. That would account for the different temperatures in the northern and southern hemisphere.
    ———————————–

    First, the temperatures in the two hemispheres are not much different over the timelines we are talking about here. The North has more up-and-down variability, and it appears that the southern hemisphere leads the path into and out of the ice ages by several thousand years, but they both follow the same general pattern of 100,000-year ice ages followed by 15,000-year interglacials.

    Secondly, the sunlight (solar insolation at high latitude) does not follow the 100,000-year ice age cycle. Solar insolation goes up and down over a varying 30,000-year cycle. There should have been 3 interglacials in the last 100,000 years according to solar irradiance, but there was only the current one. [These two charts might be right-to-left opposite to what people normally want to see, but Excel doesn’t allow dual-axis charts to be plotted in reverse order on the X-axis for both Y-axes.]

    The albedo of the ice is the explanation that works. When solar irradiance in high latitudes rises, it melts a lot of ice. But it needs to break the back of massive 3 km high ice sheets and, over a short period of high solar irradiance, that is just not enough. Global temperature gets higher by a degree or so, but the ice age just continues on with 2 km high sheets instead.

    When 2 or 3 of these high solar irradiance periods have accumulated, the 3 km high ice sheet’s back is finally broken and it melts back all the way to 75N. No change in CO2 is required for this explanation; in fact, CO2 lags behind the ice sheet melt by 800 to 2000 years.

  113. LazyTeenager’s “plausible explanation” deserves what I think is the likely (and testable) explanation. CO2 is well mixed in the atmosphere, and the PPM are similar in the northern and southern hemispheres, so, IMO, CO2 is not an important factor in the Arctic/Antarctic temperature trend differences (this also suggests to me that climate sensitivity is very low, but I won’t go there). Without any research, I would guess that the amount of sunlight is similar, but perhaps the differing land/water ratios are a factor here. Ocean currents are part of the mix, too, but climate scientists don’t know enough to say how, do they? We actually don’t know much about the Arctic and Antarctic ice areas before the satellite era, do we? So, if I am correct, we have about 30 years of evidence of decreasing Arctic and stable Antarctic icefield areas, with some paleo and anecdotal information about earlier periods.

    What I think is happening, and is testable, is this: carbon black particulate from the heavily industrialized northern hemisphere has altered the Arctic albedo, causing increased spring-summer melting. Additional melting changes the albedo further and is a positive feedback. I am a lukewarmer and think AGW is mainly due to the above Arctic albedo change, which affects northern hemisphere weather patterns. I’ve spent 4 years now studying the science. Have I missed any important scientific papers in this area? Is my hypothesis testable? Has it been tested?

  114. LazyTeenager says: December 19, 2011 at 1:35 pm
    Ira reckons
    When a probability distribution includes more than one population, the mean may, quite literally, have no MEANing! All bets are off.
    ———-
    The important word here is “may”.

    The climate system is one system whose parts can be considered individually. A population of mothers and children is a bogus analogy for that system.

    Yes, Lazy Teenager, even system scientists sometimes consider parts of complex, tightly integrated systems individually. But, they always keep in mind the truth that the whole is more than the sum of its parts. According to chaos theory, a butterfly in Africa flapping its wings a particular way may be the cause of a major hurricane in the US southeast striking one city rather than another. In other words, a small change in initial conditions may result in major differences later on.

    How do you explain the IPCC Equilibrium Climate Sensitivity curves? Each is supposed to limit the probable range of ECS(2xC) to some high level of certainty, yet some of the curves hardly overlap. Assuming that each of the research groups were competent and honest, that has to mean that they based their very different curves on some different aspect of the climate system. I find Schmittner 2011 refreshing because they clearly show that doubling CO2 levels over land has 150% more effect than an equal doubling over the ocean. Furthermore, looking at the five peaks in the ocean curve, you can see that some parts of the ocean have 200% more warming for the same amount of CO2 doubling.

    Thus, the land and ocean, and the different parts of the land and ocean, are, in effect, different populations in terms of their sensitivity. Infants and their mothers are both, quite obviously, members of the human population, but their heights do not overlap, and they have radically different rates of change of height. In both cases, throwing these different populations into one calculation of a mean is, as I said, MEANingless, to the extent that it leads to wrong predictions. QED.
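Ira’s infants-and-mothers point is easy to demonstrate numerically. A minimal sketch, using made-up height distributions centered on the 25″ and 65″ figures from the head post (the spreads are assumed for illustration):

```python
import random
import statistics

random.seed(42)

# Hypothetical samples: infant heights ~ N(25", 2"), mother heights ~ N(65", 3")
infants = [random.gauss(25, 2) for _ in range(500)]
mothers = [random.gauss(65, 3) for _ in range(500)]
combined = infants + mothers

# The pooled mean lands near 45", between the two modes
mean_all = statistics.mean(combined)
print(round(mean_all, 1))

# Almost no individual in either population is anywhere near the pooled mean:
near_mean = sum(1 for h in combined if abs(h - mean_all) < 10)
print(near_mean / len(combined))
```

The pooled mean describes essentially nobody in the sample, which is the sense in which a multi-modal mean can be MEANingless.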

  115. Doug Allen (Dec. 19, 2011 at 5:37 pm):

    You ask “Is my hypothesis testable?” In order for it to be testable, the associated statistical population would have to be identified but this has not yet happened.

  116. LazyTeenager says: Seems like a good opportunity for that well worn logical fallacy: if something is a little bit wrong it must be totally wrong.
    *****************************
    There is no such thing as being a little bit pregnant! Either you are or you are not.

    If you make a mistake on the first step of a calculation every subsequent step will be wrong.

    The AGW premise is a false one, wrong from the first step.

  117. I have been looking at my own results again

    http://www.letterdash.com/HenryP/henrys-pool-table-on-global-warming

    (after adding my latest result from Minamitorishima-Japan)

    First: I am looking at the SH.

    Looking at the means, I am finding that there is virtually no warming in the SH since 1974. It is apparently the same in the Antarctic:

    http://www.nerc-bas.ac.uk/icd/gjma/amundsen-scott.ann.trend.pdf

    http://www.nerc-bas.ac.uk/public/icd/gjma/vostok.ann.trend.pdf

    The above graphs are remarkably flat. (Does anyone here perhaps know where the original data for these two graphs are?)

    In fact, seeing that maxima rose by an incredible 0.045 degrees C per annum in the SH while minima are falling by 0.017 degrees C per annum, there must be a net loss of energy over the whole of the SH.

    In the NH, on the other hand, maxima, means, and minima are almost on par with each other, all increasing at about 0.027 degrees C per annum since 1974.

    The only explanation I can think of is that our current weather systems pick up warmth from the SH and drop it in the NH. That is all that would explain maxima, minima, and means rising at almost the same rate here.

    Note that I only cut up my results into NH and SH because I was curious. But we cannot cut the earth into two pieces to make a point. I have to bring everything back to one global result: 0.0137. Obviously, the final conclusion of all the results in my tables is still that warming is driven by increasing maxima,

    i.e. fewer clouds and/or more intense sunshine.

    Warming caused by CO2 is a red herring. It does not happen. Nobody has proved it to me.

    http://www.letterdash.com/HenryP/the-greenhouse-effect-and-the-principle-of-re-radiation-11-Aug-2011
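Henry’s per-annum figures above are presumably ordinary least-squares slopes fitted to annual series. A minimal stdlib sketch of that calculation, with a synthetic series standing in for his tables (the 0.027 °C/yr rate is the NH figure he quotes):

```python
# Minimal OLS slope (degrees C per year) from annual values, stdlib only.
def trend_per_year(years, temps):
    n = len(years)
    mean_y = sum(years) / n
    mean_t = sum(temps) / n
    num = sum((y - mean_y) * (t - mean_t) for y, t in zip(years, temps))
    den = sum((y - mean_y) ** 2 for y in years)
    return num / den

# Synthetic annual means warming at exactly 0.027 C/yr since 1974
years = list(range(1974, 2011))
temps = [14.0 + 0.027 * (y - 1974) for y in years]
print(round(trend_per_year(years, temps), 3))  # 0.027
```

On real station data the fit would of course be noisy, and the slope’s uncertainty matters as much as its value.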

  118. Dave Springer: “working through the physics this is what you discover. It is borne out by ocean heat budget studies which find that 70% of ocean heat loss on average is by evaporation, 25% by radiation, and the remainder by conduction”

    Do you have a reference to the physics calculation?

  119. RichardG, shouldn’t your statement be: THAT (not “the”)
    AGW premise is a false one, wrong from the first step? There are many AGW premises that are probably correct: man-made changes in land albedo from farming, ranching, surface mining, clear-cutting of forests, etc., and the one I referred to above, the 800-pound gorilla, I think: carbon black particulate fallout on the Arctic. This carbon black particulate (cryoconite) AGW is testable, Terry Oldberg, in several ways. “Albedos of typical materials in visible light range from up to 0.9 for fresh snow, to about 0.04 for charcoal, one of the darkest substances.” First, sample the Arctic albedo and the Antarctic albedo at different times in different snow- and ice-covered areas. With enough samples, it would be possible to make an albedo estimate; e.g., Antarctic albedo might average 0.85 and Arctic albedo might average 0.75. Don’t the Terra and Aqua satellites already do this? From the albedo and sunlight information, also available from satellite data, it is possible to determine the differential in warming and whether or not that differential explains (at least part of) the temperature trend differences during the satellite era. Undoubtedly, it’s more complicated than the above example, which doesn’t include the positive feedback of ice melt, which changes the albedo to less than 0.40 (sea water), or the negative feedback of aerosols. These need to be estimated and are largely known from satellite data. I would guess that this has already been studied and written up, but since it doesn’t support the CO2 CAGW meme, we don’t hear much about it. Can someone supply references to such studies that support or falsify my hypothesis that Arctic and Antarctic albedo differences are very important parts of AGW? From atmospheric physics, we know the role of CO2, but not the important part, the feedbacks.
If northern hemisphere carbon black albedo changes (plus melting feedbacks and aerosol contributions) can be shown to contribute to the Arctic warming trend (remember, there has been little or no warming in the Antarctic), then this will show that CO2 is a weak forcing agent and that climate sensitivity is lower than the Hansen and IPCC AR4 models project. What I am hypothesizing is that a model of Arctic and Antarctic AGW temperature trends from cryoconite requires fewer assumptions than the Hansen and AR4 “CO2 forcing” climate models because there are fewer unknowns and uncertainties: albedo is measurable and the effect is testable and verifiable. Therefore the results might better describe an AGW reality than the CO2 AGW models with their many unknowns, uncertainties, and, I think, confirmation bias.

  120. Henry@Paul Linsay

    Paul, he will refer you to “Seasonal mixed layer heat budget of the tropical Atlantic ocean.”
    I tested his theory and it did not work out.
    Specifically, I looked in the Pacific at two islands at exactly the same distance from the equator, on opposite sides of it. If his theory held up, I would have to find exactly the same results on those two islands.

    http://wattsupwiththat.com/2011/12/18/co2-sensitivity-is-multi-modal-all-bets-are-off/#comment-836956

    What I am finding is two different results, similar to what I am finding in general when looking at the SH and the NH. See my comment just before yours.

  121. I just want to note Dr Glickstein’s association with the University of Maryland, an institution of such high caliber that a BA in History from them qualifies one for an engineering position.

    Nice post Ira. But then as a sub sailor and son of a Chem Eng, I have a natural bias for a systems approach.

  122. timg56 says:

    “I just want to note Dr Glickstein’s association with the University of Maryland, an institution of such high caliber that a BA in History from them qualifies one for an engineering position.”

    Whatever are you going on about?

  123. What the person said,

    “However, I would not call CO2 any kind of “thermostat”. Quite the contrary. Have a close look at the Ice Core graphs and you will see that temperatures begin their rise from the depths of Glacial Maximum at the very moment in time that CO2 levels are at their lowest. Temperatures rise for hundreds of years until the warming oceans outgas more and more CO2 and CO2 levels begin to rise, admittedly causing temperatures to rise a bit further. Then, in each and every cycle, when CO2 levels are at their very highest, temperatures begin a relatively rapid fall. Temperatures fall for hundreds of years until the cooling oceans absorb more and more CO2 and CO2 levels begin to fall, admittedly causing temperatures to fall a bit further. That ain’t any kind of thermostat with which I am familiar!”

    “Furthermore, while it is probably true that current CO2 levels are higher than they have been in the past several hundred thousand years, they were higher -probably much higher- over the millions of years hominids have trod the Earth, and the hundreds of millions of years since other plant and animal life evolved on Earth.”

    Therefore, because current global temperatures are lower than in previous periods that had lower CO2 levels over the past several hundred thousand years, this just confirms the mechanism described in the first extract at the top. If CO2 dictated what happened back then, and the same held now, global temperatures would have to be warmer now than at any time during the last several hundred thousand years. It is clear, in the short term, medium term, and long term, based on all observed science, that CO2 is not a thermostat controlling the planet.

    Not surprising, considering the approximate regional temperature differences during the LGM from this article, compared to how temperatures have changed recently during the instrumental record. Slow changes have never explained the sudden onset of ice ages and interglacials; the mechanism remains unknown.

    BUT,

    One observation, that the Arctic was about 1 °C warmer than today during the LGM (I have read about this before but don’t remember the source), is likely to be very important in determining the mechanism for future ice ages. It would also fit ideally with unexpected cooling after a high peak in CO2 levels was reached.

    This mechanism to explain sudden ice ages is based on an initially warming planet caused by a stronger, or changed, flow of energy reaching the Arctic ocean. This is different from the Atlantic conveyor shutting down temporarily, although that did occur later at some point, but never during interglacials. The increasing energy reaching the ocean causes sea ice to shrink and exposes more ocean surface to very cold air. This increases the rate of evaporation via latent heat, increasing cloud albedo and precipitation around the Arctic regions. This in turn increases snowfall, especially in regions that have little, and with time increases snow cover until eventually glaciers form, or existing ones grow and shift further south. Albedo significantly increases via snow, sea ice, and glaciers, placing the Earth back in a major ice age. After a long while, the previous extra energy input to the Arctic ocean ceases and snowfalls around the Arctic decline back to much lower levels. This causes a reduction in the buildup of glaciers around the Arctic and a long period of drier conditions, with albedo much higher than in the previous interglacial. Despite energy levels returning to previous standards via the Arctic ocean, the world is now stuck in an ice age because the albedo reflects far more energy back into space.

    Towards the end of the ice age, temperatures drop in the Arctic, with energy input significantly reduced. This causes the glaciers around the Arctic to slowly decline because the snow source has been cut off. Areas further south, in temperate regions, now get their snow mainly from the south rather than the north. This leads to slow melting with increasing interaction of warm tropical air meeting very cold polar air. Over the very long solar cycles, this tropical air eventually becomes warm enough, and pushes far enough north, to induce a sudden warming. With the Arctic region dry and less energy moving there, the original trigger can no longer produce enough snowfall to prevent this from happening.

    With a huge area of ice melt flowing into the oceans, this can then trigger a temporary shutdown of the ocean conveyor. We are talking about ice sheets covering most of North America melting, so on a scale far greater than anything today, bringing back almost ice-age conditions for a time. But without the energy trigger in the Arctic ocean this cannot persist, and warming eventually continues, with the ocean conveyor returning to normal for the period. The major ice age therefore won’t return until a high energy source shifts back into the Arctic ocean side by side with a favourable long-term solar cycle. I don’t currently know of a better explanation in the science for the causes of sudden ice ages than this one I have in mind.

  124. As Ira said: there is “SOMETHING ELSE” that causes the temps to rise…
    This is the point: the “something else” and “not the CO2…” …this should be put in extra BOLD!
    As long as the IPCC hides the “something else”, and as long as climate skeptics and scientists
    keep repeating “CO2” and do not look into the “something else”, all simulation models will always be wrong in their forecasts and you will be guessing, assuming and theorizing on the surface without getting to the bottom… Get ready to search for the “something else”; this is the only sensible way to understand climate. Taking into account the “something else” you will be able to calculate, understand and precisely forecast climate… No problem…
    As long as you are all milling around the CO2 as in Mecca around the Holy Stone…
    all futile… no wonder… it’s obvious…
    JS

  125. Smokey,

    Dr Ira Glickstein is an Associate Professor at the University of Maryland, where I earned my undergrad degree (BA History). Having a dad with a Chem E from Case and a brother with a Mech E from Georgia Tech, I regularly had to correct them whenever they got onto the topic of which was the better engineering school. The answer is the University of Maryland, as I was a Component Reliability Engineer at a nuclear power plant (the only non-engineering-degreed person in an engineering slot at the plant). A school which supplies engineers out of its history program is obviously superior.

    Ira is upholding the excellence that is Maryland.

  126. To carry on with Ira’s proposition of multi-modal CO2 sensitivity, I’ve back-fit the CO2 sensitivity actually observed in the paleo-record back 545 million years (basically using the most accepted CO2 estimates against the most accepted temperature estimates over time).

    (This turns out to be a difficult computing-resource problem, because one has to re-fit the CO2 vs. temperature observations onto the same timeline (at 100 Mya, for example); it is hard to explain, but it takes 8 hours of modern computer running time to produce these numbers.)

    Technically it works out that the CO2 sensitivity is something like NULL (no relationship) to as much as 1.5 C per doubling.

    I have a few more CO2 numbers that I could put in this now (basically only in the 350 ppm range) but it would look exactly the same.

  127. Doug Allen (December 20, 2011 at 7:14 am):

    Thanks for taking the time to respond and for giving me the opportunity to clarify what I meant in my previous post.

    Few climatological models are testable. In AR4, none of the models supporting the IPCC’s conclusion of CAGW were testable. An inability to test a model follows from the failure by the builder of this model to identify the statistical population in which it is testable.

    In climatology, a single object, the Earth, is under observation. It follows that a study is necessarily of the “longitudinal” variety. Under this circumstance, the starting point in the description of a model’s population is to divide the time line into segments. Each such segment corresponds to a different statistical event. Let t1 designate the time at the beginning of an event and let t2 designate the time at the end of the same event. In testing a model, at time t1 the state S1 of the associated system is observed and the state S2 at time t2 is predicted. At time t2, the state S2 of the system is observed. The predicted value of S2 is compared to the observed state in testing the model.

    Events in which this scenario plays out are called “observed events.” A collection of observed events is called a “statistical sample.” In the sample that is used in testing a model, the observed events must not have been used in the construction of the model.

    In a model that references the statistical population in which it is testable, the relative frequency of observing an event of a particular description can be measured and the relative frequency in the limit of observed events of infinite number can be estimated. The estimated relative frequency is called the “limiting relative frequency.”
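Terry’s longitudinal testing scheme can be sketched in a few lines. Everything here is a hypothetical placeholder (the states, the tolerance, and the trivial “persistence” model are invented for illustration, not taken from any climate model):

```python
# Sketch of the longitudinal scheme described above: cut the timeline into
# events (t1, t2), let a model predict the state S2 at t2 from the observed
# state S1 at t1, and tally the relative frequency of correct predictions
# over the sample of observed events.

def make_events(states):
    """Pair consecutive observations into (S1, S2) events."""
    return list(zip(states, states[1:]))

def relative_frequency(events, model, tol=0.1):
    """Fraction of events whose predicted S2 falls within tol of the observed S2."""
    hits = sum(1 for s1, s2 in events if abs(model(s1) - s2) <= tol)
    return hits / len(events)

def persistence(s1):
    """Trivial hypothetical model: the next state looks like the current one."""
    return s1

# Hypothetical observed states at consecutive event boundaries
states = [10.0, 10.05, 10.1, 10.4, 10.45, 10.5]
events = make_events(states)
print(relative_frequency(events, persistence))
```

The measured relative frequency over ever more events is what would estimate the limiting relative frequency; the point of the surrounding comment is that no such population of events has been defined for TECS, so this tally cannot even begin.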

    A variety of definitions are in common use of what is meant by the term “probability.” Under one of these definitions, the probability of observing an event of a particular description is the limiting relative frequency of observing an event of this description; this is called the “frequentist interpretation.” In his article, Ira Glickstein assumes that when the literature refers to the probability density of a particular value for the equilibrium climate sensitivity (TECS), the word “probability” should be given the frequentist interpretation. The probability of observing an event of a particular description is then the limiting relative frequency of this event in the model’s statistical population, as assumed by Glickstein.

    However, this assumption cannot be correct, for as TECS is not an observable feature of the real world, the relative frequency with which the value of TECS lies between specified bounds cannot be measured and thus the limiting relative frequency cannot be estimated. What climatologists seem to have in mind is a Bayesian interpretation of the word “probability”; under this interpretation, the probability of an event of a particular description is one’s subjective degree of belief in the proposition that this event will be observed.

    In view of the unobservability of TECS, speculations about the numerical values of the probability densities of the various possible values for TECS are not testable. It follows from the lack of testability that these speculations lie outside science. In AR4, IPCC Working Group I makes speculations of this kind but implies in various ways that they are scientific in nature. Knowing that the word “probability” cannot take on its frequentist interpretation can assist one in seeing through this deception.

  128. Bill Illis says: December 20, 2011 at 4:39 pm
    To carry on with Ira’s proposition of multi-modal CO2 sensitivity, I’ve back-fit the CO2 sensitivity actually observed in the paleo-record back 545 million years (basically using the most accepted CO2 estimates against the most accepted temperature estimates over time). …

    Technically it works out that the CO2 sensitivity is something like NULL (no relationship) to as much as 1.5 C per doubling.

    http://img801.imageshack.us/img801/289/logwarmingpaleoclimate.png

    Bill Illis, I looked at your graph and saw a high brown curve (“controls 85% of Greenhouse Effect”) and a lower red curve (“controls 40% of Greenhouse Effect”). Perhaps I do not understand the blue dots (“Temp vs CO2 Observations”), but given those labels, it does not seem to me that the brown curve contains 85% of the dots while the red curve contains only 40%. It seems to me the red curve contains more, but then I may not understand what the blue dots represent.

    Is it possible to run the program again and generate a 1ºC and a 0.5ºC curve through the dots?

    Also, please explain what it means that a given curve “controls X% of Greenhouse Effect”.

    It looks like you have found something that may inform our discussion, particularly when you say ” the CO2 sensitivity is something like NULL (no relationship) to as much as 1.5 C per doubling.” Is “NULL” the same as 0.0ºC per doubling? Are you claiming that it is equally likely that ECS(2xC) is 0ºC or 1.5ºC? Inquiring minds want to know.

    advTHANKSance

  129. Terry Oldberg says: December 20, 2011 at 4:46 pm
    … Few climatological models are testable. … A variety of definitions are in common use of what is meant by the term “probability.” Under one of these definitions, the probability of observing an event of a particular description is the limiting relative frequency of observing an event of this description; this is called the “frequentist interpretation.”

    In his article, Ira Glickstein assumes that when the literature refers to the probability density of a particular value for the equilibrium climate sensitivity (TECS), the word “probability” should be given the frequentist interpretation. The probability of observing an event of a particular description is then the limiting relative frequency of this event in the model’s statistical population, as assumed by Glickstein.

    However, this assumption cannot be correct, for as TECS is not an observable feature of the real world, the relative frequency with which the value of TECS lies between specified bounds cannot be measured and thus the limiting relative frequency cannot be estimated.

    What climatologists seem to have in mind is a Bayesian interpretation of the word “probability”; under this interpretation, the probability of an event of a particular description is one’s subjective degree of belief in the proposition that this event will be observed. …[Emphasis added]

    Thanks Terry Oldberg for your thoughtful comment. I happen to be a bit familiar with the Frequentist/Bayesian controversy which I wrote about as a side note in my Google Knol (http://knol.google.com/k/bayesian-ai-advisor-drill-here-drill-now#)

    Bayesian AI Advisor … Bayes Theorem has practical applications. Use it to make real world decisions. Bayes Theorem is not just an obscure artifact of the statistics of probability handed down to us from centuries ago. You can use it now to make decisions that will affect your financial well-being. A relatively simple Excel-based tool helps you choose the right course of action in the face of uncertain probabilities and inexact test results. It is available for FREE.

    I approached the Bayes Theorem from an engineering rather than mathematical point of view. I constructed an Excel spreadsheet that might be useful when a decision that has significant financial implications must be made in the face of uncertainty. For example, assume you have estimates of the following: 1) the probability of success if you take action without doing further testing, 2) the cost of further testing and the probability the test results will be reliable, 3) the cost of taking some action, 4) the financial benefit to you if the action is successful, and 5) the compensation you expect for taking the risk.
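A stripped-down sketch of the kind of decision logic described above: compare the expected value of acting now against paying for an imperfect test first. All numbers and function names are made-up inputs for illustration, not values from Ira’s spreadsheet:

```python
# Expected value of acting immediately, given a success probability,
# the cost of acting, and the benefit on success.
def ev_act_now(p_success, cost_action, benefit):
    return p_success * benefit - cost_action

# Expected value of testing first and acting only on a positive result.
# An unreliable test sometimes passes bad prospects and fails good ones.
def ev_test_first(p_success, reliability, cost_test, cost_action, benefit):
    p_pass_good = p_success * reliability
    p_pass_bad = (1 - p_success) * (1 - reliability)
    p_pass = p_pass_good + p_pass_bad
    p_success_given_pass = p_pass_good / p_pass  # Bayes' theorem
    return -cost_test + p_pass * (p_success_given_pass * benefit - cost_action)

# Assumed inputs: 30% prior success, 90%-reliable test costing 20,
# action costing 100, benefit 500 on success
now = ev_act_now(0.3, 100.0, 500.0)
tested = ev_test_first(0.3, 0.9, 20.0, 100.0, 500.0)
print(round(now, 1), round(tested, 1))
```

With these assumed inputs the test pays for itself; changing the reliability or costs can flip the answer, which is exactly the kind of sensitivity a spreadsheet tool lets you explore.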

    In such a situation, a Bayesian approach may have advantages over a frequentist approach. Here is some background about the Bayesian controversy from my Google Knol ( http://knol.google.com/k/bayesian-ai-advisor-drill-here-drill-now#Background_(2D)_What_is_Bayes_Theorem(3F))

    Bayesian Controversy
    According to http://psychology.wikia.com/wiki/Bayesian_probability there is considerable controversy between “Bayesians” and “Frequentists” as to the true meaning and interpretation of “probability”.

    Bayesian Interpretation
    “In the philosophy of mathematics Bayesianism is the tenet that the mathematical theory of probability is applicable to the degree to which a person believes a proposition. Bayesians also hold that Bayes’ theorem can be used as the basis for a rule for updating beliefs in the light of new information —such updating is known as Bayesian inference. In this sense, Bayesianism is an application of the probability calculus and a probability interpretation of the term probable, or —as it is usually put —an interpretation of probability.”

    Frequentist Interpretation
    “A quite different interpretation of the term probable has been developed by frequentists. In this interpretation, what are probable are not propositions entertained by believers, but events considered as members of collectives to which the tools of statistical analysis can be applied.”

    Discussion
    The frequentists demand that probability statements be based on hard data derived from actual observation and experiments. On the other hand, Bayesians allow each person to assign different Bayesian probabilities to the same proposition.

    “Although there is no reason why different interpretations (senses) of a word cannot be used in different contexts, there is a history of antagonism between Bayesians and frequentists, with the latter often rejecting the Bayesian interpretation as ill-grounded. The groups have also disagreed about which of the two senses reflects what is commonly meant by the term ‘probable’.” …

    Since I am an engineer, I always need a concrete example and I provide this one in my Google Knol:

    An interesting example would be if ten coin tosses resulted in seven heads and three tails. A frequentist would say the probability is 70/30 heads unless and until further tosses proved otherwise. A Bayesian would consider the situation from a larger perspective. Was the coin provided by a trusted person or some stranger in a bar? Is there reason to believe the coin is fair or loaded? Based on that consideration, one Bayesian might assign a probability of 50/50, since ten tosses with a 70/30 result is statistically possible with a fair coin. Another Bayesian might conclude, from the situation, that the coin is probably loaded and assign a 70/30 probability. Yet another might split the difference and assign 60/40, …

    Of course, if the coin is tossed another hundred times, both the frequentists and the Bayesians may change their assigned probabilities. It may turn out that the frequentist 70/30 was closer to the truth -or- that the Bayesian 50/50 was a better call.
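The coin example can be made concrete. A minimal sketch: the frequentist maximum-likelihood estimate versus Bayesian posterior means under two assumed Beta priors; the choice of prior is exactly where the Bayesians in the example above differ:

```python
# 7 heads in 10 tosses, as in the example above.
heads, n = 7, 10
tails = n - heads

# Frequentist maximum-likelihood estimate: just the observed frequency.
mle = heads / n  # 0.7

# Bayesian with a uniform Beta(1, 1) prior: the posterior is
# Beta(1 + heads, 1 + tails), with mean (heads + 1) / (n + 2).
uniform_prior_mean = (heads + 1) / (n + 2)  # 8/12

# A Bayesian who strongly trusts the coin might use a Beta(50, 50) prior,
# pulling the posterior mean back toward 50/50.
trusting_prior_mean = (50 + heads) / (100 + n)  # 57/110

print(mle, uniform_prior_mean, trusting_prior_mean)
```

The Beta-prior machinery is one conventional way to formalize the “larger perspective”; it is not the only Bayesian answer, since a different prior encodes a different state of belief about the coin.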

    OK, given this background, I think I understand why you are correct when you say I took a Frequentist interpretation of probability when looking at the Schmittner and IPCC ECS(2xC) graphs. Perhaps a Bayesian interpretation would be more applicable, but, as my knowledge of abstract statistics is limited, I’d appreciate it if you would take a shot at outlining how such an approach would work.

    advTHANKSance

    • Ira Glickstein (Dec. 20, 2011 at 7:14 pm):

      Thanks for taking the time to respond! You should understand that there is a theorem from probability theory that is called “Bayes’ theorem.” As this theorem is logically correct, one has no logical alternative to assigning numerical values to probabilities in a manner that conforms to this theorem. In addition to Bayes’ theorem, there is a logically dubious procedure for assigning numerical values to probabilities that is called “Bayesian.” In following the Bayesian procedure, one assures conformity to Bayes’ theorem. However, as I reveal in the following paragraph, in doing so one may violate the precept of classical logic that is known as the “law of non-contradiction.”

      In following the Bayesian procedure, one makes an argument in which the prior PDF (probability density function) is a premise and the posterior PDF is the conclusion. Given that this premise is true, the posterior PDF logically follows. Under the law of non-contradiction, no more than one prior PDF can state a true proposition. However, the means by which this proposition may be identified are far from obvious.

      Let P(Z) designate a probability density function and let P(.) designate the probability density. In grappling with the problem raised in the previous paragraph, many a climatologist has followed the lead of Thomas Bayes and Pierre Simon Laplace by selecting, for service as the prior PDF, a PDF whose probability densities are uniform in a selected interval in Z and otherwise nil. However, excepting special circumstances, the choice of such a prior is arbitrary. The arbitrariness violates the law of non-contradiction, thus facilitating specious proofs in which the arbitrarily chosen prior plays the role of a false premise. Some of the IPCC’s “proofs” of CAGW have this characteristic.

      There is an exception to the rule that the method of selection of the prior PDF is arbitrary. It occurs in the circumstance that Z is the limiting relative frequency of events of a particular description; the selection procedure for this circumstance is called “maximum entropy expectation.” A description of it is available at my company’s Website. The URL is http://www.knowledgetothemax.com. Through the use of maximum entropy expectation, one ensures that the values which are assigned to probabilities capture all of the available information but no more.

      Under specialized circumstances, the value that is assigned to a probability under maximum entropy expectation is identical to the one that is assigned under Laplace’s law of succession. The latter value is (x + 1)/(n + 2) where n is the count of observed events and x is the count of observed events of a particular description. This formula may be compared to the assignment of x/n under the frequentist idea of maximum likelihood estimation. The assignment under maximum likelihood estimation overstates the amount of information in the observed events while the assignment under maximum entropy expectation accurately states this amount.
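The two estimators Terry quotes are easy to compare on small samples, where they differ most. A short sketch (x = count of events of the given description, n = total observed events):

```python
# Frequentist maximum-likelihood estimate; undefined at n = 0.
def max_likelihood(x, n):
    return x / n

# Laplace's law of succession; never exactly 0 or 1 on finite data.
def laplace_succession(x, n):
    return (x + 1) / (n + 2)

# No data at all: Laplace's rule returns the maximally non-committal 1/2
print(laplace_succession(0, 0))  # 0.5

# 0 successes in 5 trials: MLE claims impossibility; Laplace hedges
print(max_likelihood(0, 5))      # 0.0
print(laplace_succession(0, 5))

# With lots of data the two estimates converge
print(max_likelihood(700, 1000), laplace_succession(700, 1000))
```

The divergence at small n illustrates Terry’s point: the MLE overstates how much the data tell you, while the succession rule keeps a margin of uncertainty until the sample is large.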

  130. Terry Oldberg,
    Thank you for your reply. I also went to your interesting web page, but that will require study when I’m bright-eyed and bushy-tailed and haven’t had two glasses of wine!
    My background in field biology is very different from an engineer’s or statistician’s, with strengths and limitations that both enhance and limit my understanding of climate science. My hero is Darwin who, despite limited math and statistics skills, and without knowledge of the genetic publications of Gregor Mendel (Darwin anticipated the need for a mechanism like genetics), made one of the most important discoveries in science. Darwin made careful observations of the natural world which, by inductive reasoning, resulted in seeing patterns in nature that had not been noticed before. Darwin’s genius was his keen observation and reasoning. His 1859 publication is well written and an excellent primer, even today, on the type of discernment and judgement required to infer new insights (which were heretical; no confirmation bias!) about interactions in nature, which included not just biology but geology, psychology, and climate. One of my criticisms of climate science is that there are too many statisticians and not enough data! And no one has displayed the observational genius that characterizes Darwin’s work. Climate science, being so confoundedly interdisciplinary (requiring skill in meteorology, physics, geology, chemistry, biology, and statistics), means that there’s little chance that any one person can understand or even appreciate what is required to understand climate science. I think this underappreciation of the difficulties plays as great a role as untestable climatological models in the IPCC’s facile declarations of what’s “likely” and “very likely.” Another real problem, confirmation bias, seems to play an especially large role in model testing that requires longitudinal observation, with almost everyone wanting to infer trends and sensitivities from far too little data. I’m glad that Ira Glickstein and so many others are presenting data and looking at it in original ways.
    I think more data and better observation will move the science forward, ironically, by showcasing uncertainties. We won’t have a good indication, IMO, of our current temperature trend or of climate sensitivities (there probably are many) during my lifetime. No easy A/B testing and no laboratory replications with Mother Earth and climate science. As Bill Bryson in “A Short History of Nearly Everything” might say: I’m so sorry.

  131. Doug Allen says: December 20, 2011 at 7:25 pm
    Terry Oldberg,
    Thank you for your reply. …

    My background … in field biology is very different from an engineer’s or statistician’s, with strengths and limitations that both enhance and limit my understanding of climate science.

    My hero is Darwin who, despite limited math and statistics skills, … made one of the most important discoveries in science. Darwin made careful observations of the natural world which, by inductive reasoning, resulted in seeing patterns in nature that had not been noticed before. …

    One of my criticisms of climate science is that there are too many statisticians and not enough data! … I’m glad that Ira Glickstein and so many others are presenting data and looking at it in original ways. …

    Doug Allen: Well stated. I couldn’t agree with you more. THANKS for your kind words!

    You say “too many statisticians and not enough data” but perhaps the problem is too many statistics (temperature records, satellite measurements, ice core records, etc, etc, etc) as well as too many statisticians (on the government dole) and not enough information or understanding.

  132. Doug Allen says: RichardG. Shouldn’t your statement be: THAT (not “the”)
    AGW premise is a false one, wrong from the first step? There are many AGW premises that are probably correct: man-made changes in land albedo from farming, ranching, surface mining, clear-cutting of forests, etc.
    **
    No, I mean that *The* AGW premise is false: that MAN causes *Global* warming from CO2. All the effects that you list and explain are really localized effects, and to varying degrees can be very real. But the notion that there is a holy grail of a global average temperature that really means anything is an artificial construct, a false premise, just as a *global average climate* is meaningless. There is good reason that climates (yes, that is plural) are described and classified by zones and biomes. This article supports and explains with statistics what I mean when I say that all climate is local. There is no question that historically the real world is constantly changing naturally. There is no question that man can alter his local climate. For proof, look no further than the micro-climate inside my shoes or under the collar of my warm jacket or in my living room. But the notion that I will keep that room warmer by pumping it full of CO2 is as silly as the notion that we can do likewise with the atmosphere by adding 1/100th of 1% more CO2. The fallacy of misplaced precision is as rampant here as it is when I see Global Temperature averages measured to 3 decimal places when most thermometers are calibrated to 2 degrees. I dare anyone to detect a one-degree change, by feel, blindfolded, when the wind is blowing, in the shade, standing on grass, in the fog, on a south slope, etc. (how many other variables contribute to climate?). When I read my thermometer I find myself saying “that’s *about* 25 degrees”, not “25.137”. There are clusters of false premises that live under the collective roof called AGW.

    So yes albedo is important, locally. But I find the notion of an Average Global Albedo absurd. If I’m a farmer in Calgary, the climate in Fresno or Honolulu or Fairbanks has no practical meaning for me. My concern is the number of frost free days and when the killing frost will hit on average. A change of 1/10th of a degree per decade global average is really meaningless. Man occupies a small fraction of 30% of the earth and I’m afraid that global climate science breaks down because of sampling error and bias. With so many unknowable variables I fall back on what I know and has been proven to be true: that the biosphere benefits from both higher CO2 levels and warmer temperatures.

  133. Richard says:
    that the biosphere benefits from both higher CO2 levels and warmer temperatures.

    Henry@Richard
    We agree on the important points like the statement you made above!

    However, regarding my tables here:

    http://www.letterdash.com/HenryP/henrys-pool-table-on-global-warming

    on the issue of accuracy reported,

    in my case, that last decimal is relevant,

    I am looking at a sample, so the value of 0.0137 degrees C increase globally per year since 1974 is a purely mathematically calculated value. It is an estimate. It means that earth has warmed about 37 x 0.0137 = 0.5 degrees C since 1974, on average.
    Now, if we had left out the 0.0037 and simply rounded off to 0.01 degrees of warming per year, we would have ended up at 0.37 degrees C, a considerable error…

    Now, in as far as the accuracy of measurements has improved over the past 4 decades: I don’t know and I do not report on that. I do suspect that accuracy has improved, and that it has probably improved with a bias towards the higher temps. So, I do admit that a large portion of the 0.5 degree warming could be due to improved accuracy in measurement. But I do not know how much. I think the satellites are reporting 0.012 instead of my 0.0137, so, if my sample and estimate are close to the real world, the error already seems to be at least 10%.
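    For readers who want to check the arithmetic, here is a minimal Python sketch. The 37-year span and the two trend values come from the comment above; the script itself is purely illustrative:

```python
# Truncating a small per-year trend before multiplying over many years
# compounds the rounding error (the point made in the comment above).
years = 2011 - 1974            # 37 years in the sample
trend = 0.0137                 # estimated warming, degrees C per year
rounded_trend = 0.01           # the same trend rounded to two decimals

full = years * trend           # 37 * 0.0137 = 0.5069, about 0.5 C
rough = years * rounded_trend  # 37 * 0.01 = 0.37 C

print(f"full precision: {full:.4f} C")
print(f"rounded trend:  {rough:.2f} C")
print(f"relative error: {100 * (full - rough) / full:.0f}%")  # roughly 27%
```

    The point being that a last decimal that looks insignificant at per-year scale is anything but insignificant once accumulated over decades.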

  134. Ira Glickstein, PhD says:
    December 20, 2011 at 6:37 pm
    ———-

    The brown curve is just the 3.0C per doubling of CO2 proposition. 2 doublings (1120 ppm) is +6.0C.

    It is not well understood that this proposition also means that CO2/GHGs control virtually all of the Greenhouse Effect. As the curve goes down to low levels of CO2, say under 50 ppm, the curve approaches -33.0C and there would be no Greenhouse Effect. It is the same result that Lacis, Schmidt, Rind, and Ruedy obtained in 2010: “Atmospheric CO2: Principal Control Knob Governing Earth’s Temperature”, Science. The curve and the paper imply that CO2 controls virtually all of the water vapour levels as well and is therefore responsible for 85% of the Greenhouse Effect.

    1.5C per doubling implies that CO2 controls only 40% of the Greenhouse Effect (it also has a major impact on water vapour levels, but only about half the level, which is what the real empirical data of the last 40 years is showing as well).

    My comment that the sensitivity might be NULL means that global temperatures throughout history might have NO relationship to CO2 levels. I wouldn’t call it 0.0C per doubling; it is Null/none. Temperatures have been -20C at very high CO2 levels, they have been -7.0C at very high CO2 levels, it has been +4.0C at 280 ppm, it has been -2.0C at 280 ppm, it has been +4.0C at 220 ppm and it has been +10C at just 500 ppm. On the whole, it really looks like there is no relationship. Although it might be 1.5C if one were able to control for all the variables such as continental drift, ocean circulation, resulting surface Albedo, lower solar irradiance through time, etc. It is certainly not as high as 3.0C per doubling.
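    The “X C per doubling” propositions discussed above rest on the conventional logarithmic CO2 relation, ΔT = S · log2(C/C0). A minimal sketch of that relation (the 280 ppm baseline and the sensitivity values are the ones quoted in the comment; the function itself is illustrative):

```python
import math

def warming(sensitivity_per_doubling, c0_ppm, c_ppm):
    """Delta-T implied by a given equilibrium sensitivity, using the
    conventional logarithmic CO2 relation: dT = S * log2(C / C0)."""
    return sensitivity_per_doubling * math.log2(c_ppm / c0_ppm)

# The "brown curve" proposition: 3.0 C per doubling from a 280 ppm baseline.
print(warming(3.0, 280, 560))   # one doubling  -> 3.0
print(warming(3.0, 280, 1120))  # two doublings -> 6.0
# The 1.5 C-per-doubling alternative discussed above.
print(warming(1.5, 280, 560))   # -> 1.5
```

    Note how the logarithm makes each successive doubling, not each successive ppm, contribute the same increment, which is why “2 doublings (1120 ppm) is +6.0C” under the 3.0C proposition.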

  135. ****
    Matt G says:
    December 20, 2011 at 8:54 am

    One concern: showing the Arctic being 1C higher than today during the LGM (I read about this observation before; don’t remember the source) is likely very important in determining the mechanism for future ice ages. This would also fit in ideally with unexpected cooling after a high peak in CO2 levels was reached.
    ****

    I’ve seen several papers using models (yeah, I know, but the authors seemed to have no apparent “agenda”). The models suggest the high Arctic ocean is ice-free most of the year during the glacial maximums! The numbers indicated that was the only way the glaciers could be built up to such size in the time-frames involved: nowadays those Arctic areas where the glaciers started are literally arid, hardly 5″ of precip a year. The models indicated a sort-of reversal of the North Atlantic thermohaline circulation: water upwelling in the high Arctic ocean instead of sinking.

    Search for “Gildor-Tziperman-2000c.pdf”

  136. IMHO the properties of the major players show some important characteristics.

    CO2 – gas at various ambient temperatures, trace gas at even thousands of ppm, specific heat capacity of less than 1 joule per gram per kelvin. Sure it heats up easily but it is still a trace gas, and when it radiates it also “cools”, doesn’t it?

    Water – changes between 3 phases at various ambient temperatures, as vapour still a trace gas, specific heat at least 4 times that of CO2 but that is negligible compared to latent heat which is 333 times CO2 specific heat (ice to water) and some 2500 times CO2 specific heat (water to vapour).

    Even though still technically a trace gas, water vapour is some 60 to 100 times the concentration of CO2.

    All weather events appear to be initially linked to water and that other ignored energy transport medium – convection.

    Hurricanes, storms winds etc etc are all linked to convection combined with condensing water – CO2 doesn’t even figure.

    CO2 does not have some unknown magical power which allows it to be responsible for heating the Earth right out of proportion to its physical characteristics of specific heat capacity and almost negligible concentration.

    The whole idea that a gas which has no inherent energy creation properties can heat the Earth more than the Sun can is absurd – I wait to be PROVEN wrong but will not accept as proof the output from a computer model – I want FACTS.

  137. HenryP says:
    I appreciate your data sets and methods. Now we must consider the important distinction between precision and accuracy. I would say that over the last 4 decades the *Precision* of the data collection has improved, but we really cannot know what the accuracy is if we did not collect it originally. As an example please refer to Willis’ post here.
    Hansen’s Arrested Development

    http://wattsupwiththat.com/2011/12/20/hansens-arrested-development/#more-53430

    The CERES satellite provides Hansen with extremely precise data, but he doesn’t trust its *accuracy*, so he adjusts it to bring it into conformance with his expectations. I was taught that this amounts to intellectual cheating. (Mendel did it with his pea data.) The urge to ‘clean up’ our data is very strong. But science is rarely tidy, often messy, and your data is what it is.

    This brings us to the core of the Climategate tragedy. The Harry_Read_Me.txt files expose the sad truth that the sacrosanct database that underpins the entire debate, that we all are dependent upon as source data, is utterly and admittedly corrupted with bad record keeping, to the point where we cannot know what we CAN trust. The tragedy is that when error and uncertainty creep in at the foundation, all of the work of the unknown thousands of scientists who build their work upon it is ruined. As far as I am concerned, HAD-CRUT, NASA-GISS, NOAA, NCDC, none of it can be trusted.
    And as always, the cover-up is worse than the crime. Let me re-phrase that. The cover-up is the crime. The mistakes with the database were not a crime. Mistakes happen.

  138. Rosco says: December 21, 2011 at 4:46 pm

    … CO2 – gas at various ambient temperatures, trace gas at even thousands of ppm, specific heat capacity of less than 1 joule per gram per kelvin. Sure it heats up easily but it is still a trace gas, and when it radiates it also “cools”, doesn’t it? … Even though still technically a trace gas, water vapour is some 60 to 100 times the concentration of CO2. …

    CO2 does not have some unknown magical power which allows it to be responsible for heating the Earth right out of proportion to its physical characteristics of specific heat capacity and almost negligible concentration.

    The whole idea that a gas which has no inherent energy creation properties can heat the Earth more than the Sun can is absurd – I wait to be PROVEN wrong but will not accept as proof the output from a computer model – I want FACTS …

    Thanks for your comment, Rosco, but it seems you misunderstand the basic physics of the Atmospheric “Greenhouse” Effect. It does not involve any “inherent energy creation” nor any kind of “unknown magical power” of CO2 gas (or H2O water vapor) at all.

    ALL the heat energy in the Atmosphere, Surface, and Interior of the Earth (except for a small amount due to heat energy released from the molten core) comes from the Sun. That energy, in the form of radiation at various wavelengths is either reflected back into Space (by light colored things like snow, clouds, etc.), or it is absorbed by the Surface. When the Surface absorbs this energy, it warms. When any material warms, it radiates that energy out towards Space (and, when it radiates, as you say, it cools).

    Absent the Atmosphere, the Earth Surface would be too cold to support life as we know it.

    Certain so-called “Greenhouse” gases in the Atmosphere, mainly consisting of water vapor (H2O) but also including CO2, CH4, and others, have the well-known property of absorbing certain wavelengths of radiation emitted from the Surface (called Long-Wave InfraRed – LWIR) while passing most of the wavelengths emitted by the Sun (called Short-Wave IR – SWIR). The reason most of the energy emitted by the Sun is SWIR has to do with its high surface temperature, which is around 5,800 K (roughly 10,000ºF). The reason most of the energy emitted by the Earth Surface is LWIR has to do with its temperature, which is far lower.

    OK, here is the part you do not seem to understand. Please pay close attention and you will understand why it is not “energy creation” nor “magic”.

    When water vapor or CO2 in the Atmosphere absorbs LWIR energy, it blocks that energy from escaping to Space. The water vapor and CO2 gas warm up. Then, like all material that warms up, they radiate energy and, as you correctly say, “when it radiates it also ‘cools’”. So far we are on the same page, I hope.

    Yes, the CO2 and water vapor radiate when they cool. But, they radiate in ALL DIRECTIONS. Some goes up towards Space, BUT SOME COMES DOWN TOWARDS EARTH. Thus, part of that radiative energy, which, absent the Atmosphere, would have been lost to Space, comes back to Earth and is absorbed again, once more warming the Surface.

    That, in a nutshell, is the basic physics of the Atmospheric “Greenhouse” effect. Because of the water vapor and CO2 in the Atmosphere, some of the energy that would have escaped to Space makes a round trip back to Earth.
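    For readers who want numbers, the bookkeeping can be sketched with the textbook single-layer “gray atmosphere” model. This is a standard simplification, not a claim about the real atmosphere: the layer absorbs all surface LWIR and re-emits half upward and half downward, so in equilibrium the surface must emit twice the absorbed solar flux. The solar-constant and albedo values below are the usual assumed round numbers:

```python
# Textbook single-layer "gray atmosphere" sketch (assumed round numbers).
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.3       # planetary albedo

# Solar flux absorbed, averaged over the whole sphere: ~238 W/m^2.
absorbed = S0 * (1 - ALBEDO) / 4.0

# With no atmosphere, the surface emits directly to Space: ~255 K.
t_effective = (absorbed / SIGMA) ** 0.25

# With one fully absorbing layer, half the layer's emission returns
# to the surface, so the surface warms by a factor of 2**(1/4): ~303 K.
t_surface = 2 ** 0.25 * t_effective

print(round(t_effective, 1), round(t_surface, 1))
```

    The single-layer model overshoots the observed mean surface temperature (about 288 K) because the real atmosphere is only partially absorbing, but it shows the mechanism: no energy is created, some outgoing radiation simply makes a round trip.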

    Now, please go read my series on the Atmospheric “greenhouse” effect (1 – Physical Analogy, 2 – Atmospheric Windows, 3 – Emission Spectra, 4 – Molecules and Photons, 5 – Light and Heat) and you should see that there is no magical energy creation machine at work. Good luck.

    PS: Accepting the basic physics does NOT amount to going along with all the CAGW (Catastrophic Anthropogenic Global Warming) nonsense. Yes, additional human-caused CO2 in the Atmosphere and human-caused land use that reduces albedo (reflectiveness) are responsible for some fraction of the warming experienced in the past century.

    The proper scientific argument is not whether this is true, but how much warming has actually occurred, how much of that is human-caused, and whether the consequences are likely to be negative or a net benefit to humanity.

    My answer, expressed in my postings at WUWT, is that the official climate Team has: (1) systematically diddled the data to exaggerate the warming by up to 50%, (2) grossly misrepresented the science in their climate models to exaggerate their predictions of future warming by a factor of up to 400%, and (3) way overstated the possible negative consequences of moderate warming and increases in levels of Atmospheric CO2 (which may very well have net benefits to humanity).

  139. Terry Oldberg says: December 21, 2011 at 5:04 pm
    Ira Glickstein (Dec. 20, 2011 at 7:14 pm):

    Thanks for taking the time to respond! …

    Under specialized circumstances, the value that is assigned to a probability under maximum entropy expectation is identical to the one that is assigned under Laplace’s law of succession. The latter value is (x + 1)/(n + 2) where n is the count of observed events and x is the count of observed events of a particular description. This formula may be compared to the assignment of x/n under the frequentist idea of maximum likelihood estimation. The assignment under maximum likelihood estimation overstates the amount of information in the observed events while the assignment under maximum entropy expectation accurately states this amount. [Emphasis added]

    OK Terry, thanks for the information and for sharing your knowledge, but now I feel like I’ve been told the details of building a clock rather than what I need to know, which is: what time is it? Since I’m an engineer and not a mathematician or statistician, I need a solid example of some sort to show how my interpretation of the Schmittner and IPCC ECS(2xC) curves is “Frequentist” and should be “Bayesian”, and what difference that would make.

    What I gleaned out of your reply, and put in bold above, is that the Frequentists, in effect, use x/n and the Bayesians use (x + 1)/(n + 2). OK, I can see that, if x and n are relatively small numbers, the difference between x/n and (x + 1)/(n + 2) will be large. On the other hand, if x and n are of the order of 10 or more, the two estimates agree to within a factor of 11/12 (about 92%), which is generally close enough for many engineering purposes.

    So, Terry, it appears that, in the data set available regarding the Climate System, namely observations of temperatures and CO2 levels at various times in the past, n, the count of observed events, and x, the count of observed events of a particular description are quite large. Yes, the individual readings, as they are based on proxies in some cases and thermometer readings of unknown accuracy in others, may not be totally accurate, but, again it seems to me, they are sufficiently accurate for the purpose at hand (again in an engineering sense).

    If you (or other WUWT readers) have further clarification, I am “all ears”.
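    To make the comparison concrete, here is a minimal sketch of the two estimators being discussed; the sample counts are made up purely for illustration:

```python
def max_likelihood(x, n):
    """Frequentist maximum-likelihood estimate of the probability: x/n."""
    return x / n

def laplace_succession(x, n):
    """Laplace's law of succession: (x + 1) / (n + 2)."""
    return (x + 1) / (n + 2)

# With tiny samples the two disagree sharply: 1 success in 1 trial.
print(max_likelihood(1, 1), laplace_succession(1, 1))      # 1.0 vs ~0.667

# At x = n = 10, the ratio is exactly 11/12, the case cited above.
print(max_likelihood(10, 10), laplace_succession(10, 10))  # 1.0 vs ~0.917

# With large samples they converge: 500 successes in 1000 trials.
print(max_likelihood(500, 1000), laplace_succession(500, 1000))
```

    The Laplace estimate never reaches 0 or 1 on finite data, which is exactly the sense in which it claims less information from the observed events than the maximum-likelihood estimate does.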

  140. The big difference in the earths atmosphere is that the major transporter of heat is convection not radiation.

    Open the roof vent in a greenhouse and you will be amazed at how quickly the hot gasses rush out and up. A lot faster than it is radiating with the vent closed.

    A greenhouse works by blocking convection not by blocking radiation.

  141. Ira Glickstein (Dec. 22, 2011 at 11:07 am):

    It sounds as though I gave you a bunch of details on a topic that was not of direct interest to you. Sorry about that. I’ll try to give you some more digestible meat on which to chew.

    It is notable that under maximum likelihood estimation, the value assigned to the probability of observing statistical events of a particular description is x/n, where n is the count of events in the sample and x is the count of events of the particular description. Now, if you were to read the report of Working Group I in AR4, I believe you would conclude with me that there is no reference to observed events, to a sample, or to the underlying population. The idea of a statistical event is simply missing from Working Group I’s argument regarding the equilibrium climate sensitivity (TECS). The conclusion that this idea is missing is corroborated by the fact that the equilibrium temperature (the “steady-state” temperature in engineering jargon) is not an observable feature of the real world. As it is not an observable, observed statistical events for which the equilibrium temperature is the outcome do not exist.

    From facts stated above, I conclude that the interpretation which should be placed upon the word “probability” in reference to the function that maps TECS to its probability density is not the frequentist interpretation. An alternate interpretation is that the word “probability” has the Bayesian interpretation of one’s “subjective degree of belief.” Under this interpretation, the probability of an event is a proportion in a statistical ensemble. The frequentists’ statistical population is replaced by the Bayesians’ statistical ensemble.

    If, as seems to be the case, the probability is the proportion in a statistical ensemble, the contents of this ensemble are the numerical values that could possibly be taken on by TECS. The idea is that in Earth’s climate system, TECS has a specific value but this value is uncertain.

    Note that to determine the value of the probability that the value of TECS lies between specified bounds, one integrates the probability density function within these bounds. The probability which is computed in this way represents Working Group I’s subjective degree of belief in the proposition that the value of TECS lies within these bounds. Working Group I’s contention cannot be tested in the absence of a statistical population. It follows from this lack of testability that the contention lies outside science. The IPCC makes a pretense of conducting a scientific investigation, but this is a sham.

    By the way, your impression that (x + 1)/(n + 2) is the Bayesian assignment to a probability is incorrect. This is the assignment under the application of the Bayesian method that is called “Laplace’s law of succession.” It is not the assignment that is made to the probability that TECS lies within specified bounds. It is easy to see that this is true, for as there are no observed statistical events, neither the count n of these events nor the count x of the subset of these events that are of a particular description exists as a concept.
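    The remark above about integrating the probability density between bounds can be sketched numerically. The bimodal mixture below is hypothetical, loosely echoing the post’s multi-modal theme; it is not Schmittner’s actual density:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution with mean mu and std dev sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_pdf(x):
    # Hypothetical bimodal density with peaks near 2.0 K and 2.6 K,
    # loosely echoing the post's double land peak. Purely illustrative.
    return 0.5 * normal_pdf(x, 2.0, 0.3) + 0.5 * normal_pdf(x, 2.6, 0.3)

def prob_between(lo, hi, steps=10_000):
    """Integrate the density between bounds with the trapezoidal rule."""
    h = (hi - lo) / steps
    total = 0.5 * (mixture_pdf(lo) + mixture_pdf(hi))
    total += sum(mixture_pdf(lo + i * h) for i in range(1, steps))
    return total * h

print(round(prob_between(1.7, 2.9), 3))   # about 0.84: most of the mass
print(round(prob_between(-10, 10), 3))    # about 1.0: sanity check
```

    Whatever interpretation one attaches to the word “probability”, the mechanical step of reading a cumulative probability off a density curve is just this integration.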

  142. Ira,

    I realized that I can plot the paleoclimate observation data on the same kind of chart that the article is based on.

    Since it is quite strange at the end of the day, it needs to be split into three parts: one dealing with all the numbers going back 545 Mya, then one dealing with just the last 50 Mya (which has many more datapoints), and then the last 1 million years (which is completely contaminated by the Ice Albedo effect of the ice ages).

    Time on the Y-axis, Implied CO2 sensitivity per doubling on the X-axis.

    Last 545 Mya (in the deep paleoclimate, 1.5C per doubling is the most common).

    In the last 50 Mya: significant randomness, where any number fits the observations. There is a huge range here, +/- 40C per doubling (which is where my Null comment comes from).

    In the last 1 million years, one cannot determine the CO2 sensitivity without properly accounting for Albedo. In my mind, this is extremely important. All of the previous CO2 sensitivity studies are based on “some type of data” but there is no way any of them is determining the CO2 sensitivity without just constructing data where none exists.

    If one were to take the last 150 years, the lags in the climate system (ocean heat accumulation, for example) make this impossible unless one builds in several assumptions. But we are on track for something like 1.0C per doubling (if the lag is 7 years, which seems to be what the recent ocean data is saying) or as much as 1.5C per doubling (if the lags are as long as 35 years or longer, which is where the theory is now moving).
