Guest essay by Dr. Tim Ball
Elaine Dewar spent several days with Maurice Strong at the UN and concluded in her book The Cloak of Green that “Strong was using the U.N. as a platform to sell a global environment crisis and the Global Governance Agenda.” Strong conjectured about a small group of world leaders who decided the rich countries were “the principal risk to the world.” These countries refused to reduce their environmental impact. The leaders decided the only hope for the planet was the collapse of the industrialized nations, and that it was their responsibility to bring that about. Strong knew what to do: create a false problem with false science, and use bureaucrats to bypass politicians, close industry down, and make developed countries pay.
Compare an industrialized nation to an internal combustion engine running on fossil fuel. You can stop the engine in two ways: cut off the fuel supply or plug the exhaust. Cutting off the fuel supply is a political minefield. People quickly notice as all prices, especially for food, increase. It is easier to show that the exhaust is causing irreparable environmental damage. This is why CO2 became the exclusive focus of the Intergovernmental Panel on Climate Change (IPCC). Process and method were orchestrated to single out CO2 and show it was causing runaway global warming.
In the 1980s I warned Environment Canada employee Henry Hengeveld that convincing a politician of an idea is a problem. Henry's career involved promoting CO2 as a problem. I explained that the bigger problem comes if you convince them and the claim is then proved wrong: you must either admit your error or hide the truth. Environment Canada and the member nations of the IPCC chose to hide or obfuscate the truth.
1. IPCC Definition of Climate Change Was First Major Deception
People were deceived when the IPCC was created. Most believe it is a government commission of inquiry studying all climate change. The actual definition, from Article 1 of the United Nations Framework Convention on Climate Change (UNFCCC), limits them to only human causes:
“a change of climate which is attributed directly or indirectly to human activity that alters the composition of the global atmosphere and which is in addition to natural climate variability observed over considerable time periods.”
In another deception, they changed the definition used in the first three Reports (1990, 1995, 2001) in the 2007 Report, where the new wording appears only as a footnote in the Summary for Policymakers (SPM):
“Climate change in IPCC usage refers to any change in climate over time, whether due to natural variability or as a result of human activity. This usage differs from that in the United Nations Framework Convention on Climate Change, where climate change refers to a change of climate that is attributed directly or indirectly to human activity that alters the composition of the global atmosphere and that is in addition to natural climate variability observed over comparable time periods.”
The broader definition was not applied because the Reports are cumulative, and including natural variability would have required starting over completely.
It is impossible to determine the human contribution to climate change if you don't know or understand natural (non-human) climate change. Professor Murray Salby showed that the human CO2 portion is of no consequence and that variation in natural sources of CO2 explains almost all annual changes. He showed that a 5% variation in these sources exceeds the total annual human production.
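The arithmetic behind Salby's point can be sketched with round numbers. The flux figures below are illustrative assumptions for a back-of-envelope check, not Salby's own values:

```python
# Back-of-envelope check of the "5% of natural sources exceeds human output" claim.
# The flux figures (GtC/yr) are illustrative round numbers often quoted for gross
# annual CO2 fluxes; they are assumptions for this sketch, not measured values.
natural_sources = 210.0   # gross annual CO2 release from oceans, soils, vegetation
human_emissions = 9.0     # approximate annual fossil fuel + cement emissions

five_percent_swing = 0.05 * natural_sources
print(five_percent_swing, human_emissions)  # 10.5 vs 9.0
```

Under these assumed figures a 5% swing in natural sources (10.5 GtC/yr) is indeed larger than total human output (9 GtC/yr); different flux estimates would shift the comparison.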
2. The IPCC Infers and Proves Rather Than Disproves a Hypothesis
To make the process appear scientific, a hypothesis was inferred based on the following assumptions:
• CO2 was a greenhouse gas (GHG) that slowed the escape of heat from the Earth.
• The heat was back-radiated to raise the global temperature.
• If CO2 increased, global temperature would rise.
• CO2 would increase because of expanding industrial activity.
• The global temperature rise was therefore inevitable.
To further assure the predetermined outcome, the IPCC set out to prove the hypothesis rather than try to disprove it, as scientific methodology requires. As Karl Popper said,
It is the rule which says that the other rules of scientific procedure must be designed in such a way that they do not protect any statement in science against falsification.
The consistent and overwhelming pattern of the IPCC's behavior reveals misrepresentations of CO2. When an issue was raised by scientists performing their role as skeptics, instead of considering and testing its validity, the IPCC worked to divert attention, even creating false explanations. The false answers succeeded because most people did not know they were false.
3. CO2 Facts Unknown to Most but Problematic to the IPCC
Some basic facts about CO2 are unknown to most people and illustrate the discrepancies and differences between IPCC claims and what science knows.
• Natural levels of carbon dioxide (CO2) are less than 0.04% of the total atmosphere and 0.4% of the total GHGs. It is not the most important greenhouse gas.
• Water vapour is 95% of the GHGs by volume. It is by far the most important greenhouse gas.
• Methane (CH4) is the other natural GHG demonized by the IPCC. It is only 0.000175% of atmospheric gases and 0.036% of GHGs.
• Figure 1, from ABC News, shows the false information, achieved by considering only a dry atmosphere.
Figure 1
• The percentages troubled the IPCC, so they amplified the importance of CO2 by estimating the “contribution” per unit (Figure 2). The range of estimates effectively makes the measures meaningless, unless you have a political agenda. Wikipedia acknowledges that “It is not possible to state that a certain gas causes an exact percentage of the greenhouse effect.”
Figure 2 (Source Wikipedia)
4. Human CO2 Production Is Critical to the IPCC Objective, So They Control Production of the Information
Here is their explanation.
What is the role of the IPCC in Greenhouse Gas inventories and reporting to the UNFCCC?
A: The IPCC has generated a number of methodology reports on national greenhouse gas inventories with a view to providing internationally acceptable inventory methodologies. The IPCC accepts the responsibility to provide scientific and technical advice on specific questions related to those inventory methods and practices that are contained in these reports, or at the request of the UNFCCC in accordance with established IPCC procedures. The IPCC has set up the Task Force on Inventories (TFI) to run the National Greenhouse Gas Inventory Programme (NGGIP) to produce this methodological advice. Parties to the UNFCCC have agreed to use the IPCC Guidelines in reporting to the convention.
How does the IPCC produce its inventory Guidelines? Utilising IPCC procedures, nominated experts from around the world draft the reports that are then extensively reviewed twice before approval by the IPCC. This process ensures that the widest possible range of views are incorporated into the documents.
They control the entire process: methodology, designation of technical advice, establishment of task forces, guidelines for reporting, nomination of the experts who produce the reports, and final report approval. The figure they produce is a gross calculation, and it is estimated that roughly 50% of that amount is removed again by natural sinks.
Regardless, if you don't know the natural sources and variability of CO2, you cannot know the human portion. It was claimed that the portion of atmospheric CO2 from combustion of fossil fuels was known from the ratio of the carbon isotopes C13/C12. Roy Spencer showed this was not the case. In addition, they ignore natural burning of fossil fuels, including forest fires and long-burning coal seams and peat; as Hans Erren noted, fossil coal is buried wood. Spencer concluded,
If the C13/C12 relationship during NATURAL inter-annual variability is the same as that found for the trends, how can people claim that the trend signal is MANMADE??
The answer is, it was done to prove the hypothesis and further the deception.
5. Pressure For Urgent Political Action
Early IPCC Reports claimed that CO2 remains in the atmosphere for a very long time. This implied it would continue to be a problem even with immediate cessation of CO2 production. However, as Segalstad wrote,
Essenhigh (2009) points out that the IPCC (Intergovernmental Panel on Climate Change) in their first report (Houghton et al., 1990) gives an atmospheric CO2 residence time (lifetime) of 50-200 years [as a “rough estimate”]. This estimate is confusingly given as an adjustment time for a scenario with a given anthropogenic CO2 input, and ignores natural (sea and vegetation) CO2 flux rates. Such estimates are analytically invalid; and they are in conflict with the more correct explanation given elsewhere in the same IPCC report: “This means that on average it takes only a few years before a CO2 molecule in the atmosphere is taken up by plants or dissolved in the ocean”.
6. Procedures to Hide Problems with IPCC Science and Heighten Alarmism
IPCC procedures and mechanisms were established to deceive. The IPCC has three Working Groups (WG). WGI produces the Physical Science Basis Report, which claims to prove CO2 is the cause. WGII produces the Impacts, Adaptation and Vulnerability Report, which is based on the results of WGI. WGIII produces the Mitigation of Climate Change Report. WGII and WGIII accept WGI's claim that warming is inevitable. They state,
Five criteria that should be met by climate scenarios if they are to be useful for impact researchers and policy makers are suggested: Criterion 1: Consistency with global projections. They should be consistent with a broad range of global warming projections based on increased concentrations of greenhouse gases. This range is variously cited as 1.4°C to 5.8°C by 2100, or 1.5°C to 4.5°C for a doubling of atmospheric CO2 concentration (otherwise known as the “equilibrium climate sensitivity”).
They knew few would read or understand the Science Report, with its admissions of serious limitations, so they deliberately delayed its release until after the Summary for Policymakers (SPM). As David Wojick explained,
Glaring omissions are only glaring to experts, so the “policymakers”—including the press and the public—who read the SPM will not realize they are being told only one side of a story. But the scientists who drafted the SPM know the truth, as revealed by the sometimes artful way they conceal it.
…
What is systematically omitted from the SPM are precisely the uncertainties and positive counter evidence that might negate the human interference theory. Instead of assessing these objections, the Summary confidently asserts just those findings that support its case. In short, this is advocacy, not assessment.
An example of this SPM deception occurred with the 1995 Report. The 1990 Report and the drafted 1995 Science Report said there was no evidence of a human effect. Benjamin Santer, as lead author of Chapter 8, changed the text drafted by his fellow authors, which said,
“While some of the pattern-based studies discussed here have claimed detection of a significant climate change, no study to date has positively attributed all or part of the climate change observed to man-made causes.”
to read,
“The body of statistical evidence in chapter 8, when examined in the context of our physical understanding of the climate system, now points to a discernible human influence on the global climate.”
The phrase “discernible human influence” became the headline as planned.
With AR5 (2013) they compounded the deception by releasing the SPM and then releasing a correction; they got the headline they wanted. It is the same game played with the gap between the problems exposed in the WGI Science Report and the SPM. The media did not report the corrections, but the IPCC could now claim they had detailed the inadequacy of their work. It's not their fault that people don't understand.
7. Climate Sensitivity
Initially it was assumed that constantly increasing atmospheric CO2 created constantly increasing temperature. Then it was determined that the first few parts per million achieve most of the greenhouse capacity of CO2. Eschenbach graphed the reality (Figure 3).
Figure 3
It is like black paint on a window: the first coat achieves most of the reduction in light coming through, and each subsequent coat blocks fractionally less.
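The diminishing effect the paint analogy describes is conventionally modeled as a logarithmic forcing law. A minimal sketch, assuming the commonly cited approximation dF = 5.35 ln(C/C0) with the textbook coefficient (an assumption, not a figure from this article):

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing (W/m^2) from the standard logarithmic approximation
    dF = 5.35 * ln(C/C0); coefficient and baseline assumed for illustration."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Like successive coats of paint, each additional 100 ppm adds less forcing
# than the previous 100 ppm did.
steps = [(c, round(co2_forcing(c + 100) - co2_forcing(c), 3)) for c in (280, 380, 480)]
print(steps)
```

Each tuple pairs a starting concentration with the extra forcing from the next 100 ppm; the increments shrink as the concentration rises, which is the "diminishing coats of paint" behaviour.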
There was immediate disagreement about the climate sensitivity to doubled and tripled atmospheric CO2. Milloy produced a graph comparing three different sensitivity estimates (Figure 4).
Figure 4.
The IPCC created a positive feedback to keep temperatures rising. It claims CO2 causes a temperature increase that increases evaporation, and the added water vapour amplifies the temperature trend. Lindzen and Choi discredited this in their 2011 paper, which concluded, “The results imply that the models are exaggerating climate sensitivity.”
Estimates of climate sensitivity have declined since and gradually approach zero. A recent paper by Spencer claims the “…climate system is only about half as sensitive to increasing CO2 as previously believed.”
8. The Ice Cores Were Critical, but Seriously Flawed
The major assumption of the inferred IPCC hypothesis is that a CO2 increase causes a temperature increase. After the publication of Petit et al. in 1999, the Antarctic ice core records appeared as evidence in the 2001 Report (Figure 5).
Figure 5. Antarctic ice core record
Four years later, research showed the reverse: temperature increase preceded CO2 increase, contradicting the hypothesis. The finding was sidelined with the diversionary claim that the lag was between 80 and 800 years and therefore insignificant. It was troubling enough that Al Gore used deceptive imagery in his movie. Only a few experts noticed.
Actually, temperature changes before CO2 change in every record for any period or duration. Figure 6 shows a shorter record (1958-2009) of the relationship. If CO2 change follows temperature change in every record, why are all computer models programmed with the opposite relationship?
Figure 6. Lag time for a short record, 1958 to 2009.
IPCC Needed Low Pre-Industrial CO2 Levels
A pre-industrial CO2 level lower than today's was critical to the IPCC hypothesis. It was like the need to eliminate the Medieval Warm Period, which showed the world is not warmer today than ever before.
Ice cores are not the only source of pre-industrial CO2 levels. There are thousands of direct measures of atmospheric CO2 from the 19th century, beginning in 1812. Scientists took precise measurements with calibrated instruments, as Ernst Beck thoroughly documented.
In a statement submitted to a Hearing of the US Senate Committee on Commerce, Science, and Transportation, Professor Zbigniew Jaworowski stated,
“The basis of most of the IPCC conclusions on anthropogenic causes and on projections of climatic change is the assumption of low level of CO2 in the pre-industrial atmosphere. This assumption, based on glaciological studies, is false.”[1]
Of equal importance Jaworowski states,
The notion of low pre-industrial CO2 atmospheric level, based on such poor knowledge, became a widely accepted Holy Grail of climate warming models. The modelers ignored the evidence from direct measurements of CO2 in atmospheric air indicating that in 19th century its average concentration was 335 ppmv[11] (Figure 2). In Figure 2 encircled values show a biased selection of data used to demonstrate that in 19th century atmosphere the CO2 level was 292 ppmv[12]. A study of stomatal frequency in fossil leaves from Holocene lake deposits in Denmark, showing that 9400 years ago CO2 atmospheric level was 333 ppmv, and 9600 years ago 348 ppmv, falsify the concept of stabilized and low CO2 air concentration until the advent of industrial revolution [13].
There are other problems with the ice core record. It takes years for air to be trapped in the ice, so what is actually trapped and measured? Meltwater moving through the ice, especially when the ice is close to the surface, can contaminate the bubbles. Bacteria form in the ice, releasing gases even in 500,000-year-old ice at considerable depth (“Detection, Recovery, Isolation and Characterization of Bacteria in Glacial Ice and Lake Vostok Accretion Ice,” Brent C. Christner, 2002 dissertation, Ohio State University). The pressure of overlying ice causes a change below 50 m: brittle ice becomes plastic and begins to flow, and the layers formed with each year of snowfall gradually disappear with increasing compression. It requires a considerable depth of ice over a long period to obtain a single reading at depth. Jaworowski identified the problems of contamination and losses during the drilling and core recovery process.
Jaworowski's claim that the modellers ignored the 19th century readings is incorrect. They knew about them, because T.M.L. Wigley introduced the 19th century readings to the climate science community in 1983 (Wigley, T.M.L., 1983, “The pre-industrial carbon dioxide level,” Climatic Change 5, 315–320). However, he cherry-picked from a wide range, eliminating only the high readings and ‘creating’ a pre-industrial level of approximately 270 ppm. I suggest this is what influenced the modellers, because Wigley was working with them as Director of the Climatic Research Unit (CRU) at East Anglia. He preceded Phil Jones as Director and was the key person directing the machinations revealed by the leaked CRU emails.
Wigley was not the first to misuse the 19th century data, but he did reintroduce them to the climate community. Guy Stewart Callendar, a British steam engineer, pushed the thesis that increasing CO2 was causing warming. He did what Wigley later did: he selected only those readings that supported the hypothesis.
There are 90,000 samples from the 19th century, and the graph shows those carefully selected by G. S. Callendar to achieve his estimate. It is clear he chose only low readings.
Figure 7. (After Jaworowski; trend lines added.)
Comparing the selected data with the entire record shows how the slope and trend change.
Ernst-Georg Beck confirmed Jaworowski's research. An article in Energy and Environment examined the readings in great detail and validated these findings. In his conclusion Beck states,
Modern greenhouse hypothesis is based on the work of G.S. Callendar and C.D. Keeling, following S. Arrhenius, as latterly popularized by the IPCC. Review of available literature raise the question if these authors have systematically discarded a large number of valid technical papers and older atmospheric CO2 determinations because they did not fit their hypothesis? Obviously they use only a few carefully selected values from the older literature, invariably choosing results that are consistent with the hypothesis of an induced rise of CO2 in air caused by the burning of fossil fuel.
The pre-industrial level was some 50 ppm higher than the level claimed.
Beck found,
“Since 1812, the CO2 concentration in northern hemispheric air has fluctuated exhibiting three high level maxima around 1825, 1857 and 1942 the latter showing more than 400 ppm.”
The challenge for the IPCC was to create a smooth transition from the ice core CO2 levels to the Mauna Loa levels. Beck shows how this was done, but also how the 19th century readings had to be cherry-picked to fit with the ice core and Mauna Loa data (Figure 8).
Figure 8
Variability is extremely important, because the ice core record's exceptionally smooth curve is achieved by applying a 70-year smoothing average. Selection and smoothing are also applied to the Mauna Loa data and all current atmospheric readings, which naturally vary by up to 600 ppm in the course of a day. Smoothing on the scale of the ice core record eliminates a great deal of information; consider the variability of temperature data over the last 70 years. Statistician William Briggs says you should never, ever smooth a time series. Eliminating the high readings before smoothing makes the losses greater. Beck explains how Charles Keeling established the Mauna Loa readings by using the lowest afternoon readings and ignoring natural sources. Beck presumes Keeling decided to avoid these low-level natural sources by establishing the station at 4,000 m up the volcano. As Beck notes,
“Mauna Loa does not represent the typical atmospheric CO2 on different global locations but is typical only for this volcano at a maritime location in about 4000 m altitude at that latitude.” (Beck, 2008, “50 Years of Continuous Measurement of CO2 on Mauna Loa,” Energy and Environment, Vol. 19, No. 7.)
Keeling's son operates Mauna Loa and, as Beck notes, “owns the global monopoly of calibration of all CO2 measurements.” He is a co-author of the IPCC reports, which accept Mauna Loa and all other readings as representative of global levels.
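The information loss from heavy smoothing described above can be illustrated with a synthetic series. The data below are purely illustrative, not actual CO2 measurements:

```python
import random

random.seed(0)  # reproducible synthetic data
# Toy annual CO2-like series: a slow upward trend plus large year-to-year swings.
series = [280 + 0.02 * t + random.uniform(-15, 15) for t in range(700)]

def moving_average(xs, window):
    """Simple trailing moving average: one value per full window."""
    return [sum(xs[i:i + window]) / window for i in range(len(xs) - window + 1)]

smooth = moving_average(series, 70)  # 70-year smoothing, as claimed for the ice cores
raw_range = max(series) - min(series)
smooth_range = max(smooth) - min(smooth)
print(round(raw_range, 1), round(smooth_range, 1))
```

Whatever one concludes about the ice cores themselves, the sketch shows the general point: a 70-point average suppresses short-lived excursions, so the smoothed range is far smaller than the raw range.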
As a climatologist I know it is necessary to obtain as many independent verifications of data as possible. Stomata are small openings on leaves that vary in size directly with the amount of atmospheric CO2. They underscore the effects of smoothing and the artificially low readings of the ice cores. A comparison of a stomata record with the ice core record for a 2000-year period (9000–7000 BP) illustrates the issue (Figure 9).
Figure 9.
The stomata data show higher readings and greater variability than the excessively smoothed ice core record. They align quantitatively with the 19th century measurements, as Jaworowski and Beck assert. The average level for the ice core record shown is approximately 265 ppm, while it is approximately 300 ppm for the stomata record.
The pre-industrial CO2 level was marginally lower than current levels and likely within the error factor. Neither it nor the present IPCC figure of 400 ppm is high relative to the geologic record. The entire output of computer climate models begins with the assumption that pre-industrial levels were measurably lower. Eliminating this assumption further undermines the claim that the warming in the industrial era was due to human addition of CO2 to the atmosphere. Combine this with the assumption that CO2 causes temperature increase, when all records show the opposite, and it is not surprising that IPCC predictions of temperature increase are consistently wrong.
The IPCC deception was premeditated under Maurice Strong's guidance to prove CO2 was causing global warming as a pretext for shutting down industrialized nations. They partially achieved their goal, as alternate energies and green-job economies attest. All this occurred as contradictory evidence mounted, because Nature refused to play: CO2 increases as temperatures decline, which according to IPCC science cannot happen. Politicians must deal with the facts and abandon all policies based on claims that CO2 is a problem, especially those already causing damage.
Source: The Global Warming Policy Foundation: CCNet 14/10/13
[1] “Climate Change: Incorrect information on pre-industrial CO2,” statement written for the Hearing before the US Senate Committee on Commerce, Science, and Transportation by Professor Zbigniew Jaworowski, March 19, 2004.
Dumb Scientyist says:
“So, all those organizations are corrupt and riding the climate gravy train…”
Yes, as Prof. Richard Lindzen has shown. The rank and file are not corrupt, but the few individuals on the governing boards are bought-and-paid-for activists.
We have been over this time and time again here. It gets tedious educating every newbie who comes along. Do an archive search, and get up to speed.
milodonharlani says:
November 18, 2013 at 5:29 pm
The equable warmth of these periods or epochs derives from the positions of the continents & heating of the oceans by volcanic activity, not from the gases incidentally injected into the air by these processes.
================
I agree that the positions of the continents over very long timescales (longer than a million years) have significant effects on ocean circulation and thus on the climate. But could you please link to a paper showing that direct oceanic heating by volcanic activity significantly affects climate during the epochs in question?
Note that the PETM was much too rapid to be explained by continental drift. It was a geologically brief spike of only ~200,000 years:
http://en.wikipedia.org/wiki/File:65_Myr_Climate_Change.png
================
The PETM impact hypothesis as you also may know has recently received new support:
http://www.pnas.org/content/110/2/425
================
Thanks, I hadn’t heard of that paper. I don’t yet see the connection to the impact hypothesis, but I did notice that it was cited by another recent paper which suggests the PETM CO2 excursion happened in just a few decades:
http://www.pnas.org/content/110/40/15908.short
Dumb Scientist says:
November 18, 2013 at 6:50 pm
Re tectonics, the PETM occurred during a period of more rapid than normal seafloor spreading, as during the Cretaceous, leading to ocean heating. A 200,000 year increase in volcanism could have caused the observed deep sea T increase. Also, the positions of the continents then could have amplified any such event, since the Southern Ocean now important in deep oceanic circulation & continental climates hadn’t yet developed.
At the very least, it’s far too soon to conclude that some increase in GHGs explain the PETM, let alone draw conclusions therefrom to justify dismantling the energy economy upon which seven billion people rely.
Re the relevance of the cited paper to the impact hypothesis: the inclusions it found have mysterious provenance, arguably extra-terrestrial.
milodonharlani says:
November 18, 2013 at 6:58 pm
Re tectonics, the PETM occurred during a period of more rapid than normal seafloor spreading, as during the Cretaceous, leading to ocean heating. A 200,000 year increase in volcanism could have caused the observed deep sea T increase. Also, the positions of the continents then could have amplified any such event, since the Southern Ocean now important in deep oceanic circulation & continental climates hadn’t yet developed.
=========================
Citation?
=========================
At the very least, it’s far too soon to conclude that some increase in GHGs explain the PETM, let alone draw conclusions therefrom to justify dismantling the energy economy upon which seven billion people rely.
=========================
The two papers I linked (and many others) concluded that the PETM was caused by an increase in GHGs. I’ve explained starting on November 17, 2013 at 8:10 pm that I’m not trying to dismantle the energy economy. I’m trying to keep the fossil fuel industry from dismantling every other part of the economy by charging them for their pollution the same way all other businesses pay for waste disposal.
Dumb Scientist says:
November 18, 2013 at 7:15 pm
You’re most welcome on the linked paper.
Citation? Any paleogeographic map showing that Antarctica wasn’t yet separated by deep ocean channels from South America & Australia. I could link to many, as you must know. Ditto the level of submarine volcanism, as seafloor spreading was higher then than now, although less than during the Cretaceous, when thermal expansion of the oceans was probably at a Phanerozoic high. I suspect you could find papers on deep ocean heating at the PETM as easily as I could. Some obligatorily genuflect toward the One True Gas, but others allow for alternative explanations. But in any case, the oceans weren’t as hot at the PETM as during the steamiest interval of the Cretaceous, at which time they were almost literally scalding in the tropics, leading to fewer biological cloud condensation nuclei, a positive feedback. No magic gas need apply.
CO2 is not pollution. It is beneficial plant food, the salubrious effects of which are evident in the greening of the planet & great increase in yields of food & fiber crops since WWII.
Your citations show no such thing. There is zero evidence that a major CO2 increase from already high (by Cenozoic standards) levels preceded the PETM, let alone caused it. Indeed, if you want to talk physics, Arrhenius showed that the warming effect of CO2, even in the lab, let alone the atmosphere, is logarithmic, so the supposed five degree C rapid increase could not have been due to CO2. Nor is there evidence to support CH4 or any other GHG, except possibly H2O. Just supposition, i.e. a WAG motivated by ideology, not science.
milodonharlani says:
November 18, 2013 at 7:34 pm
Citation? Any paleogeographic map…
============================
Citation? For the claim that direct oceanic heating by volcanic activity significantly affects climate during the epochs in question? To the extent that it could explain a 5C global warming spike in only ~200,000 years? And a carbon isotope excursion that indicates organic carbon far in excess of what can be explained by ocean outgassing using Henry’s Law?
============================
CO2 is not pollution. It is beneficial plant food, the salubrious effects of which are evident in the greening of the planet & great increase in yields of food & fiber crops since WWII.
============================
Farming practices and technology have also improved greatly since WWII. As I explained in my untouchable article, CO2 fertilization is a real beneficial effect, but it is overwhelmed by negative effects like more droughts and wildfires. For instance, rice grows 10% less with every 1.8°F of night-time warming:
http://www.pnas.org/content/101/27/9971.full
============================
Arrhenius showed that the warming effect of CO2 even in the lab, let alone the atmosphere, is logarithmic, so the supposed five degree C rapid increase could not have been due to CO2.
============================
As you say, for over a century scientists have understood that GHG surface warming on Earth depends approximately on the logarithm of GHG concentrations. Here’s a review of multiple studies over the last 65 Myrs (including the PETM) which reveals a climate sensitivity of 2.2K – 4.8K per doubling of atmospheric CO2:
http://www.nature.com/nature/journal/v491/n7426/full/nature11574.html
============================
… the supposed five degree C rapid increase could not have been due to CO2. Nor is there evidence to support CH4 or any other GHG, except possibly H2O. Just supposition, ie WAG motivated by ideology, not science.
============================
The papers I linked seem to agree that scientists don’t know if the GHG was CO2 or CH4 (methane), and the exact source, quantities and release rates are still being disputed. But it’s certainly not due to H2O. That’s impossible. A significant fraction of emitted CO2 stays in the atmosphere for centuries, and methane takes about ten years to oxidize into CO2.
Increasing the concentration of CO2 or methane can and has warmed the climate.
But H2O quickly rains out of the atmosphere in just a few weeks if its concentration gets too high, or evaporates from the oceans if its concentration gets too low.
Increasing the concentration of H2O by itself can’t warm the climate because the equilibrium concentration of H2O is essentially predetermined by all the other factors which determine sea surface temperatures. Water vapor is a feedback, not a forcing, so it can’t possibly have caused the PETM.
Dumb Scientist:
The equilibrium climate sensitivity is not a scientific or logical concept, as the equilibrium temperature is not an observable feature of the real world.
Dumb Scientist says:
November 18, 2013 at 9:38 pm
Actually, sir, no one knows how long CO2 molecules stay in the atmosphere. It’s an important area of investigation. But we’re pretty sure it isn’t centuries.
milodonharlani says:
November 18, 2013 at 9:50 pm
Actually, sir, no one knows how long CO2 molecules stay in the atmosphere. It’s an important area of investigation. But we’re pretty sure it isn’t centuries.
======================
“[19] The carbon cycle of the biosphere will take a long time to completely neutralize and sequester anthropogenic CO2. We show a wide range of model forecasts of this effect. For the best guess cases, which include air/seawater, CaCO3, and silicate weathering equilibria as affected by an ocean temperature feedback, we expect that 17–33% of the fossil fuel carbon will still reside in the atmosphere 1 kyr from now, decreasing to 10–15% at 10 kyr, and 7% at 100 kyr. The mean lifetime of fossil fuel CO2 is about 30–35 kyr.”
“[20] A mean atmospheric lifetime of order 10^4 years is in stark contrast with the “popular” perception of several hundred year lifetime for atmospheric CO2. In fairness, if the fate of anthropogenic carbon must be boiled down into a single number for popular discussion, then 300 years is a sensible number to choose, because it captures the behavior of the majority of the carbon. A single exponential decay of 300 years is arguably a better approximation than a single exponential decay of 30,000 years, if one is forced to choose. However, the 300 year simplification misses the immense longevity of the tail on the CO2 lifetime, and hence its interaction with major ice sheets, ocean methane clathrate deposits, and future glacial/interglacial cycles. …”
http://geosci.uchicago.edu/~archer/reprints/archer.2005.fate_co2.pdf
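The "long tail" Archer describes can be illustrated with a toy sum-of-exponentials decay. The pool fractions and e-fold times below are illustrative assumptions loosely shaped to Archer's quoted percentages, not his fitted values; the point is only that a single 300-year exponential misses the slow tail:

```python
import math

# Toy multi-timescale decay of a CO2 pulse, in the spirit of the
# Bern-type fits Archer discusses. Pool fractions and e-fold times
# here are illustrative assumptions, not Archer's fitted values.
POOLS = [
    (0.25, 10.0),     # fast ocean-surface uptake
    (0.30, 70.0),     # thermocline mixing
    (0.25, 300.0),    # deep-ocean equilibration
    (0.20, 35000.0),  # CaCO3/silicate weathering tail
]

def airborne_fraction(t_years):
    """Fraction of an initial CO2 pulse still airborne after t years."""
    return sum(f * math.exp(-t_years / tau) for f, tau in POOLS)

for t in (100, 1000, 10000):
    # multi-pool decay vs. a single 300-year exponential
    print(t, round(airborne_fraction(t), 3), round(math.exp(-t / 300), 3))
```

After 1,000 years the single 300-year exponential has decayed to a few percent, while the multi-pool sum still holds roughly a fifth of the pulse in the slow weathering pool, which is the contrast Archer's paragraph [20] is drawing.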
Dumb Scientist says:
November 18, 2013 at 3:45 pm
No, the rest isn’t models. It’s evidence from the ancient climate and fundamental physics. For instance, over at least the past 420 million years, CO2 has acted as a greenhouse gas which warmed the long-term climate by 1.5C to 6.2C per doubling of CO2:
http://www.nature.com/nature/journal/v446/n7135/full/nature05699.html
It is near impossible to separate the warming influence on CO2 levels from the influence of increasing CO2 levels on temperature during a glacial-interglacial transition, as there is a huge overlap between the two. But the detailed measurements at Epica Dome C for the last transition already give a clue:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/epica5.gif
be it that the error margin is still too huge to be certain.
Further, there is a huge shift in timing of CO2 vs. T and CH4 at the end of the previous interglacial: T and CH4 were already at a new minimum (and ice sheets at a new maximum) when CO2 levels started to drop, without a measurable effect on T or ice sheets:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/eemian.gif
which points to little effect of CO2 on T.
d18O measured in N2O is a reverse measure for ice sheet formation, here reversed to show ice sheet extent.
Ironically, the only way to determine the ~0.9K bare no-feedbacks sensitivity is to use a model. The only climate sensitivity that ancient climate data can tell us about is the sensitivity in the real world.
Sorry, the “model” used to estimate the temperature effect of 2xCO2 is based on the direct absorbance of CO2 over the complete column of air (70 km) as calculated by Hitran and Modtran, based on the line-by-line spectrum of CO2 and other GHGs in the atmosphere at different pressures (as measured in laboratories). Nothing to do with the speculation by GCMs…
Dumb Scientist says:
November 18, 2013 at 10:46 pm
we expect that 17– 33% of the fossil fuel carbon will still reside in the atmosphere 1 kyr from now, decreasing to 10– 15% at 10 kyr, and 7% at 100 kyr.
That is the Bern model. But the Bern model may only have merit if 3000-5000 GtC is released by humans: all available oil and gas and lots of coal. Then we have a saturation of the deep oceans, and other, much slower mechanisms are needed to remove the excess CO2.
Meanwhile, the deep oceans are far from saturated and still absorb a large part of human emissions. Of the ~9 GtC/yr emitted by humans (as mass, not individual molecules):
~0.5 GtC/yr goes into the ocean surface with a decay rate of 1-3 years (which saturates the ocean surface – the buffer/Revelle factor)
~1 GtC/yr goes into more permanent storage by vegetation, where no saturation is in sight.
~3 GtC/yr goes into the deep oceans, where no saturation is in sight.
The latter now has ~300 GtC human emissions stored, or less than 1% of the deep ocean carbon content. If that all gets into equilibrium with the atmosphere after thousands of years, that means a very long term increase of 3 ppmv CO2 in the atmosphere. That is all.
The error the Bern model makes is applying, to current emissions, decay rates that would only hold for an enormous emission of CO2 in the far future. There is no reason to expect a saturation of the deep oceans for the next centuries, and no saturation at all of the storage in vegetation, which is in fact unlimited for CO2 (but limited by other necessities).
That means that, besides the ocean surface, which is readily saturated, the deep oceans and vegetation are the current (near) unlimited sinks, and the slower sinks play no significant role at all.
The current sink rate is ~4.5 GtC/yr with an excess in the atmosphere of ~230 GtC (110 ppmv) above equilibrium; the e-fold decay time is 230/4.5 or ~51 years, a half-life of ~35 years.
That there is no reduction in sink rate over time is clear from the “airborne fraction” over time:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/acc_co2.jpg
or recent:
http://www.ferdinand-engelbeen.be/klimaat/klim_img/acc_co2_1960_cur.jpg
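Engelbeen's back-of-envelope estimate is easy to reproduce. Here is a sketch assuming his round numbers (~230 GtC excess, ~4.5 GtC/yr net sink) and a simple single-reservoir linear decay, which is the model his division implicitly assumes:

```python
import math

# Back-of-envelope single-reservoir decay sketched in the comment above.
# Assumed round numbers: ~230 GtC (~110 ppmv) excess CO2 above the
# pre-industrial equilibrium, removed at a net ~4.5 GtC/yr.
excess_gtc = 230.0      # disturbance above equilibrium (GtC)
net_sink_gtc_yr = 4.5   # observed net uptake (GtC/yr)

# If the sink stays proportional to the excess, the excess decays
# exponentially with this e-fold time:
e_fold_years = excess_gtc / net_sink_gtc_yr
half_life_years = e_fold_years * math.log(2)

print(round(e_fold_years, 1))     # ~51.1 years
print(round(half_life_years, 1))  # ~35.4 years
```

The key contrast with the Bern model is structural: this is one exponential with one time constant, whereas the Bern fit is a sum of several exponentials plus a long tail.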
Dumb Scientist says:
November 18, 2013 at 3:45 pm
“No, the rest isn’t models. It’s evidence from the ancient climate and fundamental physics. For instance, over at least the past 420 million years, CO2 has acted as a greenhouse gas which warmed the long-term climate by 1.5C to 6.2C per doubling of CO2: “
http://www.nature.com/nature/journal/v446/n7135/full/nature05699.html
——————–
Dumb Scientist, your cited reference explicitly states, to wit:
“Here we estimate long-term equilibrium climate sensitivity by modelling carbon dioxide concentrations over the past 420 million years and comparing our calculations with a proxy record. Our estimates are broadly consistent with estimates based on short-term climate records, and indicate that a weak radiative forcing by carbon dioxide is highly unlikely on multi-million-year timescales. We conclude that a climate sensitivity greater than 1.5 °C has probably been a robust feature of the Earth’s climate system over the past 420 million years, regardless of temporal scaling.”
Dumb Scientist, ….. DUH, ….. “…..we estimate ….by modeling ….comparing our calculations …Our estimates are broadly consistent with …and indicate that ….is highly unlikely ….We conclude that …has probably been a robust feature …”
What you have therein, Dumb Scientist, is nothing more than “weasel-worded” rhetoric being touted as a means of justifying the author’s “junk science” CAGW agenda.
Dumb Scientist, … here is the “proxy record” graph the author made reference to, to wit:
Paleo historic graph of atmospheric CO2 and temperatures
http://www.biocab.org/Geological_Timescale.jpg
And here is the abstract source for the above graph, to wit:
http://www.biocab.org/carbon_dioxide_geological_timescale.html
And Dumb Scientist, … for your quest of general knowledge on paleoclimate I suggest you start by reading the follow article, to wit:
Climate and the Carboniferous Period
“Late Carboniferous to Early Permian time (315 mya — 270 mya) is the only time period in the last 600 million years when both atmospheric CO2 and temperatures were as low as they are today (Quaternary Period ).” http://www.geocraft.com/WVFossils/Carboniferous_climate.html
==============
Dumb Scientist says:
November 18, 2013 at 4:48 pm
“As before, Henry’s Law just won’t allow 5C of warming to release that much CO2”
OH WOW, …. have you told Budweiser and Coors about that ….. because they have a really big problem trying to keep the CO2 “bottled up” until a person is ready to drink their beer?
Ferdinand Engelbeen says:
November 19, 2013 at 2:31 am
It is near impossible to separate the warming influence on CO2 levels from the influence of increasing CO2 levels on temperature during a glacial-interglacial transition, as there is a huge overlap between the two. But the detailed measurements at Epica Dome C for the last transition already give a clue:
=======================
Your interesting claims conflict with the available peer-reviewed literature, which have studied glacial-interglacial transitions and concluded that CO2 has acted as a greenhouse gas with a climate sensitivity of 2.2K – 4.8K per doubling of atmospheric CO2:
http://www.nature.com/nature/journal/v491/n7426/full/nature11574.html
=======================
Ironically, the only way to determine the ~0.9K bare no-feedbacks sensitivity is to use a model. The only climate sensitivity that ancient climate data can tell us about is the sensitivity in the real world.
Sorry, the “model” used to estimate the temperature effect of 2xCO2 is based on the direct absorbance of CO2 over the complete column of air (70 km) as calculated by Hitran and Modtran, based on the line-by-line spectrum of CO2 and other GHGs in the atmosphere at different pressures (as measured in laboratories). Nothing to do with the speculation by GCMs…
=======================
Hitran and Modtran are models which calculate the temperature effect of 2xCO2 while pretending that everything else stays constant. As the planet warms, water vapor doesn’t evaporate, ice sheets don’t melt, the permafrost doesn’t decompose. The bare no-feedbacks sensitivity only exists in models.
On the other hand, paleoclimate data tell us about the real-world “Earth system sensitivity” which includes all these real feedbacks.
=======================
Ferdinand Engelbeen says:
November 19, 2013 at 3:09 am
That is the Bern model. But the Bern model may only have merit if 3000-5000 GtC is released by humans…
=======================
Feel free to link to a peer-reviewed paper giving a CO2 lifetime significantly lower than the one I linked.
Dumb Scientist:
The equilibrium climate sensitivity (TECS) is backed out of paleoclimate data through Bayesian parameter estimation. Required for this process is a prior probability density function. This function suffers from a lack of uniqueness with consequential violation of the law of non-contradiction.
The posterior probability density function that is generated from an arbitrarily selected prior probability density function is not necessarily a true proposition and the claim that is stated by it is insusceptible to being statistically validated in view of the nonobservability of the equilibrium temperature. Thus, TECS is not a scientifically or logically viable concept.
The alternative to TECS is to build climatological models upon a foundation of independent events, some of them observed. This alternative has yet to be taken by the climatological establishment. While it remains untaken, we will lack a scientific or logical basis for attempting to regulate the climate through controls on carbon dioxide emissions.
Samuel C Cogar says:
November 19, 2013 at 7:54 am
Dumb Scientist, your cited reference explicitly states, to wit: … “…..we estimate ….by modeling ….comparing our calculations …Our estimates are broadly consistent with …and indicate that ….is highly unlikely ….We conclude that …has probably been a robust feature …” What you have therein, Dumb Scientist, is nothing more than “weasel-worded” rhetoric being touted as a means of justifying the author’s “junk science” CAGW agenda.
=======================
On November 14, you said “One can actually trust the “fossilized stomata record” because those plants were actually there at the time ….. and were “measuring” the CO2 ppm that was available for their use.”
So we have observations of CO2 concentrations over time, from fossilized stomata and ice cores, etc. More recently, we have observations of solar/volcanic/etc. forcing, and observations of surface temperatures and observations of ocean heat content.
1. Can these observations be used to learn about the climate?
If you don’t think so, then science is impossible.
2. Is there any way to use these observations to learn about the climate without using equations or physics (i.e. a model)?
No. Science = models. Anyone who doesn’t like models doesn’t like science. The observations we have are useless without physics.
Terry Oldberg says:
November 19, 2013 at 8:56 am
The equilibrium climate sensitivity (TECS) is backed out of paleoclimate data through Bayesian parameter estimation. Required for this process is a prior probability density function. This function suffers from a lack of uniqueness with consequential violation of the law of non-contradiction.
==================
The default prior is uniform for all climate sensitivities. I wish you luck in publishing your new climate model.
Dumb Scientist:
Thanks for authoring the stimulating talking points and for taking the time to respond to my message. According to AR4, two prior PDFs are in common use by global warming climatologists. One is uniform. As I recall, the other is normally distributed. Neither function is logically or scientifically justified.
As the uniform distribution function is non-informative, in the circumstance in which it is also unique it is logically justified. In the situation in which it is not unique there is a logically objectionable violation of the law of non-contradiction. In relation to question of the posterior PDF of the equilibrium climate sensitivity, the uniform prior lacks uniqueness.
By the way, while I favor a logically justified and scientific method of construction for those climatological models that are used in regulating greenhouse gas emissions and though we lack such models, currently there is not a market for such a model. Thus, I am not about to build one. In climatology a variant of Gresham’s law is operative in which bad models drive out good ones.
I see that the Dumb Scientist is getting a much-needed education. Maybe some of the nonsense he’s picked up from SkS and similar blogs will be deconstructed by the knowledge available here.
Dumb Scientist says:
November 18, 2013 at 3:45 pm
“Where’s the wormhole that’s hiding the 30 billion metric tons of CO2 pollution we emit each year?”
————————
Now we can assume the above figure is correct in that humans are CURRENTLY emitting 30 billion metric tons of CO2 pollution into the atmosphere each year …… but it would be foolish and/or asinine to assume they have been doing the same for the past 55 years, since 1958, which “marks” the start of the Mauna Loa ‘Keeling Curve’ atmospheric CO2 record.
Now the Mauna Loa data that is plotted on the Keeling Curve graph shows a STEADY and CONSISTENT ….. 1 ppm to 2 ppm yearly increase in atmospheric CO2 for each of said 55 years. Which, by the way, was many years before anyone decided to calculate human emissions of CO2 from their burning of fossil fuels.
=========
Then: Dumb Scientist says:
November 18, 2013 at 10:46 pm
“…… we expect that 17– 33% of the fossil fuel carbon will still reside in the atmosphere 1 kyr from now, decreasing to 10– 15% at 10 kyr, and 7% at 100 kyr. The mean lifetime of fossil fuel CO2 is about 30– 35 kyr.”
———————-
SO, they EXPECT that much (17– 33%) to remain resident in the atmosphere …. year after year, HUH?
If we do the math ….. then 17% to 33% of 30 billion metric tons is equal to 5.1 to 9.9 billion tons of CO2 that is remaining resident in the atmosphere …. year after year.
And if we also do the math, given the average mass of the atmosphere and the current CO2 at ~396 ppm ….. then each one (1) ppm of CO2 in the atmosphere is equal to about 5 billion metric tons of CO2.
OH MY, MY, …. what a coincidence that is, …. that 1 ppm to 2 ppm yearly increase of CO2 on the KC graph …… is equivalent to …… the aforesaid EXPECTED 17% to 33% of human emissions that is remaining resident in the atmosphere …. year after year. And Mother Nature doesn’t like coincidences, ya know.
The maximum CO2 ppm in 1960 was 320.03 which was an increase of 1.74 ppm from 1959.
And a 1.74 ppm increase in CO2 in 1959 is equal to 8.7 billion tons of CO2, …. or 29% of the 30 billion tons of human emissions.
Only one problem, humans were not emitting 30 billion metric tons of CO2 each year during the 1950’s and 1960’s due to their burning of fossil fuels.
cheers
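The ppm-to-mass conversion underlying this arithmetic can be checked directly. Here is a sketch using standard assumed constants (dry-air mass ≈ 5.15×10^18 kg, molar masses 44.01 and 28.97 g/mol); note that the commonly cited result is ≈7.8 Gt CO2 (≈2.13 GtC) per ppmv, somewhat above the ~5 Gt figure used above:

```python
# Standard ppmv <-> mass conversion for atmospheric CO2, as a check
# on the arithmetic above. Assumed constants: dry-air mass ~5.15e18 kg,
# molar masses 44.01 g/mol (CO2) and 28.97 g/mol (dry air).
M_ATM_KG = 5.15e18
M_CO2 = 44.01
M_AIR = 28.97

def gt_co2_per_ppmv():
    """Mass of CO2 (Gt) corresponding to 1 ppmv in the atmosphere."""
    return M_ATM_KG * 1e-6 * (M_CO2 / M_AIR) / 1e12  # kg -> Gt

def gtc_per_ppmv():
    """Same quantity expressed as Gt of carbon (12.01/44.01 of the mass)."""
    return gt_co2_per_ppmv() * 12.01 / M_CO2

print(round(gt_co2_per_ppmv(), 2))  # ~7.82 Gt CO2 per ppmv
print(round(gtc_per_ppmv(), 2))     # ~2.13 GtC per ppmv
```

Keeping "GtC" and "Gt CO2" straight matters throughout this thread, since the two differ by the 44/12 mass ratio.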
Samuel C Cogar says:
November 19, 2013 at 11:19 am
“…… we expect that 17– 33% of the fossil fuel carbon will still reside in the atmosphere 1 kyr from now, decreasing to 10– 15% at 10 kyr, and 7% at 100 kyr. The mean lifetime of fossil fuel CO2 is about 30– 35 kyr.”
———————-
SO, they EXPECT that much (17– 33%) to remain resident in the atmosphere …. year after year, HUH? …. what a coincidence that is, …. that 1 ppm to 2 ppm yearly increase of CO2 on the KC graph …… is equivalent to …… the aforesaid EXPECTED 17% to 33% of human emissions that is remaining resident in the atmosphere …. year after year. And Mother Nature doesn’t like coincidences, ya know.
==============================
To repeat, “17– 33% of the fossil fuel carbon will still reside in the atmosphere 1 kyr from now“. Around the year 3000, in other words. Applying this fraction to the 1900s implies that you’re referring to a hypothetical CO2 pulse in the year ~900. Is that the case?
==============================
The maximum CO2 ppm in 1960 was 320.03 which was an increase of 1.74 ppm from 1959. And a 1.74 ppm increase in CO2 in 1959 is equal to 8.7 billion tons of CO2, …. or 29% of the 30 billion tons of human emissions. Only one problem, humans were not emitting 30 billion metric tons of CO2 each year during the 1950’s and 1960’s due to their burning of fossil fuels.
==============================
NOAA calculates the CO2 growth rate from 1959-1960 at 0.94 ppm/year, not 1.74.
http://www.esrl.noaa.gov/gmd/ccgg/trends/
Dumb Scientist says:
“Science = models.”
And that, folks, is why ‘Dumb Scientist’ is so deluded. He believes computer models over empirical observations.
Correction:
Science = testable, empirical [real world] evidence.
The IPCC’s models are always wrong. GISS models are always wrong: GISS dishonestly alters the historical record.
So who should we believe? The real world? Or those always-wrong models, and the NASA/GISS-diddled temperature record?
Dumb Scientist says:
November 19, 2013 at 8:41 am
“So we have observations of CO2 concentrations over time, from fossilized stomata … ”
————————
Absolutely NOT, … those were not “observations over time”. They are/were direct measurements of “date and atmospheric CO2 ppm” at the time said fossilized plants were experiencing new growth of their foliage.
Dumb Scientist, the fossil record of plants and their evolutionary taxonomy is probably 20 times more extensive than the fossil record of animals.
And that is what makes it possible to use “pollen grains” to date archeological sites and animal fossils.
“No. Science = models. Anyone who doesn’t like models doesn’t like science. The observations we have are useless without physics.”
Dumb Scientist, cease with your obfuscations. And don’t be lecturing me on what is or isn’t science, I’m from the old school when actual, factual science was being taught. Iffen you require a model to “learn science” …. then you best give it up and select another profession.
And “HA”, Charles Goodyear and Thomas Edison were both “without physics” and they managed to do OK.
Samuel C Cogar says:
November 19, 2013 at 12:29 pm
“So we have observations of CO2 concentrations over time, from fossilized stomata … ”
————————
Absolutely NOT, … those were not “observations over time”. They are/were direct measurements of “date and atmospheric CO2 ppm” at the time said fossilized plants were experiencing new growth of their foliage.
=======================
That’s what I meant. Fossilized stomata directly measured CO2 concentration, and there are many fossilized stomata at different times. So they can tell us what the CO2 concentration was at different points in time. If that’s not what you meant then perhaps we’re talking about different things.
Dumb Scientist says:
November 19, 2013 at 8:33 am
which have studied glacial-interglacial transitions and concluded that CO2 has acted as a greenhouse gas with a climate sensitivity of 2.2K – 4.8K per doubling of atmospheric CO2
and
On the other hand, paleoclimate data tell us about the real-world “Earth system sensitivity” which includes all these real feedbacks.
Yes, I have read the initial report by J. Hansen about the models that can’t go out of an ice age without the help of CO2 and other greenhouse gases.
But what do we know about one of the most important feedbacks: clouds? Hardly anything. 2% change in cloud cover is equivalent to a CO2 doubling… What was the change in cloud cover during a glacial-interglacial transition?
And the feedback of water vapour? Completely absent in current times in the higher troposphere: it is going down, not up. Again cloud feedback is positive in the models, negative in reality.
And last but not least, the sensitivity studies of the last several years all push climate sensitivity down to 1.5 K / 2xCO2, so that even the IPCC lowered the low end of its range.
Feel free to link to a peer-reviewed paper giving a CO2 lifetime significantly lower than the one I linked.
No link necessary: a little process knowledge is sufficient:
The calculation of an e-fold time for a near linear system (which the CO2 budget is) is quite easy: divide the height of the disturbance from the equilibrium by the observed reaction to that disturbance. Which in this case is 230 GtC/yr / 4.5 GtC/yr or about 51 years.
That is the real world, not the modelled world.
The reaction speed didn’t change over 160 years, and there is no sign of saturation (it even seems to accelerate…), except for the ocean surface, which is already saturated at 10% of the change in the atmosphere (the Revelle factor).
The Bern model gives many parallel decay rates in different reservoirs, including a part that stays in the atmosphere forever. The real world shows no sign of saturation for the decay rate in the deep oceans and vegetation. The latter even has no saturation bound (as we still use coal as result of ancient CO2 levels…) and goes on forever, as long as CO2 levels are above equilibrium…
Ferdinand Engelbeen says:
November 19, 2013 at 3:07 pm
… what do we know about one of the most important feedbacks: clouds? Hardly anything. 2% change in cloud cover is equivalent to a CO2 doubling… What was the change in cloud cover during a glacial-interglacial transition? And the feedback of water vapour? Completely absent in current times in the higher troposphere: it is going down, not up. Again cloud feedback is positive in the models, negative in reality.
=======================
Estimating feedbacks using climate models requires very detailed, high-resolution models. In that case, one needs to ask all the questions you’re asking.
But estimating feedbacks by comparing ancient temperatures to ancient CO2 levels and orbital forcings already has all those feedbacks built into the ancient temperature record. However cloud cover changed during glacial-interglacial transitions, the orbital forcing, CO2 amplification, cloud feedback, water vapor feedback, sea-ice albedo feedback, ice sheet feedback, carbon cycle/permafrost feedback, etc… all added together to produce the ancient temperature record.
That’s why I consider paleoclimate data very informative. All those feedbacks are already in the ancient temperature record. In fact, the PALAEOSENS paper I linked earlier compared the climate sensitivities from climate models to those from paleoclimate data. In order to produce an apples-to-apples comparison, they had to remove feedbacks that were present in the paleodata but weren’t being simulated by the climate models.
http://www.nature.com/nature/journal/v491/n7426/full/nature11574.html
=======================
And last but not least, the sensitivity studies of the last several years all push climate sensitivity down to 1.5 K / 2xCO2, so that even the IPCC lowered the low end of its range.
=======================
The lower limit for the Charney sensitivity has been 1.5 K / 2xCO2 since the 1979 Charney report:
http://www.atmos.ucla.edu/~brianpm/download/charney_report.pdf
If we can agree that the lower limit scientists have been using for over 30 years is reasonable, that’s great news.
Made an error:
230 GtC/yr / 4.5 GtC/yr
is of course
230 GtC / 4.5 GtC/yr, which gives the e-fold decay time of an excess amount of CO2 in years, if no further disturbance occurs.
Dumb Scientist says:
November 19, 2013 at 3:51 pm
The lower limit for the Charney sensitivity has been 1.5 K / 2xCO2 since the 1979 Charney report:
The lower limit of the Charney report and of the IPCC was 1.5 K / 2xCO2 (2 K in the FAR), but with a “best estimate” of 3 K / 2xCO2. The recent empirical evidence is at ~1.8 K “best estimate” with an upper limit of ~3 K and the IPCC doesn’t give a “best estimate” in their latest report.
http://www.earth-syst-dynam-discuss.net/4/785/2013/esdd-4-785-2013.html
That’s why I consider paleoclimate data very informative. All those feedbacks are already in the ancient temperature record.
I too find them very informative. Therefore the end of the Eemian is very interesting, as that shows the impact of CO2 alone, without overlap of temperature and CH4. Which shows that sensitivity for CO2 is very low.
Moreover, it is also a matter of attribution: all models attribute a lot of power to CO2 (including feedbacks), depending on the (human) aerosols – CO2 balance. If the impact of human aerosols is huge, then the sensitivity of temperature to CO2 (including feedbacks) is huge, and vice versa. Models with high and low sensitivity can both reproduce the 1945-1975 stop in warming, but the models with the highest sensitivity are those with the highest deviation from reality today. Yet even the models with the lowest sensitivity are now out of the 95% range. Which means that the sensitivity is anyway below 2 K / 2xCO2. How much lower will be clear from the length of the “pause”…
Ferdinand Engelbeen says:
November 20, 2013 at 12:15 am
The recent empirical evidence is at ~1.8 K “best estimate” with an upper limit of ~3 K and the IPCC doesn’t give a “best estimate” in their latest report.
========================================
Again, if we can agree that the 1.5 K / 2xCO2 lower limit scientists have been using for over 30 years is reasonable, that’s great news.
========================================
I do too find them very informative too. Therefore the end of the Eemian is very interesting, as that shows the impact of CO2 alone, whithout overlap of temperature and CH4. Which shows that sensitivity for CO2 is very low.
========================================
Citation?
“Glacial-to-interglacial climate change leading to the prior (Eemian) interglacial is less ambiguous and implies a sensitivity in the upper part of the above range, i.e. 3–4°C for a 4 W m−2 CO2 forcing.”
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3785813/
That link is for the transition into the Eemian, but I’d be interested in seeing a paper that calculates a “very low” sensitivity for the end of the Eemian.