Analysis of a carbon forecast gone wrong: the case of the IPCC FAR

Reposted from Dr. Judith Curry’s Climate Etc.

Posted on January 31, 2020 by curryja

by Alberto Zaragoza Comendador

The IPCC’s First Assessment Report (FAR) made forecasts or projections of future concentrations of carbon dioxide that turned out to be too high.

From 1990 to 2018, the increase in atmospheric CO2 concentrations was about 25% higher in FAR’s Business-as-usual forecast than in reality. More generally, FAR’s Business-as-usual scenario expected much more forcing from greenhouse gases than has actually occurred, because its forecast for the concentration of said gases was too high; this was a problem not only for CO2, but also for methane and for gases regulated by the Montreal Protocol. This is a key reason FAR’s projections of atmospheric warming and sea level rise have likewise run above observations.

Some researchers and commentators have argued that this means FAR’s mistaken projections of atmospheric warming and sea level rise do not stem from errors in physical science and climate modelling. After all, emissions are for climate models an input, not an output. Emissions depend largely on economic growth, and can also be affected by population growth, intentional emission reductions (such as those implemented by the aforementioned Montreal Protocol), and other factors that lie outside the field of physical science. Under this line of reasoning, it makes no sense to blame the IPCC for failing to predict the right amount of atmospheric warming and sea level rise, because that would be the same as blaming it for failing to predict emissions.

This is a good argument regarding Montreal Protocol gases, as emissions of these were much lower than the IPCC forecasted. However, it’s not true for CO2: the over-forecast in concentrations happened because in FAR’s Business-as-usual scenario over 60% of CO2 emissions remain in the atmosphere, a much higher share than has been observed in the real world. In fact, real-world CO2 emissions were probably higher than forecasted by FAR’s Business-as-usual scenario. The only reason one cannot be sure of this is the great uncertainty around emissions of CO2 from changes in land use. For the rest of CO2 emissions, which chiefly come from fossil fuel consumption and are known with much greater accuracy, there is no question they were higher in reality than projected by the IPCC.

In the article I also show that the error in FAR’s methane forecast is so large that it can only be blamed on physical science – any influence from changes in human behaviour or economic activity is dwarfed by the uncertainties around the methane cycle. Thus, errors or deficiencies in physical science are to blame for the over-estimation in the CO2 and methane concentration forecasts, along with the corresponding over-estimation in forecasts of greenhouse gas forcing, atmospheric warming, and sea level rise. Human emissions of greenhouse gases may indeed be unpredictable, but this unpredictability is not the reason the IPCC’s projections were wrong.

Calculations regarding the IPCC’s First Assessment Report

FAR, released in 1990, made projections according to a series of four scenarios. One of them, Scenario A, was also called Business-as-usual and represented just what the name implies: a world that didn’t try to mitigate emissions of greenhouse gases. In FAR’s Summary for Policymakers, Figure 5 offered projections of greenhouse-gas concentrations out to the year 2100, according to each of the scenarios. Here’s the panel showing CO2:

I’ve digitized the data, and the concentration in the chart rises from 354.8ppm in 1990 to 422.75ppm by 2018; that’s a rise of 67.86 ppm. Please note that slight inaccuracies are inevitable when digitizing, especially with a document like FAR that was first printed, then scanned and turned into a PDF.

For emissions, the Annex to the Summary for Policymakers offers a not-very-good-looking chart; a better version is this one (Figure A.2(a) page 331, the Annex to the whole report):

Some arithmetic is needed here. The concentrations chart is in parts per million (ppm), whereas the emissions chart is in gigatons of carbon (GtC); one gigaton equals a billion metric tons. The molecular mass of CO2 (44) is 3.67 times that of carbon (12). Using C or CO2 as the unit is merely a matter of preference – both measures represent the same thing; figures expressed as C are simply 3.67 times smaller than the same figures expressed as CO2. One ppm of atmospheric CO2 weighs approximately 7.81 gigatons of CO2; if we express emissions as GtC rather than GtCO2, the equivalent figure is 7.81 / 3.67 = 2.13 GtC per ppm.

Under FAR’s Business-as-usual scenario, cumulative CO2 emissions between 1991 and 2018 were 237.61GtC, which is equivalent to 111.55ppm. Since concentrations increased by 67.86ppm, that means 60.8% of CO2 emissions remained in the atmosphere.
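The unit conversion and airborne-fraction arithmetic above can be sketched in a few lines; this is just a check using the figures quoted in the text:

```python
# Unit conversions and FAR's implied airborne fraction, using the text's figures.
GT_CO2_PER_PPM = 7.81                       # one ppm of atmospheric CO2 ~ 7.81 GtCO2
CO2_TO_C = 44.0 / 12.0                      # molecular-mass ratio, ~3.67
GT_C_PER_PPM = GT_CO2_PER_PPM / CO2_TO_C    # ~2.13 GtC per ppm

far_emissions_gtc = 237.61                  # FAR BaU cumulative emissions, 1991-2018
far_emissions_ppm = far_emissions_gtc / GT_C_PER_PPM
far_conc_rise_ppm = 67.86                   # rise digitized from FAR's Figure 5
airborne_fraction = far_conc_rise_ppm / far_emissions_ppm
print(round(far_emissions_ppm, 2), round(airborne_fraction, 3))
```

This reproduces the 111.55 ppm equivalent and the ~60.8% airborne fraction stated above.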

Now, saying that a given percentage of emissions “remained in the atmosphere” is just a way to express what happens in as few words as possible; it’s not a literally correct statement. Rather, all CO2 molecules (whether released by humankind or not) are always being moved around in a very complex cycle: some CO2 molecules are taken up by vegetation, others are released by the ocean into the atmosphere, and so on. There is also some interaction with other gases; for example, methane has an atmospheric lifespan of only a decade or so because it decays into CO2. What matters is that, without man-made emissions, CO2 concentrations would not increase. Whether the CO2 molecules currently in the air are “our” molecules, the same ones that came out of burning fossil fuels, is irrelevant.

And that’s where the concept of airborne fraction comes in. The increase in concentrations of CO2 has always been less than man-made emissions, so it could be said that only a fraction of our emissions remains in the atmosphere. Saying that “the airborne fraction of CO2 is 60%” may be technically incorrect, but it rolls off the keyboard more easily than “the increase in CO2 concentrations is equivalent to 60% of emissions”. And indeed the term is commonly used in the scientific literature.

Anyway, we’ve seen what FAR had to say about CO2 emissions and concentrations. Now let’s see what nature said.

Calculations regarding the real world

Here I use two sources on emissions:

  • BP’s Energy Review 2019, which has data up to 2018.
  • Emission estimates from the Lawrence Berkeley National Laboratory. These are only available until 2014.

BP counts only emissions from fossil fuel combustion: the burning of petroleum, natural gas, other hydrocarbons, and coal. The two sources are in very close agreement on emissions from fossil fuel combustion: for the 1991-2014 period, LBNL’s figures are 1% higher than BP’s. The LBNL numbers also include cement manufacturing, because the chemical reaction used to produce cement releases CO2; I couldn’t find a similarly authoritative source with more recent data for cement.

There is also the issue of flaring, or burning of natural gas by the oil-and-gas industry itself; these emissions are included in LBNL’s total. BP’s report does not feature the word “flaring”, and it seems unlikely they would be included, because BP’s method for arriving at global estimates of emissions is by aggregating national-level data on fossil fuel consumption. Now, I’ll admit I haven’t emailed every country’s energy statistics agency to be sure of the issue, but flared gas is by definition gas that did not reach energy markets; it’s hard to see why national agencies would include this in their “consumption” numbers, and many countries would have trouble even knowing how much gas is being flared. For what it’s worth, according to LBNL’s estimate flaring makes up less than 1% of global CO2 emissions.

For concentrations, I use data from the Mauna Loa Observatory. CO2 concentration in 1990 was 354.39ppm, and by 2014 this had grown to 398.65 (an increase of 44.26ppm). By 2018, concentrations had reached a level of 408.52 ppm, which meant an increase of 54.13 ppm since 1990.

It follows that the airborne fraction according to these estimates was:

  • In 1991-2014, emissions per LBNL were 182.9GtC, which is equivalent to 85.88 ppm. Thus, the estimated airborne fraction was 44.26 / 85.88 = 51.5%
  • In 1991-2018, emissions according to BP were 764GtCO2, equivalent to 97.82ppm. We get an airborne fraction of 54.13 / 97.82 = 55.3%
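The two real-world airborne fractions can be reproduced from the figures above; tiny last-digit differences against the quoted intermediate values (e.g. 85.88 ppm) are rounding artifacts of carrying 2.13 GtC/ppm at full precision:

```python
# Real-world airborne fractions, using the LBNL/BP and Mauna Loa figures quoted above.
GT_CO2_PER_PPM = 7.81
GT_C_PER_PPM = 7.81 * 12 / 44     # ~2.13

# 1991-2014: LBNL emissions (GtC) vs Mauna Loa concentration rise (ppm)
lbnl_ppm = 182.9 / GT_C_PER_PPM
af_lbnl = 44.26 / lbnl_ppm

# 1991-2018: BP emissions (GtCO2) vs Mauna Loa concentration rise (ppm)
bp_ppm = 764 / GT_CO2_PER_PPM
af_bp = 54.13 / bp_ppm

print(round(af_lbnl, 3), round(af_bp, 3))
```

Both values (~51.5% and ~55.3%) sit well below FAR's implied 60.8%.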

Unfortunately, there is a kind of emissions that isn’t counted by either LBNL or BP. So total emissions have necessarily been higher than estimated above, and the real airborne fraction has been lower – which is what the next section is about.

Comparison of FAR with observations

This comparison has to start with two words: land use.

Remember what we said about the airborne fraction of CO2: it’s simply the increase in concentrations over a given period, divided by the emissions that took place over that period. If you emit 10 ppm and concentrations increase by 6ppm, then the airborne fraction is 60%. But if you made a mistake in estimating emissions and those had been 12ppm, then the airborne fraction in reality would be 50%.

This is an issue because, while we know concentrations with extreme accuracy, we don’t know emissions nearly that well. In particular, there is great uncertainty around emissions from land use: carbon released and stored due to tree-cutting, agriculture, etc. The IPCC itself acknowledged in FAR that estimates of these emissions were hazy; on page 13 it provided the following emission estimates for the 1980-89 period, expressed in GtC per year:

  • Emissions from fossil fuels: 5.4 ± 0.5
  • Emissions from deforestation and land use: 1.6 ± 1.0

So, even though emissions from fossil fuels were believed to be three-and-a-half times higher than those from land use, in absolute terms the uncertainty around land use emissions was double that around fossil fuels.

(FAR didn’t break down emissions from cement; these were a smaller share of total emissions in 1990 than today, and presumably were lumped in with fossil fuels. By the way, I believe the confidence intervals reflect a 95% probability, but I haven’t found any text in the report actually spelling that out.)

Perhaps there was great uncertainty around land-use emissions back in 1990, but has it since been reduced? The IPCC’s Assessment Report 5 (AR5) is a bit old now (it was published in 2013), but it didn’t look like the uncertainty had been reduced much. More specifically, Table 6.1 of the report gives a 90% confidence interval for land-use CO2 emissions from 1980 to 2011, and the confidence interval is the same in every period: ± 0.8GtC/year.

Still, it’s possible to make some comparisons. Let’s go first with LBNL: for 1991-2014, emissions under FAR’s Business-as-usual scenario would be 196.91GtC, which is 14.17GtC more than LBNL’s numbers show. In other words: if real-world land-use emissions over the period had been 14.17GtC, total emissions according to FAR and according to LBNL would have matched. That’s only 0.6GtC/year, well below AR5’s best estimate of land-use emissions (1.5GtC/year in the 1990s, and about 1GtC/year in the 2000s).

For BP, emissions of 764.8GtCO2 convert to 208.58GtC. At a minimum we’d have to add to this figure the cement emissions from 1991-2014, which were 7.46GtC. By 2014 annual emissions from cement were well above 0.5GtC, so even a conservative estimate would put the additional 2015-2018 emissions at 2GtC, or 9.46GtC in total. This means BP’s figures, with cement production added, give a total of 218.04GtC. I don’t consider flaring here, but according to LBNL those emissions were only about 1GtC.

Therefore BP’s fossil-fuel-plus-cement emissions would be 19.57 GtC lower than the figure for FAR’s Business-as-usual scenario (237.61GtC). For BP’s emissions to have matched FAR’s, real-world land-use emissions would have needed to average 0.7 GtC/year. Again, it seems real-world emissions exceeded this rate; indeed, the figures from AR5’s Figure 6.1 suggest land-use emissions for 1991-2011 alone were around 25GtC. But just to be clear: it is only likely that real-world emissions exceeded FAR’s Business-as-usual scenario; the uncertainty in land-use emissions means one cannot be sure.
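The BP-plus-cement comparison works out as follows (a sketch using only the figures quoted in this section):

```python
# Gap between FAR's BaU cumulative emissions and BP-plus-cement, 1991-2018.
bp_fossil_gtc = 764.8 * 12 / 44    # BP's GtCO2 converted to GtC, ~208.58
cement_gtc = 7.46 + 2.0            # 1991-2014 cement plus a conservative 2015-2018 estimate
observed_gtc = bp_fossil_gtc + cement_gtc
gap_gtc = 237.61 - observed_gtc    # FAR BaU minus observed fossil-plus-cement
gap_per_year = gap_gtc / 28        # 1991-2018 spans 28 years
print(round(observed_gtc, 2), round(gap_gtc, 2), round(gap_per_year, 2))
```

This reproduces the 218.04GtC total, the 19.57GtC gap, and the ~0.7 GtC/year of land-use emissions that would close it.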

I’ll conclude this section by pointing out that FAR didn’t break down how many tons of CO2 would come from changes in land use as opposed to fossil fuel consumption, but its description of the Business-as-usual scenario says “deforestation continues until the tropical forests are depleted”. While this statement isn’t quantitative, it seems FAR did not expect the apparent decline in deforestation rates seen since the 1990s. If emissions from land use were lower than expected by FAR’s authors, yet total emissions appear to have been higher, the only possible conclusion is that emissions from fossil fuels and cement were greater than FAR expected.

The First Assessment Report greatly overestimated the airborne fraction of CO2

The report mentions the airborne fraction only a couple of times:

  • For the period from 1850 to 1986, airborne fraction was estimated at 41 ± 6%
  • For 1980-89, its estimate is 48 ± 8%

So according to the IPCC itself, the airborne fraction of CO2 in observations at the time of the report’s publication was 48%, with a confidence interval going no higher than 56%. But the forecast for the decades immediately following the report implied a fraction of 60 or 61%. There is no explanation or even mention of this discrepancy in the report; the closest the IPCC came is this line:

“In model simulations of the past CO2 increase using estimated emissions from fossil fuels and deforestation it has generally been found that the simulated increase is larger than that actually observed”

Further evidence of FAR’s over-estimate of the airborne fraction comes from looking at Scenario B. Under this projection, CO2 emissions would decline slightly from 1990 on, then make a similarly slight recovery; in all, annual emissions over 1991-2018 would on average be lower than in 1990. Yet even under this scenario CO2 concentrations would reach 401 ppm by 2018, compared with 408.5ppm in reality and 422ppm in the Business-as-usual scenario.

So real-world CO2 emissions were probably higher than under the IPCC’s highest-emissions scenario, yet concentrations ended up closer to a different scenario in which emissions declined from their 1990 level.

The error in the IPCC’s forecast of methane concentrations was enormous

In this case the calculations I’ve done are rougher than for CO2, but you’ll see it doesn’t really matter. This chart is from FAR’s Summary for Policymakers, Figure 5:

From a 1990 level just above 1700 parts per billion (ppb), concentrations in the Business-as-usual scenario reach about 2500 ppb by 2018. Even in Scenario B methane reaches 2050 ppb by that year. In the real world, concentrations were only about 1850 ppb. In other words:

  • The increase in concentrations in Scenario B was about two-and-a-half times larger than in reality
  • For Scenario A, the concentration increase was five or six times bigger than in the real world
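These ratios can be checked with the approximate concentrations read off the charts (all values are the rough ppb figures quoted above, so the ratios are themselves rough):

```python
# Rough check of the methane ratios quoted above (all in ppb).
base_1990 = 1700              # approximate 1990 concentration from FAR's Figure 5
rise_a = 2500 - base_1990     # Scenario A (Business-as-usual) rise by 2018
rise_b = 2050 - base_1990     # Scenario B rise by 2018
rise_real = 1850 - base_1990  # observed rise by 2018
print(round(rise_a / rise_real, 1), round(rise_b / rise_real, 1))
```

That gives roughly 5.3x for Scenario A and 2.3x for Scenario B, matching the "five or six times" and "two-and-a-half times" statements.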

The mismatch arose because methane concentrations were growing very quickly in the 1980s, though a slowdown was already apparent; this growth slowed further in the 1990s, and essentially stopped in the early 2000s. Since 2006 or so methane concentrations have been growing again, but at nowhere near the rates forecasted by the IPCC.

Readers may be wondering if perhaps FAR’s projections of methane emissions were very extravagant. Not so: the expected growth in yearly emissions between 1990 and 2018 was about 30%, far less than for CO2. See Figure A.2(b), from FAR’s Annex, page 331:

There’s an obvious reason the methane miss is even more of a head-scratcher. One of the main sources of methane is the fossil fuel industry: methane leaks out of coal mines, gas fields, etc. But fossil fuel consumption grew very quickly during the forecast period – indeed faster than the IPCC expected, as we saw.

It’s also interesting that the differences between emission scenarios were smaller for methane than for CO2. This may reflect a view on the part of the IPCC (which I consider reasonable) that methane emissions are less actionable than those of CO2. If you want to cut CO2 emissions, you burn less fossil fuel: difficult, yet simple. If by contrast you want to reduce methane emissions, it probably helps to reduce fossil fuel consumption, but there are also significant methane emissions from cattle, landfills, rice agriculture, and other sources; even with all the uncertainty around total methane emissions, more or less everybody agrees that non-fossil-fuel emissions are a more important source for methane than for CO2. And it’s not clear how to measure non-fossil-fuel emissions, so it’s far more difficult to act on them.

CO2 and methane appear to account for most of FAR’s over-estimate of forcings

Disclosure: this is the most speculative section of the article. But as with land-use emissions before, it’s a case in which one can make some inferences even with incomplete data.

Let’s start with a paper by Zeke Hausfather and three co-authors; I hope the co-authors don’t feel slighted – I will refer simply to “Hausfather” for short.

Hausfather sets out to answer a question: how well have projections from old climate models done, when accounting for the differences between real-world forcings and projected forcings? This is indeed a very good question: perhaps the IPCC back in 1990 projected more atmospheric warming than has actually happened only because its forecast of forcing was too aggressive. Perhaps the IPCC’s estimates of climate sensitivity, which is to say how much air temperature increases as a response to a given level of radiative forcing, were spot on.

(Although Hausfather’s paper focuses on atmospheric temperature increase, the over-projection in sea level rise has been perhaps worse. FAR’s Business-as-usual scenario expected 20 cm of sea level rise between 1990 and 2030, and the result in the real world is looking like it will be about 13 cm).

Looking at the paper’s Figure 2, there are three cases in which climate models made too-warm projections, yet after accounting for differences in realized-versus-expected forcing this effect disappears; the climate models appear to have erred on the warm side because they assumed excessively high forcing. Of the three cases, the IPCC’s 1990 report has arguably had the biggest impact on policy and scientific discussions. And for FAR, the authors estimate (Figure 1) that forecasted forcing was 55% greater than realized: the trend is 0.61 watts per square meter per decade, versus 0.39 in reality. Over the 1990-2017 period, the difference in trends adds up to 0.59 watts per square meter.
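The 0.59 w/m2 figure follows directly from the two trends, since 1990-2017 spans 2.7 decades:

```python
# The 1990-2017 forcing gap implied by the two trends in Hausfather's Figure 1.
far_trend = 0.61    # W/m2 per decade, FAR Business-as-usual
real_trend = 0.39   # W/m2 per decade, realized
decades = 2.7       # 1990-2017
gap = (far_trend - real_trend) * decades
print(round(gap, 2))
```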

Now, there is a lot to digest in the paper, and I hope other researchers dig through the numbers as carefully as possible. I’m just going to assume the authors’ calculations of forcing and temperature increase are correct, but I want to mention why a calculation like this (comparing real-world forcings with the forcings expected by a 1990 document) is a minefield. Even if we restrict ourselves to greenhouse gases, ignoring harder-to-quantify forcing agents such as aerosols, there are at least three issues which make an apples-to-apples comparison difficult. (Hausfather’s Supplementary Information seems to indicate they didn’t account for any of this – they simply took the raw forcing values from FAR.)

First, some greenhouse gases simply weren’t considered in old projections of climate change. The most notable case in FAR may be tropospheric ozone. According to the estimate of Lewis & Curry (2018), forcing from this gas increased by 0.067w/m2 between 1990 and 2016, the last year for which they offer estimates (over the last decade of data forcing was still rising by about 0.002w/m2/year). Just to be sure, you can check Figure 2.4 in FAR (page 56), as well as Table 2.7 (page 57). These numbers do not include tropospheric ozone, but you’ll see the sum of the different greenhouse gases featured equals the total greenhouse forcing expected in the different scenarios. The IPCC did not account for tropospheric ozone at all.

Second, the classification of forcings is somewhat subjective and changes over time. For example, the depletion of stratospheric ozone, colloquially known as the ‘ozone hole’, has a cooling effect (a negative forcing). So, when you see an estimate of the forcing of CFCs and similar gases, you have to ask: is it a gross figure, looking at CFCs only as greenhouse gases? Or is it a net figure, accounting for both their greenhouse effect and their impact on the ozone layer? In modern studies stratospheric ozone has normally been accounted for as a separate forcing, but I’m not sure how FAR did it (no, I haven’t read the whole report).

Finally, even when greenhouse gases were considered and their effects had a more-or-less-agreed classification, our estimates of their effect on the Earth’s radiative budget change over time. For the best-understood forcing agent, CO2, FAR estimated a forcing of 4 watts/m2 if atmospheric concentrations doubled (the forcing from CO2 is approximately the same each time concentration doubles). In 2013, the IPCC’s Assessment Report 5 estimated 3.7w/m2, and now some studies say it’s actually 3.8w/m2. These differences may seem minor, but they’re yet another way the calculation can go wrong. And for smaller forcing agents the situation is worse. Methane forcing, for example, suffered a major revision just three years ago.

Is there a way around the watts-per-square-meter madness? Yes. While I previously described climate sensitivity as the response of atmospheric temperatures to an increase in forcing, in practice climate models estimate it as the response to an increase in CO2 concentrations, and this is also the way sensitivity is usually expressed in studies estimating its value in the real world. Imagine the forcing from a doubling of atmospheric CO2 is 3.8w/m2 in the real world, but some climate model, for whatever reason, produces a value of 3w/m2. Obviously, then, what we’re interested in is not how much warming we’ll get per w/m2, but how much warming we’ll get from a doubling of CO2.

Thus, for example, the IPCC’s Business-as-usual forecast of 9.90 w/m2 in greenhouse forcing by 2100 (from a 1990 baseline) could instead be expressed as equivalent to 2.475 doublings of CO2 (the result of dividing 9.90 by 4). Hausfather’s paper, or a follow-up, could then apply this to all models. Just using some made-up numbers as an illustration, it may be that FAR’s Business-as-usual forecast expected forcing between 1990 and 2017 equivalent to 0.4 doublings of CO2, while in reality the forcing was equivalent to 0.26 doublings. The difference would still be about 55%, but it would be easier to interpret than a raw w/m2 figure.
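As a sketch, the conversion proposed here is trivial; the function name is mine, for illustration only:

```python
# Converting a forcing into CO2-doubling equivalents, as proposed above.
def forcing_to_doublings(forcing_wm2, f2x_wm2):
    """Equivalent doublings of CO2 for a given forcing,
    where f2x_wm2 is the forcing per doubling (F_2x)."""
    return forcing_wm2 / f2x_wm2

# FAR's BaU greenhouse forcing by 2100, with FAR's F_2x of 4 W/m2
doublings = forcing_to_doublings(9.90, 4.0)
print(doublings)
```

The point of the conversion is that a model's warming per doubling can be compared across models even when they disagree on F_2x in w/m2.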

Now, even with all these caveats, one can make some statements. First, there are seven greenhouse gases counted by FAR in its scenarios, but one of them (stratospheric water vapor) is created through the decay of another (methane). I haven’t checked whether water vapor forcing according to FAR was greater than in the real world, but if so, the blame lies with FAR’s inaccurate methane forecast; in any case stratospheric H2O is a small forcing agent and did not play a major role in FAR’s forecasts.

Then there are three gases regulated by the Montreal Protocol, which I will consider together: CFC-11, CFC-12, and HCFC-22. That leaves four sources to be considered: CO2, methane, N2O, and Montreal Protocol gases. We have already covered CO2 and methane in previous sections, so let’s turn to the two remaining sources of greenhouse forcing. I use 2017 as the finishing year, for comparison with Hausfather’s paper. The figures for real-world concentrations and forcings come from NOAA’s Annual Greenhouse Gas Index (AGGI).

For N2O, Figure A.3 in FAR’s page 333 shows concentrations rising from about 307ppb in 1990 to 334 ppb by 2017. This is close to the level that was observed (2018 concentrations averaged about 332 ppb). And even a big deviation in the forecast of N2O concentration wouldn’t have a major effect on forcing; FAR’s Business-as-usual scenario expected forcing of only about 0.036w/m2 per decade, which would mean roughly 0.1w/m2 for the whole 1990-2017 period. Deviations in the N2O forecast may have accounted for about 0.01w/m2 of the error in FAR’s forcing projection – surely there’s no need to keep going on about this gas.
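The rough N2O numbers above follow from FAR's per-decade rate (a quick check using the figures quoted in the text):

```python
# N2O's small role, using FAR's BaU forcing rate quoted above.
rate_per_decade = 0.036              # W/m2 per decade, FAR Business-as-usual
per_period = rate_per_decade * 2.7   # 1990-2017 spans 2.7 decades
print(round(per_period, 2))
```

The result is roughly 0.1 w/m2 over the whole period, so even a sizeable relative error in the N2O forecast moves the forcing total by only hundredths of a watt.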

Finally, we have Montreal Protocol gases and their replacements: CFCs, HCFCs, and in recent years HFCs. To get a sense of their forcing effect in the real world, I check NOAA’s AGGI and sum the columns for CFC-11, CFC-12, and the 15 minor greenhouse gases (almost all of that is HCFCs and HFCs). The forcing thus aggregated rises from 0.284w/m2 in 1990 to 0.344 w/m2 in 2017; in other words, the increase in forcing from these gases between those years was 0.06 w/m2.

Here’s where Hausfather and co-authors have a point: the world really did emit far smaller quantities of CFCs and HCFCs than FAR’s Business-as-usual projection assumed. In FAR’s Table 2.7 (page 57), the aggregated forcing of CFC-11, CFC-12 and HCFC-22 rises by 0.24w/m2 between 2000 and 2025. And the IPCC expected accelerating growth: the sum of the forcings from these three gases would then increase by 0.28w/m2 between 2025 and 2050.

A rough calculation of what this implies for forcing between 1990 and 2017 now follows. Over 2000-2025 FAR expected Montreal Protocol gases to add 0.0096 w/m2/year of forcing; multiplied by the 27 years we’re analysing, that would mean 0.259w/m2. However, forcing growth was supposed to be slower over the first period than later, as we’ve seen; Table 2.6 in FAR’s page 54 also implies smaller growth in 1990-2000 than after 2000. So I round the previously-calculated figure down to 0.25w/m2; this is probably higher than the actual increase FAR was forecasting, but I cannot realistically make an estimate down to the last hundredth of a watt, so it will have to do.
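Spelled out, the arithmetic in this paragraph is:

```python
# FAR's expected Montreal-gas forcing growth, per Table 2.7 figures quoted above.
rate = 0.24 / 25    # +0.24 W/m2 over 2000-2025 -> W/m2 per year
raw = rate * 27     # applied uniformly to the 27 years 1990-2017
print(round(rate, 4), round(raw, 3))
```

This gives 0.0096 w/m2/year and 0.259 w/m2, which the text then rounds down to 0.25 w/m2 to allow for slower pre-2000 growth.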

If FAR expected 1990-2017 forcing from Montreal Protocol gases of 0.25w/m2, that would mean the difference between the real world and FAR’s Scenario A was 0.25 – 0.06 = 0.19w/m2. I haven’t accounted here for these gases’ effect on stratospheric ozone, as it wasn’t clear whether that effect was already included in FAR’s numbers. If stratospheric ozone depletion hadn’t been accounted for, then the deviation between FAR’s numbers and reality would be smaller.

Readers who have made it to this part of the article probably want a summary, so here it goes:

  • Hausfather estimates that FAR’s Business-as-usual scenario over-projected forcings for the 1990-2017 period by 55%. This would mean a difference of 0.59 w/m2 between FAR and reality.
  • Lower-than-expected concentrations of Montreal Protocol gases explain about 0.19 w/m2 of the difference. With the big caveat that Montreal Protocol accounting is a mess of CFCs, HCFCs, HFCs, stratospheric ozone, and perhaps other things I’m not even aware of.
  • FAR didn’t account for tropospheric ozone, and this ‘unexplains’ about 0.07 w/m2. So there’s still 0.45-0.5 w/m2 of forcing overshoot coming from something else, if Hausfather’s numbers are correct.
  • N2O is irrelevant in these numbers
  • CO2 concentration was significantly over-forecasted by the IPCC, and that of methane grossly so. It’s safe to assume that methane and CO2 account for most or all of the remaining difference between FAR’s projections and reality.
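The residual in the summary follows from simple bookkeeping; the tropospheric-ozone term enters with the opposite sign because FAR omitted a forcing that actually occurred:

```python
# Bookkeeping for the ~0.59 W/m2 forcing overshoot estimated by Hausfather.
total_overshoot = 0.59     # FAR BaU forcing minus realized forcing, 1990-2017
montreal_explained = 0.19  # over-forecast of Montreal Protocol gases
ozone_unexplained = 0.07   # tropospheric ozone: omitted by FAR, so it widens the gap
residual = total_overshoot - montreal_explained + ozone_unexplained
print(round(residual, 2))
```

The residual of ~0.47 w/m2 sits inside the 0.45-0.5 w/m2 range attributed above to CO2 and methane.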

Again, this is a rough calculation. As mentioned before, an exact calculation would have to take into account many issues I didn’t consider here. I really hope Hausfather’s paper is the beginning of a trend of properly evaluating climate models of the past, and that means properly accounting for (and documenting) how expected forcings and actual forcings differed.

By the way: this doesn’t mean climate action failed

There is a tendency to say that, since emissions of CO2 and other greenhouse gases are increasing, policies intended to reduce or mitigate emissions have been a failure. The problem with such an inference is obvious: we don’t know whether emissions would have been even higher in the absence of emissions reductions policies. Emissions may grow very quickly in an economic boom, even if emission-mitigation policies are effective; on the other hand, even with no policies at all, emissions obviously decline in economic downturns. Looking at the metric tons of greenhouse gases emitted is not enough.

Dealing specifically with the IPCC’s First Assessment Report, its emission scenarios used a common assumption about future economic and population growth; however, the description is so brief and vague as to be useless:

“Population was assumed to approach 10.5 billion in the second half of the next century. Economic growth was assumed to be 2-3% annually in the coming decade in the OECD countries and 3-5 % in the Eastern European and developing countries. The economic growth levels were assumed to decrease thereafter.”

So it’s impossible to say how many emissions FAR expected per unit of economic or population growth. The question ‘are climate policies effective?’ can’t be answered by FAR.

Conclusions

The IPCC’s First Assessment report greatly overestimated future rates of atmospheric warming and sea level rise in its Business-as-usual scenario. This projection also overestimated rates of radiative forcing from greenhouse gases. A major part of the mis-estimation of greenhouse forcing happened because the world clamped down on CFCs and HCFCs much more quickly than its projections assumed. This was not a mistake of climate science, but simply a failure to foresee changes in human behaviour.

However, the IPCC also made other errors or omissions, which went the other way: they tended to reduce forecasted forcing and warming. Its Business-as-usual scenario featured CO2 emissions probably lower than those that have actually taken place, and its forcing estimates didn’t include tropospheric ozone.

This means that the bulk of the error in FAR’s forecast stems from two sources:

  • The fraction of CO2 emissions remaining in the atmosphere was much higher in FAR’s projection than has been observed, whether at the time of the report’s publication or since. There are uncertainties around the real-world airborne fraction, but the IPCC’s figure of 61% is about one-third higher than emission estimates suggest. As a result, CO2 concentrations grew 25% more in FAR’s Business-as-usual projection than in the real world.
  • The methane forecast was hopeless: methane concentrations in FAR’s Business-as-usual scenario grew five or six times more than has been observed. It’s still not clear where exactly the science went wrong, but a deviation of this size cannot be blamed on some massive-yet-imperceptible change in human behaviour.

These are purely problems of inadequate scientific knowledge, or a failure to apply scientific knowledge in climate projections. Perhaps by learning about the mistakes of the past we can create a better future.

Data

This Google Drive folder contains three files:

  • BP’s Energy Review 2019 spreadsheet (original document and general website)
  • NOAA’s data on CO2 concentrations from the Mauna Loa observatory (original document)
  • My own Excel file with all the calculations. This includes the raw digitized figures on CO2 emissions and concentrations from the IPCC’S First Assessment Report.

The emission numbers from LBNL are available here. I couldn’t figure out how to download a file with the data, so these figures are included in my spreadsheet.

NOAA’s annual greenhouse gas index (AGGI) is here. For comparisons of methane and N2O concentrations in the real world with the IPCC’s forecasts, I used Figure 2.

The IPCC’s First Assessment Report, or specifically the part of the report by Working Group 1 (which dealt with the physical science of climate change), is here. The corresponding section of Assessment Report 5 is here.

159 thoughts on “Analysis of a carbon forecast gone wrong: the case of the IPCC FAR”

  1. The article by Alberto Zaragoza Comendador is based on the faulty assumption that increased levels of CO2 are somehow a hazard to life on Earth, when in fact they are the result of Man’s efforts to improve the life of all those people living on Earth, and are producing a significant greening of the planet and a lot of happy plants as well.

  2. I am weary of endless discussion of far-into-the-future climate models; you are playing right into the hands of those who think CO2 is a dangerous molecule that must be beaten down and bottled up. Every time you bring them up, their eyes light up to defend the indefensible, since it is PSEUDOSCIENCE!!!

    Why the infatuation with these junk far-into-the-future climate models: is it boredom, Cabin Fever, or chest beating?

    This bullcrap has been going on for 30 years now, ever since that joke called the IPCC started posting their highly selective meta-analysis of emission and climate scenarios (untestable wild guesses) that supports a particular preset belief.

    I gave up on trying to make sense of non falsifiable climate models years ago, it is why I hardly read postings like this anymore, since it is a WASTE OF TIME!

    • Sunsettommy,
      I agree with you completely! I’ve been following this award-winning site for more than a dozen years, but articles like this have been repeated over and over; they’re simply a useless exercise. Calculation and endless numbers only serve to generate post after post, each arguing their merits as the poster sees them. Arguing about the greenhouse effect, blackbody radiation, re-radiation, calculations and mis-calculations etc. has failed. This approach accomplishes nothing! It’s the equivalent of arguing what song the Titanic band should play next.
      Actual science has been left behind in an ever-circling eddy never to escape, never making an impact. The purveyors of “climate change” have leapt forward with dire scenarios of doom, phony data, daily MSM and Weather Channel support and now podcasts that bombard us endlessly. The pseudo-religion of climate change has embedded the notion in the public psyche that it’s an existential crisis that must be addressed immediately.
      It’s an unrewarding and failing strategy to argue science because the public has no ear for it. The climate change lobby has ceased using truth and science, choosing instead to tell lies and instill fear. It’s venal propaganda at its sleazy “best”. No, I haven’t given up, but a new, effective approach is required.

      • This is a brilliant site but the generally accepted thrust is that the GHG effect is dominant. Through reading countless posts and comments (non-data research/piggybacking) I have come to the following conclusions:

        – The GHG effect is demonstrated in the laboratory but does not hold for the real world
        – The laws of thermodynamics are sacrosanct:
        – Heat energy flows from hot to cold being the major stumbling block to ‘forcings’
        – A cold body cannot heat a warm one
        – A body cannot heat itself, even with perfect IR reflection
        – EM radiation spectra form discrete-frequency continua (Planck curves)
        – EM frequency energies are not cumulative and do not interfere (unless constrained)
        End of the GHG effect.

        Warming? Exists but not due to the pathway ‘directed’ by the IPCC. I was sceptical of the CFC effect on the Ozone layer until recently; however, I now understand the mechanism of this Anthropogenic mistake. Humans, the Sun, (through its storms and cycles), Volcanoes and (again Human) thermonuclear weapons tests have all contributed to the destruction of Ozone and the resultant variation in the extent of the Ozone layer. With less Ozone in the Stratosphere, more UV energy reaches the Earth’s surface and heats the land and sea (to 900m) at 48 times the energy level of the so-called GHG effect.

        So, instead of railing against CO2 (which is only good for the planet), maybe we should all be looking at ways of further protecting the tiny amount of Ozone that exists.

        • Likewise, I would like to add my appreciation for the WUWT site. I have been a long time fan of this wonderful site and love the contributions in the comments.
          As a light weight in this whole sorry mess, I bow to the superior knowledge of the many other contributors here.
          However, I have yet to find someone credible who can explain how the temperature of the warmer Earth can be raised by adding energy from the cooler atmosphere to it. To the best of my knowledge, it is a physical impossibility. Plain and simple. No semantic trickery. This is what is being claimed by a greenhouse effect. I have always had issues with accepting erroneous terminologies like GHE, because it facilitates the false narrative behind the scaremongering.
          If the temperature of the Earth is rising, it cannot be because of energy reflected back from the cooler atmosphere.

          Best regards to all.
          Eamon.

        • “YUP”, AngryScot, … it’s a fact, ….. the majority hereon actually believe the GHG effect is dominant.

          And here is my “2 cent” critique of the article.

          Excerpted from the article:

          This means that, while one ppm of CO2 corresponds to approximately 7.81 gigatons of said gas, if we express (human) emissions as GtC rather than GtCO2 the equivalent figure is 7.81 / 3.67 = 2.13 (GtC).

          Under FAR’s Business-as-usual scenario, cumulative (human) CO2 emissions between 1991 and 2018 were 237.61 GtC, which is equivalent to 111.55 ppm (237.61 GtC / 2.13 GtC per ppm = 111.55 ppm).

          Since concentrations increased by 67.86 ppm, that means 60.8% of CO2 emissions (41.26 ppm) remained in the atmosphere.

          But, but, but …… if the 1991 Mauna Loa CO2 was 359.13 ppm and it increased by 67.86 ppm, then atmospheric CO2 would have been 426.99 ppm if all had remained therein. But if only 41.26 ppm remained, then 2018 atmospheric CO2 would have been 400.39 ppm, which is 10.85 ppm less than actual.

          HA, “close” only counts in the game of ‘horseshoes’.

          Also claimed is that the estimated cumulative human CO2 emissions between 1991 and 2018 were 237.61 GtC, ….. meaning that atmospheric CO2 increased by 111.55 ppm.

          Whereas the measured Mauna Loa CO2 data – 1991 @ 359.13 ppm .. minus .. 2018 @ 411.24 ppm … means that atmospheric CO2 actually increased by 52.11 ppm.

          So, which one is to be believed as factual, …… the 52.11 ppm increase, …. the 67.86 ppm increase … or the 111.55 ppm increase?
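
          For what it’s worth, the unit conversions in the excerpt can be replayed in a few lines (a minimal sketch using only the factors quoted above):

```python
# Replay the GtCO2 / GtC / ppm arithmetic from the excerpted passage.
GT_CO2_PER_PPM = 7.81       # Gt of CO2 per 1 ppm of atmosphere (as quoted)
CO2_TO_C = 3.67             # molecular-weight ratio of CO2 to C (44/12)

gtc_per_ppm = GT_CO2_PER_PPM / CO2_TO_C             # ~2.13 GtC per ppm

far_cumulative_gtc = 237.61                         # FAR BAU, 1991-2018
emissions_ppm = far_cumulative_gtc / gtc_per_ppm    # ~111.6 ppm

far_rise_ppm = 67.86                                # FAR's projected rise
airborne_fraction = far_rise_ppm / emissions_ppm    # ~0.608
```

          Note that the 60.8% is the share of the 111.55 ppm of emitted CO2, not of the 67.86 ppm concentration rise; applying it to the rise instead is what produces the 41.26 ppm figure questioned above.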

           • Sam wrote: “current “global warming” means, infers and or implies …. the global average near-surface air temperature .. which is a “pie-in-the-sky” phony bologna dream which has never been and never will be correctly calculated. And even if it could be, it wouldn’t be of any importance, any more than last week’s newspaper is, or last year’s average college basketball scores.”

            Nonsense. More than a dozen groups have determined the rise in near-surface air temperature over the past 1.5 centuries (though the uncertainty is greater in the first half-century). Climategate forced Phil Jones to release most of the confidential records CRU had obtained from various governments and used to create the temperature record used by the IPCC. Within weeks, skeptical bloggers had reprocessed the released data by improved methods they had been discussing online for years, and confirmed that the warming reported by HadCRU was about right. Eventually a group of skeptics who had been publicly critical of CRU (BEST) obtained funding from the Koch brothers to collect far more climate records than used by CRU and process them by a superior method (kriging). They reported slightly more warming over land than HadCRU and were able to determine that the warming rate away from large population centers was no lower than the rate for the planet as a whole (which includes cities with UHI). RSS and UAH currently claim that the whole atmosphere has been warming about as fast as the surface (about 75% as fast according to UAH). Despite your claims, warming HAS BEEN calculated by numerous groups and methods with similar trends. A few Luddites (such as Tony Heller, banned at WUWT) persist in not correcting for seasonal warming using temperature anomalies, not weighting thermometer readings by area, and not recognizing the problems posed by artifacts in the data (TOB in the US), thereby creating doubts in the gullible.

            There are legitimate questions about the meaning of a rise in GMST anomaly, given the much larger seasonal changes in temperature we experience and the variability of weather within a season. In the continental US, the average temperature rises about 1 degC for every 100 miles you move south. Twentieth century warming is equivalent to moving about 100 miles south – a negligible change in my opinion. And the climate in any location in a particular season (summer for example) varies more than 1 degC from year to year. So no one has experienced GW that is greater than the natural variability that they have experienced during their lifetime. Which is one reason why the focus of propaganda has changed from AGW, to “climate change”, to over-publicized changes in extreme weather.

            Nevertheless, the IPCC is predicting that unrestrained emissions of CO2 might produce warming nearly comparable to the difference between glacials and interglacials. That is a big difference. In the case of temperature, the change might be similar to moving south from Chicago to Oklahoma City. That would certainly cause challenges for corn farmers in Iowa. In the case of sea level, the change from glacial to interglacial was about 20 m/degC, a rate of change that should diminish as ice caps retreat toward the poles and each degree of latitude involves less surface area. Evaporation potentially increases by 7%/degC, creating more stress in many areas of the western US and China (where most of the water formerly flowing in major rivers is used before it can reach the ocean). These are a few changes that are more important than “last year’s college basketball scores”.

            Is moderating these changes worth the cost of reducing the amount of fossil fuel we burn? (Fossil fuel isn’t going to last forever.) Would we be successful in reducing GLOBAL emissions if we tried? How fast does temperature increase with CO2 emission? Will 50% of emissions continue to disappear into sinks? These are great questions – that will never be addressed by those who reject the concept of radiative forcing.

          • BCBuster concluded: “Is moderating these changes worth the cost of reducing the amount of fossil fuel we burn? (Fossil fuel isn’t going to last forever.) Would we be successful in reducing GLOBAL emissions if we tried? How fast does temperature increase with CO2 emission? Will 50% of emissions continue to disappear into sinks? These are great questions – that will never be addressed by those who reject the concept of radiative forcing.”

            The changes in temperature (reported and recorded at two decimal place precision?) cannot be moderated by lowering the amount of CO2 we add by burning carbon. All that does is leave carbon in the ground. And because it could not be done instantly, we would keep adding CO2 in smaller amounts until the absurd goal of net-zero was reached. Certainly not by 2050. By that time the atmospheric burden could be 500 ppm or more. The CCS industry is devoted to a capture-and-store technology to provide geological ‘sinks’ for CO2 both at the source and directly from the air. This technology cannot yet safely store even one ppm (7.8 gigatons), never mind the huge amounts that would be necessary to make a difference in ‘forcing’ the climate. Bioenergy (algae and trees) is temporary because of aerobic respiratory recycling. The entire scheme is promoted by a few with constantly adjusted data and models designed to provide scary forecasts and Steven Schneider-type pronouncements. The costs of trying are both social and economic, devastating to the eight billion stakeholders who would suffer dramatically.

        • Looking back on the CFC ozone depletion theory, it was never tested in the real atmosphere. And the evidence being used to support the theory overlooked the fact that from 1960 to 1979 stratospheric ozone was actually rising. When the “depletion” stopped around 1985, ozone was back to the 1960 level… about 300 Dobson units. The ozone hole is a temporary seasonal event. Models using ozone depletion are not factoring these things in.

          • Yes, the “ozone” panic was just more junk science. Convenient for the people who manufactured Freon whose patent was expiring, though…

            Follow the money, as usual.

          • Broadlands and AGW: Why do people spread rumors on the Internet that an ozone hole was normally present in Antarctica before it was discovered in the 1980s? The first paper reporting the presence of an ozone hole (see link below) clearly shows that no ozone hole was present at Halley in 1957 or in the decade that followed. Spring and fall ozone levels were about the same until about 1970 and became unambiguously different in the late 1970’s.

            https://eesc.columbia.edu/courses/v1003/readings/Farman.etal.1985.pdf

            Other ozone measurements elsewhere above Antarctica are consistent with a decrease in ozone beginning in the late 1970s, despite the fact that the ozone layer was a subject of concern long before the hole was first reported in 1985. There may have been a smaller, less dramatic, natural ozone hole at some places and times (after volcanic eruptions which inject chlorine into the stratosphere?) before the late 1970s, but the available data (assuming it is correct) certainly shows a dramatic change occurred in the 1980s and continues to this day.

            https://ozonewatch.gsfc.nasa.gov/facts/history_SH.html

            So far, there has been no significant decrease in the size of the ozone hole, which is perfectly sensible given that the amount of CFC-12 in the stratosphere peaked about 2005, has only fallen 5% since then, and remains far higher than in the 1980s when the hole was first discovered. The ozone hole apparently wasn’t expected to “close” before mid-century at the earliest. See Wikipedia.

            In 2007, Pope reported a photolysis rate for Cl2O2 that was too low to be consistent with the standard mechanism for Antarctic ozone depletion. That news made Science and Nature, despite widespread skepticism. Over the next few years, a half-dozen papers reported that Pope was wrong, including a new paper by Pope himself.

            https://agupubs.onlinelibrary.wiley.com/doi/full/10.1002/2013GL058626

            Anyone have a meaningful source claiming that chlorine radicals from CFCs don’t cause the ozone hole? Tony Heller (he is banned here) and Lyndon LaRouche’s EIR Journal don’t count. Happer and Seitz have expressed some doubts in the past, but not in the past decade as best I can tell.

            BCBuster… As I wrote, the level of column ozone was rising from about 1957 to 1979. It peaked and then dropped until 1984, at which point it had returned to its 1960 level. There was no long-term depletion.

            Rood, R.B. (1986) Geophys. Res. Letters, v. 13, p. 1244, GLOBAL OZONE MINIMA IN THE HISTORICAL RECORD:
            “The magnitude and structure of the global total ozone minimum between 1958 and 1962 is similar to that observed between 1979 and 1983.” “…when the TOMS and SBUV instruments were put into operation in 1978 the global total ozone was near a historical maximum.”

            The AREA occupied by the ozone hole (its size) is controlled by the wind-driven polar vortex and has nothing to do with CFC chlorine.

            “Arlin Krueger, Mark Schoeberl, Paul Newman and Richard Stolarski of NASA’s Goddard Space Flight Center in Greenbelt, Maryland, suggest that ‘the maximum area of the hole is currently limited by factors such as the area of the cold air pool within the polar vortex, rather than by the availability of chlorine in the stratosphere’ (Geophysical Research Letters, vol 19, p 1215).”

            Broadlands: Thanks for the reply and link to Rood (1986) posted below. Unfortunately, Rood is discussing very small changes in average GLOBAL levels of ozone (which fluctuate with the solar cycle, volcanoes, and CFCs), while the “ozone hole” which began to appear over Antarctica only in the spring around 1980 was a much bigger LOCAL phenomenon.

            http://climate-action.engin.umich.edu/Rood_Papers_Archive/1986_Rood_Historical_Ozone_Minima_GRL_1986.pdf

            If you look at figure 1 of Rood, the global changes in ozone are a few Dobson units in about 300. The change observed at Halley, Antarctica, over about a decade was about 100 Dobson units! These are totally different phenomena.

            We were conducting a natural experiment on our planet by emitting CFCs that are stable enough to reach the stratosphere where UV can photolyze the C-Cl bond and release chlorine radicals. In the 1970s, Molina showed that chlorine radicals could catalytically decompose ozone. The danger that this could happen on a global scale was over-hyped until the processes that consume those chlorine radicals were better characterized. Observations showed that there had been no significant change in ozone at many locations around the planet. Then a large change in ozone over Antarctica in the spring was detected, a phenomenon that was later associated with the Antarctic polar vortex, which contains clouds of nitric acid trihydrate that can serve as a reservoir for chlorine radicals in a manner Molina never anticipated. Nearly all ozone at some altitudes and locations was being decomposed!

            At that point the world prudently chose to stop emitting more CFCs as a precautionary measure, because the CFCs we had already emitted and were continuing to emit would be around for a century. Some other phenomena besides polar stratospheric clouds might enable chlorine radicals from CFCs to become more effective at decomposing ozone. Love it or hate it, this was the precautionary principle at work. Since replacing CFCs with other gases that would decompose more quickly in the lower atmosphere was relatively cheap and easy, the world chose to ban most uses of CFCs. According to the best information we had at the time, producing more CFCs (business-as-usual) wouldn’t have a major effect on the ozone layer where most people live. Wearing sunscreen is far more valuable than banning CFCs.

            Now the precautionary principle is being applied to emissions of CO2, which also will remain in our atmosphere for a long time. Banning or reducing the emission of CO2 – unlike banning CFCs – isn’t relatively cheap and easy. Other sources of power are currently more expensive and, worse, non-dispatchable. Whatever skeptics unable to overcome their biases may say, the scientific principles are well-established: 1) chlorine radicals from CFCs destroy ozone. 2) CO2 interferes with radiative cooling to space. The key question is how MUCH of a change rising CFCs and CO2 will produce. It is perfectly sensible to be a lukewarmer who believes warming from rising CO2 has been over-estimated, just like the amount of ozone destruction was overhyped. I don’t think policymakers were fooled about the demonstrated danger of CFCs, but a relatively cheap replacement for CFCs was developed after Molina’s work was published. It is stupid to be a “denier”: to pretend CFCs have nothing to do with the seasonal Antarctic ozone hole and that radiative forcing from CO2 is not warming the planet. It is fair to say that both ozone and temperature have varied for reasons that have nothing to do with man and to question what fraction of the observed change in both is due to man. In the case of CO2, the uncertainty in projected warming is large, the cost of replacing fossil fuels is painfully high right now, and the never-ending exaggeration from the left makes it hard for realists to see the big picture.

            “How much” is the right answer to many questions that absolutists want to answer yes or no: We can pretend that doubling the minimum wage won’t cost any jobs, but the real issue is how many minimum wage jobs will be lost, cutting off the first rung on the ladder up from poverty. We can pretend that personal or government deficit spending doesn’t matter, but it will when people doubt our ability to repay with real dollars, and interest rates rise. We can pretend that defense spending isn’t critical, and it isn’t until there is a war. Weakness can make a war more likely. We can pretend that an expanded social safety net doesn’t reduce the incentive to work, but the real issue is how many stop working given what level of support. YOU can pretend that CFCs and CO2 have no impact on ozone and temperature, but the real issue is how big that impact is. The most optimistic assumptions I can make say doubling CO2 will cause 1-2 degC of warming, and there has been nearly 1 degC of warming in the past half century from all causes.

            BCBuster asserts: “YOU can pretend that CFCs and CO2 have no impact on ozone and temperature, but the real issue is how big that impact is. The most optimistic assumptions I can make say doubling CO2 will cause 1-2 degC of warming, and there has been nearly 1 degC of warming in the past half century from all causes.”

            A correction? “…there has been nearly 1 degC of warming in the past half century from all causes.”
            No. Since the early 20th century, when it was 14.0°C and the two hemispheres were NH 14.6°C and SH 13.4°C respectively. And where is each today? The globe is 14.83°C, the NH plus the SH, divided by two?

            I already answered the impact…with respect to CO2 and global mean temperature and what can be done about it:

            What this in-depth analysis reveals is the fact that the amount of CO2 currently being emitted (and that already present in the atmosphere) cannot be removed in the amounts needed to make a difference to the Earth’s climate. The accuracy of the emission values or their sources is irrelevant. Reducing carbon fuel CO2 emissions does not lower any of that. The arithmetic in the article shows that just ONE ppm of oxidized carbon is 7.8 billion metric tons. It should be obvious many more ppms than that would be required. Some say enough to return the climate to 350 ppm. That’s 500 gigatons. And most of that CO2 must be captured, transported and safely stored… permanently. Not a likely possibility, and certainly not by 2050.
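
            The 500-gigaton figure in that comment is easy to check (a back-of-envelope sketch; 7.8 Gt of CO2 per ppm is the conversion used throughout this thread, and the 415/350 ppm figures are the ones under discussion):

```python
GT_CO2_PER_PPM = 7.8     # Gt of CO2 per 1 ppm of atmosphere
current_ppm = 415        # approximate present concentration cited above
target_ppm = 350         # the "back to 1987" target under discussion

# CO2 that would have to be captured and stored permanently
removal_gt = (current_ppm - target_ppm) * GT_CO2_PER_PPM   # ~507 Gt
```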

            Again, the chlorine from CFCs has not destroyed or depleted any significant stratospheric ozone PERMANENTLY… not in the polar regions and not in the rest of the world. That’s how big the impact has been.

          • Broadlands wrote and provided a reference: …the level of column ozone was rising from about 1957 to 1979. It peaked and began to drop until 1984, at which point it returned to its 1960 level. There was no long-term depletion.

            Rood, R.B. (1986) Geophys. Res. Letters, v. 13, p. 1244, GLOBAL OZONE MINIMA IN THE HISTORICAL RECORD:
            “The magnitude and structure of the global total ozone minimum between 1958 and 1962 is similar to that observed between 1979 and 1983.” “…when the TOMS and SBUV instruments were put into operation in 1978 the global total ozone was near a historical maximum.”

            Rood (1986) was concerned with changes of a few Dobson units in the average GLOBAL level of ozone of about 300 Dobson units. At Halley, Antarctica, changes growing to 100 (and later 200) Dobson units were observed during the Antarctic spring beginning around 1980. The global process is referred to as ozone depletion and the Antarctic phenomenon as an ozone hole. The data clearly show no ozone hole was present at Halley for about two decades after observations began in 1957.

            Broadlands continued: “The AREA occupied by the ozone hole (its size) is controlled by the wind-driven polar vortex and has nothing to do with CFC chlorine.”

            Observations show that the rapid destruction of Antarctic ozone at altitudes and locations where the hole is forming is tightly associated with unusually large amounts of ClO (and its dimer ClOOCl). ClO is the smoking gun linking sources of chlorine atoms to ozone depletion in the Antarctic spring.

            Ozone destruction by chlorine radicals within the Antarctic vortex: The spatial and temporal evolution of ClO‐O3 anticorrelation based on in situ ER‐2 data
            J. G. Anderson W. H. Brune M. H. Proffitt
            First published: 30 August 1989
            https://doi.org/10.1029/JD094iD09p11465

            Abstract: In situ O3 and ClO data obtained from the ER‐2 aircraft are used to define the chemical evolution of the Antarctic vortex region from August 23 to September 22, 1987. Initial conditions are characterized at aircraft flight altitude (18 km) by highly amplified ClO mixing ratios (800 parts per trillion by volume (pptv)) within a well‐defined “chemically perturbed region” (CPR) poleward of the circumpolar jet, within which ozone exhibits limited erosion (∼15%) in middle to late August. Within this CPR, ozone decays consistently throughout the course of a 10‐flight series, such that by late September, 75% of the O3 has disappeared within the region of highly amplified ClO concentrations (which reached 500 times normal levels at ER‐2 cruise altitude). As this ozone depletion develops, O3 and ClO exhibit dramatic negative correlation on isentropic surfaces, obtained as the aircraft passed through the edge of the CPR. Taken in conjunction with an analysis of the mechanisms defining the rate of catalytic O3 destruction, it is concluded that ClO is an essential constituent in the catalytic destruction of ozone within the vortex. Therefore it is concluded that the observed disappearance of ozone within the Antarctic vortex would not have occurred in the absence of global chlorofluorocarbon release.

            Broadlands continued: “Arlin Krueger, Mark Schoeberl, Paul Newman and Richard Stolarski of NASA’s Goddard Space Flight Center in Greenbelt, Maryland, suggest that ‘the maximum area of the hole is currently limited by factors such as the area of the cold air pool within the polar vortex, rather than by the availability of chlorine in the stratosphere’ (Geophysical Research Letters, vol 19, p 1215).”

            If this were the WHOLE story, then there would have been an ozone hole in 1957. Krueger et al should have been aware of the anti-correlation between ozone and ClO when your quote was written. Was this quote taken out of context? The amount of CFCs in the stratosphere increased dramatically in the 1970s, but the chlorine atoms created by the photo-decomposition of CFCs are usually found in inert ClONO2 and HCl, not ClO. The springtime Antarctic polar vortex is unusual because it is the site of: 1) polar stratospheric clouds, 2) unusually high concentrations of ClO and ClOOCl, and 3) the rapid loss of ozone. Now that the Pope et al controversy is settled, laboratory studies agree that the observed amount of ClOOCl can liberate enough chlorine atoms in sunlight to account for the destruction of ozone that is observed in Antarctica. (I do find the mechanisms by which polar stratospheric clouds create high concentrations of ClO vague.) Although CH3Cl and HCl are naturally occurring sources of chlorine in the stratosphere, the main source of the chlorine atoms in ClO and ClOOCl is man-made CFCs.

            Mr. BCBuster: Confusing total column ozone with the temporary seasonal ozone hole is part of your problem? As is well-known and documented, the latter is controlled by ultra-cold conditions and a bizarre theory of heterogeneous chemistry devised by Rowland himself because the homogeneous theory used for the rest of the globe failed to work. The Molina-Rowland theory was predicated on the concept that inorganic chlorine in the middle to upper stratosphere would undergo a catalytic cycle with chlorine monoxide that would destroy ozone. When testing this idea in the real world it was repeatedly seen that the maximum ozone destruction was taking place at and below 25 km, not where the theory said it would. The two were separated in space. All this is a matter of record, as is the rise in total column ozone up to its peak in 1978-79. There was no long-term depletion of ozone, in or out of the ozone hole.

          • Broadlands wrote: “After all, since pre-industrial time CO2 has risen 48% but global mean temperatures only 6%.”

            It doesn’t make much sense to compare the percent change in CO2 with the percent change in (absolute) temperature. That is like adding apples and oranges. If we approximate the Earth as a graybody and do a little mathematics:

            W = εσT^4
            dW/dT = 4εσT^3 = 4W/T
            dW/W = 4·(dT/T)

            dW/W and dT/T are just the percent changes in outgoing radiation and temperature. A 1% change in temperature would be expected to produce a 4% change in outgoing radiation. (It is simple to test this relationship by plugging in some values.)

            If you are willing to trust radiative transfer calculations to convert a doubling of CO2 into a 3.6 W/m2 forcing, then you can convert a percent change in outgoing radiation into a predicted percent change in temperature. In this case, the prediction is for a planet that behaves like a graybody – one that has no feedbacks that affect outgoing radiation. However it is a simple place to start.
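
            Carrying that through numerically (a sketch under the stated graybody, no-feedback assumptions; the 255 K effective temperature and ~240 W/m2 outgoing flux are standard textbook round numbers, not figures from the comment itself):

```python
# No-feedback warming implied by the graybody relation dW/W = 4*(dT/T).
T = 255.0    # effective radiating temperature, K
W = 240.0    # outgoing longwave flux, W/m^2 (~ sigma*T^4 at 255 K)
dW = 3.6     # forcing from doubled CO2, W/m^2 (figure used above)

dT = T * dW / (4 * W)    # rearranged: dT = T * (dW/W) / 4
# roughly 1 K per doubling before any feedbacks
```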

            If oxidized carbon is the planet’s “control knob” it hasn’t been very effective lately. The global mean temperature (±0.5°C) has risen from 14.0°C to 14.83°C while our added CO2 has helped increase the natural background amount from ~280 ppm to ~415 ppm. And because of this “dramatic” change (a few hundred years) we are supposed to capture and safely store a few billion tons? Enough to bring us all back to the climate of 1987 when CO2 was 350 ppm?? 500 gigatons! A forecast gone wrong?

            Broadlands wrote: “If oxidized carbon is the planet’s “control knob” it hasn’t been very effective lately. The global mean temperature (±0.5°C) has risen from 14.0°C to 14.83°C while our added CO2 has helped increase the natural background amount from ~280 ppm to ~415 ppm. And because of this “dramatic” change (a few hundred years) we are supposed to capture and safely store a few billion tons? Enough to bring us all back to the climate of 1987 when CO2 was 350 ppm?? 500 gigatons! A forecast gone wrong?”

            The IPCC hasn’t endorsed a target of 350 ppm and there are some serious problems with Hansen’s publications on this subject. On the other hand, there apparently is enough coal to burn that something like 1000 ppm may be possible (after petroleum and natural gas become more expensive to obtain). Somewhere between these extremes might be prudent. “Business-as-usual” – with the shift to natural gas and very modest amounts of cheaper renewables and current levels of nuclear (which could increase rather than fall) – isn’t going to take us to anything like 1000 ppm of CO2. We don’t know what role improving technology will play, but changes in energy infrastructure take a long time. If you are fully convinced that rising CO2 doesn’t cause any warming, then there is no reason to invest resources in new technology or even pay a little more today for electricity generation with lower emissions.

            For an optimistic look that doesn’t require ignoring any settled science, see this post by Nic Lewis:

            https://judithcurry.com/2018/12/11/climate-sensitivity-to-cumulative-carbon-emissions/

          • Buster believes… “The IPCC hasn’t endorsed a target of 350 ppm and there are some serious problems with Hansen’s publications on this subject.” Agree. So why are all the politicians and policy-makers asking us to lower our carbon fuel emissions to net-zero ASAP? They must have some ppm goal, some target for some reason. Reducing emissions? That does not lower the ppm CO2 we have already added. We couldn’t even go back to 400 ppm. That is 15 times 7.8 billion metric tons to capture and safely store in the next 29 years. Add that to the ~37 billion tons we are adding each year and it adds up…not just to billions of tons of oxidized carbon but “tons” of $$$ with societal and economic chaos along the way. Not a nice plan for the future.

          • Broadlands wrote: “Buster believes… “The IPCC hasn’t endorsed a target of 350 ppm and there are some serious problems with Hansen’s publications on this subject.” Agree. So why are all the [liberal] politicians and policy-makers asking us to lower our carbon fuel emissions to net-zero ASAP? …”

            We are a pretty polarized society. If a conservative politician were to admit that CO2 causes some warming, he would be attacked by his peers as a heretic – even if he believed the cost of most emission reductions was much higher than the benefits. The same thing would happen to a liberal politician who admitted that most economists think warming over the past century has been net beneficial, and that the astronomical cost of limiting warming to another 0.5 or even 1.0 K doesn’t make sense. The other party is the enemy, not fellow Americans with different opinions with whom we must compromise to govern effectively.

            Some affluent liberals feel guilty because they believe that our unsustainable capitalist economy has depleted the planet of resources and permanently ruined the environment – ensuring that their descendants will be worse off than they are and ending the American Dream. They are willing to pay any price to ensure that rising CO2 emissions won’t make the challenges faced by their poorer descendants even greater. Most of the people on the planet expect economic growth and technological innovation to provide a more affluent lifestyle for themselves and their descendants. Making huge sacrifices today in hopes of improving the lives of descendants they expect to be much richer doesn’t make a lot of sense.

        • AngryScot writes:

          “The GHG effect is demonstrated in the laboratory but does not hold for the real world.”

          Alarmists want you to believe that the GHE, which is enhanced by rising GHGs, is caused simply by absorption of thermal IR. However, GHGs cause our planet to have a GHE ONLY BECAUSE the temperature of the atmosphere decreases with altitude. For this reason, no one has demonstrated the existence of a GHE in the laboratory. The interactions of GHGs with the thermal IR that carries heat from our climate system to space have been thoroughly characterized in the laboratory. The methods by which we calculate heat transfer through the atmosphere (radiative transfer calculations, especially Schwarzschild’s equation) show excellent agreement between predictions and observations at all wavelengths under a wide variety of conditions (but I’m not aware of an accessible article reviewing this agreement). Try reading:

          https://en.wikipedia.org/wiki/Schwarzschild%27s_equation_for_radiative_transfer

          AngryScot adds: “The laws of thermodynamics are sacrosanct
          – Heat energy flows from hot to cold being the major stumbling block to ‘forcings’
          – A cold body cannot heat a warm one
          – A body cannot heat itself, even with perfect IR reflection”

          All of which are true. However, according to the rarely studied field of statistical mechanics, the laws of thermodynamics have been demonstrated to be a CONSEQUENCE of large numbers of molecules and photons following the laws of quantum mechanics. The laws of thermodynamics don’t prevent a photon from being emitted by the colder atmosphere and absorbed by the warmer surface BECAUSE individual molecules are not “hotter” or “colder”. Individual molecules have kinetic energy, but not a temperature. In thermodynamics, temperature is proportional to the mean kinetic energy of a large GROUP of colliding molecules, with the kinetic energy of any one molecule in the group changing dramatically with every collision (about 10^9 times per second). Thermodynamic heat flux is the NET energy flux between two large groups of colliding molecules following the laws of quantum mechanics, and that flux always runs from warmer to colder. If you add the opposing fluxes in the KT energy balance diagram, you will also find that heat – the net energy flux – always flows from hot to cold. Kinetic energy and photons are transferred between individual molecules without any limitations based on the kinetic energy of each molecule. Since most engineers and many scientists don’t need to understand the molecular origin of the laws of thermodynamics, it is not surprising many are confused by DLR and how GHGs make our climate system warmer than it would be without them. The alarmists want you to believe things are so simple that even a politician like Al Gore or an adolescent like Greta Thunberg can properly inform technically competent citizens about the fundamental basis of climate change. Skeptics want to jump on the mistakes they make without ever fully understanding the science involved.
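
          The point about opposing fluxes can be put in numbers. A toy sketch (mine; the 288 K and 255 K temperatures are round textbook values, not from this thread): both the warm surface and the colder atmosphere emit, but the NET of the two opposing fluxes always runs from hot to cold.

```python
# Both bodies emit photons toward each other, but the NET flux is always
# from the warmer body to the colder one: q = sigma * (T_hot^4 - T_cold^4)
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def net_flux(t_a, t_b):
    """NET radiative flux from blackbody A toward blackbody B, W/m^2."""
    return SIGMA * (t_a ** 4 - t_b ** 4)

surface, layer = 288.0, 255.0          # K: warm surface, colder emitting layer
downward = SIGMA * layer ** 4          # flux from the cold layer, ~240 W/m^2
upward = SIGMA * surface ** 4          # flux from the warm surface, ~390 W/m^2

print(f"down {downward:.0f}, up {upward:.0f}, net {net_flux(surface, layer):.0f} W/m^2")
```

          The downward flux is real energy absorbed by the warmer surface, yet the second law is satisfied because the net exchange is positive from hot to cold.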

          AngryScot continues: “EM radiation spectra form discrete-frequency continua (Plank curves)”

          The figure linked below shows the spectrum of thermal IR shining down from the night sky at two radically different locations on the planet, and the dotted line shows what Planck’s Law predicts we should observe. This figure, which was once shown here at WUWT, comes from an inexpensive textbook for meteorologists by Petty called “A First Course in Atmospheric Radiation”, which doesn’t discuss climate change.

          http://wattsupwiththat.files.wordpress.com/2011/03/gw-petty-fig-8-1.jpg

          PLANCK’S LAW CLEARLY DOES NOT APPLY TO THE THERMAL IR EMITTED BY GHG’S IN OUR ATMOSPHERE THAT REACHES THE SURFACE (DLR). Observations show that law also doesn’t apply to OLR reaching space and cooling the planet (or even reaching a plane high in the atmosphere). The problem is that Planck’s law was derived by assuming that radiation is in equilibrium with a world composed of quantized oscillators. Something like this equilibrium exists inside dense materials that emit almost like ideal blackbodies, but it doesn’t exist at many wavelengths and altitudes in our atmosphere. (Absorption and emission at strongly absorbed wavelengths does approach equilibrium, and atmospheric radiation has blackbody intensity at those wavelengths.)

          Einstein devised the accepted quantum theory for the interaction of radiation and matter: Einstein coefficients for the probability of a molecule absorbing a photon of a particular wavelength (B12) and coefficients for an excited molecule spontaneously emitting a photon (A21) and for the stimulated emission of a photon by the presence of other photons (B21). (Stimulated emission is what makes lasers possible.) Quantum mechanics predicts the existence of excited rotational and vibrational states of molecules that absorb and emit thermal IR and excited electronic states that absorb and emit visible radiation. As with statistical mechanics and the molecular origins of thermodynamics, most scientists and engineers deal with dense materials that emit like blackbodies and absorb according to Beer’s Law. They are never introduced to the full quantum theory that starts with Einstein coefficients, which become absorption cross-sections in the Schwarzschild equation for radiation transfer in the lower atmosphere. Schwarzschild’s equation simplifies to Planck’s Law (for emission) when equilibrium exists between absorption and emission and to Beer’s Law (for absorption) when a powerful light source overwhelms emission.

          AngryScot continues: “EM frequency energies are not cumulative and do not interfere (unless constrained) End of the GHG effect.”

          Coherent radiation exhibits interference, but thermal IR traveling through our atmosphere is not coherent. Even when interference occurs, radiation is a form of energy and energy is conserved. The energy that is missing where “waves” destructively interfere is found where waves constructively add.

      • Yes Russel, it was said here years ago (I can’t remember by whom): bringing science to the climate non-debate is like bringing a knife to a gun fight.

        Despite the alarmists always claiming the authority of “the science” (as though such a broad body of knowledge can be addressed as a single object with a single truth), they are running a religious cult, not a science convention.

        We are up against two generations of indoctrination, and arguing objective science in our little corner of the internet, while interesting and worthy, seems sadly ineffective.

    • Sunsettommy: While you are entitled to consider climate models, AIT, and much of the other propaganda put out by alarmists to be pseudoscience, radiative transfer calculations are not. I’m weary of trying to explain that there are better reasons to be concerned about rising CO2. Radiative transfer calculations are based on quantum mechanics and laboratory spectroscopy measurements on GHGs, with the most important measurements having been made long before the hype about AGW. Experiments in our atmosphere have demonstrated that these calculations accurately predict how much thermal infrared energy flows through our atmosphere under various conditions at various wavelengths. This is validated, settled science. These calculations are the reason that we believe that a doubling of CO2 would reduce the rate of radiative cooling to space by about 3.5 W/m2 – if nothing else changed. Such a reduction of radiative cooling to space (radiative forcing) combined with the law of conservation of energy is all one needs to predict that rising CO2 will cause our planet to warm, but it doesn’t tell us where it will warm, nor how much or how fast.

      FWIW, simple calculations show that 1 W/m2 of power would be enough to produce 0.2 K/year of warming of the atmosphere and a 50 m mixed layer of the ocean (the portion mixed by surface winds that warms every summer) – ASSUMING all of the heat remained in this compartment. Over a decade or more, 1 W/m2 amounts to a NON-TRIVIAL amount of heat. One doesn’t need impossible-to-validate climate models to become infatuated with wanting to know just how much warming will occur (after removing the assumption that all the heat remains in the surface compartment)!
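
      The arithmetic can be checked directly. A back-of-envelope sketch (mine; the 50 m depth is from the comment, the density and heat-capacity numbers are standard textbook values): a sustained 1 W/m2 warms the atmosphere plus a 50 m mixed layer at roughly 0.15 K/year if all the heat stays in that compartment – the same ballpark as the 0.2 K/year figure.

```python
# Warming rate of (atmosphere + 50 m ocean mixed layer) per 1 W/m^2 imbalance,
# ASSUMING all the heat stays in this compartment.
SECONDS_PER_YEAR = 3.156e7
FORCING = 1.0                       # W/m^2

ocean = 50.0 * 1000.0 * 4186.0      # depth(m) * density(kg/m^3) * c_p(J/kg/K)
atmosphere = 1.0e4 * 1004.0         # column mass(kg/m^2) * c_p(J/kg/K)
heat_capacity = ocean + atmosphere  # J/m^2/K for the whole compartment

rate = FORCING * SECONDS_PER_YEAR / heat_capacity  # K/year
print(f"warming rate ~ {rate:.2f} K/year")
```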

      Since the IPCC began to put out “b***c***” from climate models 30 years ago, most MEASUREMENTS of mean global temperature have increased at an average rate of about 0.2 degC/decade (and a total of nearly 1 degC over the past half-century). Perhaps you weren’t paying attention. OK, UAH shows “only” 0.14 degC/decade. A group of skeptics funded by the Koch brothers reported the same rate of warming in rural areas far from UHI. ARGO has shown the ocean is warming almost everywhere, taking up about 0.7 W/m2 over the last decade. (Increased radiative cooling to space due to global warming apparently has negated the rest of current forcing of about 2.5 W/m2.) Of course, we can’t know how much natural climate variability added to OR subtracted from the warming produced by rising CO2. The authors of the FAR admitted that the changes observed through 1990 were comparable in magnitude to natural changes seen in the past, but the three decades of additional GLOBAL warming observed since 1990 have made the total change in the past half-century increasingly unprecedented. (Warming observed in a Greenland ice core is not “global warming”.)

      • BC Buster – February 2, 2020 at 1:52 am

        This is validated, settled science. These calculations are the reason that we believe that a doubling of CO2 would reduce the rate of radiative cooling to space by about 3.5 W/m2 – if nothing else changed.

        BCBuster, why don’t you and yours just quit screwing around and prove it, ….. prove the “reduced cooling” via an actual, factual, verifiable experiment.

        Such as stated below (quoting my verbiage of 10 years or so ago), …. to wit:

        Why is it that everyone persists in hashing, re-hashing and re-re-re-RE-HASHING the same ole, same ole “Climate Sensitivity to atmospheric CO2” question?

        Why is it that no one wants to perform a physical experiment to prove or disprove said “sensitivity” claim?

        It wouldn’t take much money, maybe a couple thousand dollars, max.

        Just build two (2) identical size frameworks, ……. out of 1/2″ white PVC plastic pipe, ……. with the dimensions of 20 x 10 x 8 feet square, ……. outside in an area where each will be subjected to the same environmental conditions (Sunshine, darkness, rain, wind), ……. place temperature sensing thermocouples inside of them which are connected to an external located recording device, ……… then cover them “air tight” (top, bottom and sides) with 4 mill clear plastic sheeting, …. and then, late at night, inject enough CO2 in one of the structures to increase its 400+- ppm of CO2 to say 800 ppm.

        Then, when the night time temperatures in both structures stabilize and reads the same, …….. say at 3 AM, start recording the temperatures in each structure …… and again record said temperatures every hour on the hour (or every half hour, or ten minutes) ……. for the next 24, 48 or whatever hours.

        And if CO2 is the “global warming” gas that all the proponents of AGW claims it is, then when the Sun rises in the morning and starts shining on the structures, the temperature in the structure containing 800 ppm CO2 ……. should start increasing sooner and faster and reach a greater temperature than in the other structure ….. and when the Sun starts setting in the afternoon, the temperature inside the structure with 800 ppm CO2 should remain higher than it is in the other structure until later in the night.

        And if it doesn’t, …… then the CO2 causing AGW claims are totally FUBAR … and the re-hashing of the “sensitivity” thingy should cease among learned individuals.

        But, iffen that “sensitivity” thingy is of utmost importance at insuring one’s PJE, ….. then recognizing the actual, factual science is the least of their concerns.

        Cheers

        • Sam: Thanks for the reply. You asked: “why don’t you and yours just quit screwing around and prove it, ….. prove the “reduced cooling” via an actual, factual, verifiable experiment.”

          Since you asked for a REAL EXPERIMENT, Figure 1 from the following publication (Harries 2001) shows the change in OLR that was observed from space between 1970 and 1997. The biggest reduction in radiative cooling to space that can be seen here is due to rising CH4 at 1305 cm-1. There is your EXPERIMENTAL PROOF that a rising GHG reduces radiative cooling to space. (A correction published later noted that the IMG and IRIS labels were switched. OLR really did go down.)

          http://www.grandkidzfuture.com/the-climate-problem/ewExternalFiles/Harries%202001%20GHG%20forcing%20change.pdf

          Alarmists want the public to believe radiative forcing arises simply from absorption of thermal IR by CO2, but the process is more complicated than that. For more than a year, I was deeply frustrated because I KNEW that doubling CO2 would double both absorption by and emission from CO2. OLR should be unchanged by doubling CO2 – at least to a first approximation! THE EXPERIMENT YOU PROPOSE ABOVE SHOULD SHOW NO EFFECT FROM DOUBLING CO2 FOR EXACTLY THIS REASON. It wasn’t until I saw Schwarzschild’s equation and understood radiative transfer calculations that I fully accepted the concept of radiative forcing. The GHE arises from the absorption and temperature-sensitive emission by GHGs interacting with the temperature gradient in the atmosphere. If temperature didn’t fall with altitude, there would be no GHE! (The concept of a rising characteristic emission altitude is also useful.) You have every right to be confused by and object to the many oversimplifications promoted by alarmists.

          I personally find this paper relatively unpersuasive, because it omits the main CO2 band at 667 cm-1! Photons from the center of that band that reach the satellite are actually emitted by CO2 in the stratosphere, where it has become colder, partially due to destruction of ozone. The real world is complicated! Radiative forcing from rising CO2 arises mostly from the shoulders of the main absorption band, not the center, and you can see a little of that at 700 cm-1. From my perspective, the one thing that is clearly worth noting in this paper is Figure 1b, which shows that the observed difference between 1970 and 1997 almost perfectly matches the difference predicted using radiative transfer calculations at hundreds of data points. Radiative transfer calculations work! (Calculated radiative forcing is real.)

          The chaotic atmosphere and ocean of our planet are an extremely difficult place to do definitive experiments. Warming is occurring at a rate of about 0.2 degC/decade. An El Nino (caused by chaotic fluctuations in ocean currents and winds) can cause more than a decade’s worth of “global warming” in about six months, and that warming usually disappears in the following six months. We don’t fully understand why we experienced a LIA and an MWP. Radiative forcing is increasing about 0.35 W/m2/decade, a very small change in the 240 W/m2 of OLR carrying heat to space. Making stable measurements of changes of less than 1%/decade is extremely challenging, if not impossible. Seasonal changes in GMST (3.5 K that disappears when we calculate temperature anomalies) and seasonal changes in OLR and reflected SWR are more than 10X bigger than the change in temperature and radiative forcing over a decade. Even worse, radiative forcing is causing warming, and warming is causing the planet to increase its emission of thermal IR, negating some of the imbalance at the TOA caused by increasing forcing. Causation is ambiguous. And we can’t forget clouds. There is nothing wrong with being skeptical about the limited and dubious observational evidence for forced warming! Our climate system is really complicated.

          Scientific understanding and progress are usually made by testing hypotheses using CAREFULLY CONTROLLED EXPERIMENTS in the laboratory – and then applying that knowledge to more complicated systems. The interactions between radiation and GHGs have been studied in the laboratory for nearly a century and are completely understood in terms of the theory of quantum mechanics. All of that information has been incorporated into “radiative transfer calculations”. Most skeptics are unfamiliar with this work. Wikipedia has a good article on how such calculations are done. You can try your own simplified radiative transfer calculations using the teaching tool at the other link below. (Start by looking up from the ground, 0 km, at an atmosphere that contains various amounts of a single GHG. The results may be very non-intuitive.)

          https://en.wikipedia.org/wiki/Schwarzschild%27s_equation_for_radiative_transfer
          http://climatemodels.uchicago.edu/modtran/

          Radiative transfer calculations have been validated by showing they are consistent with what we observe in the atmosphere. Take, for example, the full spectrum and total intensity of DLR shining down to the surface from GHGs in the atmosphere on a clear night. In this case, we need to measure and input the temperature, density and composition (humidity) of the atmosphere overhead at all altitudes into a program that automates radiative transfer calculations. The output will do a great job of predicting the spectrum we observe. These calculations can also predict what would be observed if CO2 were doubled and nothing else changed. However, they can’t predict how the temperature, density and composition of the atmosphere will change in response to the changed heat flux through the atmosphere. Both convection and radiation carry heat through the atmosphere, but only radiation allows heat to escape to space. Radiative transfer calculations can tell you how radiative cooling to space will change from rising CO2 alone, but you need an AOGCM to forecast how the atmosphere will respond to rising CO2. AOGCMs aren’t very good, contain numerous adjustable parameters that are “tuned”, and can’t be validated. On the other hand, radiative transfer calculations come from quantum mechanics and careful laboratory studies AND have been validated in our atmosphere.

          Hope this helps.

          • BCBuster – February 3, 2020 at 7:40 pm

            Since you asked for a REAL EXPERIMENT, Figure 1 from the following publication (Harries 2001) shows the change in OLR that was observed from space between 1970 and 1997.

            Buster, you need to educate yourself on the difference between a “study” and an “experiment”.

            A study is conducted in an ”open” system of unknown parameters whereby things are observed with potential causes being expressed to explain observations

            An experiment is conducted in a ”closed” system of known parameters whereby things are observed with potential causes being expressed to explain observed results.

            BCBuster “The biggest reduction in radiative cooling to space that can be seen here is due to rising CH4 at 1305 cm-1. There is your EXPERIMENTAL PROOF

            Buster, you just proved me correct. The greater the density of “radiant” gases in the atmosphere, …. the longer the atmosphere will remain warm. The radiant gases just don’t radiate their energy vertically, …… ya know.

            And Buster, did you not read the “title” of your cited study which specifically states, to wit:

            Increases in greenhouse forcing inferred from the outgoing longwave radiation spectra of the Earth in 1970 and 1997.

            Inferred is their only logical answer, simply because of all the “unknown” parameters.

            BCBuster “ For more than a year, I was deeply frustrated because I KNEW that doubling CO2 would double both absorption by and emission from CO2.

            OLR should be unchanged by doubling CO2 – at least to a first approximation!

            THE EXPERIMENT YOU PROPOSE ABOVE SHOULD SHOW NO EFFECT FROM DOUBLING CO2 FOR EXACTLY THIS REASON.

            “Right”, ….. doubling CO2 should … potentially ….. double both absorption by and emission from CO2.

            “Wrong”, ….. OLR should decrease in the “near term” (daylight hours) by doubling CO2, ….. but increase in the “long term” (both daylight and nighttime hours)

            And “Wrong again”, ….. my experiment will prove just like stated above, …. ”the greater the density of “radiant” gases in the atmosphere, …. the longer the atmosphere will remain warm

            Cheers

        • Sam wrote: “The radiant gases just don’t radiate their energy vertically, …… ya know.”

          Of course. Emission in all directions is measured in terms of intensity (W/m2/sr) while climate is usually concerned with flux perpendicular to a surface, measured in W/m2. When radiative transfer calculations are performed, any emission in the upward direction is decomposed into a component in the +z direction and components in horizontal directions, and any downward emission is decomposed into components in the -z direction and components in horizontal directions. The +z component adds to OLR and the -z component adds to DLR. The horizontal components approximately cancel and don’t contribute to radiative cooling to space (OLR) or warming of the surface (DLR). This is sometimes called the two-stream approximation and is a standard part of radiative transfer calculations used in climate science – which you would know if you bothered to read the links I provided concerning this subject. (DLR and OLR are not coherent sources of radiation and therefore don’t interfere.)
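
          The bookkeeping for intensity versus flux can be checked with a toy integral (my own illustration, not from the links): for isotropic radiance I in W/m2/sr, the horizontal components cancel by symmetry, and integrating the vertical (cosine-weighted) component over a hemisphere gives a flux of exactly pi*I in W/m2.

```python
import math

# Flux through a horizontal surface from isotropic radiance I (W/m^2/sr):
# F = integral over the hemisphere of I * cos(theta) dOmega = pi * I.
# Only the vertical component of each ray contributes; horizontal parts cancel.
I = 100.0                   # isotropic radiance, W/m^2/sr (arbitrary value)
n = 100_000                 # midpoint-rule steps in the polar angle theta
dtheta = (math.pi / 2) / n

flux = 0.0
for k in range(n):
    theta = (k + 0.5) * dtheta
    # dOmega = sin(theta) dtheta dphi; the phi integral contributes 2*pi
    flux += I * math.cos(theta) * math.sin(theta) * 2.0 * math.pi * dtheta

print(f"numeric flux = {flux:.2f} W/m^2, pi*I = {math.pi * I:.2f} W/m^2")
```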

          I personally don’t give a D about the differences between a “study” and an “experiment”. Let’s use the language of science: The paper I cited was testing the theory that rising GHGs reduce the rate of radiative cooling to space. That theory was tested by looking at the difference in spectra of OLR observed from space after 27 years of rising GHGs. A reduction was observed – exactly at the wavelengths and in the amounts predicted by radiative transfer calculations.

          Yes, the authors did say they “inferred” that radiative forcing had changed. Unlike those too lazy to learn about radiative transfer calculations and why they should be regarded as settled science, the authors of this paper used radiative transfer calculations to understand the cause of the difference they found in this challenging experiment. Increasing aGHGs were not the only thing that could have caused changes in the spectra that were obtained. The planet as a whole was also roughly 0.5 degC warmer, making absolute humidity higher (a feedback, not a forcing). The authors took re-analysis data from 1970 and 1997 from the exact times that spectral data were being collected, so they knew the actual temperature, pressure, and humidity everywhere on the path OLR followed from the surface to the satellite detector, and used that information to predict the complicated changes that were expected to be observed if rising aGHGs produced radiative forcing. Since the observed changes agreed with those predicted from radiative transfer calculations with rising GHGs, the authors inferred that radiative forcing had indeed been observed changing.

          You may remember that I carefully explained why our climate system is a challenging place to do experiments that illustrate how radiation and GHGs interact. We understand how radiation and GHGs interact from decades of carefully controlled experiments by physicists and chemists, but you and other skeptics stepped out for a beer when that subject was being taught. A poor analogy would be for you to point to a truss bridge – a radiative transfer calculation – and challenge the settled science that torque is force times distance, and with it the safety of the bridge. I met your challenge for evidence that rising GHGs slow down radiative cooling to space, and you are whining that the evidence was merely “inferred”. Learn the basic science of radiation and matter obtained from decades of laboratory experiments! Read the Wikipedia article about Schwarzschild’s equation for radiative transfer or, better still, Petty’s cheap textbook “A First Course in Atmospheric Radiation” intended for meteorologists.

          Sam wrote: “Buster, you just proved me correct. The greater the density of “radiant” gases in the atmosphere, …. the longer the atmosphere will remain warm.”

          The correct physics comes from Schwarzschild’s equation for radiation transfer, where dI is the change in the spectral intensity of incoming radiation of spectral intensity I upon passing an incremental distance, ds, through a medium of absorbing/emitting molecules with density n, and absorption cross-section o:

          dI = emission – absorption
          dI = n*o*B(lambda,T)*ds – n*o*I*ds

          B(lambda,T) is Planck’s function and T is the local temperature of the medium. When dI is positive, the radiation is getting stronger (from more emission than absorption) and the medium is getting cooler. If the emission term is negligible (which it is in a laboratory spectrophotometer), integration gives Beer’s Law for absorption. If absorption and emission are equal, dI is zero, I = B(lambda,T), and you have radiation of blackbody intensity independent of GHG density (n). In the atmosphere, emission is significant (so we can’t use just Beer’s Law) and absorption and temperature dependent-emission are often not in equilibrium (so we can’t use Planck’s Law). One look at OLR from space proves that Planck’s Law is inadequate.

          What happens if the density (n) of a GHG in the layer is doubled? The change (dI) DOUBLES, WHETHER the change is positive or negative, but remains zero if zero.

          Picture DLR starting downward from the TOA where I = 0. The absorption term is zero and there is some emission, so DLR gets more intense as it travels downward. Now let’s consider a layer in the middle of the atmosphere. The emission term adds the same amount to both OLR and DLR. The absorption term usually subtracts more from OLR and less from DLR, because DLR photons are emitted from higher in the atmosphere where it is colder and emission is weaker, while OLR photons are emitted from lower in the atmosphere where it is warmer and emission is stronger. So OLR gets weaker as it travels upward and DLR gets stronger as it travels downward – USUALLY.

          However, at wavelengths where the absorption cross-section is effectively zero given the local density of the GHG (n*o near zero), dI is negligible and changes in density have no effect. And if the absorption cross-section is really large, then photons don’t travel far enough between absorption and emission for the temperature to have changed. 90% of 667 cm-1 photons are absorbed by CO2 within 1 m near the surface, and there is no temperature change between where a photon is emitted and absorbed; therefore dI = 0 and a doubling of GHG density (n) would be irrelevant. (Some say absorption is saturated in these circumstances, but absorption is actually negated by an equal amount of emission: dI is 0, I = B(lambda,T), and radiation at 667 cm-1 has blackbody intensity despite any changes in CO2 density (n).) The emission from a traditional blackbody per unit surface area is independent of its thickness and the number of emitting molecules in the object.
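
          The behavior described above can be reproduced by integrating Schwarzschild’s equation numerically. A minimal sketch (my own toy code; it folds n*o*ds into an optical-depth increment and uses a crude Planck function with the standard radiation constants): through an optically thin isothermal layer the emerging radiation is far below B(lambda,T), while through an optically thick one it reaches blackbody intensity – after which adding more GHG changes nothing, exactly the “saturation” case.

```python
import math

def planck(nu_cm, t):
    """Planck spectral radiance B(nu, T) at wavenumber nu_cm (cm^-1), using
    the standard radiation constants c1 and c2 (units folded in)."""
    c1, c2 = 1.191e-8, 1.4388
    return c1 * nu_cm ** 3 / (math.exp(c2 * nu_cm / t) - 1.0)

def schwarzschild(i0, b, tau_total, steps=10_000):
    """Euler-integrate dI = (B - I) * d_tau through an isothermal layer,
    where d_tau = n * o * ds is the optical-depth increment."""
    i = i0
    dtau = tau_total / steps
    for _ in range(steps):
        i += (b - i) * dtau  # emission term minus absorption term
    return i

b = planck(667.0, 288.0)             # blackbody radiance at 667 cm^-1, 288 K
thin = schwarzschild(0.0, b, 0.01)   # optically thin: I stays far below B
thick = schwarzschild(0.0, b, 20.0)  # optically thick: I converges to B

print(f"B = {b:.4f}, thin layer = {thin:.6f}, thick layer = {thick:.4f}")
```

          Doubling n doubles d_tau, which doubles dI wherever emission and absorption are out of balance, but has no effect once I = B(lambda,T).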

          Engineers must learn the basic physics of torque (levers) before they can say anything intelligent about a truss bridge. You must learn the physics of the interactions between GHGs and radiation in an atmosphere with a temperature gradient before you can say anything intelligent about the origins of the GHE and radiative forcing. Or trust the online Modtran calculator that will perform simple radiative transfer calculations for you.

          As for the crude experiment you proposed, Tyndall did an experiment in the 1850s which demonstrated that CO2 and water vapor blocked the transmission of thermal infrared. Conservation of energy demands that the absorbing gas get warmer.

          https://en.wikipedia.org/wiki/John_Tyndall

          • BCBuster – February 5, 2020 at 7:33 am

            The horizontal components approximately cancel and don’t contribute to radiative cooling to space (OLR) or warming of the surface (DLR).

            Buster, you are still talking “trash” to me, ……. SO STOP IT.

            Claiming that thermal “heat” energy is polarized, apparently having negative and positive attributes that can cancel one another out, ……… is as asinine and imbecilic as one can be nurtured.

            Thus, the remainder of your LENGTHLY commentary ……. should be taken with “a grain of salt”.

          • Sam: I’m sorry you are incapable of understanding that radiation traveling horizontally doesn’t move heat from the atmosphere to space or to the surface – it only moves heat within the atmosphere – nor that radiation traveling horizontally to the east is approximately cancelled by radiation traveling horizontally to the west, producing no NET flux of HEAT. (Do you understand the difference between W/m2/sr and W/m2, and that sr stands for steradians?)

            A few centuries ago, Galileo deduced that the Earth circled the Sun, but based on the common-sense wisdom of earlier thinkers, the Pope punished him and banned his ideas as heresy for centuries. However, Galileo sparked the Scientific Revolution, which led to the Enlightenment and the American experiment in Democracy. Democracy depends on the idea that YOU, I and all Americans are capable of learning enough about the truth to elect better leaders than monarchies, dictatorships, theocracies, etc.

            Unfortunately, our thinking is deeply compromised by confirmation bias, our inability to assimilate and remember facts that conflict with our deepest beliefs. And today, our deepest beliefs are constantly reinforced by various social and conventional media echo chambers and spun by highly effective political propaganda.

            And even our smartest experts are subject to ego and hubris. Stephen Schneider, one of the founders of the IPCC, famously said

            “as scientists we are ethically bound to the scientific method, in effect promising to tell the truth, the whole truth, and nothing but — which means that we must include all the doubts, the caveats, the ifs, ands, and buts. On the other hand, we are not just scientists but human beings as well. And like most people we’d like to see the world a better place, which in this context translates into our working to reduce the risk of potentially disastrous climatic change. To do that we need to get some broadbased support, to capture the public’s imagination. That, of course, entails getting loads of media coverage. So we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have.”

            What ethical scientist has the right to mislead the public about science (real heresy) and tell ME what to think? I don't give a D*** how YOU DECIDE to make the world a better place; I'm merely trying to share with you what ethical scientists have learned about the narrow subject of how radiation interacts with GHGs … the reason why the host of this blog and thousands of other skeptics believe in radiative forcing. These original ethical scientists weren't today's climate scientists compromised by political agendas; they were chemists, physicists, aeronautical engineers and meteorologists working decades ago trying to understand how radiation and molecules interacted (quantum mechanics), how to design spacecraft that would radiatively cool fast enough through the upper atmosphere, and how to forecast tomorrow's weather. Climate scientists later used and refined their work to calculate what they called radiative forcing. There are a thousand good reasons why you should be skeptical of the IPCC consensus, but radiative forcing isn't one of them.

            Best wishes in your struggle with confirmation bias and your journey to enlightenment.

          • I am following your thread with interest and have a couple of questions (from an interested amateur):

            1. You state that the East/West emitted (CO2) radiation annihilates (probably the wrong term 😱) and heats the atmosphere (thereby increasing CAPE). Surely, therefore, the up/down radiation does the same. That would mean 2 things: hardly any radiation would reach the ground to heat it and nearly all emitted energy (8% of the total energy being emitted from Earth) would result in increased CAPE?

            2. You used the East/West simplification but, as the horizon is about -20° at the tropopause, then the cone of interest for DLR is 140° and not 180°? How would this affect the calculations?

            3. The main window of interest for CO2 is 14 to 16μm with 15μm as the main absorbing wavelength. What is the emitted wavelength/energy?

            Thanks, just trying to learn!

          • With the upsurge in youngsters requiring psychological help to overcome their fear of CAGW, and including, of course, her Holiness, St Greta, I have been wondering who should be held responsible for this extreme form of child abuse – Schneider would appear to be our man!

          • Who should be held responsible? Those who abandoned the 37 year-long "global cooling" scare to begin a global warming assault in 1975. Without these climate modelers offering up scary futures there would be no climate change frenzy. After all, since pre-industrial times CO2 has risen 48% but global mean temperatures only 6%. Everything else has been model-driven forecasts, dire predictions. Don't blame the teen-age puppets, blame the puppeteers. Stephen Schneider is dead.

          • AngryScot, me thinks BC Buster actually believes his noted “East” and ”West” emitted IR just keeps a going around and around the earth, bypassing one another without any interference.

            Like the Energizer Pink Bunny rabbit, ….. they just keep going, an going, an going, ……

            Buster should study the science ……. instead of mimicking, paraphrasing and plagiarizing the verbiage of others.

          • AngryScot wrote: “I am following your thread with interest and have a couple of questions (from an interested amateur)”

            Thanks for being interested and wading through long explanations.

            1. You state that the East/West emitted (CO2) radiation annihilates (probably the wrong term 😱) and heats the atmosphere (thereby increasing CAPE). Surely, therefore, the up/down radiation does the same. That would mean 2 things: hardly any radiation would reach the ground to heat it and nearly all emitted energy (8% of the total energy being emitted from Earth) would result in increased CAPE?

            When light of a single wavelength (or other waves) comes from a SINGLE source and takes two or more paths to your eye or a detector, a pattern of interference is observed – with waves combining to amplify or cancel each other in various locations. This convinced physicists that light consisted of electromagnetic waves. However, interference is ONLY observed with radiation from a single source (coherent light). The LWR emitted by the surface of the planet and by GHGs in the atmosphere is NOT coherent; it is spread over many wavelengths and doesn't exhibit interference (or annihilation). Heat is released when photons are absorbed by molecules, NOT when photons annihilate each other. (If you get your Internet signal through fiber optic cable, the signal is carried by (visible) photons traveling in BOTH directions through miles of cable without photons being annihilated.)

            When the excited state created by absorption is relaxed by collisions with neighboring molecules, some say the photon has been "thermalized", converted to kinetic energy = higher temperature. (Almost all excited states are relaxed by collisions long before they can emit a photon, but a small equilibrium fraction of GHGs is kept excited by collisions.)

            The photoelectric effect convinced physicists that light is actually composed of individual particles (photons). Electrons, photons and other fundamental particles exhibit wave-like behavior or “wave-particle duality” by following the strange laws of quantum mechanics.

            The temperature in your house can depend on how much heat your furnace puts out and how much heat is escaping out an open window. The temperature of an object (including the atmosphere) changes when the energy received and lost differ. We say the difference adds to or subtracts from the internal energy of the object. When GHGs in a particular location are emitting LWR, that location is losing energy that ends up being absorbed by the surface or elsewhere in the atmosphere or escaping to space. However, GHGs in the same location absorb the LWR emitted elsewhere. Since temperature drops with altitude in the troposphere and since emission increases with temperature, GHGs typically absorb more LWR from below than above, but they emit equally in all directions. (This produces the GHE.) Since there is little horizontal temperature variation, the horizontal NET TRANSFER of heat is negligible. More importantly, when we are discussing radiation escaping to space or being absorbed by the surface, the horizontal components of radiative flux can be ignored.

            FWIW, some people are confused when confronted by radiation traveling from the cooler atmosphere to the warmer surface. The Second Law of Thermodynamics applies only to the NET FLUX (between two large groups of molecules), not to the behavior of individual photons and molecules (which don’t have a “temperature”). More LWR flows from the warmer surface to the cooler atmosphere than vice versa, so the 2LoT is obeyed by the net flux.

            AngryScot continued: 2. You used the East/West simplification but, as the horizon is about -20° at the tropopause, then the cone of interest for DLR is 140° and not 180°? How would this affect the calculations?

            The curvature of the Earth is small compared with the thickness of the atmosphere, but the software used to calculate radiation transfer could take curvature into account. About 50% of the atmosphere lies below 5 km and about 80% below 12 km. Take the radius of the Earth as roughly 4000 km (the true value is closer to 6400 km; using it only lengthens the paths below). Envision a right triangle with vertices at: 1) the center of the Earth, 2) a point 10 km above the surface, and 3) a point on the surface, so the line 2-3 is tangent to the surface and intersects a radius at 90 deg. The angle between vertical and the tangent line is the arcsine of 4000/4010, about 86 deg. If correct, a GHG radiating from 10 km above the surface technically radiates towards the surface over an angle of 172 degrees and towards space over 188 deg. The third side of the triangle is 283 km, so a photon emitted 4 deg downward from 10 km needs to travel 283 km to just skim the surface and another 283 km to return to 10 km. Photons emitted by the surface at wavelengths in the "atmospheric window" where GHGs don't absorb might travel hundreds of kilometers, but GHGs in the atmosphere emit most strongly at the same wavelengths they absorb most strongly. The average photon escaping to space is emitted from a GHG at 5 km and usually would need to be emitted closer to vertical than horizontal to escape. As best I can tell, for the curvature of the Earth to make a difference, photons would need to travel many tens or hundreds of kilometers between emission and absorption (or escape), and few do so. Textbooks talk about neglecting curvature by using the "plane parallel approximation", something that makes sense to me after working out these details.
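The triangle arithmetic can be checked in a few lines (using the 4000 km radius from the text; the true mean radius of ~6371 km only makes the grazing path longer):

```python
import math

R = 4000.0   # Earth radius as used in the text, km (true mean radius ~6371 km)
h = 10.0     # emission altitude, km

# Angle between the local vertical and the line grazing the surface
angle_deg = math.degrees(math.asin(R / (R + h)))   # ~86 deg

# Tangent distance from the 10 km point down to the grazing point
d_km = math.sqrt((R + h) ** 2 - R ** 2)            # ~283 km
```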

            At most altitudes in the troposphere, GHGs radiate away more energy than they absorb. That is why latent heat is convected from below. Our atmosphere is too opaque to thermal infrared for all of the heat delivered by SWR to escape to space via radiation from a surface at an average temperature of 288 K. The surface would need to be about 350 K if radiation were the only way the surface cooled. (At 350 K, the lapse rate would be grossly unstable.) As the opacity of the atmosphere diminishes with altitude/density, thermal IR is able to transmit more of the energy delivered by SWR. At and above the tropopause, convection is no longer needed.

            AngryScot continued: 3. The main window of interest for CO2 is 14 to 16μm with 15μm as the main absorbing wavelength. What is the emitted wavelength/energy?

            GHGs emit at exactly the same wavelengths they absorb and the same constant (cross-section) is used to calculate both absorption and emission. The amount of absorption by a given amount of GHG is usually a constant fraction of the incoming radiation, while the amount of emission depends on roughly the 4th power of temperature.

            The best place to learn about radiation transfer is the online Modtran calculator at the link below. I suggest you start by "looking up" from 0 km (the surface) at the LWR shining down from an atmosphere containing a single GHG. (Set the others to zero.) Start with 0 ppm (see nothing) and then 1 ppm of CO2. If you are looking at a "tropical atmosphere", the strongest wavelength has an intensity equal to a blackbody at 290 K. The atmosphere 1-2 km above the 300 K tropical surface is about 290 K, so the average photon arriving at the surface comes from roughly that altitude. Keep doubling until you get to CO2 levels relevant to climate. Then try the other GHGs one at a time. Then move to simulating OLR (radiative cooling to space) using the normal mode of "looking down" from 70 km (the "top of the atmosphere" or TOA). In this mode, some of the photons you see are emitted by the surface and others come from GHGs in the atmosphere (but now you know which ones are coming from GHGs). Pay attention to the temperature vs altitude scale on the right and the blackbody curves and you will have some idea of where the photons escaping to space are emitted.

            http://climatemodels.uchicago.edu/modtran/

            AngryScot wrote: Thanks, just trying to learn!

            Due to the diversity of contributors and commenters at WUWT, you can pick up a lot of dubious information here. ScienceofDoom.com is a blog dedicated to exploring the physics of climate that was recommended and read by Steve McIntyre at ClimateAudit. The blog name and the host's insistence on sticking to textbook physics create the mistaken illusion that the host is a supporter of the IPCC consensus. After years of reading, I can't say for sure what the host really thinks about climate change, but I learned a lot of reliable science backed up by references that can be checked. If you really want to learn, start there.

          • BCBuster – February 7, 2020 at 3:38 pm

            GHGs emit at exactly the same wavelengths they absorb and the same constant (cross-section) is used to calculate both absorption and emission.

            When atmospheric N2, O2, H2O and CO2 come in direct contact with the solar irradiated “hot” earth surface they absorb thermal “heat” energy from said “hot” surface.

            Said “direct contact absorption” is the primary means/method of “heating up” the near-surface atmosphere.

            1st question: What is the wavelength (frequency) of the energy that aforesaid molecules absorbed?

            When the aforesaid gases that absorbed “heat” energy directly from the “hot” surface come in direct contact with “colder” air molecules, “heat” energy will be transferred (conducted) from the former to the latter.

            2nd question: What is the wavelength (frequency) of the energy transferred (conducted) from one air molecule to another molecule?

            If any of the aforesaid energy absorbing air molecules are H2O or CO2, they will radiate that newly absorbed energy.

            3rd question: What is the wavelength (frequency) of the energy that they radiate ….. and is it the same wavelength (frequency) that was originally conducted from the “hot” surface?

        • BCBuster wrote: “GHGs emit at exactly the same wavelengths they absorb and the same constant (cross-section) is used to calculate both absorption and emission. ”

          Sam replied: “When atmospheric N2, O2, H2O and CO2 come in direct contact with the solar irradiated “hot” earth surface they absorb thermal “heat” energy from said “hot” surface. Said “direct contact absorption” is the primary means/method of “heating up” the near-surface atmosphere. 1st question: What is the wavelength (frequency) of the energy that aforesaid molecules absorbed?

          According to the kinetic theory of gases, the temperature of a large group of colliding molecules is proportional to their mean kinetic energy. The average translational kinetic energy possessed by a molecule is (3/2)kT, where k is Boltzmann's constant. However, individual molecules can also contain internal energy in their rotation, in vibration between atoms, and in electronic excited states. Kinetic energy is continuous, but the latter three states are quantized.

          Photons with an energy equal to the energy difference between two internal states are absorbed or emitted when the rotational, vibrational or electronic state of a molecule changes. The frequency of the photon is given by E = hν, where E is the energy difference between the internal states. For things with mass, kinetic energy is E = (1/2)mv^2, expressed in joules. FREQUENCY IS A MEASURE OF ENERGY ONLY USED WITH MASSLESS PHOTONS. (Sometimes people will refer to the energy DIFFERENCE between two states in terms of the frequency of the photon emitted or absorbed moving between those states.)
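For a sense of scale (numbers I'm supplying, not taken from the comment above): a photon in CO2's 15 μm band carries a few times the thermal energy kT at 288 K, which is why thermal collisions keep only a modest fraction of CO2 molecules excited:

```python
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
k = 1.381e-23    # Boltzmann constant, J/K

lam = 15e-6              # wavelength of the main CO2 band, m
nu = c / lam             # photon frequency, ~2e13 Hz
E_photon = h * nu        # photon energy E = h*nu, ~1.3e-20 J
kT = k * 288.0           # thermal energy scale at 288 K, ~4e-21 J
ratio = E_photon / kT    # ~3.3: a few times kT
```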

          Kinetic energy is exchanged when two molecules collide, but photons are not released by collisional exchange of kinetic energy. Molecular collisions also can result in the excitation or relaxation of internal vibrational and rotational states (and excitation of electronic states above about 1000 K). In most of the atmosphere, collisional exchange of energy between internal states occurs much faster than molecules absorb or emit photons, creating a Boltzmann distribution [exp(-E/kT)] of excited states. When a Boltzmann distribution exists (also known as Local Thermodynamic Equilibrium or LTE), emission of photons depends only on temperature, not the absorption of photons.* The excited states created by absorption of photons are normally "relaxed" by collisions many orders of magnitude faster than it takes to emit a photon. For example, it takes the average excited CO2 molecule about 1 second to emit a photon, and there are about 10^9 collisions per second at STP. So millions of CO2 molecules are excited and then relaxed by collisions near the surface for every one that emits a photon.

          * We have created a few special devices where LTE doesn't exist and the usual rules don't apply: LED and fluorescent lights that emit visible light without being as hot as the sun, lasers, and microwave ovens. These special devices cause confusion about how most things behave.
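The size of the equilibrium fraction of excited CO2 follows from the Boltzmann factor exp(-E/kT); for the 667 cm-1 bending mode at 288 K (ignoring degeneracy, so only a rough estimate):

```python
import math

c2 = 1.4388      # second radiation constant hc/k, cm*K
nu_cm = 667.0    # CO2 bending-mode wavenumber, cm^-1
T = 288.0        # K, typical surface temperature

# Fraction of CO2 molecules held in the excited bending state by collisions
fraction = math.exp(-c2 * nu_cm / T)   # ~0.036, i.e. a few percent
```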

          Since molecules are constantly colliding and exchanging energy, it is somewhat pointless to talk about the kinetic energy of a single molecule in the same way we talk about the energy/frequency of a photon. Instead, we talk about the average kinetic energy in a large group of colliding molecules – temperature. The 2LoT demands that heat flow from a hotter large group of colliding molecules to a colder group. It doesn't place any limits on what individual molecules and photons do; they don't have a "temperature".

          Sam continued: When the aforesaid gases that absorbed “heat” energy directly from the “hot” surface come in direct contact with “colder” air molecules, “heat” energy will be transferred (conducted) from the former to the latter. 2nd question: What is the wavelength (frequency) of the energy transferred (conducted) from one air molecule to another molecule?

          Partly answered above. The conduction of heat by gas molecules colliding with the surface is what meteorologists and climate scientists call transfer of sensible heat. However, gas molecules travel a very short distance between collisions, so conduction can only transfer sensible heat a short distance above the surface. The thermal conductivity of air is about 0.02 W/m/K, so a sensible heat flux of 20 W/m2 can be conducted over only 1 mm if the difference in temperature between the surface and air is only 1 K. Bulk motion of air (convection, wind, turbulence) is required to move heat (or water vapor) further above the surface. Therefore sensible heat and latent heat fluxes vary in proportion to the speed of the wind – which must take over after roughly the first millimeter of upward heat flux.
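The 1 mm figure is just Fourier's law rearranged:

```python
k_air = 0.02   # thermal conductivity of air, W/m/K (value quoted above)
q = 20.0       # sensible heat flux, W/m^2
dT = 1.0       # surface-air temperature difference, K

# Fourier's law q = k_air * dT / d, solved for the layer thickness d
# over which conduction alone can carry this flux
d_m = k_air * dT / q   # 0.001 m = 1 mm
```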

          Sam continued: If any of the aforesaid energy absorbing air molecules are H2O or CO2, they will radiate that newly absorbed energy. 3rd question: What is the wavelength (frequency) of the energy that they radiate ….. and is it the same wavelength (frequency) that was originally conducted from the “hot” surface?

          So the big picture is that the molecules in the atmosphere can have any kinetic energy and that temperature is proportional to their mean kinetic energy. Frequency has nothing to do with temperature/kinetic energy. All forms of energy are distributed among the quantized internal vibrational and rotational energy states of molecules by collisions, creating a Boltzmann distribution of excited states. These states EMIT AND ABSORB photons of ONLY certain wavelengths. A wavelength-dependent absorption "cross-section" quantifies the propensity of a molecule to both absorb and emit photons of a particular wavelength. (In other fields, this concept is called an absorption coefficient or molar extinction coefficient.) Although the same constant applies to both processes, the intensity of incoming radiation determines how many photons are absorbed (a percentage), while emission varies with temperature (via the Boltzmann distribution of energy in excited states).

          My "big picture" of the atmosphere differs somewhat from the one Planck had in mind when he postulated radiation IN EQUILIBRIUM with "quantized oscillators" and derived Planck's Law (initially to explain the radiation emitted by hot black cavities). Planck postulated a Boltzmann distribution of energy over excited states, but he assumed oscillators covering all wavelengths. The excited states of the molecules in the atmosphere do not cover all wavelengths; nor do they always emit and absorb rapidly enough to produce equilibrium. (A typical education ignores these limitations.) In dense materials like liquids and solids, the vibrational and rotational states of molecules are perturbed by the motions of their neighbors into a continuum of excited states, allowing them to emit radiation of near-blackbody intensity continuously over all wavelengths. The emission lines of GHGs are slightly broadened by pressure and temperature, but they don't create a smooth continuum.

          • BCBuster, I asked for simple answers to my questions ……. and you responded with an hour-long weazelworded lecture without ever stating the frequency facts.

            "DUH", because of atmospheric collisions of air molecules …… it is entirely possible that 90% of the satellite detected radiation being emitted by atmospheric CO2 and H2O was entirely absorbed from other gas molecules and not from any surface radiation.

            Buster, here is a thermal image of “horizontally” directed IR, please explain what happens to it.

            Meaning, is the detected IR coming directly from the source (house) …. or is it being absorbed and re-emitted 50, …. 100 times before the IR sensor records it?

          • Sam wrote: “I asked for simple answers to my questions ……. and you responded with an hour-long weazelworded lecture without ever stating the frequency facts.

            When someone discusses heat in the atmosphere in terms of frequency, they need lots of help with fundamentals.

            Sam wrote: “DUH”, because of atmospheric collisions of air molecules …… it is entirely possible that 90% of the satellite detected radiation being emitted by atmospheric CO2 and H2O was entirely absorbed from other gas molecules and not from any surface radiation.

            You are about right: About 40 W/m2 of radiation emitted by the surface (average 390 W/m2) escapes directly to space through the atmospheric window (wavelengths not absorbed by GHGs). The rest comes from GHGs in the atmosphere that are excited by collisions. Our atmosphere is relatively opaque to most thermal IR near the surface. The surface can emit at wavelengths GHGs don't absorb, but GHGs can only emit at wavelengths they absorb. Fortunately, water vapor drops from more than 10,000 ppm near the surface to 3 ppm at the tropopause, and the density of the well-mixed GHGs drops by a factor of two for every 5 km increase in altitude. So the atmosphere gets more transparent with altitude. Unfortunately, the higher you go, the fewer and colder the GHGs available to radiatively cool to space. The average photon that escapes to space is emitted from an altitude of about 5 km. Some people like to say that adding GHGs to the atmosphere raises the altitude from which the average photon escaping to space is emitted, and GHGs at that colder altitude emit fewer photons.
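The "factor of two for every 5 km" rule of thumb reproduces the mass fractions quoted earlier in this thread (a rough sketch, not a standard-atmosphere calculation):

```python
def fraction_below(z_km, half_height_km=5.0):
    """Fraction of atmospheric mass below altitude z, assuming pressure
    halves every ~5 km (a rule of thumb, not a standard atmosphere)."""
    return 1.0 - 0.5 ** (z_km / half_height_km)

below_5 = fraction_below(5.0)    # 0.5  : half the air lies below 5 km
below_12 = fraction_below(12.0)  # ~0.81: about 80% lies below 12 km
```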

            Sam wrote: Buster, here is a thermal image of “horizontally” directed IR [from a house] please explain what happens to it.

            Just like thermal IR emitted by the surface, only 10% of the thermal IR emitted by the house is in wavelengths that are not absorbed by GHGs (on a vertical path through the atmosphere to space). Some of that 10% might make it through much longer “horizontal path” that reaches space because of the curvature of the Earth. (Radiation travels in a straight line, so it can’t “circle” the planet as you suggested). The bulk of the horizontal thermal infrared in your picture will be absorbed by GHGs in the atmosphere or by the house next door or trees, etc. And the atmosphere and the house next door and the trees are also emitting thermal IR back at the house, but slightly less brightly because they are colder. The wavelengths detected by these thermal imagers are at higher frequencies so they are more sensitive to differences in temperature. That is why the poorly insulated window appears so much brighter than anything else. And the differences are much greater when it is very cold outside. We live in a world where everything is shining thermal infrared at everything else, including the atmosphere.

            Sam wrote: “Meaning, is the detected IR coming directly from the source (house) …. or is it being absorbed and re-emitted 50, …. 100 times before the IR sensor records it?”

            Of course not. The window wouldn’t appear brighter than the walls if all of the photons were being absorbed by GHGs on the way to the imager. You should be smart enough to figure that out for yourself. However, at wavelengths most strongly absorbed by CO2, 90% of the photons are absorbed within 1 m. The photons emitted at that wavelength that reached the camera did come from CO2 in the air and they have an intensity appropriate for a blackbody with the same temperature as the air, not the window.
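Taking the "90% absorbed within 1 m" figure at face value fixes the Beer-Lambert e-folding length at the band center:

```python
import math

# Beer-Lambert: the transmitted fraction after a path x is exp(-x/L).
# "90% absorbed within 1 m" means exp(-1/L) = 0.10, so the e-folding
# (absorption) length is L = 1/ln(10) ~ 0.43 m at the band center.
L = 1.0 / math.log(10.0)

def transmitted(x_m):
    """Fraction of band-center photons surviving a path of x metres."""
    return math.exp(-x_m / L)

t1 = transmitted(1.0)   # 0.10 : 90% absorbed in the first metre
t2 = transmitted(2.0)   # 0.01 : 99% absorbed within 2 m
```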

          • Frank – February 9, 2020 at 3:37 pm

            “Meaning, is the detected IR coming directly from the source (house) …. or is it being absorbed and re-emitted 50, …. 100 times before the IR sensor records it?”

            Of course not. The window wouldn’t appear brighter than the walls if all of the photons were being absorbed by GHGs on the way to the imager. You should be smart enough to figure that out for yourself.

            Of course not, ….. what, …. Frank?

            Is the detected IR coming directly from the source (house) …. or is it being absorbed and re-emitted by air molecules?

            And don’t be talking trash, …… the window appears brighter simply because the IR from inside the house is transmitted through it unimpeded, …… whereas not so with the partially insulated walls.

            And Frank, ……. getta clue, …… there is absolutely no way for you to determine if the IR passing thru the window to the camera …… came directly from the window or was absorbed and re-emitted by a radiant gas molecule(s).

        • Sam replied: “Is the detected IR coming directly from the source (house) …. or is it being absorbed and re-emitted by air molecules? And don’t be talking trash, …… the window appears brighter simply because the IR from inside the house is transmitted through it unimpeded, …… whereas not so with the partially insulated walls.”

          Glass is totally opaque to thermal IR, so none of the thermal IR comes from inside the house. Heat from inside the house is conducted through the glass and thermal IR is emitted from the outside surface of the glass. That is why modern windows have a thin coating of transparent material that has a low emissivity for thermal IR. And why most modern windows are double-pane glass with a layer of air between them to reduce conduction through the glass.

          Sam continued: “And Frank, ……. getta clue, …… there is absolutely no way for you to determine it the IR passing thru the window to the camera …… came directly from the window or was absorb and re-emitted by a radiant gas molecule(s).”

          Sam, get a clue. Every day, laboratory instruments make millions of measurements of the amount of radiation of a particular wavelength that passes through (or is absorbed by) a sample. That data is converted into the concentration of a molecule of interest. For example, when you have a blood sample analyzed, many of the analyses are done using absorption of light. And it is just as easy to go from a known concentration of a molecule to predicting how much radiation it will absorb. I've personally done this thousands of times working in a laboratory. The scientists studying GHGs in the laboratory have measured their absorption over the full temperature and pressure range found in the atmosphere, because absorption lines are broadened by higher pressure and temperature. One paper I read used an instrument that repeatedly bounced a laser beam off mirrors at the ends of the sample chamber so the path traveled by the radiation was 0.2 km (200 m) long, allowing them to collect data on very dilute samples. The data for all GHGs has been combined into a database accessed by dozens of computer programs that can predict radiation transfer in the atmosphere. The predictions of those programs have been VALIDATED by experiments in the atmosphere. You can use a simple version of such a program at:

          http://climatemodels.uchicago.edu/modtran/

          Measuring the emission of thermal IR by GHGs in the laboratory is challenging because every surface in a laboratory is also emitting thermal IR. For years, I was skeptical about how well we understood emission, until I learned why the Einstein coefficients for absorption and emission were linked to each other via the Planck function for blackbody radiation.

          Of course, scientists would be able to predict what fraction of the photons of various wavelengths emitted by the window in your picture will reach the camera and what fraction of the photons arriving at the camera will have come from GHGs. This is the only part of climate science that is actually "settled science". The long list of what we don't know is the reason why dozens of parameters in climate models must be tuned.

          • Frank – February 10, 2020 at 9:33 pm

            Glass is totally opaque to thermal IR, so none of the thermal IR comes from inside the house. Heat from inside the house is conducted through the glass and thermal IR is emitted from the outside surface of the glass. That is why modern windows have a thin coating of transparent material that has a low emissivity for thermal IR. And why most modern windows are double-pane glass with a layer of air between them to reduce conduction through the glass.

            Frank, ….. you know just enough about the above subject to …. uh, …. make you dangerous.

            Please note the fact that the word “thermal” means ……… “relating to heat”.

            The thin transparent coating applied to window glass is to reflect (prevent) part of the IR from entering the house thru the window (summertime) and/or ….. to reflect (prevent) part of the IR from exiting the house thru the window (wintertime).

            The double-pane glass (thermal pane windows) prevents the “molecule-to-molecule” conduction of thermal “heat” energy through the partial vacuum between the two panes of glass.

        • Sam wrote: Frank, ….. you know just enough about the above subject to …. uh, …. make you dangerous.

          I would agree with you that many people only know enough to believe and distribute dangerous misconceptions. Hopefully I left the "dangerous" category long ago. Having directed my own research for decades, I have a fair amount of experience at judging the quality of the information and data I have accumulated and deciding whether it is reliable enough to provide a solid foundation for moving forward. The trick is to constantly question what you think you know and what you are hearing, and then to figure out whose information usually survives close scrutiny. Unfortunately, there wasn't much useful information about where radiative forcing comes from and how it is calculated. I'm now up to six textbooks, because I am not fully satisfied with the explanation of Schwarzschild's equation in any one. Grant Petty's "A First Course in Atmospheric Radiation" is inexpensive and covers what most people should know.

          Sam continued: Please note the fact that the word “thermal” means ……… “relating to heat”.

          Yes, but the phrase “thermal IR” means the main wavelengths emitted by objects at ambient temperature. In the case of climate, that would be objects from about 200-310 K. “Near IR” refers to wavelengths emitted by the sun that are longer than visible light; a decent fraction of SWR is in the near IR.
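          The split between “thermal IR” and “near IR” falls straight out of Wien’s displacement law; here is a quick sketch (the constant is the standard Wien value, the temperatures are illustrative):

```python
# Wien's displacement law: the peak emission wavelength of a blackbody
# is lambda_peak = b / T, with b ~ 2897.8 um*K.
def peak_wavelength_um(temp_kelvin):
    return 2897.8 / temp_kelvin

earth_peak = peak_wavelength_um(288)   # terrestrial "thermal IR", ~10 um
sun_peak = peak_wavelength_um(5800)    # solar peak, ~0.5 um (visible)
```

          A 288 K surface peaks near 10 μm, far from the sun’s ~0.5 μm peak, which is why a material can treat the two bands completely differently.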

          Sam wrote: “The thin transparent coating applied to window glass is to reflect (prevent) part of the IR from entering the house thru the window (summertime) and/or ….. to reflect (prevent) part of the IR from exiting the house thru the window (wintertime).”

          Kirchhoff’s Law says that absorptivity equals emissivity. So the glass marketed as low-emissivity glass (or “low-e glass”) has a coating that emits far less thermal IR than normal for glass (the e term in W = eσT^4, and W is what is seen by your thermal imager). The same coating reflects/scatters more incoming thermal IR from neighboring objects, including DLR from the atmosphere. This keeps the outside of the glass cooler than it would be if incoming thermal IR were absorbed by the glass. Even without a low-e coating, glass doesn’t transmit thermal IR.
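          The W = eσT^4 relation above can be made concrete; the emissivity figures below are typical values I have assumed for illustration, not numbers from this thread:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiated_power(temp_kelvin, emissivity):
    """W = e * sigma * T^4 -- roughly what a thermal imager sees."""
    return emissivity * SIGMA * temp_kelvin**4

# Assumed emissivities: ~0.84 for ordinary glass, ~0.04 for a low-e coating.
ordinary_glass = radiated_power(288, 0.84)  # ~330 W/m^2
low_e_glass = radiated_power(288, 0.04)     # ~16 W/m^2
```

          At the same temperature the low-e surface radiates roughly twenty-fold less, which is why it reads as “cold” to an IR camera.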

          • @ Frank –

            So the glass marketed as low-emissivity glass (or “low-e glass”) has a coating that emits far less thermal IR than normal for glass (the e term in W = eoT^4 and W is what is seen by your thermal imager).

            Frank, concerning your noted “low-e glass”, …… neither the coating nor the glass “emits” thermal IR.

            The thermal IR either passes unobstructed thru the glass ….. or a part or portion of the thermal IR is reflected by the coating like a mirror reflects visible light.

        • BCBuster [incorrectly] wrote: “seasonal variations in CO2 are different at different places around the planet. There is no seasonal change in Antarctica, because there is little seasonal increase in plant growth and decay near Antarctica.”

          Sam replied: Here is the Mauna Loa’s Keeling Curve Graph equivalent of the South Pole Antarctic “seasonal variations” …… https://cdiac.ess-dive.lbl.gov/trends/co2/csiro/CSIROCO2SOUTHPOLE.JPG

          And ps, Buster, …. there is VERY, VERY little seasonal increase in plant growth and decay in Barrow, Alaska, but the monthly mean CO2 ppm profile is greater than the Mauna Loa profile as denoted, to wit:

          https://plot.ly/~vball3ams/174/monthly-mean-co2-concentration-ppm-point-barrow-alaska.png

          Frank continues: Thanks for correcting my mistake. At the South Pole, there is roughly a seasonal sine curve with a peak-to-trough amplitude of 2 ppm superimposed on a long-term trend of about +2 ppm/yr which is [presumably] observed everywhere on the planet. At Mauna Loa, the amplitude is about 6-7 ppm and at Barrow about 12 ppm.

          However, my main point was that all of these seasonal changes in CO2 involve a forcing change much too small to produce a detectable warming! It was stupid of me to divert your attention from this key point by mentioning Antarctica. Furthermore, the heat capacity of the atmosphere and mixed layer of the ocean are too large for a modest forcing to cause appreciable warming or cooling during [short] seasonal swings in CO2. A simple calculation shows that a 1 W/m2 imbalance at the TOA only provides enough power to warm the atmosphere+mixed layer at a rate of 0.2 K/yr. (As the planet warms, it will emit more LWR to space and that imbalance will shrink if forcing remains constant.) Seasonal changes of hundreds of W/m2 in incoming solar radiation produce average seasonal warming of perhaps 7 K (more over land and less over ocean). As roughly calculated above, the forcing from a 20 ppm increase in CO2 is 0.26 W/m2, about 1000-fold smaller than seasonal changes in solar forcing. Radiative forcing predicts that 100 ppm changes in CO2 cause changes of about 1 degC within about 5-10 years. Our failure to observe a temperature change associated with seasonal changes in CO2, whose forcing is likely more than 20-fold smaller, tells us NOTHING about the credibility of the predictions of radiative forcing.
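          Both numbers above – the ~0.26 W/m2 forcing from a 20 ppm CO2 swing and the ~0.2 K/yr warming rate from a 1 W/m2 imbalance – can be sketched with the standard simplified forcing expression; the 50 m mixed-layer depth and 70% ocean coverage are my assumed values:

```python
import math

# Simplified CO2 forcing expression: dF = 5.35 * ln(C/C0) in W/m^2.
def co2_forcing(c_new_ppm, c_old_ppm):
    return 5.35 * math.log(c_new_ppm / c_old_ppm)

# A 20 ppm seasonal swing around ~400 ppm:
dF = co2_forcing(410, 390)   # ~0.27 W/m^2

# Warming rate if a 1 W/m^2 TOA imbalance heats the atmosphere plus a
# 50 m ocean mixed layer (50 m depth, 70% ocean coverage assumed).
SECONDS_PER_YEAR = 3.15e7
HEAT_CAPACITY = 0.7 * 50 * 1000 * 4186 + 1.0e7   # J/(m^2 K), ocean + air
rate_K_per_yr = 1.0 * SECONDS_PER_YEAR / HEAT_CAPACITY   # ~0.2 K/yr
```

          With those assumptions the arithmetic reproduces the comment’s figures almost exactly.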

          Furthermore, when we calculate temperature anomalies, we remove ALL seasonal effects on temperature – including any effect caused by seasonal changes in CO2!

          What is needed is for EVERYONE to have more skepticism about the science they read at blogs that oppose OR support the consensus on climate change (which is often highly distorted by activists). As Feynman famously wrote in Cargo Cult Science, “the first principle is YOU MUST NOT FOOL YOURSELF – and you are the easiest person to fool”. That is because all humans are subject to confirmation bias. We retain information that agrees with our preconceptions and fail to assimilate information that conflicts with our deeply held beliefs. You failed to ask yourself if seasonal changes in CO2 SHOULD cause significant seasonal changes in temperature. Instead, you jumped to the conclusion that the absence of such effects proves CO2 has no effect. (If you haven’t ever read it, Cargo Cult Science is the best investment of 10 minutes anyone could make.)

          http://calteches.library.caltech.edu/51/2/CargoCult.pdf

          I personally continuously test my preconceptions by reading skeptical blogs and seeing if any of their arguments (like yours) survive scrutiny.

          The same applies to climate scientists, of course, who are under immense political pressure to follow Schneider’s mantra: make the world a better place by getting lots of publicity by offering scary scenarios, making simplified dramatic statements and [worst of all] making little mention of any doubts they might have. This is a recipe that certainly leads to confirmation bias. However, climate scientists have been willing to engage (in scientific journals, but usually not in public) with the arguments of skeptics such as Lindzen, Christy, Spencer, and, especially recently, Lewis and Curry (on ECS).

          Our planet – with its chaotic weather and climate and the difficulty of making accurate measurements over the long periods of time it takes for CO2 to change – is a lousy place to study the effect of CO2 on climate (especially for amateurs). Radiative forcing from radiative transfer calculations provides a reliable foundation for beginning any investigation into climate change. A radiative forcing of 3.6 W/m2 from doubled CO2 deposits – over decades – a lot of energy in our climate system.

      • “These calculations are the reason that we believe that a doubling of CO2 would reduce the rate of radiative cooling to space by about 3.5 W/m2 – if nothing else changed.”

        The part in bold type (my emphasis) – “if nothing else changed” – being the whole point. Here in the real world, “other things” DO change, and those “feedbacks” are net-negative. Which we know because no “runaway effect” has ever occurred, even at far higher CO2 levels in the past, AND because far higher CO2 levels than today could not PREVENT massive DECREASES in the Earth’s temperature in the past.

        So the Earth’s climate history shows us a climate that is essentially indifferent to the atmospheric CO2 level, period. And to the extent there is a correlation, it runs in reverse, with temperature driving atmospheric CO2, NOT the other way around.

        • AGW is Not Science claims that “the Earth’s climate history shows us a climate that is essentially indifferent to the atmospheric CO2 level, period”

          You can also find an AGU talk by Richard Alley that attributes essentially all climate change for at least the past 500 million years to changes in CO2. I’d venture to say that Professor Alley is a little more familiar with this data than you are.

          https://www.youtube.com/watch?v=RffPSrRpq_g

          In truth, much of the proxy data for CO2 isn’t very reliable, and it is still being revised. You are completely correct to say that correlation is not causation: warming oceans certainly release CO2, quite apart from the theory that CO2 causes warming. Apparently scientists now have much less confidence that the air bubbles trapped in the Vostok ice core are about 800 years younger than the surrounding ice. Diffusion might have occurred for several millennia before the bubbles were trapped. I have decided Professor Alley must be wrong about the Miocene – a period when there was an ice cap on Antarctica and alligators in the Arctic. How can GLOBAL temperature be controlled only by CO2 when the two polar regions were behaving so differently? (:))

          Alarmists don’t care whether you believe that rising CO2 causes AGW for sound reasons or for dubious reasons. AGW is a religion for many, and all they care about is whether or not you are a believer. IMO, paleoclimatology is a lousy reason for believing CO2 must cause warming. However, as I have discussed nearby, radiative transfer calculations that predict a 3.6 W/m2 reduction in radiative cooling to space are based on quantum mechanics, careful laboratory experiments and have been shown to make accurate predictions about observed radiation transfer in our atmosphere. This is the “settled science” that says rising CO2 MUST cause SOME warming.

          According to the law of conservation of energy, if an object starts losing less heat than it receives, it must start warming. (In a sense, rising CO2 decreases the emissivity of our planet (LWR) without affecting its absorptivity of SWR.) What happens next? Eventually our planet will warm until incoming and outgoing radiation are equal. Does it take 1K, 2K, 3K or 4K of surface warming for the planet to emit (LWR) and reflect (SWR) an additional 3.6 W/m2 to space? (The answer to this question must take into account the effects of Planck, WV, LR, cloud and surface albedo feedbacks produced by warming. It also must take into account that a doubling of CO2 might reduce radiative cooling to space by an amount somewhat different from 3.6 W/m2 on a warmer planet.) Whichever it is, ECS will be 1K, 2K, 3K or 4K, respectively.
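          The closing question can be framed as a one-line energy-balance sketch: equilibrium warming is the forcing divided by the net feedback parameter. The non-Planck feedback value below is an illustrative assumption, not a measured result:

```python
# Energy-balance sketch: equilibrium warming dT = F / lambda_net, where
# lambda_net (W/m^2/K) combines the Planck response (~3.2 W/m^2/K) with
# the WV, LR, cloud and albedo feedbacks. The 1.2 W/m^2/K figure is an
# illustrative assumption for strongly net-positive feedbacks.
def equilibrium_warming(forcing_wm2, feedback_param):
    return forcing_wm2 / feedback_param

planck_only = equilibrium_warming(3.6, 3.2)     # ~1.1 K, no feedbacks
net_positive = equilibrium_warming(3.6, 1.2)    # ~3 K if feedbacks weaken lambda
```

          The whole ECS debate is, in this framing, a debate about the value of lambda_net.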

          • BCBuster – February 3, 2020 at 9:38 pm

            I’d venture to say that Professor Alley is a little more familiar with this data than you are.

            Buster, you can choose your mentors that you believe are “experts”, …… but you can’t choose the scientific facts that you want.

        • Indifferent? Geological and geochemical studies on late Eocene sediments reveal that during the Tertiary period atmospheric CO2 was more than double today’s values and the climate was mild. The pH of the oceans was much lower, but the carbonate plankton thrived. When the CO2 dropped, polar ice sheets began to form…

          Nature 461, 1110-1113 (22 October 2009)
          “Atmospheric carbon dioxide through the Eocene–Oligocene climate transition
          Paul N. Pearson, Gavin L. Foster, Bridget S. Wade
          Geological and geochemical evidence indicates that the Antarctic ice sheet formed during the Eocene–Oligocene transition 33.5–34.0 million years ago. Modelling studies suggest that such ice-sheet formation might have been triggered when atmospheric carbon dioxide levels fell below a critical threshold of ~750 p.p.m.v. During maximum ice-sheet growth, pCO2 was between 450 and 1,500 p.p.m.v., with a central estimate of 760 p.p.m.v.”

          • Broadlands wrote: “Geological and geochemical studies on late Eocene sediments reveal that during the Tertiary period atmospheric CO2 was more than double today’s values and the climate was mild. the pH of the oceans was much lower but the carbonate plankton thrived. When the CO2 dropped polar ice sheets began to form…”

            Before ice sheets began to form in Antarctica the climate there was “mild”, but many might say the rest of the planet was too warm and sea level was too high. You are correct that plankton and the food chain below them did fine at lower pH, and there were coral reefs (probably fewer and at higher latitudes than today). The key question is whether the important species dependent on CaCO3 present today will be able to adapt over the next century, or whether most will be wiped out until species more tolerant of low pH evolve by random mutation (a process likely to require hundreds or thousands of centuries). That depends on whether the genes needed to tolerate low pH and warmer water are still in the gene pool and can spread fast enough.

            I’m pretty disgusted with many of the graphs showing the relationship between CO2 and temperature in various eras. Any graph without a log scale for CO2 (preferably log2(CO2/280), i.e., the number of doublings from 280 ppm) is lying about the importance of high levels of CO2. And any graph that doesn’t show the uncertainty in proxy records for CO2 (and temperature) is lying too. Here are some of the better ones:
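            The log scaling argued for here is a one-liner; note that 760 ppm (a figure discussed elsewhere in this thread) is only about 1.4 doublings from 280 ppm:

```python
import math

# Number of CO2 doublings from the 280 ppm pre-industrial level:
# forcing scales roughly with this quantity, not with absolute ppm.
def doublings(co2_ppm, ref_ppm=280.0):
    return math.log2(co2_ppm / ref_ppm)

one = doublings(560)      # exactly 1.0 doubling
two = doublings(1120)     # exactly 2.0 doublings
eocene = doublings(760)   # ~1.44 doublings, despite the "high" absolute ppm
```

            Because forcing is roughly logarithmic in concentration, a linear ppm axis visually exaggerates the effect of very high paleo CO2 levels.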

            http://parisbeaconofhope.org/figures/parisbeaconofhope_2016-fig1.01_after.png
            Wikipedia ???

            https://www.ipcc.ch/site/assets/uploads/2018/02/Fig5-02-820×1024.jpg
            AR5 WG1 Figure 5.2

            http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-6-1.html
            AR4 WG1

          • The pH of the Earth’s oceans and connected seas varies widely: geographically, monthly, seasonally. It is absurd to believe that a local change in CO2 is going to affect biomineralization globally.

            A.G. Mayor actually studied pH in more than one place, showing how much it can vary:

            https://www.jstor.org/stable/pdf/984531.pdf

            CO2 has more than doubled in the past and the biosphere did just fine…it’s called limestone, the result of biomineral deposition of calcite and aragonite.

    • Agreed, and I’m also sick of this naked bullshit:

      “What matters is that, without man-made emissions, CO2 concentrations would not increase.”

      Since they are not measuring all of the “sources” and “sinks” for atmospheric CO2, this statement is pure nonsense. Further, at least one “source” that can be readily inferred from the ice core reconstructions is that some of the increase in atmospheric CO2 levels today is just an echo of the Medieval Warm Period (based on the ~800 year time lag for CO2 levels following a temperature rise), which has nothing to do with man-made emissions. Add to that the effect of temperature on CO2 levels, and you have TWO similar, but separate, sources of “rising” CO2 levels that have nothing to do with “man-made emissions.”

      • This was posted earlier: 33 years ago a paper with the title “Carbon Dioxide and People” was published. The authors (Newell & Marcus) plotted the annual Mauna Loa CO2 values against global population. The statistical correlation was almost perfect between the sum total of human activities (population) and atmospheric CO2. The problem? There were never any “blips” on the charts to indicate natural inputs… global volcanic events, ENSOs? The strong positive correlation remains even today. This seems to say that natural sources can be added to, which should be obvious. The stable carbon isotopes should tell us. They seem to support natural sources being about 75% and man-made 25%.

      • All of the current sources have been considered by the CDIAC. Put them all together and compare them with the stable carbon isotopic data and the answer is that about 25% is man-made and the rest is natural. The evidence that humans have added CO2 is unassailable. Read the paper by Newell and Marcus… ‘Carbon Dioxide and People’.

        • Broadlands: You could look at the 270-280 ppm of CO2 that has been found in Antarctic ice cores covering the last 10 millennia and conclude that the release and uptake of CO2 were in equilibrium during the Holocene. We know that a colder ocean will absorb more CO2 (and did so during the last ice age), but the four centuries of the LIA had almost no impact on CO2. Unfortunately, it takes a long time for snow to be compressed into ice with trapped air bubbles, so ice cores don’t provide a record of the initial rise of CO2 during the early industrial revolution.

          Scattered measurements of CO2 made on land during these years are an unreliable measure of global CO2 because combustion, respiration and photosynthesis disturb local CO2 concentrations. Keeling chose to measure CO2 on top of Mauna Loa at night because trade winds constantly carried in fresh well-mixed air and the downdrafts at night originated from much higher in the atmosphere. (No measurements are made during the day, when on-shore winds sweep air up the mountain.) By that time (the early 1960s), CO2 had risen from 280 to 330 ppm, and it has since risen above 410 ppm.

          If ALL of this increase did not come from anthropogenic sources, where did it come from – after 10 millennia of stability? Well, warming of the ocean certainly could have caused some CO2 to outgas, but there hadn’t been much global warming by the early 1960s. The descent into the LIA and the exit from the LIA presumably involved a larger temperature change than we experienced from 1900 to 1960, and CO2 barely changed. So it seems absurd to postulate that warming caused 50 ppm of CO2 to outgas from the ocean before Keeling began measuring.

          During the last ice age, global temperature was about 6 degC colder and CO2 was 100 ppm below the pre-industrial 280 ppm. That is a change of about 17 ppm/degC, and we have experienced about 1 degC of warming. Unfortunately, it takes about a millennium for the meridional overturning circulation to bring CO2-rich water from the bottom of the ocean to the surface, where it can outgas CO2 under low pressure. So a change of 17 ppm/degC is what one might expect to observe a millennium after a temperature change – explaining why the LIA didn’t cause CO2 levels to drop appreciably.

          The 97/98 El Nino and 98/99 La Nina produced about 0.3 degC of warming and cooling. If you look at the Mauna Loa data, CO2 rose 3 ppm (rather than the usual 2 ppm) during the El Nino and only 1 ppm during the La Nina. Perhaps this 1 ppm represents CO2 outgassing from and uptake into the mixed layer of the ocean (roughly top 50 m) that is stirred by the wind and warms and cools with the seasons.

          We have burned enough fossil fuel to have caused CO2 to rise from 280 ppm to about 540 ppm, so there is no reason to assume that ANY significant fraction of the rise to 410 ppm is due to natural processes. Half of the CO2 man has emitted is missing – driven into the ocean and into sinks on land because CO2 was higher than 280 ppm, the level where the natural processes that release and take up large amounts of CO2 were in balance.
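          The arithmetic behind “half of the CO2 man has emitted is missing”, using the comment’s own round numbers:

```python
# Round numbers from the comment: cumulative fossil emissions would have
# raised CO2 from 280 to ~540 ppm if every molecule had stayed airborne;
# observed CO2 is ~410 ppm.
potential_rise = 540 - 280       # 260 ppm if nothing were absorbed
observed_rise = 410 - 280        # 130 ppm actually in the air
airborne_fraction = observed_rise / potential_rise   # 0.5: half is "missing"
```

          The other half has gone into the ocean and land sinks.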

          There are other lines of evidence – from the uptake of C14 after atomic bomb testing and from the absence of any C14 in fossil fuels – though such arguments can produce misleading results, and I don’t remember those debates well. Common sense and ten millennia of stable CO2 in ice cores tell me that CO2 doesn’t change “naturally”.

          • BCBuster: Before there were humans burning carbon for energy atmospheric CO2 rose and declined naturally. The process involves the carbon cycle…a balance between photosynthesis and aerobic respiration with carbon burial involved, both as organic biomass and biological carbonate sediments. What’s left over is in the ocean-atmosphere and biosphere system. Photosynthetic production of oxygen has increased the percentage ratio of oxygen to CO2 to about 525 to one…as huge amounts of carbonate have been buried geologically along with “fossil fuel”.

            So..how much has come from human activity?

            The DOE’s Carbon Dioxide Information Analysis Center (CDIAC) in an FAQ calculates from fossil fuel combustion data that up to the year 2000 anthropogenic contribution was 14% for fossil fuels.

            Here is their argument… source: [http://cdiac.ornl.gov/faq.html#Q7].

            “Anthropogenic CO2 comes from fossil fuel combustion, changes in land use (e.g., forest clearing), and cement manufacture. According to Houghton and Hackler, land-use changes from 1850-2000 resulted in a net transfer of 154 PgC to the atmosphere. During that same period, 282 PgC were released by combustion of fossil fuels, and 5.5 additional PgC were released to the atmosphere from cement manufacture. This adds up to 154 + 282 + 5.5 = 441.5 PgC, of which 282/441.5 = 64% is due to fossil-fuel combustion. Atmospheric CO2 concentrations rose from 288 ppmv in 1850 to 369.5 ppmv in 2000, for an increase of 81.5 ppmv, or 174 PgC. In other words, about 40% (174/441.5) of the additional carbon has remained in the atmosphere, while the remaining 60% has been transferred to the oceans and terrestrial biosphere. The 369.5 ppmv of carbon in the atmosphere, in the form of CO2, translates into 787 PgC, of which 174 PgC has been added since 1850. From the second paragraph above, we see that 64% of that 174 PgC, or 111 PgC, can be attributed to fossil-fuel combustion. This represents about 14% (111/787) of the carbon in the atmosphere in the form of CO2.”

            Using the CDIAC numbers, the percentage derived from land-use (deforestation) would be about 35% released (154/441.5), with about 8% remaining (61/787). The total CO2 in the atmosphere derived from all anthropogenic sources then becomes 14% plus 8% = about 22%.
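            The CDIAC percentages quoted above can be checked directly (all figures in PgC, taken from the FAQ text):

```python
# Carbon budget, 1850-2000, per the quoted CDIAC FAQ (PgC):
land_use, fossil, cement = 154.0, 282.0, 5.5
total_emitted = land_use + fossil + cement            # 441.5 PgC
fossil_share = fossil / total_emitted                 # ~64%

atm_increase = 174.0    # PgC added to the atmosphere (81.5 ppmv rise)
atm_total = 787.0       # PgC in the atmosphere in 2000
airborne_fraction = atm_increase / total_emitted      # ~40% stayed airborne
fossil_in_atm = fossil_share * atm_increase / atm_total        # ~14%
landuse_in_atm = (land_use / total_emitted) * atm_increase / atm_total  # ~8%
```

            Each of the FAQ’s stated percentages falls out of this arithmetic to within rounding.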

            (2) Another, independent, calculation uses stable carbon isotope data. It agrees reasonably well with the CDIAC. The carbon isotope data show that median delta C-13 decreased from minus 7.598 permil in 1981 to minus 8.081 permil in 2002 at Mauna Loa. Over the same period of time, Mauna Loa TOTAL CO2 increased from 340.11 ppmv to 373.16 ppmv, an increase of 33.05 ppmv or 9.72%. The average C-13 value for fossil fuel CO2 is about minus 27 permil. Thus, if the CDIAC is correct, a calculation from these data should confirm it.
            Calculation: 9.72% x (-27) = -2.62; -2.62 x 14% = -0.367. Adding this to -7.6 yields -7.97, not far from -8.081 permil. A “best fit” number? Calculation: 9.72% x (-27) = -2.62; -2.62 x 18.3% = -0.481. Added to -7.6 (in 1981), this yields -8.081, which equals the C-13 value in 2002… exactly.

            Thus, there are two independent approaches that yield ~14% and ~18% respectively, with an average of ~16%. Adding this average to the ~8% derived for land-use yields ~24%. CDIAC numbers yield ~22%. Therefore, a little less than a quarter of the CO2 present in the atmosphere remains from global anthropogenic sources (people) and a bit more than three-quarters is natural?

        • Broadlands wrote: “The DOE’s Carbon Dioxide Information Analysis Center (CDIAC) in an FAQ calculates from fossil fuel combustion data that up to the year 2000 anthropogenic contribution was 14% for fossil fuels.”

          There is a difference between the CAUSE of the rise in atmospheric CO2 and the SOURCE of the CO2 molecules in the atmosphere. Different sources have different ratios of carbon isotopes and that is what CDIAC has analyzed. An analogy might help.

          Let’s consider a family with $280,000 of savings (280 ppm) and a father that earns $10,000/month, with expenses that total $10,000/month. Just like CO2 levels during the Holocene, their savings have remained stable, because income (natural emissions of CO2) and expenses (natural uptake of CO2) are equal. When the children get old enough to go to school, the mother goes to work (and humans begin to burn fossil fuel). At first, the mother makes only $2,000/month, and the value of their savings account starts to rise. What is the CAUSE of their rising savings account? I HOPE you will agree that even though the father is the SOURCE of 83% of the family income, the CAUSE – the thing that changed – of the rise in their savings account is the mother going to work.

          Now suppose this family keeps their money in cash in their dryer. When the mother and father earn money, they write their initials on the bills, put them in the dryer, and give it a spin to mix the bills. (These initials are analogous to the carbon isotopes; the dryer and the atmosphere mix bills and CO2 thoroughly.) When they pay expenses, they grab bills at random from the dryer. With time, their savings will rise and the percentage of bills with the mother’s initials will slowly rise toward 17%. The mother’s work (burning fossil fuel) would be the SOURCE of 17% of the money (CO2); the fraction of bills with the mother’s initials would be growing but would not yet have reached 17%; yet her work would be the entire CAUSE of the rise in their savings.

          To continue the analogy, their extra wealth encourages the family to spend an extra $1,000/month (and higher levels of CO2 in the atmosphere speed up photosynthesis, while Henry’s Law says that the amount of CO2 in the top of the ocean will rise in proportion to the partial pressure of CO2 in the atmosphere). Now, $2,000/month of the mother’s income in marked bills goes into the dryer; eventually 17% of those bills will have her initials on them, but the money stored in the dryer is rising at only $1,000/month.

          This is analogous to the situation when Keeling started monitoring CO2 on Mauna Loa in the 1960s: CO2 was 330 ppm, we were burning enough fossil fuel to increase CO2 by 2 ppm/yr ($2,000/month) and CO2 in the atmosphere was rising only 1 ppm/yr (a $1,000/month increase in savings). Today CO2 is 410 ppm, we are burning enough fossil fuel for CO2 to rise 4 ppm/yr (the mother is earning more money), CO2 in the atmosphere is rising 2 ppm/yr, and 14% of the CO2 molecules in the atmosphere have isotopic ratios consistent with a fossil fuel SOURCE (the mother’s initials on the bills) – but the burning of fossil fuels is the CAUSE, the factor that changed, of the rise in CO2.
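          The dryer analogy can even be run as a toy simulation. This deterministic sketch uses the analogy’s numbers (father $10,000/month, mother $2,000/month, spending $11,000/month) and removes mother-marked bills in proportion to their current share, which is what grabbing bills at random from a well-mixed dryer does on average:

```python
# Deterministic version of the "dryer" analogy.
savings = 280_000.0     # analogous to 280 ppm of CO2
mother_bills = 0.0      # bills carrying the mother's initials

for month in range(1200):               # run for 100 years
    savings += 12_000                   # father 10k + mother 2k deposited
    mother_bills += 2_000
    spend = 11_000                      # spending removes bills in proportion
    mother_bills -= spend * mother_bills / savings
    savings -= spend

mother_fraction = mother_bills / savings   # converges toward 2/12 ~ 0.17
```

          The mother’s share of the bills converges toward 2/12 ≈ 17%, while savings grow by exactly her net $1,000/month: the SOURCE fraction and the CAUSE are different numbers.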

          Knowing the mother’s income, the amount of money in the dryer, and the percentage of bills with the mother’s initials, we can make and test hypotheses about the father’s income and total family expenses (unknown natural emission and natural and enhanced uptake of CO2). With perfectly accurate numbers, we can solve a mathematical equation for these unknowns. In practice, fluctuations in spending and uncertainty in the data limit our ability to precisely deduce these unobserved flows of money. And if the father’s unobserved income and unobserved normal expenses could change, we wouldn’t be able to deduce anything. Ten millennia of stable CO2 in ice cores strongly suggest that natural processes (the father’s income and normal family expenses) didn’t change in the 20th century.

          The real world is more complicated. The father and mother have wallets where they temporarily store bills, the dryer (or cookie jar) at home, and several bank accounts. These reservoirs of money have different sizes, and the money flows between them at different rates. Photosynthesis can take up CO2 quickly, but some parts of plants decompose or are eaten within a year, while others last decades and some are permanently buried. It takes a millennium for the extra (or reduced) CO2 that a cooler (or warmer) ocean dissolves according to Henry’s law to circulate through the bottom of the ocean, though shallower loops exist. Parts of the ocean are saturated with calcium carbonate, which is slowly precipitating onto shallow ocean floors, being subducted by tectonic plates, and returned to the atmosphere as CO2 by volcanoes.

          It is easy to observe CO2 in the atmosphere, but these other reservoirs for carbon and the rates of flux between them are poorly understood. Supposedly 7%/yr of the C14 released into the atmosphere by atomic bomb testing disappeared from the atmosphere, but some of that C14 was taken up by plants and soil and began returning to the atmosphere even as slower and larger sinks like the ocean continued to remove C14. We see only the net change in C14. Likewise, carbon from the fossil fuels we are burning is flowing into and out of a complicated series of reservoirs. The IPCC’s Berne model is simply a set of exponential equations adjusted to match the changes we have observed and the projections of Earth System models with reservoirs of assumed sizes and flow rates connecting them. The Berne model doesn’t tell us that, even if we reach 1000 ppm of CO2, CO2 will fall below 400 ppm within about a millennium as it is transported into the deep ocean – but other studies do.

          Hope this helps.

          • BCBuster can suppose all sorts of analogies, but he is right about one thing: “Different sources [of CO2] have different ratios of carbon isotopes.” Exactly, and that’s what I analyzed, not the CDIAC. Obviously, most of the CO2 is natural (it has been cycled around a long time); the rest has been provided by humans oxidizing both buried carbon and recent carbon for energy for hundreds of years. The evidence that humans have added CO2 from all sources is unassailable. Read the paper by Newell and Marcus… ‘Carbon Dioxide and People’. Update their numbers. The very high correlation remains.

          • Broadlands: The whole point of my analogy was to demonstrate that knowing the SOURCE of the CO2 or the money (bills with the mother’s or father’s initials to show who earned the money) doesn’t tell you the CAUSE of the change in CO2 or the family’s savings!

          • BCBuster: The source of the carbon humans have oxidized is mostly fossil but some is recent. It has been added to whatever the background amount is that has been recycled over geological time. The “fingerprint” is recorded in the biologically fractionated stable carbon isotopic values of what is present in the atmosphere.

          • Broadlands – February 3, 2020 at 1:33 pm

            33 years ago a paper with the title “Carbon Dioxide and People” was published. The authors (Newell & Marcus) plotted the annual Mauna Loa CO2 values against global population. The statistical correlation was almost perfect between the sum total of human activities (population) and atmospheric CO2.

            Increases in World Population & Atmospheric CO2 by Decade

            year — world popul. – % incr. — May CO2 ppm – % incr. — avg ppm increase/year
            1940 – 2,300,000,000 est. ___ ____ 300 ppm est.
            1950 – 2,556,000,053 – 11.1% ____ 310 ppm – 3.3% —— 1.0 ppm/year
            [March 03, 1958 …… Mauna Loa — 315.71 ppm]
            1960 – 3,039,451,023 – 18.9% ____ 320.03 ppm – 3.2% —— 1.0 ppm/year
            1970 – 3,706,618,163 – 21.9% ____ 328.07 ppm – 2.5% —— 0.8 ppm/year
            1980 – 4,453,831,714 – 20.1% ____ 341.48 ppm – 4.0% —– 1.3 ppm/year
            1990 – 5,278,639,789 – 18.5% ____ 357.32 ppm – 4.6% —– 1.5 ppm/year
            2000 – 6,082,966,429 – 15.2% ____ 371.58 ppm – 3.9% —– 1.4 ppm/year
            2010 – 6,809,972,000 – 11.9% ____ 393.00 ppm – 5.7% —— 2.1 ppm/year
            2019 – 7,714,576,923 – 11.7% ____ 414.66 ppm – 5.5% —— 2.1 ppm/year
            Source CO2 ppm: ftp://aftp.cmdl.noaa.gov/products/trends/co2/co2_mm_mlo.txt

            Based on the above statistics, to wit:

            Fact #1 – in the past 79 years – world population has increased 235% (5.4 billion people) – atmospheric CO2 has increased 37.3% (112 ppm)

            Fact #2 – human generated CO2 releases have been exponentially increasing every year for the past 79 years (as defined by the population increase of 5.4 billion people).

            Fact #3 – the burning of fossil fuels by humans has been exponentially increasing every year for the past 79 years. (as defined by the population increase of 5.4 billion people).

          • The correlation between human activities (population) and atmospheric CO2 remains almost perfect. “The trend line is a quadratic curvilinear function with a correlation of .9985, the best of several tested fits to the numerical data. Others, rectilinear, logarithmic, and semi-logarithmic gave correlations well over .99.” Newell & Marcus, 1987, “Carbon Dioxide and People”. The point to be emphasized is that humanity has no technology capable of putting that CO2 ‘genie’ back in the bottle in any MEANINGFUL amounts. Nor will reducing emissions to net-zero lower it. A waste of time and $$$ to even try.
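            The quadratic fit from Newell & Marcus can’t be reproduced without their original data, but even a plain linear Pearson correlation computed from the decade table posted upthread comes out above 0.99; a sketch:

```python
# World population (billions) and May CO2 (ppm) by decade, taken from
# the table posted upthread (1950-2019):
pop = [2.556, 3.039, 3.707, 4.454, 5.279, 6.083, 6.810, 7.715]
co2 = [310.0, 320.03, 328.07, 341.48, 357.32, 371.58, 393.00, 414.66]

def pearson_r(xs, ys):
    """Plain linear (Pearson) correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

r = pearson_r(pop, co2)   # comes out above 0.99
```

            Of course, two monotonically rising series will always correlate strongly; correlation alone cannot settle the causation arguments in this thread.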

          • Nor will reducing emissions to net-zero lower it. A waste of time and $$$ to even try.

            Right you are, ….. Broadlands, …… and here is continuation of my above post, to wit:

            Fact #4 – a biyearly or seasonal cycling of an average 6 ppm of atmospheric CO2 has been steadily and consistently occurring each and every year for the past 61 years (as defined by the Mauna Loa Record and Keeling Curve Graph).

            Fact #5 – atmospheric CO2 has been steadily and consistently increasing at an average yearly rate of 1 to 2 ppm per year for each and every year for the past 61 years (as defined by the Mauna Loa Record and Keeling Curve Graph).

            Conclusions:

            Given the above statistics, it appears to me to be quite obvious that for the past 79 years (or the 61 years of the Mauna Loa Record) there is absolutely no direct association or correlation between:

            #1 – increases in atmospheric CO2 ppm and world population increases;

            #2 – the biyearly or seasonal cycling of an average 6 ppm of atmospheric CO2 and world population increases;

            #3 – the biyearly or seasonal cycling of an average 6 ppm of atmospheric CO2 and the exponential yearly increase in fossil fuel burning;

            #4 – the average yearly increase in atmospheric CO2 of 1 to 2 ppm and the exponential increase in fossil fuel burning;

            #5 – there is absolutely, positively no, per se, “human (anthropogenic) signature” to be found anywhere within the 61 year old Mauna Loa Atmospheric CO2 Record.

            #6 – this composite graph of 1979-2013 UAH satellite global lower atmosphere temperatures and yearly May max CO2 accumulations is literal proof that green growing/decomposing NH biomass and/or near surface air temperatures have little to no effect whatsoever on atmospheric CO2 ppm quantities.

          • Sam.. Your chart shows little or no correlation between CO2 and global temperature. If you were to plot the monthly ENSO 3.4 equatorial Pacific sea surface temperature anomalies (Hadley database) against monthly Mauna Loa CO2 you would get the same result…no correlation! The high correlations are between population (people’s activities) and CO2. But, none between the CO2 ‘control knob’ and global temperatures. Yet, we are supposed to urgently lower our carbon fuel emissions to net-zero??

          • Broadlands – February 11, 2020 at 5:18 am

            If you were to plot the monthly ENSO 3.4 equatorial Pacific sea surface temperature anomalies (Hadley database) against monthly Mauna Loa CO2 you would get the same result…no correlation!

            Broadlands, I am absolutely positive that you will get no correlation as stated above.

            And likewise, Broadlands, ….. iffen you were to plot the monthly Miami, Florida seaside surface temperature anomalies …… against the monthly Canadian Arctic seaside surface temperature anomalies …… you would get the same result…no correlation!

            The equatorial Pacific sea surface is but a small portion of the vast amount of ocean surface waters that reside in the SH.

            Broadlands, the ocean surface waters are “warming”….. as you can plainly see via this Graph of Sea surface temperatures for Australia — 1910 – 2010

            Or the Sea surface temps – 1870-2010
            https://bobtisdale.files.wordpress.com/2013/05/figure-14.png

            Or the Sea surface temps – 1880-2018
            https://www.epa.gov/sites/production/files/styles/large/public/2016-07/sea-surface-temp-figure1-2016.png

          • And all of these oscillations are natural. The current “global warming” has oscillated back and forth for over 100 years to reach plus 0.83°C (±0.5°C). Any correlation with atmospheric CO2 occurred when there were periods of warming (e.g. up to 1938) but anti-correlate when there was cooling… 1938 to 1975. The careful work of Guy Callendar on the greenhouse effect is added proof of that. The bottom line? Humans cannot even predict, much less alter the Earth’s natural variability by CO2 mitigation. Oxidized carbon weighs too much to capture and safely bury by the billions of tons needed. And there are not enough places to store it geologically.

          • Broadlands – February 12, 2020 at 4:54 am

            The current “global warming” has oscillated back and forth for over 100 years to reach plus 0.83°C (±0.5°C). Any correlation with atmospheric CO2 occurred when there were periods of warming (e.g. up to 1938) but anti-correlate when there was cooling… 1938 to 1975.

            Broadlands, GETTA CLUE, …….. current “global warming” means, infers and or implies …. the global average near-surface air temperature .. which is a “pie-in-the-sky” phony bologna dream which has never been and never will be correctly calculated. And even if it could be, it wouldn’t be of any importance, any more than last week’s newspaper is, or last year’s average college basketball scores.

            And 2ndly, near-surface air temperatures only have a minor effect on local atmospheric CO2 ppm quantities because microbial decomposition is temperature sensitive.

            The major effect on atmospheric CO2 ppm quantities, as defined by the Keeling Curve Graph, is driven by the temperature of the SH ocean water, ….. which includes a bi-yearly (seasonal) increase/decrease and a yearly average increase. Both of which can be affected by El Ninos, La Ninas, volcanic eruptions, etc. Like so, to wit:

            1989 _ 5 _ 355.89 …. +1.71 La Nina __ 9 … 350.02
            1990 _ 5 _ 357.29 …. +1.40 __________ 9 … 351.28
            1991 _ 5 _ 359.09 …. +1.80 __________ 9 … 352.30
            1992 _ 5 _ 359.55 …. +0.46 Pinatubo _ 9 … 352.93
            1993 _ 5 _ 360.19 …. +0.64 __________ 9 … 354.10
            1994 _ 5 _ 361.68 …. +1.49 __________ 9 … 355.63
            1995 _ 5 _ 363.77 …. +2.09 _________ 10 … 357.97
            1996 _ 5 _ 365.16 …. +1.39 _________ 10 … 359.54
            1997 _ 5 _ 366.69 …. +1.53 __________ 9 … 360.31
            1998 _ 5 _ 369.49 …. +2.80 El Niño __ 9 … 364.01
            1999 _ 4 _ 370.96 …. +1.47 La Nina ___ 9 … 364.94
            2000 _ 4 _ 371.82 …. +0.86 La Nina ___ 9 … 366.91
            2001 _ 5 _ 373.82 …. +2.00 __________ 9 … 368.16

          • Sam with the clue. But, the global community is very concerned. The Parisian policy-makers and politicians find it very important and they urge us to get rid of a few billion tons of CO2 so that the rise in temperature (real or imagined) will not become a real and existential ‘climate emergency’. Suggest you inform them of their pie and bologna. Give them a clue 🙂

          • Broadlands wrote: “[Sam’s] chart shows little or no correlation between CO2 and global temperature. If you were to plot the monthly ENSO 3.4 equatorial Pacific sea surface temperature anomalies (Hadley database) against monthly Mauna Loa CO2 you would get the same result…no correlation! The high correlations are between population (people’s activities) and CO2. But, none between the CO2 ‘control knob’ and global temperatures. Yet, we are supposed to urgently lower our carbon fuel emissions to net-zero??”

            Annual increases in CO2 (currently about 0.5%/yr) and seasonal swings in CO2 (currently about 3% from seasonal peak to trough) are much too small to have any effect on temperature. Somewhere above I derived a simple no-feedbacks way to convert changes in radiation into changes in temperature: dW/W = 4*dT/T. A 1% change in radiation is a 0.25% change in absolute temperature before feedbacks. From radiative forcing, a 280 ppm change in CO2 is 3.6 W/m2. +20 ppm CO2/decade is +7%/decade = 0.26 W/m2/decade = a 0.1%/decade decrease in OLR (240 W/m2) = a 0.027%/decade increase in temperature = 0.077 degC/decade without feedbacks, and 1.5-4.5 fold more with feedbacks. If you are seeking evidence that CO2 doesn’t cause the amount of warming the IPCC implies, you need to start with a change in CO2 big enough to cause a bigger change in temperature than natural variability does.

            FWIW, seasonal variations in CO2 are different at different places around the planet. There is no seasonal change in Antarctica, because there is little seasonal increase in plant growth and decay near Antarctica.
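            The arithmetic chain above can be reproduced in a few lines (a sketch using the same round numbers as the comment: F_2x = 3.6 W/m2, OLR = 240 W/m2, mean surface temperature 288 K, and the comment’s linear approximation of forcing versus concentration, which overstates things slightly since real forcing is logarithmic in concentration):

```python
# No-feedbacks conversion of a radiative perturbation into a temperature change.
# Differentiating W = sigma*T^4 gives dW/W = 4*dT/T, so dT = T * dW / (4*W).
T = 288.0     # K, mean surface temperature
OLR = 240.0   # W/m2, outgoing longwave radiation
F2X = 3.6     # W/m2 per doubling of CO2 (value used in the comment above)

d_frac = 20.0 / 280.0        # +20 ppm/decade on a 280 ppm base, ~ +7%/decade
dW = d_frac * F2X            # ~0.26 W/m2/decade of forcing (linear approximation)
dT = T * dW / (4.0 * OLR)    # no-feedbacks warming per decade

print(f"dW = {dW:.2f} W/m2/decade, dT = {dT:.3f} degC/decade")  # 0.26, 0.077
```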

          • BCBuster – February 12, 2020 at 12:09 pm

            FWIW, seasonal variations in CO2 are different at different places around the planet. There is no seasonal change in Antarctica, because there is little seasonal increase in plant growth and decay near Antarctica.

            Buster, it would sure improve your credibility iffen you would learn to …. “put your mind in gear before putting your keyboard in motion”.

            Here is the Mauna Loa’s Keeling Curve Graph equivalent of the South Pole Antarctic “seasonal variations” …… https://cdiac.ess-dive.lbl.gov/trends/co2/csiro/CSIROCO2SOUTHPOLE.JPG

            And ps, Buster, …. there is VERY, VERY little seasonal increase in plant growth and decay in Barrow, Alaska, but the monthly mean CO2 ppm profile is greater than the Mauna Loa profile as denoted, to wit:

            https://plot.ly/~vball3ams/174/monthly-mean-co2-concentration-ppm-point-barrow-alaska.png

        • Broadlands wrote: “Any correlation with atmospheric CO2 occurred when there were periods of warming (e.g. up to 1938) but anti-correlate when there was cooling… 1938 to 1975. The careful work of Guy Callendar on the greenhouse effect is added proof of that. The bottom line? Humans cannot even predict, much less alter the Earth’s natural variability by CO2 mitigation.”

          The work of Callendar and those that preceded him was deeply flawed, because surface temperature is determined by a combination of convection and radiation – not by radiation alone. If the Hadley cell and other convective phenomena that carry heat into the upper atmosphere speeded up in response to warming from CO2, the heat that built up in response to a radiative imbalance at the TOA would be found in the upper troposphere and not at the surface. Convection is the surface’s “air conditioner” that removes more than half of the solar energy absorbed by the surface.

          In the 1960s, Manabe and Wetherald (who were building the first primitive AOGCM) recognized that the temperature at the surface and in the troposphere is controlled by “radiative-convective equilibrium”: Wherever and whenever radiative cooling fails to produce a stable lapse rate, convection would (in the long run) carry enough heat aloft to produce a stable lapse rate. Without convection, the surface would need to warm to about 340 K before radiation carried away the 165 W/m2 of heat delivered by SWR. That is why the lapse rate averages 6.5 K/km in most of the troposphere, though the lapse rate varies widely with altitude from day to day and even hour to hour. Radiative-convective equilibrium was the fundamental concept that turned everyone’s attention away from the radiative imbalance at the surface produced by rising CO2 (only about 0.8 W/m2 for 2XCO2) to the much larger radiative imbalance created at the boundary between the atmosphere and space, aka the top of the atmosphere or TOA. I’m constantly sickened by propaganda claiming that the fundamentals of AGW have been properly understood since Arrhenius. (Arrhenius’s measurements missed the 15 um absorption band of CO2 and assumed that moonlight was unchanged as it traveled through the atmosphere – an assumption Arrhenius knew was incorrect.)

          And, in the 1960s, Keeling also began measuring the concentration of CO2 in the atmosphere and proved that about half of the CO2 released by burning fossil fuels was actually accumulating in the atmosphere. The contributions of Keeling and Manabe provided a valid foundation for the concept of GHG-mediated GW.

          (Sorry for the confusion created when replying to your comment about something Sam said.)

          • BCBuster February 15, 2020 at 3:17 am

            And, in the 1960s, Keeling also began measuring the concentration of CO2 in the atmosphere and proved that about half of the CO2 released by burning fossil fuels was actually accumulating in the atmosphere.

            OH, thank you, thank you, … BCBuster, ….. for telling us that it was Charles Keeling that discovered that “magic” isotope of atmospheric CO2, …… to wit:

            The CAGW secret they don’t want you to know.

            There is a nasty ole Anthropogenic Global Warming secret about CO2 that the proponents of CAGW are not telling you. Surprise, surprise, there are actually two (2) different types of CO2.

            There is both a naturally occurring CO2 molecule and a hybrid CO2 molecule that has a different physical property. The new hybrid CO2 molecule contains an H-pyron which permits one to distinguish it from the naturally occurring CO2 molecules.

            The H-pyron or Human-pyron is only attached to and/or can only be detected in CO2 molecules that have been created as a result of human activity. Said H-pyron has a Specific Heat Capacity of one (1) GWC or 1 Global Warming Calorie that is equal to 69 x 10 -37th kJ/kg K or something close to that or maybe farther away.

            Thus, said H-pyron is very important to all Climate Scientists that are proponents of CO2 causing Anthropogenic Global Warming (CAGW) because it provides them a quasi-scientific “fact” that serves two (2) important functions: 1) it permits said climate scientists to calculate an estimated percentage of atmospheric CO2 that is “human caused” ……. and 2) it permits said climate scientists to calculate their desired “degree increase” in Average Global Temperatures that are directly attributed to human activity.

            As an added note, oftentimes one may hear said climate scientists refer to those two (2) types of CO2 as “urban CO2” and ”rural CO2” because they can’t deny “it is always hotter in the city”.

            And there you have it folks, the rest of the story, their secret scientific tool has been revealed to you.

            Yours truly, Eritas Fubar

          • “The work of Callendar and those that preceded him was deeply flawed, because surface temperature is determined by a combination of convection and radiation – not by radiation alone.”

            BCBuster: For practical purposes it doesn’t matter how surface temperatures are determined theoretically. What matters are the trends that they take with respect to the trend in atmospheric CO2. Temperatures are determined by thermometers. Callendar used the Smithsonian’s World Weather Records, carefully chosen to minimize the urban heat island effect. The 1930s were warm. After that they cooled until 1975 while CO2 steadily rose. Anti-correlate.

  3. First, there are seven greenhouse gases counted by FAR in its scenarios, but one of them (stratospheric water vapor) is created through the decay of another (methane). I haven’t checked whether water vapor forcing according to FAR was greater than in the real world, but if that happened the blame lies with FAR’s inaccurate methane forecast; in any case stratospheric H2O is a small forcing agent and did not play a major role in FAR’s forecasts.

    Two things:

    1 – The decay of methane results in both H2O and CO2.

    2 – Some folks think stratospheric H2O (SWV), “is important, being on the same order of magnitude as the global mean surface albedo and cloud feedbacks in the multi-model mean.” link

    Water vapor is important at all altitudes since its absorption (and therefore emission) bands overlap the most important CO2 15 um band, even in the dry arctic atmosphere of Alaska. link

    • commieBob – February 1, 2020 at 2:40 pm

      Water vapor is important at all altitudes since its absorption (and therefore emission) bands overlap the most important CO2 15 um band,

      commieBob, …… don’t you be frettin that “overlap” thingy …… because NASA employs some really “tricky” satellite sensors that can detect the activity of a single atmospheric molecule, …. thus they can accurately measure how many H2O molecules and how many CO2 molecules there is at any given time, in “given” portion of the atmosphere, ….. and therefore NASA can accurately determine which of said molecules are absorbing and radiating IR energy.

  4. I’ll post here what I posted also in the comments section of Curry’s blog, as it’s relevant to the issue.

    Hausfather’s paper aims to calculate the “implied TCR” of climate models. He looks at the w/m2 of forcing used in climate model simulations, and compares them with an estimate of the w/m2 of forcing that actually happened. Thus he concludes that one model over-estimated real-world forcing by 20%, another under-estimated it by 10%, or whatever. By “implied TCR” Hausfather means the warming that climate models projected not in absolute terms (°C), but per unit of forcing (°C / w/m2).

    The key sentence in Hausfather’s paper is:
    ‘We express implied TCR with units of temperature using a fixed value of F_2x = 3.7 W/m2’

    While Hausfather’s method sounds very reasonable, and is certainly better than simply comparing the ºC projected by a model with reality, there is a significant mistake. F_2x is the forcing that arises from a doubling of atmospheric CO2 concentration. The problem is, it makes no sense to use a fixed value of F_2x = 3.7w/m2, when in fact different models or projections assumed different values as the forcing that results from a doubling of CO2.

    To see the problem of using a fixed F_2x value, imagine that CO2 is the only forcing agent, and that a climate model makes a forecast of future temperatures based on CO2 concentrations that happen to exactly match reality. Clearly, in this case, if the climate model’s TCR is correct, then modeled temperatures should match reality. But, if the model’s F_2x is 4w/m2 rather than 3.7w/m2, then Hausfather’s method will conclude that the model “overestimated” forcing by 8.1%, because 4 / 3.7 = 1.081. While this “overestimation” is true if you’re talking about w/m2, it’s meaningless in terms of climate sensitivity; the actual model TCR will be 8.1% higher than the value Hausfather reports.
    (It just happens that the First Assessment Report assumed F_2x = 4w/m2)
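    A toy calculation makes the size of the bias concrete (a sketch: the hypothetical model TCR of 2.0 °C and the half-doubling forcing are made-up inputs for illustration; the two F_2x values are the ones discussed above):

```python
# A model whose true TCR is 2.0 C and whose own F_2x is 4.0 W/m2, evaluated
# over a period whose forcing equals half a doubling of CO2.
TCR_TRUE = 2.0    # C per doubling (hypothetical)
F2X_MODEL = 4.0   # W/m2, as assumed by FAR
F2X_FIXED = 3.7   # W/m2, the fixed value in Hausfather's implied-TCR formula

forcing = 0.5 * F2X_MODEL                    # W/m2 over the period
warming = TCR_TRUE * forcing / F2X_MODEL     # what the model projects: 1.0 C

# Implied TCR computed with the fixed F_2x instead of the model's own value:
tcr_implied = warming * F2X_FIXED / forcing  # 1.85 C
print(f"implied TCR = {tcr_implied:.2f} C, true TCR = {TCR_TRUE:.2f} C")
# The ratio is exactly 4.0/3.7 = 1.081: the model's TCR is under-reported by 8.1%.
```

    Expressing the forcing as a fraction of each model’s own F_2x, rather than in raw w/m2 against a fixed 3.7, removes this factor entirely.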

    The way to get around this issue and do an apples-to-apples forcing comparison between models and reality is to express forcing not in raw w/m2, but as a percentage of F_2x. Using this method, I find that FAR over-estimated forcing since 1990 by 40 or 45%, rather than the 55% stated by Hausfather. I haven’t repeated the exercise for other models in Hausfather’s paper, but model TCR will be under-estimated in every case in which model F_2x is above 3.7w/m2, and I believe that’s pretty much all the old climate models.

    I emailed the lead author explaining the issue, so I suppose he’s working on a correction.

    (Please notice that, even if FAR ‘only’ overestimated forcings by 40% or so, CO2 and methane may account for all or even more than all the net over-estimate. There’s also over-estimation due to the Montreal Protocol gases, of course, but it seems to be about as big as the under-estimation due to tropospheric ozone and aerosols.)

    • Thanks Alberto, an interesting article, though I admit I do not have the time or concentration span to deal with more than half that amount of writing.

      For example, the depletion of stratospheric ozone, colloquially known as the ‘ozone hole’, has a cooling effect (a negative forcing).

      What is the basis of that claim? Ozone depletion cools the lower stratosphere since it interacts with incoming solar. That implies that it has the opposite effect on the lower climate.

      See fig 11 in my article here, with refs.
      https://judithcurry.com/2015/02/06/on-determination-of-tropical-feedbacks/

      Also please note that the unit of power watt (lower case w) is abbreviated with a capital W. Despite citing sources where it is correct, you consistently get it wrong. It does not help your credibility if you don’t even know how to write the units.

      • The Antarctic ‘ozone hole’ is a very temporary seasonal event. Ultra-cold temperatures are required. Any loss of ozone is quickly restored when the wind-driven polar vortex breaks down.

    • Posted both here and at Curry’s blog.

      Posted both here and at WUWT.

      I’ll qualify the above statement. In FAR, Hausfather’s method is wrong because it takes forcing as stated by the report’s Figure 6; this is what the Supplementary Information says, and indeed I digitized Figure 6 and the raw forcing values correspond to what Hausfather says: just over 0.6W/m2/decade, or 1.64W/m2 from 1990 to 2017. This raw forcing arises from the model’s F_2x of 4W/m2, but Hausfather then calculates an implied TCR on the basis of F_2x = 3.7W/m2. So the model’s real TCR (at least for the years involved) will be 8.1% higher than what Hausfather reports.

      In general, old climate models used a value of 4W/m2 as F_2x. However, this does not mean that the implied TCRs calculated for all these models are wrong. I’m going out now so I don’t have time to check; I’ll try to see this in more detail later, but off the top of my head some old models did NOT provide a chart of forcings, like FAR’s Figure 6. Rather, Hausfather calculated the forcings on the basis of CO2 concentrations projected by these models. If that’s the case, then the implied TCR calculated by Hausfather will be correct, because he’s being consistent in using the same value of F_2x both to calculate forcings and to estimate TCR.

  5. “There is about 4 GtC/yr accumulating in the atmosphere. We are emitting some 9 GtC/yr. It means that oceans and terrestrial systems are net sinks.”
    This statement assumes constant natural emissions, but a thorough analysis of the evolution of CO2 in the atmosphere (Salby, Berry, Harde) shows that most of the rise in concentration is natural. The only evidence for nearly constant natural emissions is ice cores, and there is ample evidence that those data are questionable for quantitative analysis.

    • In support of this natural effect, one must take exception to Comendador’s statement “without man-made emissions, CO2 concentrations would not increase. ” Warming from whatever source, natural or anthropogenic, drives CO2 out of the oceans. ENSO is but one example of natural warming.

      • And you have to add to the effect of current temperature rise the effect of PRIOR temperature rises, as seen in the ice core reconstructions (~800 year lag) – which means part of current CO2 level rise is attributable to the Medieval Warm Period the Climate Nazis continue to attempt to erase from history.

  6. “However, it’s not true for CO2: the overforecast in concentrations happened because in FAR’s Business as usual scenario over 60% of CO2 emissions remain in the atmosphere, which is a much higher share than has been observed in the real world.”
    The 61% of CO2 emissions covers both natural emissions and human-induced emissions.
    This leads to the world carbon budget and Le Quere 2018, and issues of retention time (RT).
    Judith notes that methane decomposes into CO2 after about 10 years.
    How long does CO2 remain in the atmosphere?
    The received wisdom from many climate scientists is that the RT is more than a century.
    Many other scientists are sceptical, with Segalstad’s papers arguing for periods as low as five years.
    In land management terms, Freeman Dyson argues for a half life (a CO2 molecule going round the carbon cycle to be replaced by another CO2 molecule being the full life) of twelve years.
    In May last year, on an ABC program in Australia, climate scientist David Karoly opined that we know “with absolute certainty” that all of the warming of the last century is human produced.
    This was in response to a claim that Australia’s share of the present CO2 concentration of 413 ppm is ‘1.3% of 3% (or sometimes quoted 4%, i.e. annual fossil fuel contributions) of 413 ppm.’
    For the US, it would be ‘17% of 3% of 413ppm.’
    His argument, which has traction among mainstream climate scientists, is that the rise of some 40% from 280 ppm in pre-industrial times to today’s figure is the benchmark figure for such calculations.
    So Karoly says, ‘No. It’s 1.3% of 40% of 413 ppm’.
    I have great difficulty with this proposition.
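    For concreteness, here is how the two framings work out numerically (a sketch; only the percentages quoted above are used, and which framing is appropriate is exactly the point in dispute):

```python
# Two framings of "Australia's share" of atmospheric CO2, as discussed above.
CO2_NOW = 413.0     # ppm, present concentration
AUS_SHARE = 0.013   # 1.3% of global annual fossil-fuel emissions

# Framing 1: 1.3% of the ~3% annual fossil-fuel contribution
annual_framing = AUS_SHARE * 0.03 * CO2_NOW        # ppm
# Framing 2 (Karoly): 1.3% of the ~40% rise since pre-industrial times
cumulative_framing = AUS_SHARE * 0.40 * CO2_NOW    # ppm

print(f"annual framing:     {annual_framing:.2f} ppm")      # 0.16 ppm
print(f"cumulative framing: {cumulative_framing:.2f} ppm")  # 2.15 ppm
```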

    • I didn’t see mention of NOx. Did I miss something?

      As more biomass is used for fueling generators of various kinds, NOx increases (a lot).

      • NOx is not a greenhouse gas itself. It can contribute to an increase in ozone concentrations, but that’s counted simply as ozone forcing. I believe you’re mistaking it for N2O, which is in fact a greenhouse gas; the concentration of N2O has increased mainly due to the use of nitrogen-based fertilizers.

        Regarding the issue of the retention time for CO2 in the atmosphere that the previous commenter raised, this has been covered at WUWT many times before; the problem is that even though a CO2 molecule can be taken up by the ocean, normally when that happens another CO2 molecule is released. So the CO2 molecules currently in the air may not be the same ones that mankind emitted, but the increase in CO2 concentrations is caused by mankind’s emissions anyway. As mentioned in the article, “the fraction of emissions that remains in the atmosphere” is a mathematical construct, not a description of what actually happens with the carbon cycle.

        A more accurate description would be “the increase in CO2 concentrations caused by man-made emissions, as a percentage of those emissions”, but that’s a mouthful.
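        That percentage is straightforward to compute (a sketch; 2.124 GtC per ppm of CO2 is a standard conversion factor, and the ~9 GtC/yr emitted and ~4 GtC/yr accumulating are the round figures quoted earlier in these comments):

```python
# "Airborne fraction": the increase in atmospheric CO2 expressed as a share of
# man-made emissions -- a bookkeeping ratio, not a tracking of molecules.
GTC_PER_PPM = 2.124  # GtC of atmospheric carbon per 1 ppm of CO2

def airborne_fraction(ppm_rise_per_yr, emissions_gtc_per_yr):
    return ppm_rise_per_yr * GTC_PER_PPM / emissions_gtc_per_yr

# ~4 GtC/yr accumulating (about 1.9 ppm/yr) out of ~9 GtC/yr emitted:
frac = airborne_fraction(4.0 / GTC_PER_PPM, 9.0)
print(f"observed airborne fraction ~ {frac:.0%}")  # ~44%, vs. over 60% in FAR
```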

        • Alberto,
          Thanks for your explanation.
          The ‘more accurate description’ you outline is perfectly clear.
          I am aware of the WUWT discussions of the RT issue in the past.
          Professor David Karoly’s belief that so many human produced carbon dioxide molecules have remained in the earth’s atmosphere since 1850 that they ‘swamp’ natural emissions and allow a claim that all of the warming in that time is anthropogenic remains controversial.

        • “So the CO2 molecules currently in the air may not be the same ones that mankind emitted, but the increase in CO2 concentrations is caused by mankind’s emissions anyway.”

          Sorry, but no. That conclusion can only be reached via the application of massive assumptions and circular logic. The fact is, except for the human fossil fuel emissions (and even those are only indirectly “measured”), we are not measuring all of the sources and sinks for atmospheric CO2, which means we don’t know all of the causes of the increase, nor which are the primary “drivers” of the net result.

          Having said that, we can identify some contributors that have nothing to do with mankind’s fossil fuel emissions; (1) the “echo” of previous warm periods as reflected in the ice core reconstructions (with ~800 year lag) from the Medieval Warm Period, and (2) the effect of (current) temperature rise. Both based on warmer surface water in the oceans (primarily, but not exclusively) and Henry’s Law.

  7. I find it very hard to believe methane stays in the atmosphere for ten years.

    The reaction is CH₄ + 2O₂ → CO₂ + 2H₂O,
    plus a bunch of energy, because it is a strongly exothermic reaction.

    So a molecule of methane will bumble around in the atmosphere, on average for ten long years, bumping into gazillions of molecules of oxygen, before being oxidized?

    Please explain, or give a credible source.

    • Thermodynamics tells whether a reaction can happen, kinetics tells how fast it will happen. Kinetics are slow for methane oxidation at around 15C, without a catalyst.

      • Yes, Scissor, maybe.

        But has that been accurately measured, or is “10 years” yet another wild ass guess produced by fooling about on an X-box?

        • “Kinetics are slow for methane oxidation at around 15C, without a catalyst.”

          How are these “kinetics” determined? I too was suspicious of the claim that a flammable reactive molecule like methane would remain in a mixture containing 20% O2 for ten years without reacting.
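          For what it’s worth, the ten-year figure is not about direct reaction with O2 at all: at atmospheric temperatures methane is kinetically stable against O2, and its dominant sink is oxidation initiated by hydroxyl (OH) radicals. The quoted lifetime is the e-folding time of that roughly first-order removal; a minimal sketch of what such a lifetime implies:

```python
import math

# First-order (exponential) removal: fraction of a methane pulse remaining
# after t years, given an e-folding lifetime tau (the "~10 years" figure).
def remaining(t_years, tau=10.0):
    return math.exp(-t_years / tau)

half_life = 10.0 * math.log(2.0)  # ~6.9 years for a 10-year e-folding time
print(f"after 10 yr: {remaining(10.0):.1%} remains; half-life ~ {half_life:.1f} yr")
```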

  8. The fatal flaw is the assumption where Mr Comendador wrote, “What matters is that, without man-made emissions, CO2 concentrations would not increase.”

    This assumption is based on the belief in the validity of the Bern Model. The Bern Model was the basis of NASA’s CO2 simulation for 2006 that is in this YouTube video of a supercomputer run released to the public in November 2014:
    https://www.youtube.com/watch?v=x1SgmFa0r04

    OCO-2 data destroyed the Bern Model’s predictions. A detailed reading of the OCO-2 team’s papers, published in Science in October 2017, compared with the details of what the Bern Model predicts, demonstrates the failure of its predictions of where the relevant sinks, sources, and flux rates are across the planet.
    No one wants to talk about that failure. But the reality is the assumptions based on the Bern Model are false. Thus the belief that, “without man-made emissions, CO2 concentrations would not increase” is almost certainly incorrect.

    • Dr Ed Berry has this important paper in preprint.

      From the Abstract:
      “Human emissions through 2019 have added only 31 ppm to atmospheric CO2 while nature has added 100 ppm.”

      PREPRINT: “THE PHYSICS MODEL CARBON CYCLE FOR HUMAN CO2”
      by Edwin X Berry, Ph.D., Physics
      https://edberry.com/blog/climate/climate-physics/human-co2-has-little-effect-on-the-carbon-cycle/

      ABSTRACT
      The scientific basis for the effect of human carbon dioxide on atmospheric carbon dioxide rests upon correctly calculating the human carbon cycle. This paper uses the United Nations Intergovernmental Panel on Climate Change (IPCC) carbon-cycle data and allows IPCC’s assumption that the CO2 level in 1750 was 280 ppm. It derives a framework to calculate carbon cycles. It makes minor corrections to IPCC’s time constants for the natural carbon cycle to make IPCC’s flows consistent with its levels. It shows IPCC’s human carbon cycle contains significant, obvious errors. It uses IPCC’s time constants for natural carbon to recalculate the human carbon cycle. The human and natural time constants must be the same because nature must treat human and natural carbon the same. The results show human emissions have added a negligible one percent to the carbon in the carbon cycle while nature has added 3 percent, likely due to natural warming since the Little Ice Age. Human emissions through 2019 have added only 31 ppm to atmospheric CO2 while nature has added 100 ppm. If human emissions were stopped in 2020, then by 2100 only 8 ppm of human CO2 would remain in the atmosphere.

        • I think the answer is not as clear as Dr Berry’s paper suggests. Although I agree with some aspects of his developed theory about a bathtub analogy, there is the difficult problem that the CO2 proxy data (ice cores and stomata proxies) indicate the atmospheric CO2 during the MWP or Holocene Thermal Optimum never got above 340-360 ppm.

        • “Joel O’Bryan February 1, 2020 at 5:56 pm

          Holocene Thermal Optimum…”

          According to Loydo, it never happened.

          • I think Dr. Comendador is spot-on correct about the uncertainties of the effects of mankind’s land-use changes on the global carbon cycle.
            The disconnect probably occurs in land use, as those changes are likely much more consequential for carbon cycling and carbon sequestration than even our copious fossil fuel burning.

            One of the common memes that is parroted around is that the Amazon and tropical jungles are the “lungs of the biosphere.” That urban legend says, “Our tropical forests are producing copious amounts of O2 (from water) and sequestering vast amounts of carbon.”

            In light of the OCO-2 data, that is probably only half true: the oxygen part. Per the OCO-2 data, tropical rain forests are carbon sources, not sinks.
            Oooopps.
            https://www.scientificamerican.com/article/surprisingly-tropical-forests-are-not-a-carbon-sink/
            and
            “A new NASA study provides space-based evidence that Earth’s tropical regions were the cause of the largest annual increases in atmospheric carbon dioxide concentration seen in at least 2,000 years.”
            https://www.nasa.gov/press-release/nasa-pinpoints-cause-of-earth-s-recent-record-carbon-dioxide-spike

            We actually got hints that this was true during the run of the Biosphere-2 experiment outside Tucson, Arizona. Biosphere-2 is a large rain forest enclosed in a sealable dome. The experiment failed because CO2 levels kept rising, and the operators had to keep opening vents to exchange ambient air and bring the CO2 levels down. (They tried to blame the concrete absorbing CO2, but that was false.) So the big hint from 1996 was that tropical jungles are net CO2 producers, completely at odds with the Bern Model predictions.

            Reading up on the Biosphere-2 failure is, in hindsight, why the Bern Model fails too,
            https://apnews.com/56b7e682362016c1d97f094c2cde4a46

        • Joel: Re your post
          “there’s is the difficult problem of the CO2 proxy data (ice cores and stomata proxies) that the atmospheric CO2 during the MWP or Holocene Thermal Optimum never got above 340-360 ppm”

          Remember there is a time delay of several hundred years in the ice core data – warming precedes increasing CO2 by several hundred years (roughly 400-800).

          • I’m always very suspicious of claims of how paleo variability compares to current measurements. This is serious apples and oranges stuff. There is simply not the temporal resolution in proxies to properly represent changes on the timescale of current measurements.

            IPCC says CO2 became significant in the “latter half of the 20th century.” That may not even equate to one data point in the proxy records, and we are comparing direct measurements to ‘proxy’ processes which do not reflect instantaneous values but, in the case of ice cores, are multidecadal averages due to physical changes.

            How fast do plant stomata counts change in response to atm composition? What will current ice contain in 1000 years time?

            This is part of the lie of the Marcott Shake ‘n’ Mix paper, which spawned a raft of spurious “unprecedented rates of change” claims.

            In short, we do not have a record of short-term changes; we have smoothed and damped records which necessarily do not reflect the full range of historical values.

          • Yes but this doesn’t speak so much to the time lag (or the timing of the increases and decreases) but to the “amounts” ultimately measured. The “proxy” data are not only somewhat averaged over a period of time, but, in the case of the ice cores specifically, don’t really capture the true atmospheric CO2 amount – the ice core “proxy” for CO2 level has serious issues (read The Deniers, Chapter Seven, for the details).

    • Yes, the statement:

      What matters is that, without man-made emissions, CO2 concentrations would not increase.

      is patently false.

      Regardless of the resolution of paleo models and those using plant stomata etc., the level of CO2 in the atmosphere both rose and dropped before humans were around to affect it.

      Therefore, not only is it untrue that CO2 concentrations would not increase without man-made emissions; all reasoning based on that assumption is false as well.

      The baseline level of CO2 (what would it be without human industry) is not known.
      The natural absorption rate by oceans, plants etc., of atmospheric CO2 is not known
      The entire ‘greenhouse gas’ edifice is based on confirmation-biased guesswork hidden in highly parameterized mathematics at various levels of erudition.

      However, as is always the case, if the falsity of the base assumptions is pointed out, the mathematicians go back to arguing maths rather than justifying their assumptions.

      There is NO observational evidence of CO2 causing any ‘warming’ in the real world; only temporary observational correlations.
      (see https://www.tylervigen.com/spurious-correlations if anyone still trusts correlations)

      • Agreed. And the ice-core-derived “historical” CO2 levels (and degree of variation) are both seriously understated – the proxy has serious issues with respect to CO2 concentration.

        It’s all assumptions and circular logic.

  9. The “great uncertainty around land-use emissions” as stated is accurate (pun intended). For instance:

    CO2 emissions from forest fires in Oregon surpass all other emission sources in the state combined. Since 2002 an estimated 50 to 75 teragrams (Tg) of CO2 have been emitted annually by forest fires in Oregon.

    Note that one Tg is 10^12 grams, or one million metric tonnes (tons).

    Forest fires do not volatilize all the above-ground biomass. The combustion factor can range from 10 to 50 percent. The release of CO2 from post-fire decay over the next 25 to 50 years can be as much as 2 to 9 times the incineration amount. The annual post-fire release has been increasing since 2002 and now may be as much as 100 to 300 Tg per year, in addition to the 50 to 75 Tg emitted annually via direct volatilization.

    The Oregon Department of Environmental Quality (DEQ) reports statewide greenhouse gas emissions in two ways: sector based and consumption based. Both these methods estimate the same thing: total anthropogenic emissions. The DEQ estimates that quantity to be 60 to 80 million metric tons (or Tg).

    Thus forest fire and post-fire annual emissions exceed the reported anthropogenic amounts (which do not include forest fire emissions) by a factor of 2 to 5 times.
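
    The "factor of 2 to 5" above can be sanity-checked from the figures quoted in this comment. A quick back-of-envelope sketch in Python, using the midpoints of the quoted ranges (the midpoints are an assumption for illustration):

```python
# Sanity check of the Oregon fire-emissions ratio claimed above.
# All numbers are midpoints of the ranges quoted in this comment (Tg CO2/yr).
direct_volatilization = (50 + 75) / 2    # 62.5 Tg/yr emitted directly by burning
post_fire_decay = (100 + 300) / 2        # 200.0 Tg/yr from post-fire decay
anthropogenic = (60 + 80) / 2            # 70.0 Tg/yr, Oregon DEQ estimate

total_fire = direct_volatilization + post_fire_decay  # 262.5 Tg/yr
ratio = total_fire / anthropogenic                    # 3.75

print(f"fire-related emissions ~ {total_fire} Tg/yr, {ratio:.2f}x the anthropogenic total")
```

    The midpoint ratio (about 3.75) falls inside the 2-to-5 range claimed; the extremes of the quoted ranges stretch somewhat wider.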

    • In Oregon more than 65% of our forests are government owned. Our forest fires are predictable and preventable — they stem from excruciatingly bad management by government agencies whose policies are No Touch, Let It Burn, Watch it Rot.

      In Oregon our forests have significantly more above-ground biomass than the U.S. average, and range from 100 to 700 metric tonnes/hectare (Mg/ha). A reasonable, conservative estimate of the carbon content of average above-ground biomass for Oregon forested environments is 200 Mg/ha.

      When catastrophically burned, almost all that carbon is emitted by direct volatilization or post-fire decay. New plants will occupy the burns, but over the next 25 to 50 years very little carbon is re-sequestered by the shrubs and herbaceous post-fire invaders. They are too small.

      With proper forest management (aka restoration forestry) as much as half the existing carbon may be removed by thinning and prescribed burning, but ample large forest trees are retained. The retained trees fix carbon at a high rate relative to burn invaders. Thus emissions would be minimized and re-sequestration maximized.

      Proper forest management could reduce net fire-caused emissions by half or more. This reduction would more than equal (offset) all other statewide anthropogenic emissions combined – not to mention all the other myriad benefits to the economy and forest resources (plants, wildlife, water quality, soils, scenery, recreation, etc.).

      PS — It would beat the heck out of the Oregon Legislature’s proposed carbon tax, which will do nothing to reduce either or any emission source.

      • Why would anyone want to reduce ‘carbon’ [sic] emissions? Current levels of carbon dioxide in the atmosphere are only just above survival levels for many plants. A drop to below 200-150 ppm could lead to extinction of all life.

      • Yes, but – what Ian W said. There is no need to reduce emissions, the effects of which are completely beneficial.

    • Yeah…
      For me, it’s:
      emissions higher than projected, temp increase lower than projected
      In no way can this be described as “worse than we thought”, yet it has been…

      • And once you actually look for how much warming can be attributed to natural factors that have nothing to do with human CO2 emissions or CO2 levels regardless of source, there is virtually nothing left to “blame” CO2 for anyway in terms of “warming,” the effects of which are positive in any event.

  10. “the over-forecast in concentrations happened because in FAR’s Business-as-usual scenario over 60% of CO2 emissions remain in the atmosphere, which is a much higher share than has been observed in the real world”

    Than has been “observed” or than has been assumed?

    Carbon cycle flows contain large uncertainties that are stated by the IPCC and then ignored in the mass balance from which the airborne fraction is deduced. When the stated uncertainties are taken into account, we find that the relatively small fossil fuel CO2 flow cannot even be detected, because the carbon cycle balances with and without fossil fuel emissions. The airborne fraction is a product of circular reasoning.

    Please see

    https://tambonthongchai.com/2018/05/31/the-carbon-cycle-measurement-problem/

    Further support for this result is provided with correlation analysis.

    https://tambonthongchai.com/2018/12/19/co2responsiveness/
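
    The commenter's point about flux uncertainty can be illustrated numerically. The figures below are rough, assumed order-of-magnitude values (they are not taken from the linked posts): gross natural exchange of roughly 200 GtC/yr with a ~20% stated uncertainty, against a fossil-fuel flux of roughly 9.5 GtC/yr.

```python
# Illustration of the argument above: the stated uncertainty of the gross
# natural carbon fluxes is larger than the fossil-fuel flux itself.
# All values are assumed, order-of-magnitude figures, not measured data.
natural_gross_flux = 200.0   # GtC/yr, combined land + ocean exchange (assumed)
relative_uncertainty = 0.20  # ~20% stated uncertainty (assumed)
fossil_flux = 9.5            # GtC/yr, fossil-fuel emissions (approximate)

flux_uncertainty = natural_gross_flux * relative_uncertainty  # 40 GtC/yr
# The fossil flux fits inside the natural-flux error band:
print(fossil_flux < flux_uncertainty)  # prints True
```

    Whether that comparison is the right test of the mass-balance argument is, of course, exactly what is disputed in this thread.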

  11. So what is the optimum level of CO2 in the atmosphere? Under 250 ppm is too low and over 10,000 ppm is probably too high, but what is the Goldilocks concentration for life on Earth? Has anyone ever seriously studied that subject? Shouldn’t we have a good understanding to know how to regard changes in atmospheric CO2 concentration?

    • Geological and geochemical evidence from late Eocene sediments reveals that an amount from 2.5 to 3.0 times pre-industrial (~280 ppm) occurred when the climate was in a “mild” condition. This was forecasted over 100 years ago by Svante Arrhenius and confirmed in 2009.

  12. “The IPCC’s First Assessment Report (FAR) made forecasts or projections of future concentrations of carbon dioxide that turned out to be too high.”

    I don’t know why this elementary issue causes so much trouble here, but the fact is that FAR did not make forecasts of future concentration at all. They did not know what decisions would be made in future about emissions, so they calculated subject to scenarios. Not just one, but four (A-D). They were intended to cover the range. It makes no sense to criticise them because the actual outcome was mid-range rather than top of range.

    The scenario being picked out here was described thus:

    “In the Business-as-Usual Scenario (Scenario A) the energy supply is coal intensive and on the demand side only modest efficiency increases are achieved. Carbon monoxide controls are modest, deforestation continues until the tropical forests are depleted and agricultural emissions of methane and nitrous oxide are uncontrolled. For CFCs the Montreal Protocol is implemented albeit with only partial participation.”

    The second scenario, B, was described:
    “In Scenario B the energy mix shifts towards lower carbon fuels, notably natural gas. Large efficiency increases are achieved. Carbon monoxide controls are stringent, deforestation is reversed and the Montreal Protocol implemented with full participation.”

    The IPCC didn’t predict which of these would work out; it calculated the different results. In fact the Montreal Protocol was implemented, CO was controlled, coal use dropped, and deforestation was mixed, with progress in some places like N America and continuing loss in some tropics. It makes no more sense to say that they over-predicted with scenario A than that they under-predicted with scenario D.

    • The fatal flaw in your argument Nick is the Rise of China and its CO2 emissions after 2000 (really its 2002 entry into the WTO and MFN trading status with the US). No one in 1990 foresaw the dramatic Chinese industrial rise, energy use climb-out, middle class expansion, and resulting emissions rise since 2000.

      That Chinese emissions rise is the first Black Swan of the Climate Scam, IMO. A black swan event that the IPCC and the UN/COP-process globalists still cannot contain or account for, except to hope everyone ignores it with a complicit media/press.

      • It’s just a basic elementary point. Scenarios are not predictions. That’s why there are four of them. It is frustrating to see this dimwittery constantly recurring.

        • Come on, this is all explained in the summary. No, the IPCC doesn’t make predictions in the strict sense of the term – it makes projections dependent on a given emissions scenario. But at least for CO2 the actual emissions seem to have been higher in the real world than the IPCC’s business-as-usual scenario projected. For methane, we don’t know whether real-world emissions have been higher or lower than per the IPCC’s business-as-usual scenario, but the error in the concentration projection is so gross that it doesn’t matter.

          The inability to predict the exact amount of future greenhouse gas emissions is not the reason the IPCC’s Scenario A ended up too warm.

          • “this is all explained in the summary”
            No, the summary just described some discrepancies between what happened and some (often unstated) scenarios. But it misuses the whole status of scenarios. Basically in the scenarios the IPCC nominates a whole lot of things that it can’t predict – they are outside the realm of climate science. Mostly about human decisions. They do GCM calculations that say
            If scenario A unfolds, then a will happen
            If scenario B, then b
            If scenario C, then c
            If scenario D, then d

            All you are doing is, in various ways, saying that the IPCC is wrong because A didn’t happen. But they never said it would. The correct way to evaluate is to see which scenario did unfold, and see if the matched consequence happened. The complication is that what eventually happens will not follow one scenario exactly.

            The fact that scenario A is called Business-as-usual does not make it a prediction. It merely says this is what will happen if nothing changes. It doesn’t predict that nothing will change.

            Some examples
            “it seems FAR did not expect the apparent decline in deforestation rates seen since the 1990s”
            Scenario A did not include such a decline. The others did.

            “The IPCC’s First Assessment report greatly overestimated future rates of atmospheric warming and sea level rise in its Business-as-usual scenario.”
            That just means that Scen A is not the scenario that unfolded. More like B, C or D.

            “This means that the bulk of the error in FAR’s forecast…”
            Again, it wasn’t an error. It just means that scenario A wasn’t the one that unfolded. It was more like one of B, C or D.

        • It is a fair point that climate science makes no predictions that are able to be tested. And it is true that that which cannot be compared to reality cannot be refuted by reality.

          But surely that also means that they cannot be used the other way round too. Reality cannot be used to talk about the scenarios if and only if the scenarios cannot be used to talk about reality.

          Therefore they are useless for policy making.

          • No, Nick wants it both ways: the models are accurate, they just don’t represent reality. He is effectively saying the modellers are so stupid they can’t even create a scenario where you just project the current rate forward 🙂

            The world has basically followed a business-as-usual 3% growth in emissions year in and year out (a few level-offs during recessions, etc.), but in the main it has followed that pattern.

            So, Nick, as the authority: which model does that represent?

          • “It is a fair point that climate science makes no predictions that are able to be tested.”
            No, they can be tested. You work out which scenario unfolded, and test what the GCMs predicted for that scenario.

            Of course, none of the scenarios are followed exactly. So you have to adjust for the variations from the nearest scenario. But that can be done.

            “Therefore they are useless for policy making.”
            They would be useless for policy making if they were absolute predictions, because that would say the outcomes don’t respond to policy. But they do. That is one use of scenarios for policy makers. If you can make scenario C happen, this is what it will achieve. But we can’t predict that you will.

        • “… It is frustrating to see this dimwittery constantly recurring…”

          It is only “frustrating” if you dodge the point – that even when you account for the issues with the scenario that was closest to actually happening, the models ran too hot.

          “…Again, it wasn’t an error…”

          The difference between the result and the model is “error” in the literal and metaphorical sense. Come on.

          “…That just means that Scen A is not the scenario that unfolded. More like B, C or D…Again, it wasn’t an error. It just means that scenario A wasn’t the one that unfolded. It was more like one of B, C or D…”

          Wow, really narrowed it down there.

          Look, if you don’t want to compare the model results because the emission scenarios A-D didn’t match reality, then STFU with the kind of garbage below which praises results for the same damned thing.

          https://moyhu.blogspot.com/2015/10/hansens-1988-predictions-revisited.html
          https://moyhu.blogspot.com/2012/02/hansens-1988-predictions-js-explorer.html

    • What you say is true, but not on point. The author shows evidence that CO2 emissions were significantly higher than the IPCC’s worst assumptions, and yet the resulting concentration of CO2 in the atmosphere and the change in temperature in the lower atmosphere were both well below their projections. Therefore, the IPCC “science” is likely wrong both in its predictions of future CO2 and in how much forcing will result. That makes all their scenarios less than useless, because so far the data disprove their hypotheses on CO2 latency and CO2 forcing. That is how science is supposed to work.

      If correct, this analysis would seemingly negate the linchpin of the climate change debate.

      For the record this should not come as a surprise to anyone that understands Dr. Frank’s paper on how ludicrous GCMs are in general.

      • “The author shows evidence that CO2 emissions were significantly higher than the IPCC’s worst assumptions”
        No, he doesn’t. The difference was small. He says:
        “But just to be clear: it is only likely that real-world emissions exceeded FAR’s Business-as-usual scenario. The uncertainty in land-use emissions means one can’t be sure of that.”
        But, more relevantly, he says that
        “CO2 concentration was significantly over-forecasted by the IPCC”
        Again, he doesn’t mention that he is talking only about scenario A, the highest. Other scenarios were much closer.

        Why more relevant? Because it is the concentrations that the GCMs use as input. And there is great uncertainty about the emissions comparison. Firstly, as the article says, there was huge stated uncertainty in the IPCC 1990 emissions estimate. The reason is that emissions are not measured by climate scientists, but come from government economic statistics, and these were only systematically collected after the UNFCCC agreement of 1990. The other component is the author’s estimate of emissions used in the comparison, which is full of assumptions.

        • “And the other component is the authors estimate of emissions in comparison, which is full of assumptions.”

          This objection is grasping at straws. The IPCC’s business-as-usual airborne fraction of 61% is higher than that of the real world even if one does not count real-world land-use emissions at all; this was mentioned in the article, at the end of ‘Calculations regarding the real world’. In fact, the airborne fraction implied by the IPCC is higher than that of the real world even if one also excludes real-world cement emissions. The IPCC’s implied airborne fraction is simply much higher than the fraction that has been observed in reality, period.

          ‘Again doesn’t mention that he is talking only about scenario A, the highest. Other scenarios were much closer.’
          Two things:
          -Even if one is not 100% sure whether real-world emissions were higher or lower than those of Scenario A, there is no question that Scenario A is by far the closest to reality. Remember that, when adding cement to BP’s emissions (but still without counting real-world land-use emissions), for 1991-2018 there is less than a 10% difference between observations (218.04 GtC) and the IPCC’s Scenario A (237.61 GtC). And there is uncertainty about land-use emissions since 1990, but nobody expects them to be zero.

          -For Scenario B, the airborne fraction issue is even worse: concentrations rise by about 50 ppm, as the digitization says they reach 400 or 401 ppm by 2018. But cumulative emissions are only 165 GtC or so, which is to say 77 or 78 ppm. Thus the airborne fraction is 64-65%.

          Admittedly the article could have been clearer on the amount of emissions that Scenario B involves, but it was already obvious from the text that for this Scenario the airborne fraction was also much higher than reality’s.
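
          The Scenario B arithmetic in this comment can be reproduced with the standard conversion of roughly 2.13 GtC per ppm of atmospheric CO2 (that conversion factor is an assumption here, not a figure from the article):

```python
# Reproduce the Scenario B airborne-fraction arithmetic in the comment above.
GTC_PER_PPM = 2.13  # GtC per ppm of atmospheric CO2 (standard approximation)

cumulative_emissions = 165.0  # GtC, Scenario B cumulative emissions (from the comment)
emissions_as_ppm = cumulative_emissions / GTC_PER_PPM  # ~77.5 ppm if all stayed airborne
concentration_rise = 50.0     # ppm, digitized Scenario B rise to 2018 (from the comment)

airborne_fraction = concentration_rise / emissions_as_ppm
print(f"{airborne_fraction:.1%}")  # ~64-65%, as the comment states
```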

          • “there is no question that Scenario A is by far the closest to reality”
            You talk only in emissions. But the IPCC, in the SPM and main text, gave the scenarios entirely in terms of concentrations, and that is what the GCMs worked from. And there, as you say, the concentration rise was much less than Scenario A, and close to B.

            You had to delve into the annex to get a graph of emissions in the scenarios, and then you showed it, but without the caption. The caption said:
            “Figure A.2(a): Emissions of carbon dioxide (as an example) in the four policy scenarios generated by IPCC Working Group III”

            They aren’t claiming it to be their authoritative statement of the scenario. They acknowledge in the text their uncertainty about emissions.

            As to which scenario eventuated – the CO2 concentration rise was far below A. As you pointed out, methane was also far below. CFCs were far below because, as assumed in Scenario B but not A, the Montreal Protocol was widely adopted. So the focus on A is misplaced. The IPCC clearly allowed for the possibility that things might turn out better than A (hence B, C, D), and they did. You need to find the right scenario.

        • First, any “scientific” paper that published a range as broad as those in the IPCC reports would normally be laughed at, but somehow the world accepts this drivel – though I admit that comment is biased.

          Second, we can quibble over the words, but the fact remains that this analysis indicates major flaws in the underlying “science” of the IPCC CO2 and methane projections. They were wrong in “guessing” a) how much CO2 man would emit, b) what percentage would remain, and c) what forcing would result, yet they had, and today you have, the temerity to call that “science”.

          The science of Climate Change has been falsified in so many ways it makes ones head spin, and yet the world is being sucked into this vortex of falsehood.

          Arctic Ice is currently unchanged since 2007. Actual sea levels measured by instruments on land and not modeled on satellite data continue to show about 3mm per year rise. Scientific papers have been written showing that forest fires are a product of forestry, flooding globally is unchanged, hurricanes and tornadoes have been declining for a century, crops are at an all time high, the globe has been “greening” for decades, and deaths due to extreme temperature are declining because more people are getting heat in winter due to cheaper energy, but yet you cling to a belief that a warming planet is a) bad and b) caused by man.

          Sad . . . just really sad.

          • Spot on. The apologists for the Intergovernmental Propaganda on Climate Control are deluded – and annoying.

    • Montreal protocol was implemented, CO was controlled, coal use dropped,,,

      maybe on your planet…but not on the one I’m on

      look at what China and the rest of SE Asia has done

  13. Incredible amount of detail reviewing the silliness of the “greenhouse gas” fairy tale.

    The average surface temperature of the globe is primarily a function of the distribution and connectedness of the surface water. Without the surface water, earth would be a barren rock similar to earth’s moon.

    • I have to take issue with the belief that biogenic methane from farmed livestock is counted as an emission but burning wood pellets is not.
      They are exactly the same: both are cycles. The animals consume forage that has absorbed CO2, and a small amount of methane is emitted during digestion; in about a decade that methane is broken down into CO2 and water vapour.
      The trees absorb CO2, and when they are burnt the CO2 is released back into the atmosphere.
      Every one of us breathes out 5% CO2, but we are not YET counted as emitters.
      Not one single person has come up with any sensible explanation why biogenic methane is the only counted emission that is not extracted from below the earth’s surface.
      It is also the only emission that is a cycle, and the only emission that does not add one atom or molecule of carbon to the atmosphere over any time period.
      I await your factual arguments why biogenic methane should be classed as an emission in any country’s GHG emissions profile.
      Graham
      Proud to be a farmer feeding the world with milk and beef.

      • I posted the above on the 1st of February, and not one scientist or informed person has commented one way or the other.
        Why is this?
        The answer is that what I have written is factual and cannot be argued against.
        Graham

    • “Without the surface water, earth would be a barren rock similar to earth’s moon.”

      And without CO2 (unlike H2O, it has the ‘magic’ ability not to condense out of the atmosphere), then there would only be ice.

      • Funny – on Mars, whose atmosphere is over 95% CO2 but so thin that no level has a density comparable to Earth’s atmosphere at any altitude, it is a lot colder than Earth and there is no liquid water. So much for CO2’s “heat trapping” potential.

        And if you compare Venus and Earth, if you look at each level of their atmospheres* where the atmospheric density is the same, the only thing you need to account for the temperature difference is the distance from the Sun. Even though Venus is 96% CO2 and Earth is 0.04% CO2. Again, so much for CO2’s “heat trapping” potential.

        *Except for Venus’ sulfuric acid cloud belt, which is the only outlier.

  14. This is a good argument regarding Montreal Protocol gases, as emissions of these were much lower than forecasted by the IPCC.

    However, it does tell us something important about the assumptions in the work.

    It shows that the unconscious bias is to over-estimate.

    This is to be expected. People who research things are always biased towards thinking they are doing something worthwhile; otherwise they would be doing something else. The effect is magnified for people who campaign for something.

    Thus this helps us apply policies. Should we be more concerned about the unknown effects of action or inaction?
    If the known effects of action are over-estimated and the known effects of inaction are under-estimated we should be steadfast, unmoved and cautious. Wait before taking actions.

  15. The biggest error in the IPCC models is the assumption that the year-to-year change in the atmospheric concentration of CO2 is “caused” by anthropogenic emissions. They falsely assume that, somehow, natural emissions are all consumed by natural sinks and those sinks take up only about half of anthropogenic emissions. The IPCC admits that natural emissions are around 20 times anthropogenic emissions. A five percent per year increase in natural emissions can account for the observed rate of increase in the atmospheric concentration of CO2. I have falsified their assumptions in a detailed analysis of global atmospheric CO2 on my WordPress website, Climate Changes (retiredresearcher.wordpress.com). You can get there by simply Googling “climate changes WordPress”. I would gladly have you “peer review” this analysis and let me know where I might have made mistakes or where I could have made a better analysis.

  16. This is a recurring, senseless argument, per Nick Stokes. Why can’t we just run Hansen’s model with the actual modern greenhouse-gas amounts and his 1988 forcings to see how close he got? And then do the same with models newer than 32 years old, to see whether the hindcasting has improved and whether the 1988 Hansen modelling would already have been adequate to use as an AGW risk-management tool? Doing so might answer questions actually worth asking….

    • Better yet, why can’t we just stop pretending that the stupid models are remotely connected to reality, when they clearly are not, since they are “tuned” to produce known historical results but are incapable of either hindcasting for periods other than those they have been “tuned” for OR forecasting anything. Because, at the end of the day, the assumptions put into them to begin with are GARBAGE, as is, inevitably, their “output.”

      • “Better yet, why can’t we just stop pretending that the stupid models are remotely connected to reality, when they clearly are not, since they are “tuned” to produce known historical results but are incapable of either hind-casting for different periods than they have been “tuned” for OR forecasting anything.”

        Again, per Nick Stokes, more fact-free urban mythology. Are you afraid of the results that would accrue from using actual, historic greenhouse gas trends to test the 1988 Hansen model, and those newer? I’m not, even if there are problems. All of this “discussion” is qualitative: “More methane than forecast.” “Less atmospheric [CO2] than forecast.” FFS, why don’t we rerun the old (and newer) models based on actual concentrations? Are there practical problems, in an age of running times several orders of magnitude faster than those extant in 1988?

  17. The IPCC’s forecasts of carbon are wrong alright – because they are fundamentally flawed. “Airborne Fraction” is a concept on which the entire climate change industry relies. It assumes that increased CO2 follows only from human emissions and nature’s uptake. Several treatments predicated upon basic physics have now shown this assumption to be foolish nonsense. They show that, without invoking the fallacious assumption, human emissions have a minor role in increasing atmospheric CO2. Most of the increase must follow from nature.

    https://youtu.be/b1cGqL9y548?t=41m52s

    https://edberry.com/blog/climate-physics/agw-hypothesis/human-co2-emissions-have-little-effect-on-atmospheric-co2-discussion/

    http://www.esjournal.org/article/161/10.11648.j.earth.20190803.13

    • Yup – which should be the null hypothesis in any event – if we contribute 3-4% of “emissions,” we should be expected to contribute 3-4% of changes to the overall level – NOT 100% of changes to the overall level.

  18. What this in-depth analysis reveals is the fact that the amount of CO2 currently being emitted (and that already present in the atmosphere) cannot be removed in the amounts needed to make a difference to the Earth’s climate. The accuracy of the emission values or their sources is irrelevant. Reducing carbon fuel CO2 emissions does not lower any of that. The arithmetic in the article shows that just ONE ppm of oxidized carbon is 7.8 billion metric tons. It should be obvious many more ppm than that would be required. Some say enough to return the atmosphere to 350 ppm. That’s 500 gigatons. And most of that CO2 must be captured, transported and safely stored… permanently. Not a likely possibility, and certainly not by 2050.
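
The arithmetic in this comment can be checked in a few lines, assuming a total atmospheric mass of about 5.15e18 kg and a present-day concentration near 415 ppm (both round-number assumptions, not values taken from the article):

```python
# Mass of CO2 corresponding to 1 ppm (by volume) of the atmosphere, and the
# removal implied by drawing an assumed ~415 ppm down to 350 ppm.
M_ATM_KG = 5.15e18   # total mass of the atmosphere, kg (standard estimate)
M_AIR = 28.97        # mean molar mass of dry air, g/mol
M_CO2 = 44.01        # molar mass of CO2, g/mol

gt_per_ppm = M_ATM_KG * 1e-6 * (M_CO2 / M_AIR) / 1e12  # Gt CO2 per ppm
print(round(gt_per_ppm, 1))   # ~7.8 Gt CO2, matching the comment's figure

removal_gt = (415 - 350) * gt_per_ppm
print(round(removal_gt))      # roughly the "500 gigatons" cited above
```

This reproduces the comment’s scale estimate; it ignores ocean and biosphere re-equilibration, which would change the net removal required.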

    • But what you’re missing is that no amount removed would do a damn thing to the climate – the Earth’s climate history shows clearly that the temperature/climate has been indifferent to the CO2 level, including ice ages with 10x what is currently in the atmosphere.

      • Yes, AGisWNS… That’s the point. There are substantial investments being made in CCS technology that are useless in lowering CO2 in any meaningful amount. I’m not missing that, “they” are. The late Eocene data reveal that a doubling of current CO2 did nothing “catastrophic” to the climate.

  19. Amazing how much Ed Berry is influencing the discussion here (and now). Just a few years ago the natural rise crowd was quite in the minority. Does ferdinand still haunt these comment pages (tapping down all the nails that are popping up)?

    • The “natural rise crowd” is quite in the minority today as well. It just looks different because they comment and most readers don’t.

      The argument itself, on whether the rise in CO2 levels is natural or man-made, has been done in this and other websites about thirty-seven million times before. You don’t read comments by people arguing the correct position, that the rise of CO2 is man-made, simply because life is short and people have more important things to do.

        • No Alberto, that “natural rise” rubbish gets a solid run every time the issue is raised here and unfortunately, being an echo chamber, some gullible dupes buy it and start quoting it. The usual suspects are at it right now in this thread. It’s called clutching at straws.

        • The “human induced CO2 rise” meme is based on nothing but bad data, bad assumptions, and circular logic. Who is grasping at straws?!

          They are not measuring all of the sources and sinks for atmospheric CO2, so they haven’t got a clue what the cause of the rise is. Their assumptions are not facts.

  20. 33 years ago a paper with the title “Carbon Dioxide and People” was published. The authors (Newell & Marcus) plotted the annual Mauna Loa CO2 values against global population. The statistical correlation was almost perfect between the sum total of human activities (population) and atmospheric CO2. The problem? There were never any “blips” on the charts to indicate natural inputs… global volcanic events, ENSOs? The strong positive correlation remains even today.

  21. “For emissions, the Annex to the Summary for Policymakers offers a not-very-good-looking chart …”

    Check the WG3 (yes, THREE!) FAR report, Chapter 2, “Emission Scenarios” (coordinating authors Tirpak and Vellinga).

    The exact relationship between their 5 (yes, five…) scenarios and WG1’s “A, B, C and D” versions is unclear, but “Executive Summary Tables 2.1 and 2.2” provide some … “interesting” (?) data for analysis / comparison purposes.

    NB: CO2 concentrations (Table 2.1 only) are given only for 2025 and 2075.
    CO2 emissions (both Tables) are given for 1985, 2000, 2025, 2050, 2075 and 2100 (in Table 2.2).

  22. I’ve published an article on the F_2x issue:
    https://medium.com/@Alberto.Zaragoza.Comendador/how-hausfather-et-al-2019-mis-estimate-the-climate-sensitivity-of-the-ipccs-first-assessment-31481a270c75

    Summary:
    -The 55% ‘overestimate’ in forcings that Hausfather finds for the First Assessment Report is only true if you measure forcings in raw W/m2. When you measure forcings in terms of the forcing level equivalent to a doubling of atmospheric CO2 (that is to say F_2x), the over-estimate is 31%.
    -When I compare FAR’s forcings with those of Lewis & Curry over 1990-2016, the result is very similar: FAR over-estimated by around 30%, not by 50-60%.
    -Out of the 30% or so over-estimate, more than half is due to FAR’s excessive CO2 concentrations. All of this error (and probably more) stems from FAR’s overly high airborne fraction. This is purely a scientific error, as mentioned in this article, but now its impact is quantified better.

    I haven’t been able to calculate the over-estimate due to excessive methane concentrations in FAR, but there’s no question that the combined methane + CO2 over-estimate makes up the bulk of the total forcing over-estimate in FAR (perhaps all of it).

    The combined over-estimate due to methane, CO2 and Montreal Protocol gases is greater than 100% of the total, because FAR also under-estimated (in fact omitted) the positive forcing from increasing tropospheric ozone and declining aerosols.
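
The W/m2-versus-F_2x distinction in this comment can be illustrated with round numbers: FAR’s CO2 forcing formula ΔF = 6.3·ln(C/C0) implies F_2x ≈ 6.3·ln(2) ≈ 4.37 W/m2, while the sketch below assumes a modern F_2x near 3.7 W/m2 (an assumed round value, not the exact Lewis & Curry figure):

```python
import math

# A raw forcing over-estimate of ~55% (FAR / observed, in W/m2) shrinks when
# each forcing is expressed in units of its own F_2x, because FAR's implied
# F_2x was itself larger than modern estimates.
raw_ratio = 1.55                 # assumed raw W/m2 over-estimate from the text
f2x_far = 6.3 * math.log(2)      # FAR's implied F_2x, ~4.37 W/m2
f2x_modern = 3.7                 # assumed modern F_2x, W/m2

normalized_ratio = raw_ratio * f2x_modern / f2x_far
print(round(f2x_far, 2))          # 4.37
print(round(normalized_ratio, 2)) # ~1.31, i.e. the ~31% over-estimate
```

The rescaling matters because a forcing over-estimate expressed in F_2x units is what carries through to a temperature projection for a given climate sensitivity.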

Comments are closed.