More Evidence for a Low Climate Sensitivity

By Patrick J. Michaels and Paul C. “Chip” Knappenberger

We have two new entries for the long (and growing) list of papers appearing in the recent scientific literature that argue that the earth’s climate sensitivity—the ultimate rise in the earth’s average surface temperature from a doubling of the atmospheric carbon dioxide content—is close to 2°C, near the low end of the range of possible values presented by the U.N.’s Intergovernmental Panel on Climate Change (IPCC). With low-end warming come low-end impacts and an overall lack of urgency for federal rules and regulations (such as those outlined in the President’s Climate Action Plan) that limit carbon dioxide emissions and restrict our energy choices.

The first is the result of a research effort by Craig Loehle, published in the journal Ecological Modelling. The paper is a fairly straightforward determination of the climate sensitivity. Loehle first uses a model of natural modulations to remove the influence of natural variability (such as solar activity and ocean circulation cycles) from the observed temperature history since 1850. The linear trend in the post-1950 residuals from Loehle’s natural variability model is then assumed to be largely the net result of human carbon dioxide emissions. By dividing the total temperature change (as indicated by the best-fit linear trend) by the observed rise in atmospheric carbon dioxide content, and then applying that relationship to a doubling of the carbon dioxide content, Loehle arrives at an estimate of the earth’s transient climate sensitivity—transient in the sense that at the time of CO2 doubling the earth has yet to reach equilibrium, and some warming is still to come.

Loehle then estimated the equilibrium climate sensitivity from his transient calculation, based on the average transient:equilibrium ratio projected by the collection of climate models used in the IPCC’s most recent Assessment Report. In doing so, he arrived at an equilibrium climate sensitivity estimate of 1.99°C, with a 95% confidence range of 1.75°C to 2.23°C.
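As a sanity check, Loehle’s two-step procedure can be sketched in a few lines. The trend, CO2 endpoints, and 1.8 transient-to-equilibrium ratio below are illustrative stand-ins chosen to be consistent with the numbers reported in this post, not Loehle’s exact inputs:

```python
import math

# Step 1: transient sensitivity from the residual warming trend.
# Illustrative inputs: an attributed trend of 0.066 K/decade over
# 1959-2013, with CO2 rising from ~316 to ~395 ppm.
trend_k_per_decade = 0.066
years = 2013 - 1959
delta_t = trend_k_per_decade * years / 10.0   # total attributed warming, K

doublings = math.log2(395.0 / 316.0)          # fraction of a CO2 doubling observed
transient = delta_t / doublings               # K per doubling of CO2

# Step 2: scale to equilibrium using an assumed model-mean
# equilibrium:transient ratio of ~1.8 (a stand-in for the IPCC model average).
ecs = transient * 1.8

print(round(transient, 2), round(ecs, 2))
```

With these placeholder inputs the result lands at roughly 1.1°C transient and 2.0°C equilibrium, in line with the values quoted above.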

Compare Loehle’s estimate to the IPCC’s latest assessment of the earth’s equilibrium climate sensitivity, which assigns a 66 percent or greater likelihood that it lies somewhere in the range of 1.5°C to 4.5°C. Loehle’s determination is more precise and decidedly toward the low end of that range.

The second entry on our list of low climate sensitivity estimates comes from Roy Spencer and William Braswell and is published in the Asia-Pacific Journal of Atmospheric Sciences. Spencer and Braswell used a very simple climate model to simulate the global temperature variations averaged over the top 2,000 meters of the global ocean during the period 1955-2011. They first ran the simulation using only volcanic and anthropogenic influences on the climate. They ran it again after adding a simple representation of the natural variability contributed by the El Niño/La Niña process. And they ran it a final time after adding a more complex configuration involving a feedback from El Niño/La Niña onto cloud characteristics. They then compared their model results with the set of real-world observations.

What they found was that the complex configuration involving El Niño/La Niña feedbacks onto cloud properties produced the best match to the observations. That configuration also produced the lowest estimate of the earth’s climate sensitivity to carbon dioxide emissions—a value of 1.3°C.

Spencer and Braswell freely admit that their simple model is just the first step in a complicated diagnosis, but they also point out that results from simple models provide insight that should help guide the development of more complex models, and ultimately could help unravel some of the mystery of why full climate models produce high estimates of the earth’s equilibrium climate sensitivity while estimates based on real-world observations are much lower.

Our Figure below helps illustrate the discrepancy between climate-model estimates and real-world estimates of the earth’s equilibrium climate sensitivity. It shows Loehle’s determination, as well as that of Spencer and Braswell, along with 16 other estimates reported in the scientific literature beginning in 2011. Also included in our Figure are both the IPCC’s latest assessment of the literature and the characteristics of the equilibrium climate sensitivity from the collection of climate models on which the IPCC bases its impacts assessment.


Figure 1. Climate sensitivity estimates from new research beginning in 2011 (colored), compared with the assessed range given in the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5) and the collection of climate models used in the IPCC AR5. The “likely” (greater than a 66% likelihood of occurrence) range in the IPCC Assessment is indicated by the gray bar. The arrows indicate the 5 to 95 percent confidence bounds for each estimate along with the best estimate (median of each probability density function, or the mean of multiple estimates; colored vertical line). Ring et al. (2012) present four estimates of the climate sensitivity, and the red box encompasses those estimates. The right-hand side of the IPCC AR5 range is actually the 90% upper bound (the IPCC does not state the value for the upper 95 percent confidence bound of its estimate). Spencer and Braswell (2013) produce a single ECS value best matched to ocean heat content observations and internal radiative forcing.

Quite obviously, the IPCC is rapidly losing its credibility.

As a result, the Obama Administration would do better to come to grips with this fact and stop deferring to IPCC findings when trying to justify increasingly burdensome federal regulation of carbon dioxide emissions, with its combined effects of manipulating markets and restricting energy choices.

References:

Loehle, C., 2014. A minimal model for estimating climate sensitivity. Ecological Modelling, 276, 80-84.

Spencer, R.W., and W. D. Braswell, 2013. The role of ENSO in global ocean temperature changes during 1955-2011 simulated with a 1D climate model. Asia-Pacific Journal of Atmospheric Sciences, doi:10.1007/s13143-014-0011-z.

=========================================================

Global Science Report is a feature from the Center for the Study of Science, where we highlight one or two important new items in the scientific literature or the popular media. For broader and more technical perspectives, consult our monthly “Current Wisdom.”

This entry was posted in Climate sensitivity.

92 Responses to More Evidence for a Low Climate Sensitivity

  1. James Ard says:

    All this talk about climate sensitivity is playing their game. We need to stop and make them explain why for the past seventeen years the sensitivity has been zero.

  2. ilma630 says:

    Did I read this correctly? They try to remove natural variation, including solar activity, to leave a modelled projection!! I’m sorry, but that seems an absurd approach to counter the CAGW crowd, and, as JA says, playing into their hands.

    The basic question is: why is there any belief (i.e. where is the evidence) that climate sensitivity to man’s CO2 is anything other than zero? And we should be perfectly clear about this; it is the ~3% of CO2 that man adds that is the subject of the debate, not the 97% that is natural.

  3. Fernando Leanme says:

    I support government intervention to make the vehicle fleet more efficient and encourage the use of natural gas. These steps reduce dependence on foreign sources of energy and at the same time satisfy the apparent need to reduce greenhouse gas emissions. We should factor in the fact that if indeed the climate sensitivity is about 2 degrees C per doubling, the atmosphere is likely to reach twice the CO2 content, and temperature will rise. And if it does, it’s evident sea level will increase and cause trouble. So the best solution is to cut emissions, doing so in a prudent fashion while reducing the balance-of-trade deficit.

  4. Keith Gordon says:

    “Loehle first uses a model of natural modulations to remove the influence of natural variability (such as solar activity and ocean circulation cycles) from the observed temperature history since 1850.”

    It is good to see this issue of low climate sensitivity getting another airing, but do we really know what the values of all the natural variability is, in order to remove it from the calculations in the first place. I suspect we don’t.

    Keith Gordon

  5. DrTorch says:

    I’d like to see some of the older IPCC claims to be included on the chart as well. I think that’s important.

  6. Tom Andersen says:

    What is the current estimate of sensitivity based on data alone? In other words, look at the temperatures from 1900 to 2014, and fit to CO2. The math would end up using 1900 to about 1960 as baseline, then any increase above the expected rise would be attributed to CO2. (It might not be CO2, but it might be).

    Such an estimate will be available with smallish error bars in 2100, but what is the estimate now?

    It looks like we may be able to start ruling out climate sensitivities above 4C even today, just from data. As the decades run on, we get better data.

    Settled to a scientist should mean predictions match actual data. There is basically no data yet.
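The straight fit this commenter describes, regressing a temperature anomaly on log2(CO2), is mechanically simple. The arrays below are synthetic placeholders built from an assumed 1.5 K-per-doubling sensitivity, just to show the mechanics; real temperature and Mauna Loa CO2 series would replace them:

```python
import numpy as np

# Synthetic stand-in data: CO2 (ppm) and an anomaly series generated
# from an assumed sensitivity of 1.5 K per doubling, plus a baseline offset.
co2 = np.linspace(300.0, 400.0, 50)      # hypothetical ppm values
true_sensitivity = 1.5                   # K per doubling (assumed, not measured)
anomaly = true_sensitivity * np.log2(co2 / 280.0) + 0.1

# Fit: anomaly = sensitivity * log2(CO2 / 280) + intercept.
x = np.log2(co2 / 280.0)
sensitivity, intercept = np.polyfit(x, anomaly, 1)

print(round(float(sensitivity), 2))      # recovers the assumed 1.5
```

With observed series in place of the synthetic arrays, the same two fitting lines give the data-only estimate the comment asks for; the hard part, as the thread notes, is the error bars, not the regression.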

  7. Tim Obrien says:

    But it’s hard to tax (cough) extort (cough) trillions of dollars out of the public if you can’t panic them….

  8. It is somewhat disconcerting that the several low-end estimates have error bars that do not embrace the estimates. This may simply show that the models do not ‘know’ how to get the error bars.

  9. Tom Andersen says:

    Loehle’s estimate is something like what I am talking about, but he does a lot of data munging that could be hard to follow. I’m talking about a straight fit with only a few parameters – for instance not assuming that the lines have anything to do with climate.

  10. Brian says:

    Craig,

    Wouldn’t using a straight transient-to-equilibrium sensitivity factor overestimate the equilibrium sensitivity? After all, only recent CO2 rises give truly transient warming, while older CO2 rises should include most of the equilibrium effect. I think taking this into account would cause a transient sensitivity of 1.1C (which you found) to give an equilibrium sensitivity closer to 1.6 – 1.7 C. Is this correct or am I missing something?

  11. Stephen Skinner says:

    “Loehle first uses a model of natural modulations to remove the influence of natural variability (such as solar activity and ocean circulation cycles) from the observed temperature history since 1850. The linear trend in the post-1950 residuals from Loehle’s natural variability model was then assumed to be largely the result, in net, of human carbon dioxide emissions. ”

    They should also remove or identify land use changes to further refine the residuals. The focus is usually restricted to UHI, and for some reason dismissed. But the change in land use considered should be extended to include agriculture, because open, well-drained fields are warmer, even hotter, than what was there before. If anyone is familiar with London, this IR image clearly shows a large warm rectangle with a cool blob within. The cool blob is a lake in the Fairlop Waters Country Park, and the large yellow block on the right-hand side is farmland. It is not what I expected. Notice that trees and water are ‘cool’.
    http://wattsupwiththat.files.wordpress.com/2011/04/3_left_london_uhi1.jpg

    Therefore the effect of CO2 is even less.

  12. Gary Pearse says:

    Gents, if there is in fact a thermostat negative forcing that counters warming (or cooling in the case of volcanic aerosols), what then can we say about CO2 doubling climate sensitivity and if it matters. In such a case it would have only theoretical interest as an effect of CO2 doubling, all other things remaining the same.

  13. albertalad says:

    Personally speaking, I’m with the groundhog ;-)

  14. James V says:

    Dr Torch
    Yes add in James Hansen’s 6 degrees centigrade estimates

  15. Craig Loehle says:

    Brian: in the discussion I point out exactly what you suggest, that conditions imply a lower equilibrium response than I calculated.

  16. Jim Cripwell says:

    My approach is very simple. No one has measured a CO2 signal in any modern temperature/time graph, so there is a strong indication that the climate sensitivity for CO2 added to the atmosphere from recent levels is 0.0 C to one decimal place.

  17. aaron says:

    When we really know, 15 or 20 yrs out, I bet it will be between 1.1 and .8.

    I expect it to be about 1.1 now, and it will fall as temperatures rise.

  18. pokerguy says:

    “All this talk about climate sensitivity is playing their game. We need to stop and make them explain why for the past seventeen years the sensitivity has been zero.”
    ****

    You’re wrong here. Sensitivity is the whole ball game. Moreover, sensitivity to Co2 is theoretically a constant. A period of no warming doesn’t mean that atmospheric sensitivity is zero, although it certainly argues for lower sensitivity given the ongoing rise in anthro Co2 during that time.

  19. Latitude says:

    that the earth’s climate sensitivity is close to 2°C…

    Then we are already there…so pack up, go home, and stop playing their game

  20. PMHinSC says:

    Fernando Leanme says:
    “…We should factor in the fact that if indeed the climate sensitivity is about 2 degrees C to doubling, the atmosphere is likely to have twice the CO2 content, and temperature will rise. And if it does, it´s evident sea level will increase and cause trouble….”

    Even ignoring the fact that there is no data supporting a direct link between global temperature and CO2 (along with no explanation for the last 17 years), IMHO this does not make much sense. According to NOAA, since 1960 atmospheric CO2 has been increasing by a factor of 1.00425 per year (from 317 to 396 ppm). At this rate it will take 163 years for CO2 to double. At least do a risk/reward analysis rather than arbitrarily picking a particular effect. Or pick the data that support a doubling of CO2 resulting in a 30% increase in crop yields. A 2 degree increase in temperature can arguably do more good than harm. Make decisions on current issues, such as energy sources and costs, known pollutants (CO2 is a plant food, not a pollutant), and the balance of trade, not on something that may or may not happen in 163 years and may or may not be harmful. Some of these arguments remind me of the philosophy of Pangloss in Voltaire’s Candide: “we live in the best of all possible worlds.” Candide didn’t find that to be true, and I suspect that we also will not find it to be true.
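The comment’s doubling-time figure checks out as a simple compound-growth calculation, using the growth factor it quotes:

```python
import math

# The commenter's NOAA-based growth rate: CO2 rising by a factor of
# 1.00425 per year (from 317 ppm to 396 ppm since 1960).
annual_growth = 1.00425

# Years for concentrations to double at that compound rate:
# solve annual_growth ** t == 2 for t.
doubling_time = math.log(2.0) / math.log(annual_growth)

print(round(doubling_time))   # ~163 years, matching the comment
```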

  21. timetochooseagain says:

    I’m currently working on a study to try to constrain sensitivity using a somewhat more complicated “simple” model, but I’m running into the problem that I haven’t got much experience with partial differential equations. Ah well.

    Hm, from the forcing dataset I have made up (which I can provide to those curious, along with an explanation of its derivation; it has the added benefit that you can adjust the aerosol forcing), over the period this study identifies as the start of the anthropogenic trend (1959-2013), the “known” forcings (greenhouse gases (methane, N2O, and CFCs, in addition to CO2), volcanic eruptions, and total solar irradiance) trended at ~0.44 W/m^2/decade. If I use a factor of -0.9 (I think this is about the right magnitude? Can anyone comment on this?) to multiply my aerosol index (which is basically just sulfur dioxide emissions scaled from 0 to 1 over 1850-2013), I get a “total” trend of ~0.436 W/m^2/decade (carried to another decimal place because it would otherwise round to an identical value). So aerosols should play relatively little role over this period, because world SO2 emissions peaked decades ago. They have relatively short atmospheric lifetimes, so their loading in the atmosphere should be roughly proportional to emissions; also, several papers suggest recent brightening, which is consistent with declining emissions leading to declining aerosol loading. Anyway, if I divide the decadal warming rate quoted in Loehle’s paper over that period (0.066 K/decade) by the decadal rate of forcing increase, and multiply that by the forcing from a doubling of CO2 (3.7 W/m^2), I get a transient value of 0.56 K. Assuming, as he does, that the models have the right TCR/ECS ratio, I get ~1.02 K per doubling.

    In other words, considering the other known forcings could cut his estimate of the transient response in half.

    Of course, that’s just working under the assumption his method is correct. I can’t speak to that. I can however speak to the importance of considering other anthropogenic factors beyond just the CO2 forcing. On balance it looks like neglecting them leads one to over estimate the sensitivity.
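The comment’s final division can be reproduced directly from the numbers it quotes; the 1.82 equilibrium-to-transient ratio below is a stand-in inferred from Loehle’s reported ~1.1°C transient and 1.99°C equilibrium values:

```python
# Reproducing the comment's arithmetic: Loehle's attributed warming trend
# divided by the total forcing trend, scaled to a CO2 doubling.
temp_trend = 0.066        # K/decade (Loehle's post-1959 residual trend)
forcing_trend = 0.436     # W/m^2/decade (the commenter's "total" forcing trend)
f_2xco2 = 3.7             # W/m^2 per doubling of CO2

transient = temp_trend / forcing_trend * f_2xco2   # ~0.56 K per doubling

# Scale transient to equilibrium with an assumed model ECS/TCR ratio of ~1.82
# (a stand-in consistent with Loehle's 1.99 C equilibrium vs ~1.1 C transient).
ecs = transient * 1.82

print(round(transient, 2), round(ecs, 2))
```

This confirms the comment’s point: adding the non-CO2 forcings roughly halves the transient estimate, from ~1.1 K to ~0.56 K per doubling.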

  22. Mac the Knife says:

    Gary Pearse says:
    February 28, 2014 at 10:28 am
    Gents, if there is in fact a thermostat negative forcing that counters warming (or cooling in the case of volcanic aerosols), what then can we say about CO2 doubling climate sensitivity and if it matters. In such a case it would have only theoretical interest as an effect of CO2 doubling, all other things remaining the same.

    Gary,
    Well stated.
    After reading Craig Loehle’s post, I was ruminating about this aspect when I read your comment.
    Thanks,
    Mac

  23. Here is how I did it…

    We have 17 years of no surface warming, so I will simply say that the energy going into the atmosphere / land is zero for 17 years (it has no appreciable heat capacity anyway since over a year you will overwhelm any local heat capacity by seasonal variation).

    Over the same 17 years, Levitus tells us (indirectly) via the change in ocean heat content that the total accumulation in the ocean is 0.5W/m^2 over the period using the 0-2000m dataset (which also agrees with 4 Hiroshimas/second, a number that struck me as totally insignificant, which is why I converted to W/m^2). I went ahead and gave full credit for 0.52 doublings of CO2 (log2(400ppm/280ppm)) that would be active right now compared to preindustrial. 0.52*3.7W/m^2 per doubling = 1.94W/m^2 direct forcing from CO2 right now. Since we know that only 0.5W/m^2 is accumulating, and we know that 1.94W/m^2 direct forcing is active from CO2, we know that the feedback is 0.5-1.94 = -1.44W/m^2.

    We know that earth is warmer since preindustrial, for whatever reason. In fact, it’s warmer enough that it should be emitting about 5W/m^2 more than it used to, just using SB equations and surface temp. It must be doing that already because there is no getting around the fact that the only increase happening now is 0.5W/m^2 and it’s going into the oceans. (the surface is cooling too by plenty of measures, but I’m ignoring that for now since it has no heat capacity to speak of compared to oceans)

    So. We have 0.5W/m^2 accumulating, and 1.94W/m^2 direct from CO2, all of which is fairly well accepted by enough people that it’s worth talking about. A 0.5W/m^2 imbalance means that, given 1°C of warming for the direct forcing of 3.7W/m^2 per doubling of CO2, we should expect 0.5/3.7*1 = 0.135°C of further warming to establish equilibrium. It’s nothing. So for all practical purposes, the earth is at equilibrium NOW. This means we don’t have to worry about transient vs. ECS; we can calculate it directly. The effect is 0.5W/m^2 divided by the 1.94W/m^2 direct forcing, or 26% of the direct effect. If the direct effect per doubling is agreed to be 1.0°C to 1.2°C, then the sensitivity is 0.26°C to 0.31°C per doubling after accounting for feedbacks…

    OK? Or did I run it off the rails somewhere… Thanks.

    BTW, the same analysis using ocean heat since 1957 gives a similar result, but more sensitivity, since I actually integrated the direct effect of CO2 over the time period. That one resulted in 0.66°C per doubling.
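The bookkeeping in this comment can be checked step by step with the numbers it quotes (280 and 400 ppm, 3.7 W/m^2 per doubling, 0.5 W/m^2 of ocean uptake, and a direct 1.0-1.2°C warming per doubling); intermediate values differ slightly from the comment’s because it rounded the direct forcing to 1.94 W/m^2:

```python
import math

f_2xco2 = 3.7                            # W/m^2 per CO2 doubling
doublings = math.log2(400.0 / 280.0)     # ~0.51 doublings since preindustrial
direct_forcing = doublings * f_2xco2     # ~1.9 W/m^2 active now

ocean_uptake = 0.5                       # W/m^2 accumulating (Levitus 0-2000m)
feedback = ocean_uptake - direct_forcing # ~ -1.4 W/m^2, i.e. net negative

# If only the fraction (uptake / direct forcing) of the direct effect remains,
# scale the assumed 1.0-1.2 C no-feedback warming by that fraction.
fraction = ocean_uptake / direct_forcing         # ~0.26 of the direct effect
low, high = 1.0 * fraction, 1.2 * fraction       # ~0.26 to ~0.32 C per doubling

print(round(feedback, 2), round(low, 2), round(high, 2))
```

The arithmetic reproduces the comment’s 0.26°C lower bound; whether the underlying assumption (that the 17-year surface plateau means the earth is already at equilibrium) is valid is a separate question the thread debates.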

  24. evanmjones says:

    All this talk about climate sensitivity is playing their game.

    It’s the relevant concern.

  25. john robertson says:

    Climate sensitivity?
    Given the team performance to date, we will freeze as atmospheric CO2 concentrations rise.
    As for the President, sure was right about that Californian drought.
    As there is to date zero empirical causation, for temperature rise due to CO2, sensitivity seems more likely to be a percentage of S.F.A.
    It takes a seriously deluded person to pursue a career, damning the stuff of life.
    Zero CO2 equals zero carbon based life.

  26. mkelly says:

    Statement of Patrick Moore, Ph.D. Before the Senate Environment and Public Works Committee, Subcommittee on Oversight:
    “…There is no scientific proof that human emissions of carbon dioxide (CO2) are the dominant cause of the minor warming of the Earth’s atmosphere over the past 100 years. If there were such a proof it would be written down for all to see. No actual proof, as it is understood in science, exists.”
    ===============
    pokerguy says:

    February 28, 2014 at 10:51 am

    “All this talk about climate sensitivity is playing their game. We need to stop and make them explain why for the past seventeen years the sensitivity has been zero.”
    ****

    You’re wrong here. Sensitivity is the whole ball game. Moreover, sensitivity to Co2 is theoretically a constant. A period of no warming doesn’t mean that atmospheric sensitivity is zero, although it certainly argues for lower sensitivity given the ongoing rise in anthro Co2 during that time.

    +++++++++

    Pokerguy, see the statement from Dr. Moore above. Since there is no proof then talking about sensitivity to something that may not exist is in fact “playing their game” as was stated.

  27. londo says:

    The whole forcing-accounting debate stems from the assumption that the climate needs to be forced to change and that the forcings can be identified. But the thing is that it is possible that over any time scale, climate can be chaotic and all accounting of forcings, which usually is made using statistical methods to identify sensitivities is an argument essentially based on ignorance. Not until the forced contributions can be traced to fundamental physics can we separate forcings and spontaneous variations.

  28. James Ard says:

    It’s the relevant concern if you buy into their notion that carbon dioxide is the most significant driver of climate. I don’t think there’s any evidence of that yet. Other drivers’ sensitivities and forcings could be much more relevant.

  29. Jeff says:

    I believe the climate sensitivity is much lower than 1 to 2 degrees. I believe it is closer to zero degrees. CO2 works similarly to smog, creating a localized heat island effect. People look at CO2 assumptions completely backward, the heating of earth (from solar/ocean influences) causes CO2 to increase (due to solar energy storage within oceans heating the oceans). El ninos/La ninas release that heat (and additional CO2) into the atmosphere. But that CO2 and heat eventually exits the atmosphere into space. If it isn’t replaced with the same level of solar energy, the heat of the ocean decreases; thus, the heat released decreases during future el nino/la nina events.

  30. Latitude says:

    the change in ocean heat content…

    hogwash…..temp, pressure, and density
    If you assume heat really is hiding in the deep ocean….and at ~35F…..then, for all practical purposes, the ability of the deep ocean to hide heat is infinite

  31. Thanks, P&P. Good article.
    Craig Loehle has it right. If not, we would not be here.
    The Earth would have gotten away in a death spiral of temperature, either warming or cooling.

  32. Robert W Turner says:

    My money is on Lindzen and Choi, 2011.

  33. Jordan says:

    “All this talk about climate sensitivity is playing their game.”

    Yep – a game we will win and they will lose.

    OK, the war goes on – as before, they will try to shift the goalposts. But why not enjoy the trappings of winning this battle, and having sensitivity as another line of evidence of falsification (to add to the hotspot).

    If recent stasis continues, more data = lower sensitivity. That gives us more salt for their already painful wounds. My hands are already itching to get stuck-in.

  34. Is he bringing new insight or is he simply chasing the observations?

  35. James Ard says:

    I see where you are coming from, Jordan. And we are winning that game so far. But your goalposts comment is pertinent. They’ll never admit we’ve proven co2 doesn’t drive climate until we can put up a solid case that something else does.

  36. Stephen Richards says:

    and I guarantee that none of them is correct.

  37. Stephen Richards says:

    41.75°C at 7000 ppm CO2

  38. Jordan says:

    Craig Loehle. I hope you read this thread, and I’d like to start by saying that I have really appreciated many of your past works. Thank you.

    I haven’t read your paper (sorry). But I don’t have an issue with a linear approximation to an exponential rise. The thing that troubles me in the method (as described above) is the following:

    “Loehle estimated the equilibrium climate sensitivity from his transient calculation based on the average transient:equilibrium ratio projected by the collection of climate models used in the IPCC’s most recent Assessment Report.”

    To my mind, a significant failure of the warmist script is to use model forecasts as though they produce real data. Statistical properties are claimed for the climate when the statistics relate to some “ensemble” of model responses. The logical error is the assumption that, on average, model responses converge to useable climate forecasts, while accepting that individual “realisations” may be at some variance to the future climate trajectory (i.e. not useable).

    I call BS on that. Model statistics describe the models and nothing more. There is no justification to make the leap to average model behaviour somehow being a reliable indicator of climate.

    I see the same error in the method described above. Estimating the transient stability = OK.

    Using average model to extrapolate to ECS is like adding a sprinkling of fairy dust = not OK.

  39. Jordan says:

    Oops – messed up my last post

    “I see the same error in the method described above. Estimating the transient RESPONSE = OK”….

  40. Chuck Nolan says:

    evanmjones says:
    February 28, 2014 at 11:22 am

    All this talk about climate sensitivity is playing their game.

    It’s the relevant concern.
    ——————————-
    Not to Obama or the UN.
    cn

  41. Alcheson says:

    I think Spencer and Braswell are getting close. It absolutely makes sense for the sensitivity to reflect net NEGATIVE feedbacks. It is not very likely the true feedbacks taken in total would be positive because earth’s overall climate is so relatively stable. The feedbacks practically have to be net negative or else the earth would have likely passed a tipping point LONG LONG ago.

  42. Craig Loehle says:

    Jordan: since few seem to have actually read my paper, let me point out that I argue that the true equilibrium (or at least what we will see by 2100) is probably lower than my calculated value.

  43. catweazle666 says:

    We report here on the first results of a calculation in which separate estimates were made of the effects on global temperature of large increases in the amount of CO2 and dust in the atmosphere. It is found that even an increase by a factor of 8 in the amount of CO2, which is highly unlikely in the next several thousand years, will produce an increase in the surface temperature of less than 2 deg. K.

    Schneider S. & Rasool S., “Atmospheric Carbon Dioxide and Aerosols – Effects of Large Increases on Global Climate”, Science, vol.173, 9 July 1971, p.138-141

  44. Jordan says:

    Fair comment Craig – I have read other papers you have produced, and found them to be well worthwhile. I picked up a copy of your paper through the link you provided above, and will have a look at it.

    The main gripe in my earlier comment is the practice of assuming average model response has meaning, whereas individual “realisations” don’t. Not saying you do this, but it seems to do the rounds in the wider debate.

    Anyway – I appreciate your contribution.

  45. vukcevic says:

    If it is assumed (as the IPCC does) that AGW has escalated since 1975, and prior to 1975 was negligible, it is possible to calculate CO2 sensitivity knowing the natural variability.
    The Northern Hemisphere temperature record is the one with the least uncertainty of all the records.
    In this link (with Anthony’s permission)
    http://www.vukcevic.talktalk.net/NHT.htm
    I show a comparison of the Northern Hemisphere temperature anomaly and natural variability (including solar and terrestrial natural oscillations, 1880-2012).
    The method I used is similar to that employed in Dr. Loehle’s paper, but my result is more like 0.25C for a doubling of CO2.
    It is possible that my calculation is erroneous; the above link contains all the numbers required for others to have a go.

  46. cd says:

    Ilma630 has a point: by removing natural variability, are they not accepting that it is possible to quantify anthropogenic vs. natural contributions from a temperature record?

  47. MarkW says:

    Fernando Leanme says:
    February 28, 2014 at 9:47 am
    ——–
    What’s wrong with the atmosphere having twice as much CO2?
    A warming of 2C is not a bad thing, it is a good thing.
    More CO2 in the atmosphere means plants grow bigger and need less water.
    Yes, the oceans might rise a couple of inches, so what?
    If you want to cut imports, a better way is to allow drilling in the US. That has the benefit of not making cars smaller and more expensive and will mean lots fewer people dying.

  48. Gail Combs says:

    Fernando Leanme says: @ February 28, 2014 at 9:47 am

    ….So the best solution is to cut emissions doing so in a prudent fashion and reducing the balance of trade déficit.
    >>>>>>>>>>>>>>>>>
    Yes but is it?

    Everyone forgets WHEN we live. At the possible end of the Holocene.

    The US Government and Mass Media completely ignored the real climate debate that has been raging.

    We will illustrate our case with reference to a debate currently taking place in the circle of Quaternary climate scientists. The climate history of the past few million years is characterized by repeated transitions between `cold’ (glacial) and `warm’ (interglacial) climates… The current interglacial, called the Holocene, should now be coming to an end, when compared to previous interglacials, yet clearly it is not. The debate is about when to expect the next glacial inception, setting aside human activities, which may well have perturbed natural cycles.
    arxiv(DOT)org/pdf/0906.3625.pdf

    The thing is, informed geologists hope and pray Greenhouse Gases can delay the next glacial inception.

    William McClnney a Geologist has commented here at WUWT:

    Onset of the Little Ice Age after the Medieval Warm Period was right on time for glacial inception. It occurred when the Holocene reached about half a precession cycle. The Modern Warm Period, reportedly less warm than the MWP, marks the second thermal pulse, a few centuries older than half a precession cycle. Note that the end of MIS 11, the best analog to the Holocene, had two thermal pulses before the big drop into glaciation. We are not out of the woods yet either, because the Earth will remain at or close to the solar insolation that triggers glacial inception for the next 4,000 years.

    If Ruddiman’s “Early Anthropogenic Hypothesis” is correct it would be GHG emissions that have prevented glacial inception so far.

    …Because the intensities of the 397kaBP and present insolation minima are very similar, we conclude that under natural boundary conditions the present insolation minimum holds the potential to terminate the Holocene interglacial. folk.uib.no/abo007/share/papers/eemian_and_lgi/mueller_pross07.qsr.pdf

    A more recent paper from the fall of 2012 says the same thing.
    Can we predict the duration of an interglacial?

    ….the June 21 insolation minimum at 65N during MIS 11 is only 489 W/m2, much less pronounced than the present minimum of 474 W/m2. In addition, current insolation values are not predicted to return to the high values of late MIS 11 for another 65 kyr. We propose that this effectively precludes a ‘double precession-cycle’ interglacial [e.g., Raymo, 1997] in the Holocene without human influence….

    In addition, that paper says the warming of the Arctic and cooling of the Antarctic (sound familiar?), the bipolar seesaw, is an indication that the descent into the next glaciation has already started.

    … thus, the first major reactivation of the bipolar seesaw would probably constitute an indication that the transition to a glacial state had already taken place. …..

    Even the Woods Hole Oceanographic Institution warned that politicians may be barking up the wrong tree.

    Abrupt Climate Change: Should We Be Worried?

    “….Fossil evidence clearly demonstrates that Earth’s climate can shift gears within a decade, establishing new and different patterns that can persist for decades to centuries….

    This new paradigm of abrupt climate change has been well established over the last decade by research of ocean, earth and atmosphere scientists at many institutions worldwide. But the concept remains little known and scarcely appreciated in the wider community of scientists, economists, policy makers, and world political and business leaders. Thus, world leaders may be planning for climate scenarios of global warming that are opposite to what might actually occur….”

    Now tell me again why we want to lower CO2? Why do the IPCC and the US government want to strip the evil devil gas from the late Holocene atmosphere? Is it so we can take our glacial-inception chances? Really? That is the IPCC and the EPA’s recommendation? According to the early anthropogenic hypothesis we should already be in the next glacial were it not for AGW! So Obama is recommending removing the only (so far) hypothesized glacial-inception deterrent!

    BRILLIANT!

  49. Jordan says:

    Hello again Craig.

    This part: “This can be converted to equilibrium sensitivity as follows. In IPCC (2007) Table 8.2 shows both transient and equilibrium sensitivity as computed by climate models. For the 18 cases where both are shown, the mean ratio of equilibrium to transient sensitivity is 1.81761°C. Multiplying this by the transient forcing yields SE = 1.986°C (1.745–2.227°C).”

    Rhetorically great – use your opponents’ arguments against them. But … scientifically, if your opponents have sprinkled fairy dust in their analysis, by using their results, you have added fairy dust to yours.

    I don’t buy the idea that averaging model results adds value compared to individual “realisations”. If individual realisations cannot be relied upon, end of story.

    Rhetoric is tempting, but best not to muddy your fingers with their mistakes.

    Cheers
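    The conversion quoted above is easy to check numerically. A minimal sketch in Python; the transient values below are back-calculated from the quoted results, so they are assumptions rather than figures stated in the quote:

```python
# Transient -> equilibrium scaling, per the quoted passage from Loehle's paper.
RATIO = 1.81761  # mean equilibrium:transient ratio, IPCC (2007) Table 8.2

def equilibrium_from_transient(transient_c):
    """Scale a transient sensitivity (deg C) to an equilibrium estimate."""
    return transient_c * RATIO

# Back-calculated transient best estimate and 95% range (assumed values):
for transient in (1.0926, 0.9600, 1.2252):
    print(round(equilibrium_from_transient(transient), 3))
# -> 1.986, 1.745, 2.227, matching SE = 1.986 C (1.745-2.227 C) as quoted
```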

  50. Gail Combs says:

    Mac the Knife says:
    February 28, 2014 at 11:19 am

    Gary Pearse says:
    February 28, 2014 at 10:28 am
    >>>>>>>>>>>>>>>>>>>>
    I am with you. The Holocene has been very stable. If CO2 does have any effect, it is minor and quickly countered by negative feedbacks.

    The Super El Nino was 1997–1998. You can see the inflection point in the % change in Earthshine albedo measurements as the climate switched gears (graph).

  51. Gunga Din says:

    pokerguy says:
    February 28, 2014 at 10:51 am

    “All this talk about climate sensitivity is playing their game. We need to stop and make them explain why for the past seventeen years the sensitivity has been zero.”
    ****

    You’re wrong here. Sensitivity is the whole ball game. Moreover, sensitivity to CO2 is theoretically a constant. A period of no warming doesn’t mean that atmospheric sensitivity is zero, although it certainly argues for lower sensitivity given the ongoing rise in anthro CO2 during that time.

    ========================================================
    If CO2 doesn’t do what they claimed it would (Harm Ma’ Nature) then there is no basis for the USEPA or any other nation’s equivalent to regulate it.
    The greedy fingers reaching for cap and trade etc. are cut off.

  52. Gums says:

    I am having a hard time separating Anthony’s and others’ comments from the referenced studies or papers.

    Am I the only one?

    Gotta be a way to use colors or fonts or another way to read the referenced papers versus the moderators and such, ya think?

    Gums whines…

  53. Alex Hamilton says:

    Valid physics tells us there is no warming caused by water vapor or any other greenhouse gas. A planet’s surface may be partly warmed by direct solar radiation, but even that is not found to be necessary on some other planets. Nor is any radiation from a colder atmosphere able to raise the temperature of a surface because that would be a process in which entropy had decreased. Radiation from the atmosphere plays a part in slowing surface cooling, but what does most of the slowing are nitrogen and oxygen molecules which slow conduction from the surface.

    But none of these processes are what plays the main role in setting and controlling surface temperatures. The amount of solar radiation absorbed by the atmosphere and the thermal gradient that forms autonomously in a gravitational field according to the laws of physics are the main factors determining these temperatures. This is very obvious on other planets, but as we stand in the nice warm sunshine on Earth we get somewhat confused as to what’s warming what. Just remember that there is absolutely no evidence in temperature records that the greenhouse gas water vapor increases mean surface temperatures. That fact is a bit of a bother for those who try to imagine the temperature trends show sensitivity to carbon dioxide, when in fact they are mostly just showing the main 1,000-year and 60-year natural cycles regulated by the planets.

  54. Robert JM says:

    You can easily calculate the climate sensitivity from cloud-forcing observations.
    There was a 5% decrease in cloud cover in the 1990s (0.9 W/m2), which caused 0.3 deg of warming, or 0.06 deg per % of cloud change.
    Climate sensitivity is therefore near neutral, or about 1.2 deg C for the proposed CO2 doubling forcing of 3.7 W/m2 (assuming this is real in the first place).
    Very similar to Spencer and Braswell.
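    Taking the comment’s figures at face value, the arithmetic can be sketched as:

```python
# Back-of-envelope sensitivity from the cloud-forcing figures above.
cloud_forcing = 0.9         # W/m^2 from the stated 5% cloud-cover decrease
observed_warming = 0.3      # deg C attributed to that forcing
f_2xco2 = 3.7               # W/m^2, proposed CO2 doubling forcing

lam = observed_warming / cloud_forcing  # deg C per W/m^2
print(round(lam * f_2xco2, 2))          # -> 1.23, close to the ~1.2 quoted
```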

  55. evanmjones says:

    It’s the relevant concern if you buy into their notion that carbon dioxide is the most significant driver of climate. I don’t think there’s any evidence of that yet. Other drivers, sensitivities, and forcings could be much more relevant.

    Possibly so. But we need to find out as much as we can about CO2 sensitivity in any case.

    Not to Obama or the UN.

    It is the “driving” concern overall. Obama won’t be with us forever. Meanwhile, we are defended by (and counting on) his outstanding failure of leadership to tide us over.

  56. usurbrain says:

    Where is this theory that ONLY man-made CO2 causes the problem coming from? Someone replied to one of my comments on a blog that their professor stated that it is ONLY man-made CO2 … then went on with how the other “Natural” CO2 was not harmful. HOW? What is different, IR-wise, about man-made CO2?

  57. Philip Haddad says:

    If we focus on the fact that we burn fossil fuels for the heat they produce, and that CO2 is a by-product (a minor greenhouse gas), we can apply common sense and a few calculations to show that the heat emissions from our energy use are four times the amount necessary to account for the actual measured rise in atmospheric temperature. We can then deduce that the effect of CO2 must be minor. We can then argue that the international push for CCS, carbon capture and storage, makes no sense: to reduce CO2 concentration by 1 ppm, 9,000,000 tons must be removed; at what cost and for what benefit? We can also argue that since nuclear power emits more than twice the total heat as its electrical output, we should not permit or license any more nuclear plants, but that is what we are now doing. What kind of scientists do we have that cannot realize that heat is what causes temperatures to rise? The heat we emit can account for most of the things we are experiencing, i.e. rising water and land temperatures and melting of glaciers. In the past century annual energy usage has increased tenfold. Sure, CO2 has increased 25%, but it is HEAT that should guide us. Will our elected officials respond? Try even getting an acknowledgement of receipt. If I sound bitter and frustrated, it’s because I am.

  58. DocMartyn says:

    Craig Loehle, I have a question on the difference between transient and ‘equilibrium’ sensitivity with respect to atmospheric CO2.
    The splice of the Law Dome ice core and the Keeling curve shows that log(CO2) is essentially biphasic: linear from 1800–1957 and from 1958–2014, with a ratio of the slopes of about 4.27. This kink in atmospheric CO2 ‘forcing’ should surely be evident in the temperature record. If there is a decade or so between transient and ‘equilibrium’, then there should be a smoother kink in the temperature record in ’67; a 20-year tau would place the kink in the late ’70s/early ’80s.

  59. Philip Haddad: “we can apply common sense and a few calculations to show that the heat emissions from our energy use are four times the amount necessary to account for the actual measured rise in atmospheric temperature.”

    Care to show us those calculations?

  60. David Ball says:

    evanmjones says:
    February 28, 2014 at 11:22 am
    All this talk about climate sensitivity is playing their game.

    “It’s the irrelevant concern.”

    FIFY

  61. PMHinSC says:

    evanmjones says:February 28, 2014 at 11:22 am
    All this talk about climate sensitivity is playing their game.
    It’s the relevant concern.

    Perhaps the more relevant concern is whether warming will do more good than harm. Particularly since at the current rate we have over a century and a half before CO2 doubles.

  62. philohaddad says:

    I’ll give a specific example. In 2008 energy use was 16 terawatts, which is equivalent to 50×10^16 BTU for that year. The mass of the atmosphere is 1166×10^16 pounds and it has a specific heat of 0.24. dH = M·Cp·dT, so 50×10^16 = 1166×10^16 × 0.24 × dT. Solving for dT, the change in temperature is
    50/(1166 × 0.24) = 0.17°F, the potential temperature rise if all the heat went there. The actual measured rise was 0.04–0.05°F (the slope of the line tangent at 2008 for temperature versus time). I hope this is adequate but I would be happy to discuss it further.
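    The arithmetic above can be checked directly (units as in the comment: BTU, pounds, deg F; the 10^16 factors cancel):

```python
# Potential atmospheric temperature rise if all 2008 energy use stayed in the air.
heat = 50.0      # x10^16 BTU, 2008 energy use
mass = 1166.0    # x10^16 lb, mass of the atmosphere
cp = 0.24        # BTU per lb per deg F, specific heat of air

dT = heat / (mass * cp)
print(round(dT, 2))  # -> 0.18 deg F (the comment rounds down to 0.17)
```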

  63. Steven Mosher says:

    “If anyone is familiar with London then this IR image clearly shows a large warm rectangle with a cool blob within. The cool blob is a lake in the Fairlop Waters Country park and the large yellow block on the right hand side is farmland. It is not what I expected. Notice that trees and water are ‘cool’.”

    That’s not a picture of UHI. It’s a picture of SUHI. UHI has to do with the air temperature below the canopy layer; SUHI is the surface (think dirt) temperature. UHI and SUHI are related, but not in any simple way.

    Here is an example of some of the biases in making IR images of LST and a comparison of SUHI and UHI

    http://www.uv.es/juy/Doc/sobrino_et_al_2013_IJRS_UHI.pdf

  64. Steven Mosher says:

    Craig.

    The first sentence of the paper is wrong.

    Climate sensitivity is the response to any radiative forcing: lambda.

    Say your lambda is, for example, 0.75 C per W/m^2.

    The sensitivity to CO2 doubling (no feedbacks) is then 3.71 W/m^2 × 0.75.

    Looking only at CO2 forcing (which is maybe 75% of the total forcing) will give you the wrong answer.
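    The point can be sketched numerically, assuming the illustrative lambda of 0.75 C per W/m^2:

```python
# Sensitivity as lambda times forcing: the response applies to the *total*
# radiative forcing, not just the CO2 part.
lam = 0.75       # deg C per W/m^2, illustrative value from the comment
f_2xco2 = 3.71   # W/m^2, forcing from a CO2 doubling

print(round(lam * f_2xco2, 2))  # -> 2.78 deg C for the CO2 forcing alone

# If CO2 is only ~75% of the total forcing, dividing observed warming by the
# CO2 forcing alone overstates lambda:
total_forcing = f_2xco2 / 0.75
print(round(lam * total_forcing, 2))  # -> 3.71 deg C response to the full forcing
```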

  65. @Jordan at 12:33 pm: “Model statistics describe the models and nothing more. There is no justification to make the leap to average model behaviour somehow being a reliable indicator of climate.” And at 1:43 pm: “The main gripe in my earlier comment is the practice of assuming average model response has meaning, whereas individual ‘realisations’ don’t.”

    I think that sums it up nicely. If the mean model response has any meaning, why should adding poorer models to the ensemble have a greater effect on the meaning than good models?

    But let me propose that while the mean of the ensemble has little meaning, the standard deviation of the ensemble has GREAT meaning. It is a direct measure of how unsettled the science is. By eliminating poorer models from the ensemble, the envelope narrows, the standard deviation reduces, and the science becomes more settled. Not necessarily more correct, just more settled.

  66. The article employs the phrase “the equilibrium climate sensitivity” (TECS). The “the” in this phrase implies TECS to be a constant, e.g. 3 Celsius per doubling of the CO2 concentration.

    TECS is the ratio between the change in the global surface air temperature at equilibrium and the change in the logarithm of the CO2 concentration. Information that TECS is a constant is not a product of scientific research conducted thus far. Thus, to assume TECS to be a constant is to fabricate this information.

  67. Alex Hamilton says:

    I have raised the issue of sensitivity to water vapor on the SkS site in comments #20 and #25 after this comment pointed out that nowhere is there any discussion of the autonomous thermal gradient in any atmosphere. In case they delete comment #25, it reads …

    Moderator: As Tom Dayton pointed out, there is no thread discussing the autonomous thermal gradient that evolves at the molecular level as the isentropic state of maximum entropy in a gravitational field – a now proven fact of thermodynamic physics which happens to have been the subject of my postgraduate research for several years. That is understandable, of course, because there is no need for any extra “33 degrees of warming” if Loschmidt was right. Seeing that no one has proved Loschmidt wrong, and modern physics has been used to prove him right, I’ll do occasional searches on SkS for the word “Loschmidt” (which does not appear anywhere on the site at the moment) and then perhaps respond to any post or comment thereon. Meanwhile you might like to search for any study which uses real world temperature and precipitation data to confirm that the sensitivity to a 1% increase in water vapor above a region is several degrees of warming. I happen to have reviewed a study (to be published in April) which shows the sensitivity is negative, which of course is what is to be expected because the Loschmidt effect causes even warmer surface temperatures which are then reduced because the wet lapse rate is less steep.

  68. philohaddad

    I see how you got there. I was thinking of comparing to a forcing like I was describing above. So taking:
    5.101E+14 m^2 earth area
    143,851 terawatt hours total energy use per Wikipedia 2008
    1.43851E+17 watt hours
    5.17864E+20 joules
    1.64213E+13 watts
    16.42134703 terawatts
    0.032192407 W/m^2

    The forcing would be only 0.032 W/m^2 for all that energy we used. The direct forcing from CO2 at half a doubling is about 1.9 W/m^2, so it is pretty small compared to that (1.7%), and it is 6.4% of the 0.5 W/m^2 I was describing above. I suppose taken all at once it could heat the atmosphere, but taken over a year the energy escapes so fast it really can’t make much of an impact. Good to know the relative amount, though. Thanks.

    Another interesting question would be the relative concentration of that energy use. 70% of the area could be ignored (ocean), and probably 99% of land too (rural, guessing). So maybe the urban forcing is 0.032/(1-0.70)/(1-0.99) = 10.7 W/m^2. That is certainly not insignificant at a local level (it is also concentrated near the ground, which is not considered). That’s going to make a dent.
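    The unit conversions above can be reproduced directly from the comment’s own figures:

```python
# Global-average "forcing" from 2008 primary energy use.
EARTH_AREA = 5.101e14   # m^2
ENERGY_TWH = 143_851    # TWh in 2008 (the Wikipedia figure quoted above)

joules = ENERGY_TWH * 1e12 * 3600          # TWh -> J
watts = joules / (365.25 * 24 * 3600)      # average power over the year
forcing = watts / EARTH_AREA               # W/m^2

print(round(forcing, 3))                   # -> 0.032 W/m^2

# Concentrated onto ~1% of the ~30% land fraction (a rough urban guess):
print(round(forcing / 0.30 / 0.01, 1))     # -> 10.7 W/m^2 locally
```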

  69. Steven Mosher: You have a good background in sensitivity and forcing/feedback; would you care to critique my comment at 11:21 am? Am I making a sensible argument for low sensitivity given the pause in atmospheric temperatures? If not, why not? Thanks!

  70. Alex Hamilton says:

    The sensitivity to carbon dioxide (even if it were positive) could not then be multiplied up by extra water vapor because, as I explained in a comment on SkS, the assumption that if the ocean were warmed then more evaporation would occur is not correct. The rate of evaporation depends more on the temperature gap at the boundary, and so, if the atmosphere supposedly warms first, that gap would narrow and evaporation would decrease.

    Then, even if they were right about water vapor warming, the sensitivity of water vapor would have to be about 5 to 8 degrees for each 1% of water vapor, as the level varies between about 1% and 4%. So they would have to show that a desert with only 1% water vapor above it would be about 15 to 24 degrees cooler than a rainforest with 4% above it at a similar latitude and altitude. Obviously no such real world data exists, and what data does exist demonstrates a cooling effect correlated with extra precipitation.

    But of course the main problem is the incorrect SkS assumption that temperatures would be homogeneous throughout the troposphere in the absence of water vapor and other radiating molecules, because molecules in free flight after a collision cannot generate gravitational potential energy out of nothing. Modern physics can be used to explain why Loschmidt was correct about autonomous thermal gradients in solids, liquids and gases. Running a wire up the outside of a cylinder does not, however, bring about any perpetual motion of energy, because the wire also develops a thermal gradient and the combined system comes to a new and stable state of thermodynamic equilibrium. So the validity of the Loschmidt effect is all that is needed to show the greenhouse guesswork was incorrect, even though Loschmidt was wrong about perpetual motion.

    Well, as expected, SkS, true to form, deleted all three of my detailed comments within one to two hours, because they obviously had no valid counter-arguments; and nowhere on their site will you find the word “Loschmidt”, because that is the weak link (now a broken link) in their chain of deception.

  71. I would like to suggest another paper, by Stephen Schwartz of Brookhaven National Laboratory: “Heat capacity, time constant, and sensitivity of Earth’s climate system,” Schwartz, S. E., J. Geophys. Res., 112, D24S05 (2007), doi:10.1029/2007JD008746.

    As I understand the paper, Dr. Schwartz estimates the climate sensitivity to a doubling of CO2 as 1.1 ± 0.5 K. He based his estimate on ocean heat content.

    This version of the paper was attacked by a team of climatologists/modelers, and subsequently Dr. Schwartz revised his estimate upward by a small amount. His result appears to be not much different from that of Spencer and Braswell. Since the heat capacity of the oceans is orders of magnitude greater than the atmosphere’s, I believe that estimates based on ocean models are more convincing.

  72. strike says:

    @PMHinSC
    “Perhaps the more relevant concern is whether warming will do more good than harm. Particularly since at the current rate we have over a century and a half before CO2 doubles.”

    Says the crab, when you warm the water from 20C to 25C.

    If that is your argument, our beloved warmists will say: maybe you’re right for 2 degrees, but what if 4 or 6 degrees, bla bla bla, and then you are on the defensive. If you argue with this “even-if the temperature” line, the “even-if” will not be heard, and for them and for neutral listeners you have already admitted 2 degrees.
    Please mind my English, but I hope you get the point anyway.

  73. “Sensitivity to CO2 is …a constant”

    Actually the physics tells us that the logarithm of CO2 is the relevant variable. That is why we use “doubling”. The process is multiplicative, not arithmetic.

    The curve is logarithmic, with diminishing returns, as the following analogy demonstrates. During WWII some blackout curtains were flimsy, so people used two layers and maybe even three. But there came a point when adding more layers had no measurable effect. However, we cannot explore this curtain analogy using only CO2, because the infrared windows are open and closed in different parts of the spectrum depending on the molecule (CO2, CH4, H2O), and the windows overlap.
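    The logarithmic relationship can be made concrete with the widely used simplified forcing expression ΔF ≈ 5.35 ln(C/C0) from Myhre et al. (1998), an assumption not stated in the comment:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al., 1998)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Each doubling adds the same forcing, whatever the starting level:
print(round(co2_forcing(560), 2))                      # -> 3.71 W/m^2 for 280 -> 560 ppm
print(round(co2_forcing(1120) - co2_forcing(560), 2))  # -> 3.71 W/m^2 again for 560 -> 1120
```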

    In any event, too much attention is paid to the GHG effect on land. The oceans make up 70% or so of the Earth’s surface and considerably more in the tropics as you can see by eyeballing a map. While the oceans do reflect a portion of the energy depending on roughness and the angle of the sun, that is not the relevant point. What counts is that most of the energy normal to the surface is absorbed by the oceans. That’s the key to the Earth’s climate.

    You can easily determine this for yourself by looking at the infrared bands of a satellite image: the oceans show up as black. In fact you can use the infrared band to generate a precise and accurate map showing coastlines; middle infrared is best.

  74. beng says:

    ***
    Gail Combs says:
    February 28, 2014 at 2:24 pm

    arxiv.org/pdf/0906.3625.pdf (A Bayesian prediction of the next glacial inception)
    ***

    Thanks. An interesting paper, tho the statistics are beyond me. It “predicts” the current interglacial lasting about 50,000 yrs more, a prediction similar to what our resident solar expert thinks is plausible.

  75. Coach Springer says:

    Resolved: There are a nearly infinite number of at least semi-serious ways of looking at CO2 sensitivity under the single assumption that climate revolves around the CO2 molecule. Thanks to the newer ones that try to better match models with observation. It leans more to the serious and away from the permanent human condition of promising control of weather by control of people. But it labors under that assumption nevertheless.

  76. PMHinSC says:

    strike says:
    March 1, 2014 at 1:34 am
    @PMHinSC
    “Perhaps the more relevant concern is whether warming will do more good than harm. Particularly since at the current rate we have over a century and a half before CO2 doubles.”

    “If that is your argument, our beloved warmists will say: maybe you’re right for 2 degrees, but what if 4 or 6 degrees, bla bla bla, and then you are on the defensive. If you argue with this ‘even-if the temperature’ line, the ‘even-if’ will not be heard, and for them and for neutral listeners you have already…”

    “What if” is a game not an argument.
    Arguing that sensitivity should be the primary focus is putting the cart before the horse. Someone should have to demonstrate that warming does more harm than good before sensitivity becomes the primary focus of discussion.

  77. Brian says:

    “If the direct effect per doubling is agreed to be 1.0°C to 1.2°C, then the sensitivity is 0.26°C to 0.31°C per doubling after accounting for feedbacks…”

    Michael D. Smith,

    Do you mean that feedbacks add an additional 0.26–0.31 C to the 1.0–1.2 C, or are you claiming that feedbacks are negative, giving a total of 0.26–0.31 C? If the latter, you’ve run off the rails. If the former, I think you are standing on solid ground. A number of years back, before the pause was clearly a pause, I estimated the equilibrium sensitivity at 1.4 ± 0.2 C based on historical data over the last century. Your result, if you mean the former interpretation, seems reasonable to me.

  78. evanmjones says:

    Someone replied to one of my comments on a blog that their professor stated that it is ONLY man-made CO2 … then went on with how the other “Natural” CO2 was not harmful. HOW? What is different, IR-wise, about man-made CO2?

    It’s not quite like that. Look at the atmosphere like a bathtub with a couple of drains in the bottom, where what circulates out circulates back in (at a slight CO2 loss). When man adds CO2 to the bathtub, the level of CO2 in the bathtub goes up, too. (More drains out, as well.)

    CO2 content has been increasing at ~0.4% per year. Now, I don’t think that matters much, but still, we are adding the CO2.

  79. David Ball says:

    evanmjones says:
    March 1, 2014 at 7:33 pm

    I am sorry Evan. Analyzing your post in detail is a little confusing. Please explain.

  80. Alex Hamilton says:

    Frederick Colbourne writes regarding sensitivity: “Actually the physics tells us that the logarithm of CO2 is the relevant variable.”

    No valid physics tells us there is any warming sensitivity to carbon dioxide. Valid physics tells us that a planet’s troposphere spontaneously evolves towards a state of maximum entropy, that state being thus characterised by isentropic conditions and hence displaying an autonomous thermal gradient which results from the force of gravity acting on individual molecules in free flight between collisions, wherein kinetic energy exchanges with gravitational potential energy.

    So the thermal gradient exists and is obviously caused by this process on other planets where there may be no surface, no direct solar radiation and no upward rising gases. So too on Earth, and hence there is no warming by 33 degrees or whatever due to sensitivity to water vapor and radiating gases. If water vapor were raising temperatures by most of that 33 degrees, then moist rain forest regions would be perhaps 15 to 20 degrees warmer than dry deserts, and they are not.

    So the whole postulate of positive sensitivity is false, and in fact water vapor lowers surface temperatures because it reduces the thermal gradient by inter-molecular radiation.

  81. richard verney says:

    Jim Cripwell says:
    February 28, 2014 at 10:43 am
    //////////////////////

    I have been saying similar for years. I find the entire discussion about climate sensitivity disingenuous. Until such time as we can separate the signal from CO2 from the noise of natural variation, it is, factually, impossible to extrapolate a figure for climate sensitivity from observational data. And without basing the figure on underlying data, any model projection is nothing more than fantasy.

    The stark fact is this; there is no first order correlation between CO2 and temperature in any of the instrument temperature records, such that presently we are unable to detect any temperature signal to increases in CO2 within the present and existing limitations of our best measuring equipment.

    As far as the satellite era is concerned, the satellite data suggest, over a 33/34-year period, that temperatures have not significantly increased at all due to any increase in atmospheric levels of CO2; i.e., temperatures were essentially flat from inception in 1979 until just before the super El Nino of 1998, and once again following that event, essentially flat to date. There has simply been a one-off and isolated temperature hike in and around the super El Nino of 1998, and unless that El Nino was in some way caused by the levels of CO2 in the atmosphere (and as far as I know no one suggests that it was), the satellite data suggest that we cannot detect any CO2-driven temperature change.

    Of course, this may in part be explained by the sensitivity and accuracy of our temperature measurements. Say, if we can measure global temperatures to tenths of a degree, then observational data would permit climate sensitivity to be as high as about 0.25°C (i.e., we have seen a rise in CO2 of about 120 ppm, from about 280 to about 400 ppm, and during this rise we have been unable to detect the signal of CO2-driven temperature changes). If we can measure global temperatures to one fifth of a degree, then climate sensitivity could be as high as 0.5°C; if we can measure global temperatures to a third of a degree, there is the possibility that climate sensitivity could be as high as about 0.75°C, etc.

    Thus one problem is the accuracy of our measurements. Can we really measure global temperatures to, say, a third of a degree, or is it in practice that our measurement efforts are such that we can only measure global temperatures to an accuracy of between half a degree and one degree? It is because there is potential for such errors in our current assessment of global temperatures these past 150 years or so that there is scope for argument that CO2 may do something. One cannot say that (at current levels of CO2) it actually does drive temperature, because no signal can be discerned over the noise; but because of the wide error margins in the accuracy of our temperature measurements, we cannot say from observational data that it does not have any significant effect. It could be the case that, due to the logarithmic effect of CO2, its effect was exhausted long before the pre-industrial level (claimed to be about 280 ppm) was reached.

    The take home is that climate sensitivity is so low that the effect (ie., the signal) cannot be measured within the limitations and parameters of our currently best available measurement equipment.
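    One way to make the detection-threshold argument quantitative is below. Note that the comment’s round numbers treat the 280 to 400 ppm rise as roughly 40% of a doubling; the logarithmic convention gives about 51%, so the bounds below come out slightly lower:

```python
import math

def sensitivity_upper_bound(detect_limit_c, c_now=400.0, c_pre=280.0):
    """Largest per-doubling sensitivity consistent with no detectable signal,
    assuming the standard logarithmic CO2 response (an assumption here)."""
    doublings = math.log2(c_now / c_pre)  # ~0.51 of a doubling so far
    return detect_limit_c / doublings

for limit in (0.1, 0.2, 1 / 3):          # measurement accuracies in deg C
    print(round(sensitivity_upper_bound(limit), 2))
# -> 0.19, 0.39, 0.65 deg C per doubling
```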

  82. richard verney says:

    In my opinion, to test these papers and their various assessments, the temperature record should be divided into slots (i.e., the late-19th-century/early-20th-century cooling, the ~30-year warming to 1940, the ~30-year cooling between 1940 and 1970, the ~25-year warming to 1998, and the 17-year stasis to date), and the authors should be required to list what positive and what negative forcings were used during each of these periods and why those forcings were used (i.e., what evidential basis supports the forcings used).

    One would then look at whether the climate sensitivity figure (as assessed by them) properly explains each of the distinct slots. One can also assess the reasonableness of the forcings claimed to be operative during the period in question. If each slot cannot be explained by the climate sensitivity figure assessed by the author, there is a problem.

    As many of us have been saying for years, the longer the current stasis continues, the lower the figure for climate sensitivity will become, and the longer it continues the more papers we will be seeing with climate sensitivity assessed at the low end of the IPCC range, or even below the low end.

    The reasonableness of the 5th Assessment will not be judged at the date of its publication, but rather at the date when the next large international gathering takes place, or maybe even in 2020, since I seem to recall that, at Rio, China indicated that it would do nothing before 2020. This is a problem for the IPCC (and politicians on the bandwagon), since it is likely that their report will, at the material time, be seen to be irrelevant, having been superseded by a substantial number of papers assessing modest levels of climate sensitivity; and with modest climate sensitivity, the case for mitigation becomes increasingly weak, and the case for adaptation becomes more attractive.

  83. philohaddad says:

    I would like to point out that heat emissions from our energy use are seldom if ever mentioned as a contributor to global warming. The fact is that heat emissions alone provide four times the energy that is required to raise the atmospheric temperature by the measured amount. This cannot simply be ignored. Over the past century energy use has increased tenfold, and is currently over 16 terawatts, while CO2 concentration has increased by 25%. Present heat emissions are about 0.03 W/m2. This has been ridiculed as being insignificant compared to a present carbon forcing of 2.9 W/m2; however, we must look at the change over the past century of each of the contributing factors, i.e. what was the CO2 forcing a century ago as compared to the present? Probably no one knows. I am not saying that the increase in CO2 contributes nothing, but the only thing we have a reasonable measure of is the increase in heat emissions during the past century, and it is enough to account for most of the effects we are experiencing. CO2 may cause a minor increase in heat retained, but increased conversion of CO2 by photosynthesis provides cooling by absorbing 5,000 BTU of solar energy for every pound of CO2 converted to trees, etc. The cooling effect may outweigh the minor heating effect (pure speculation).

  84. philohaddad says:

    Richard Verney: Although it may be determined that climate sensitivity to CO2 is too low to be a consideration, the increase in CO2 is an indication that more heat is being emitted into the environment. Heat emissions from our energy consumption are four times the amount accounted for by the measured rise in atmospheric temperature. This heat goes somewhere besides the atmosphere. Some heats the land and water, melts glaciers etc., and some is lost by radiation and convection to space. Even if CO2 is found to have low sensitivity, the damage from conventional energy will continue. Nuclear energy emits more than twice as much total heat as its electrical output. CCS (carbon capture and storage), a program of international scope, is a preposterous boondoggle that should be exposed. To reduce CO2 by 1 ppm requires the removal of 9,000,000 tons. The lower the sensitivity, the less the benefit. I appreciate the opportunity to comment in agreement with your position.

  85. Bill Illis says:

    Here are some basic numbers.

    All Forcing 2013 (GHG and all others, IPCC AR5) –> +2.30 W/m2

    Accumulating Energy 2013 (all known components) –> +0.58 W/m2

    Negative Feedbacks 2013 (including increased OLR which is rarely mentioned) –> -1.72 W/m2 or -75%

    Average Annual Temperature change for 0.58 W/m2 since 2004 when energy numbers are available –> 0.0C at surface, 0.002C deep ocean.

    Temp C change per 1.0 W/m2 –> 0.0C/W/m2 to 0.003C/W/m2 –> essentially the same zero as in the paleoclimate record.

    Temp C change per 4.2 W/m2 doubling –> 0.0C to 0.012C
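    [Ed. note: the "negative feedback" entry in the numbers above is simply the gap between the quoted total forcing and the quoted accumulating energy; a quick sketch of that bookkeeping, using the comment's figures:]

```python
# Reproducing the arithmetic behind the -1.72 W/m2 / -75% line above.
total_forcing = 2.30    # W/m2, all forcings (comment cites IPCC AR5)
accumulating = 0.58     # W/m2, energy accumulation (comment's figure)

residual = accumulating - total_forcing   # -1.72 W/m2
fraction = residual / total_forcing       # ~ -0.75, i.e. -75%

print(f"{residual:+.2f} W/m2 ({fraction:+.0%})")
```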

  86. Bill Illis says:

    The last point should read:

    Temp C change per 4.2 W/m2 doubling by 2080 –> 0.0C to 0.200C

  87. Alex Hamilton says:

    philohaddad wrote “This heat goes somewhere besides the atmosphere. Some heats the land and water, melts glaciers etc., and some is lost by radiation and convection to space.”

    Firstly, there is no significant convection into space because of the shortage of molecules to convey the kinetic energy. Radiating molecules like carbon dioxide and water receive energy conveyed by convection and radiate it away.

    Secondly, the whole Earth-plus-atmosphere system acts like a black body, and so it will radiate back to space close enough to the same incident insolation that it receives. Measurements near TOA rarely show a net difference of more than plus or minus half of one percent, and those measurements have uncertainties which effectively mean that there is no convincing evidence of any net imbalance at all. There probably is one, though, when there is prolonged natural warming or cooling as part of the obvious 1,000-year and 60-year natural cycles. But temperature variations are the cause, not the result.

    This is another reason why there is no warming by carbon dioxide. The only transfers of thermal energy by radiation are from warmer to cooler regions, so radiation can transfer thermal energy only upwards in a troposphere, possibly following a random path between several molecules of those pollutants like water vapor and carbon dioxide until it breaks out of the maze and into space.

  88. Brian says:
    March 1, 2014 at 12:55 pm

    “Do you mean that feedbacks add an additional 0.26 – 0.31 C to the 1.0 – 1.2 C, or are you claiming that feedbacks are negative, giving a total of 0.26 – 0.31 C? If the latter, you’ve run off the rails. ”

    I mean the feedbacks are negative. To recap what I've done: I'm claiming that surface temperature changes over the last 17 years are zero, and I'm using Levitus 2010 to estimate that the only accumulation of heat in the system is 0.5 W/m^2. We know that 0.52 doublings should force the system by 1.94 W/m^2, but we only "see" 0.5 W/m^2. So the feedback is -1.44 W/m^2.
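    [Ed. note: the 1.94 W/m^2 figure above is consistent with the standard simplified CO2 forcing expression F = 5.35 ln(C/C0) W/m^2; a sketch of the arithmetic, taking the comment's 0.52 doublings and 0.5 W/m^2 ocean-heat estimate as inputs rather than deriving them:]

```python
import math

# Forcing for a fractional number of CO2 doublings, using the
# standard simplified expression F = 5.35 * ln(C/C0) W/m^2.
doublings = 0.52                               # comment's input
forcing_per_doubling = 5.35 * math.log(2)      # ~3.71 W/m^2
forcing = doublings * forcing_per_doubling     # ~1.93 W/m^2 (comment uses 1.94)

observed = 0.5                                 # W/m^2, Levitus-based estimate (comment's input)
feedback = observed - forcing                  # ~ -1.43 W/m^2

print(f"forcing {forcing:.2f} W/m^2, implied feedback {feedback:.2f} W/m^2")
```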

  89. george e smith says:

    “””””……Alex Hamilton says:

    March 2, 2014 at 5:39 am

    philohaddad wrote “This heat goes somewhere besides the atmosphere. Some heats the land and water, melts glaciers etc., and some is lost by radiation and convection to space.”

    Firstly, there is no significant convection into space because of the shortage of molecules to convey the kinetic energy. Radiating molecules like carbon dioxide and water receive energy conveyed by convection and radiate it away…….”””””

    Kevin Trenberth says that of the 390 W/m^2 of LWIR radiated from the surface (at 288 K), only 40 W/m^2 escapes to space, presumably in the "atmospheric window" in the 10 micron region. The rest is absorbed by GHGs and/or clouds, H2O and CO2 being most prominent.
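    [Ed. note: the 390 W/m^2 figure is just the Stefan-Boltzmann blackbody emission at the 288 K mean surface temperature; a quick check:]

```python
# Stefan-Boltzmann emission at the mean surface temperature.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
T_surface = 288.0        # K, mean surface temperature

emission = SIGMA * T_surface**4   # ~390 W/m^2
print(f"{emission:.0f} W/m^2")
```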

    Now clouds, being water or ice, can absorb a wide range of LWIR spectrum energy from the surface, centered at 10.1 microns (288 K), and also LWIR emissions from atmospheric GHGs, which are emitted only at GHG molecular frequencies (a non-thermal spectrum). It is asserted, by experts who know more than I do, that atmospheric thermal energies are transferred by collisions to the GHG molecules, thereby raising the GHGs to some excited state, from which they subsequently re-radiate, isotropically, at those specific spectral frequencies characteristic of the particular GHG species. Thus cooling of the atmospheric gases at higher and higher altitudes occurs only by eventual radiation from GHGs at GHG-specific resonance frequencies, primarily H2O and CO2 frequencies, which are essentially Temperature independent; so there is no thermal radiation at frequencies characteristic of the atmosphere's Temperature.

    Now all of the LWIR radiation that gets absorbed by clouds (liquid and solid water) is absorbed largely independently of the cloud Temperature, and helps to determine the Temperature of the cloud, in concert with the altitude.

    The cloud then, being solid or liquid, subsequently radiates a purely thermal LWIR radiation spectrum, that is entirely determined by the cloud Temperature, and is virtually always, a longer wavelength spectrum, than what was emitted from the surface, because of the lower (often much lower) cloud Temperature.

    So the radiation to space from a cloudless sky should consist ONLY of Temperature-independent GHG band resonance radiation, in addition to the 40 W/m^2 of surface-Temperature-dependent radiation that Trenberth asserts escapes directly from the surface. Over dry, cloudless deserts, that cooling radiation would consist almost exclusively of CO2 band spectra, in addition to the surface thermal radiation.

    So it seems to me, that over cloudless skies, the extra-terrestrial LWIR radiation should consist of only a Temperature independent H2O plus CO2 band spectrum, in addition to the surface Temperature dependent surface thermal emissions, that escape in the atmospheric window.

    Over clouds, there would be an additional thermal spectrum component that is characteristic of the cloud Temperature, and each cloud layer, would have its own characteristic Temperature thermal spectrum.

    Every extra-terrestrial LWIR spectrum, I have ever seen, consists of a surface Temperature dependent black body like thermal spectrum, with Temperature independent spectral holes at the prominent GHG resonant absorption band frequencies.

    The atmosphere itself of course does not radiate thermal spectra at the atmosphere Temperature.

    So why do people claim that the earth's external emissions are at a 255 K or so characteristic Temperature, and that the delaying effect of CO2 raises the surface Temperature to 288 K?

    With a 288 K surface Temperature, you can ONLY get a 255 K LWIR external spectrum, from high clouds at 255 K Temperature. At higher altitudes beyond all clouds, there cannot be Temperature dependent BB like radiation, at other than the surface Temperature of 288 K (on average of course).

    Well unless you believe the atmospheric gases themselves can radiate a Temperature dependent thermal spectrum.
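    [Ed. note: for reference, the 255 K figure questioned above is conventionally derived as the effective radiating temperature at which blackbody emission balances absorbed sunlight, using the standard textbook solar constant and albedo:]

```python
# Effective radiating temperature balancing absorbed solar input.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
S0 = 1361.0              # solar constant, W/m^2
ALBEDO = 0.30            # planetary albedo (standard textbook value)

absorbed = S0 * (1 - ALBEDO) / 4     # ~238 W/m^2, averaged over the sphere
T_eff = (absorbed / SIGMA) ** 0.25   # ~255 K

print(f"{T_eff:.0f} K")
```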

  90. george e smith says:

    It would be nice if somebody more knowledgeable would explain how, at higher altitudes beyond water vapor, where atmospheric gas collision energies are much lower, collisions can still kick primarily CO2 molecules into the 15 micron resonance bending oscillations, since that must be the only remaining mode of radiative cooling.

  91. Philip Haddad says:

    To Alex Hamilton: There should be no doubt that heat is transferred between the earth and the atmosphere by convection. When cold winds come, the earth is cooled. I will not conjecture about how the heat leaves the atmosphere.
