Climate Sensitivity: A Simple Inverse Model

Guest post by David Middleton

Models often get a bad rap among skeptics, largely because climate models have demonstrated an epic failure of predictive skill. However, models are extremely valuable scientific tools, particularly when used heuristically. Models are learning tools.

Broadly speaking, models fall into two categories:

  1. Forward problems.
  2. Inverse problems.

Inverse problem

From Wikipedia, the free encyclopedia

An inverse problem in science is the process of calculating from a set of observations the causal factors that produced them: for example, calculating an image in computed tomography, source reconstruction in acoustics, or calculating the density of the Earth from measurements of its gravity field.

It is called an inverse problem because it starts with the results and then calculates the causes. This is the inverse of a forward problem, which starts with the causes and then calculates the results.

Inverse problems are some of the most important mathematical problems in science and mathematics because they tell us about parameters that we cannot directly observe. They have wide application in optics, radar, acoustics, communication theory, signal processing, medical imaging, computer vision, geophysics, oceanography, astronomy, remote sensing, natural language processing, machine learning, nondestructive testing, and many other fields.

[…]

Wikipedia

In oil & gas exploration, we make extensive use of inverse models. We start with a result (a seismic amplitude anomaly) and then try to calculate the causes (oil, gas, tabular salt, oyster beds, geopressure, tuff, marl, etc.). One of the most widely used tools is called a “fluid replacement model.” Using sonic and density logs from an existing well drilled through the objective section, we can mathematically substitute oil and gas for brine in wet sands and generate a synthetic seismic anomaly to compare with the real seismic anomaly. While these models are very useful, sometimes you will get a result that just doesn’t make sense. If a model defies realistic geology, it’s probably wrong. I wondered if we could do the same sort of thing with climate data.

Climate Sensitivity Inverse Model

In a fluid replacement model, we replace one fluid with another to see what the real data would look like with different fluid contents. In my climate model, I simulated what a climate reconstruction would look like without the industrial-era rise in atmospheric CO2. I used two transient climate response cases: 1) 1.35 °C and 2) 3.50 °C per doubling of atmospheric CO2.

sensitivity_01

Figure 1) Transient Climate Response: 1.35 °C (red) and 3.5 °C (green) per doubling of CO2, starting at 277 ppmv.

This yielded two equations for ΔT:

  1. ΔT = 1.9476*ln(CO2) – 10.954 for TCR = 1.35 °C
  2. ΔT = 5.0494*ln(CO2) – 28.398 for TCR = 3.50 °C

Using CO2 data from Law Dome (MacFarling Meure et al., 2006) and a Northern Hemisphere climate reconstruction (Ljungqvist 2010), I calculated the CO2-driven temperature component and then subtracted it from the reconstructed temperatures.
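Under the hood, both equations above are just ΔT = TCR × ln(C/277)/ln(2) with the logarithm expanded. A minimal Python sketch (the CO2 and temperature arrays below are placeholders, not the Law Dome or Ljungqvist series) reproduces the coefficients and the subtraction step:

```python
import numpy as np

def tcr_coeffs(tcr, c0=277.0):
    """Expand dT = TCR*ln(C/c0)/ln(2) into the form dT = a*ln(C) + b."""
    a = tcr / np.log(2.0)
    return a, -a * np.log(c0)

a1, b1 = tcr_coeffs(1.35)  # ~(1.9476, -10.954), matching equation 1
a2, b2 = tcr_coeffs(3.50)  # ~(5.0494, -28.398), matching equation 2

def co2_component(co2_ppmv, tcr, c0=277.0):
    """CO2-driven temperature anomaly relative to the 277 ppmv baseline."""
    return tcr * np.log(np.asarray(co2_ppmv) / c0) / np.log(2.0)

# Placeholder inputs, purely illustrative:
co2 = np.array([277.0, 290.0, 370.0, 400.0])  # ppmv
recon = np.array([0.00, 0.05, 0.45, 0.80])    # reconstructed anomalies, deg C

# "Fluid replacement" step: strip out the CO2-driven component.
natural_low = recon - co2_component(co2, 1.35)
natural_high = recon - co2_component(co2, 3.50)
```

Subtracting the 3.50 °C case removes a much larger component at modern CO2 levels, which is why the high-TCR curve in Figure 2 plunges so far below the reconstruction.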

sensitivity_02

Figure 2) Northern hemisphere temperatures over the past 2,000 years. Black = with industrial era CO2 rise. Red = without industrial era CO2 rise, TCR = 1.35 °C. Green = without industrial era CO2 rise, TCR = 3.50 °C.

The removal of a 1.35 °C TCR yields a reasonable result. The removal of a 3.50 °C TCR yields a temperature much colder than the nadir of the Little Ice Age. This seems massively unlikely. (When I applied this to Greenland ice core temperatures (Alley, 2000 and Kobashi et al., 2010) using a 2X polar amplification, the model yielded temperatures equivalent to the Bølling–Allerød glacial interstadial.)

Zooming in on the “Anthropocene,” it appears that a high-TCR (alarmist) world would already be buried under a mile of ice, if not for anthropogenic augmentation of the so-called greenhouse effect.

sensitivity_04

Figure 3) In the mid to late 1970’s, we were on the verge of a “new ice age” according to many media reports. Just imagine the global cooling hysteria if we hadn’t been augmenting the so-called greenhouse effect!!!

Conclusions

If my methodology is valid, it seems highly probable that the climate is relatively insensitive to atmospheric CO2.  This would seem to validate recent observation-based climate sensitivity estimates, which indicate rather low TCR and ECS values.

Notes

  • I use the phrase “so-called greenhouse effect” because it doesn’t really work like a greenhouse… Not because I deny its existence.
  • If I’ve made any glaring errors, please point them out.
  • For the purpose of this exercise, I am assuming that the Law Dome ice core has adequate resolution to yield an accurate depiction of pre-industrial atmospheric CO2.
References

[1] Ljungqvist, F.C. 2010.
A new reconstruction of temperature variability in the extra-tropical
Northern Hemisphere during the last two millennia.
Geografiska Annaler: Physical Geography, Vol. 92 A(3), pp. 339-351,
September 2010. DOI: 10.1111/j.1468-0459.2010.00399.x

[2] MacFarling Meure, C., D. Etheridge, C. Trudinger, P. Steele,
R. Langenfelds, T. van Ommen, A. Smith, and J. Elkins. 2006.
The Law Dome CO2, CH4 and N2O Ice Core Records Extended to 2000 years BP.
Geophysical Research Letters, Vol. 33, No. 14, L14810 10.1029/2006GL026152.

[3] Science News


128 thoughts on “Climate Sensitivity: A Simple Inverse Model”

  1. If my methodology is valid, it seems highly probable that the climate is relatively insensitive to atmospheric CO2. I agree, and assert one can reach the same conclusion by looking at the ice core CO2 vs. temperature reconstructions. That is, every time CO2 is at its highest levels, we re-glaciate. Therefore: one, how much warming can it be causing, and two, whatever is really driving the climate overwhelms CO2. That unknown driver also warms when CO2 is at its lowest levels.

    • The sunspot cycle 20 (~1968-1979) was a rather weak cycle. I still believe we all know the “unknown” driver: it’s the sun (whose behaviour is still not fully understood)!

      • Robert,

        The sun radiates approximately as a 5778 K black body. So if you were to plot intensity versus wavelength, you get a Planck curve that peaks somewhere near yellow light (thus a yellow sun) and extends into the far infrared and beyond, but at quickly decreasing intensities.

        There are disruptions in the solar atmosphere that can perturb the curve away from the perfect Planck curve and that could have some climate impacts if they were to persist, but the sun otherwise seems pretty constant. The problem is our atmosphere actually does behave differently in different wavelengths so any change in the curve could seriously change the amount of power reaching the surface. I am not sure how much research has been done in this area (natural sun-earth interactions) since the IPCC stole all the funding with the goal of finding a way to blame it on man.
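        Where that Planck curve peaks can be checked with Wien’s displacement law. A quick sketch (5778 K is taken from the comment above; 288 K is an assumed nominal Earth surface temperature):

```python
# Wien's displacement law: a black body's peak wavelength is b / T.
WIEN_B = 2.898e-3  # Wien displacement constant, m*K

def peak_wavelength_um(temp_k):
    """Peak emission wavelength in micrometres for temperature in kelvin."""
    return WIEN_B / temp_k * 1e6

sun_peak = peak_wavelength_um(5778)   # ~0.50 um, in the visible band
earth_peak = peak_wavelength_um(288)  # ~10 um, thermal infrared
```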

      • What I’m getting at might be a dumb question for those already experts in this area, but here goes anyway (fools sometimes ask the most enlightening questions):

        Does the sun radiate ANY 15 micron radiation, and if yes, then how much, compared to how much the Earth radiates?

      • Robert, yes the Sun radiates a lot at that frequency, much more than the Earth. However, by the time it gets here, its “brightness” (in W/m2) is much less than what the Earth emits. This is because the intensity is reduced by 1/R^2, where R is the distance between the Earth and the Sun. As a result, the Earth emits much more far infrared than it receives from the Sun, even though the Sun is emitting many thousands of times more energy at the same frequency.

        By the way, the NIST video you linked to shows intensity versus wavelength. Plots of that type are extremely misleading – plots used with respect to climate change should ALWAYS be with respect to frequency (usually expressed as a wave number), because photon energy is proportional to frequency and inversely proportional to wavelength. The plot NIST provides implies that most of the solar energy is in the visible part of the spectrum when, in reality, most of the energy is in the IR!
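        The inverse-square point can be made quantitative with the Planck function. A rough sketch (the constants and the nominal 288 K Earth surface temperature are my own assumptions, not values from the thread):

```python
import math

H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann (SI)
R_SUN, AU = 6.957e8, 1.496e11            # solar radius and Earth-Sun distance, m

def planck(lam_m, temp_k):
    """Planck spectral radiance B(lambda, T) in W / (m^2 sr m)."""
    x = H * C / (lam_m * K * temp_k)
    return 2 * H * C**2 / lam_m**5 / math.expm1(x)

lam = 15e-6                    # the 15 um CO2 band
dilution = (R_SUN / AU) ** 2   # inverse-square geometric dilution, ~2.2e-5

solar_at_earth = planck(lam, 5778) * dilution  # solar 15 um radiance, diluted
earth_emission = planck(lam, 288)              # Earth's own 15 um radiance
ratio = solar_at_earth / earth_emission        # ~0.003: a fraction of a percent
```

So the Sun’s surface is indeed far brighter at 15 µm, but after the 1/R² dilution the incoming solar contribution in that band is only a few tenths of a percent of what the Earth itself emits there.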

      • Robert Kernodle, a hot black body radiates a smaller portion of its total radiation at longer wavelengths, but still more at those wavelengths than a similar but cooler black body.

      • “The plot NIST provides implies that most of the solar energy is in the visible part of the spectrum when, in reality, most of the energy is in the IR !!”

        According to Naturalfrequency.com:
        “Ultraviolet (UV) radiation makes up a very small part of the total energy content of insolation, roughly 8%–9%. The visible range, with a wavelength of 0.35 µm to 0.78 µm, represents only 46%–47% of the total energy received from the sun. The final 45% of the sun’s total energy is in the near-infrared range of 0.78 µm to 5 µm.”
        http://naturalfrequency.com/wiki/solar-radiation
        It seems that there are almost equal amount in visible and IR.

      • Seaice,

        Your source must be citing the energies incident at the top of the atmosphere, rather than the surface. Your link has a graph showing the spectrum at both elevations.

        At zenith, sunlight provides an irradiance of just over 1 kilowatt per square meter at sea level. Of this energy, 527 watts is infrared radiation, 445 watts is visible light, and 32 watts is ultraviolet radiation.

        http://rredc.nrel.gov/solar/spectra/am1.5/

        However, the UV component varies considerably, and is naturally higher at the top of the atmosphere.

      • Thanks to people here responding to my question about infrared radiation from the sun.

        Robert Clemenzi answered one of my questions as follows:

        Robert, yes the Sun radiates a lot at that frequency, much more that the Earth. However, by the time it gets here, its “brightness” (in W/m2) is much less than what the Earth emits. This is because the intensity is reduced by 1/R^2, where R is the distance between the Earth and the Sun. As a result, the Earth emits much more far infrared than it receives from the sun even though the Sun is emitting many thousands of times more energy at the same frequency.

        I’m still not clear on how this infrared entering at top of atmosphere from the sun fails to have a similar effect on atmospheric CO2 as infrared re-entering the atmosphere from Earth’s surface.

      • Gloateus Maximus, thank you for the clarification. Yes the figures I quoted were probably top of atmosphere, not surface.

    • As CO2 and water vapor are more accurately called “radiative gases,” they have different behaviors depending on day or night. During the day they absorb IR, convert it to heat, and just as readily convert heat into IR. As they are saturated during the day, their effect would be nil.

      However, at night, with no insolation causing IR from above or below, these gases would convert heat energy in the atmosphere to IR, which is eventually lost to space. Note how quickly the lower troposphere chills down after sunset, or how quickly little breezes kick up in the shadows of scudding clouds on a mostly sunny day. The bottom line is that radiative gases cool the planet.

      The idea that these gases heat the atmosphere directly is a sleight of hand done by the warmists. As their tropical upper tropospheric “hotspot,” which their “science” said HAD to be heating faster than anywhere else and thus heating the surface, simply does not exist (the region has been gently cooling for 30+ years). They quietly abandoned that “science” and tacitly assume that these gases heat the atmosphere directly. They have no evidence for this of any kind, but talk constantly as if it’s done science.

      Warmist “science” is failed science, as the surface is always hotter than the air during the day and any IR sent to the surface would be reflected and not absorbed, according to thermodynamics that cannot be disputed. The upper tropical troposphere is -17 deg C and the surface is 15 deg C. Cold cannot warm hot.

      Like a carbonated beverage, atmospheric CO2 always, at all time scales, follows temperature changes. That said, CO2 cooks out of the oceans rapidly as they warm, but goes back in more slowly when they cool.

      • HIghly7 – Yup, sort of…
        Best, Allan

        https://wattsupwiththat.com/2017/01/24/apocalypse-cancelled-sorry-no-ticket-refunds/comment-page-1/#comment-2406538

        [excerpts]

        I have stated since January 2008 that:
        “Atmospheric CO2 lags temperature by ~9 months in the modern data record and also by ~~800 years in the ice core record, on a longer time scale.”
        (In my shorthand, ~ means approximately and ~~ means very approximately, or ~ squared.)

        It is possible that the causative mechanisms for this “TemperatureLead-CO2Lag” relationship are largely similar or largely different, although I suspect that both physical processes (ocean solution/exsolution) and biological processes (photosynthesis/decay and other biological processes) play a greater or lesser role at different time scales.

        All that really matters is that CO2 lags temperature at ALL measured times scales and does not lead it, which is what I understand the modern data records indicate on the multi-decadal time scale and the ice core records indicate on a much longer time scale.

        This does NOT mean that temperature is the only (or even the primary) driver of increasing atmospheric CO2. Other drivers of CO2 could include deforestation, fossil fuel combustion, etc., but that does not matter for this analysis, because the ONLY signal apparent in the data is the LAG of CO2 after temperature.

        It also does not mean that increasing atmospheric CO2 has no impact on global temperature; rather it means that this impact is quite small.

        I conclude that temperature, at ALL measured time scales, drives CO2 much more than CO2 drives temperature.

        Precedence studies are commonly employed in other fields, including science, technology and economics.

        Does climate sensitivity to increasing atmospheric CO2 (“ECS” and similar parameters) actually exist in reality, and if so, how can we estimate it? The problem as I see it is that precedence analyses prove that CO2 LAGS temperature at all measured time scales*. Therefore, the impact of CO2 changes on Earth temperature (ECS) is LESS THAN the impact of temperature change on CO2 (ECO2S).

        What we see in the modern data record is the Net Effect = (ECO2S minus ECS). I suspect that we have enough information to make a rational estimate to bound these numbers, and ECS will be very low. My guess is that ECS is so small as to be practically insignificant.

        Regards, Allan

        *References:

        1. MacRae, 2008
        http://icecap.us/images/uploads/CO2vsTMacRae.pdf

        Fig. 1

        Fig. 3

        2. http://www.woodfortrees.org/plot/esrl-co2/from:1979/mean:12/derivative/plot/uah5/from:1979/scale:0.22/offset:0.14

        3. Humlum et al, January 2013
        http://www.sciencedirect.com/science/article/pii/S0921818112001658
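        The precedence analysis described above can be illustrated on synthetic data. This sketch only demonstrates the method (scanning cross-correlation over candidate lags) and does not use any real CO2 or temperature record; the 9-step lag is imposed artificially:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a synthetic "temperature" series and a "CO2 rate" series that
# lags it by 9 steps (think months), plus a little measurement noise.
n, true_lag = 480, 9
temp = np.cumsum(rng.normal(size=n))              # red-noise temperature proxy
co2_rate = np.roll(temp, true_lag) + 0.1 * rng.normal(size=n)

# Scan candidate lags; trimming 24 samples from each end discards the
# wrap-around introduced by np.roll. The peak recovers the imposed lag.
lag_list = list(range(-24, 25))
corrs = [np.corrcoef(temp[24:-24], np.roll(co2_rate, -k)[24:-24])[0, 1]
         for k in lag_list]
best_lag = lag_list[int(np.argmax(corrs))]        # ~ +9: CO2 lags temperature
```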

      • Higley, “these gases would convert heat energy in the atmosphere to IR which is eventually lost to space. ” Where did these gas molecules get the energy from in the first place?

    • I came to exactly the same conclusion some time ago – you can simply look at the temperature response to carbon dioxide to date, and absolutely rule out a high level of climate sensitivity. Why anyone keeps claiming otherwise is baffling.

      Given that the ~1.3 degree increase is for a doubling of carbon dioxide concentrations, this suggests the response of the climate will begin slowing. In other words, we have already seen most of the climate change impact from carbon dioxide, as each unit of carbon dioxide added has increasingly less effect.

      Good news!

    • ” However, models are extremely valuable scientific tools, particularly when used heuristically.”

      “A heuristic method is used to rapidly come to a solution that is hoped to be close to the best possible answer, or ‘optimal solution’. Heuristics are “rules of thumb”, educated guesses, intuitive judgments or simply common sense. A heuristic is a general way of solving a problem.”

      Who needs data then?

      Cheers

      Roger

      http://www.rogerfromnewzealand.wordpress.com

  2. Well, the sun, consisting of 99.8% of the mass of the solar system, may have some effect. I have noticed it is warmer during the day than it is at night. Perhaps I need to draw a graph and put numbers all over it.

    • I would invent big lights so working people could enjoy outdoor sports at night. Then large buildings so we could enjoy indoor sports in winter.

      Then I would invent computers to run models for all the things we could not get done with slide rules. [/sarc]

      If you need a computer and model to tell you there is a problem, there is not a problem.

      David’s example of using models in the oil industry, and those we use in the nuclear industry, helps us be more productive so we have more time to enjoy what we like.

      • Too funny!!! Back in the mid-1980s, my first employer, Enserch Exploration, decided to host a geophysical symposium in which each of the company’s geophysicists presented a “paper” on innovative approaches to exploration. I was working the Smackover play of the East Texas Salt Basin and had been doing some work on vertical-incidence and ray-tracing models to help image deep salt structures. I closed my presentation with a slide of me with my feet up on my desk reading the Bill James Baseball Abstract (I was a Strat-O-Matic Baseball junkie) and said, “Thanks to these structural models, I have more time to relax in the office… ;)”

      • Much simpler to just implement Kevin Trenberth’s sun which shines 24/7 day and night all over the earth at all times, at a steady 431.5 W/m^2.

        g

      • seaice,

        I WAS asking the question sincerely, and thanks for responding. I actually DO pose serious questions and engage in serious dialogue from time to time, despite my drifts into sarcasm and satire.

        You said:

        Consider that 1 m3 of atmosphere contains something like 1 x 10^22 molecules of CO2. To put this in perspective, that is over 1000x as many molecules of CO2 in every m3 of air as there are grains of sand on the Earth. That is quite a staggeringly large number. In every cubic meter of atmosphere!

        When looked at this way, what does your intuition tell you now? Mine tells me that this number of molecules could certainly have a significant effect.

        Well, per your helpful calculation, my intuition tells me that one cubic meter of Earth atmosphere contains something like 2500 x 1 x 10^22 OTHER molecules, … or 2.5 x 10^25 OTHER molecules for every 1 x 10^22 molecules of CO2. That is over 2,500,000 times as many OTHER molecules (besides CO2) in every cubic meter of Earth air than there are grains of sand on the Earth. That is quite a bit MORE staggeringly large number. In every cubic meter of Earth atmosphere!

        In other words, the proportionate control of 1 molecule over 2500 molecules would NOT change; it would just get multiplied in respect to the volume under consideration. My question, thus, (honestly) remains.
        Hopefully, I got my basic math correct. My intuition still tells me, then, that this seems like a lot of control being exerted by such a proportionately small quantity of CO2 molecules on a so much larger quantity of OTHER molecules.

      • Robert Kernodle,
        I don’t quite get where your intuition is coming from. We agree that there are a staggeringly large number of molecules of CO2 in the atmosphere. This would intuitively seem to be enough molecules to interact with radiation and have a significant effect. We are only talking of a few degrees for a doubling of this number of molecules.

        Yet your intuition tells you that because there are many more molecules of something else this is not the case. I am not sure why this should seem intuitive to you.

        Imagine putting a 1 kg piece of microwave-transparent plastic in the microwave oven and turning it on. The plastic does not heat up at all because it is transparent to the energy; the waves just pass straight through. Now we include 0.04% of a strong microwave absorber in the plastic. The waves no longer pass straight through, but interact with the absorber, which heats up. The plastic will heat up. It is unarguable that the plastic will heat with the 0.04% addition and not heat without it. This seems like a lot of control for such a small proportion, but intuition tells me that this small addition will result in heating where there was none before.

        Exactly how much heating I would need to work out, but intuitively it makes no sense at all to say that because there are 2,500 times as much plastic as microwave absorber there will be no heating.

        However, this intuition is interesting when we apply it to Mars. The partial pressure of CO2 on Mars is about 570 Pa; the partial pressure of CO2 on Earth is about 40 Pa. Using our molecule-counting intuition, there are roughly ten times more molecules of CO2 per cubic meter on Mars than there are on Earth. That suggests a much stronger effect than we get on Earth, yet I read in many places that Mars’ atmosphere is “too thin” for a significant greenhouse effect. Despite a quick look, I have not yet found the calculated temperature of Mars without an atmosphere. An interesting point for discussion, I think.
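        The molecule counting can be sanity-checked with the ideal gas law. In this sketch the 210 K Mars and 288 K Earth surface temperatures are my own assumed values; because Mars is colder, the number-density ratio comes out closer to 20x than the ~14x suggested by partial pressures alone:

```python
K_B = 1.381e-23  # Boltzmann constant, J/K

def number_density(partial_pressure_pa, temp_k):
    """Ideal-gas number density n = p / (k*T), molecules per m^3."""
    return partial_pressure_pa / (K_B * temp_k)

n_earth = number_density(40.0, 288)   # ~1e22 CO2 molecules/m^3, as quoted above
n_mars = number_density(570.0, 210)   # colder Mars packs more molecules per Pa
ratio = n_mars / n_earth              # ~20x more CO2 per m^3 on Mars
```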

    • Well the sun consisting of 99.8 of the mass of the solar system may have some effect. I have noticed it is warmer during the day than it is at night. Perhaps I need to draw a graph and put numbers all over it.

      The sun? … 99.8% of the mass of the solar system? Nah, that couldn’t be it. It HAS to be something that is 0.04% of Earth’s atmosphere. THIS little picture should make it clear:

      • Thank you, Mr. Kernodle, for the graphic. I have tried to impress upon the socialist government in the province of Alberta, Canada, with a similar graphic of 50 x 50 squares, identifying N2, O2, Ar, CO2 and other “trace” molecules.

        Unfortunately it has fallen upon deaf ears (and blind eyes), ….. , so far ……

      • Looking at my simple dot graphic of a CO2 molecule compared to surrounding air molecules (representing 400 ppm), I wonder what exactly happens at the molecular level that this one CO2 molecule could dominate the motion of those other twenty-five hundred air molecules.

        Does one CO2 molecule really vibrate, rotate, or translate enough to work up all those other molecules to help “heat” them up? Does the seemingly more internal action (radiation) of that one molecule among twenty-five hundred others further command the greater seemingly more external action (convection/maybe conduction?) of those other twenty-five hundred molecules of air?

        Maybe this is a child-like, “fold-out-book” (^_^) perspective, but can intuition be so wrong as all the complex equations, models, and explanations make it out to be?

      • Robert Kernodle. “but can intuition be so wrong as all the complex equations, models, and explanations make it out to be?”
        I don’t know if you are asking those questions in a genuine spirit of inquiry, but if you are, then the answer is yes – although that does depend on what angle your intuition is coming from.

        Consider that 1 m3 of atmosphere contains something like 1 x 10^22 molecules of CO2. To put this in perspective, that is over 1000x as many molecules of CO2 in every m3 of air as there are grains of sand on the Earth. That is quite a staggeringly large number. In every cubic meter of atmosphere!

        When looked at this way, what does your intuition tell you now? Mine tells me that this number of molecules could certainly have a significant effect.

      • “When looked at this way, what does your intuition tell you now?”

        My intuition tells me that as there are 2,500 molecules of other gases, the effect of that one CO2 molecule is likely to be insignificant.

        I notice you entirely fail to take into account the very much more common H2O molecules, incidentally.

        Why is that?

      • One of the ways climate skeptics hurt their cause is pushing the trope that CO2 constitutes “only” one molecule per 2500 molecules of air, as if that were convincing evidence that it could not possibly have a meaningful effect. If you believe this, would you be willing to drink some water containing one molecule of arsenic for every 2500 of water? If not, why not?

      • Let’s assume the average human being is 75 kilograms. That is, 75,000 grams. 75,000 grams is 75 million milligrams. Is anyone willing to ingest 1/4th of a milligram of lysergic acid diethylamide?

  3. Interesting model, but historical proxies for temperature and CO2 seem to indicate there is something else going on besides a simple relationship between the two variables. In other words, there have been reasonable climates with much higher levels of CO2 than the current relationship would predict, and ice ages that do not seem to have been caused by CO2 deficiencies.

  4. Inverse problems are some of the most important mathematical problems in science and mathematics because they tell us about parameters that we cannot directly observe. They have wide application in optics, radar, acoustics, communication theory, signal processing, medical imaging, computer vision, geophysics, oceanography, astronomy, remote sensing, natural language processing, machine learning, nondestructive testing, and many other fields.

    Why, then, are warmists so quick to dismiss RSS and UAH temperature trends as the result of faulty satellite orbits and sensors? Why are ARGO buoys adjusted, yet satellite measures of SST are “correct”?

    The whole field of climate science is rife with false beliefs and hypocrisy.

    • AZ1971,

      Only satellite temperature data matter when it comes to the “greenhouse” effect, because it takes place in the ATMOSPHERE, where the postulated CO2 effect is supposed to occur.

      Since the satellite data destroy the AGW conjecture totally, they have to be fought tooth and nail to maintain the delusion that a trace increase of a trace atmospheric gas can drive the climate; they even push the absurd Positive Feedback Loop hard, trying to add onto the long-known minimal warming effect of CO2 molecules.

      Surface and ocean temperature data don’t really support the AGW delusion, since that is not where the postulated IR back-radiation paradigm takes place. The Sun, with a dribble from the inner earth, is what warms the waters, NOT CO2 or the never-found Positive Feedback Loop.

      Anthony and his band of volunteers exposed, years ago, the shoddy surface temperature data here in America, due to poor siting and calibration methods. Yet AGW supporters LOVE it despite the obvious flaws of the system.

      It is clear to me that the whole climate-warmist, AGW-loving cabal is dishonest, misleading, absurd and greedy.

  5. Interesting how, when someone uses the term “model,” the article only seems to draw brickbats, regardless of whether or not it is pointing in a favorable direction. A good intellectual exercise in trying to find more ways to show that there is nothing special about human-created carbon dioxide.

    • Well, a model is only useful insofar as it actually emulates the real physical system.

      If the climate model does not derive from known physical “law” descriptions of the behavior of the elements in the system, then it has little credibility.

      For example, an earth illuminated continuously all over at a constant 431.5 W/m^2 can NEVER reach an average global temperature of 330 K, no matter how much CO2 is in the atmosphere. So that model is useless.

      But if a rotating earth is illuminated at a constant 1362±6 W/m^2 over its sunlit projected area, it would surpass that maximum surface temperature in the tropical regions directly beneath the zenith sun. If the earth’s rotation slowed to, say, half its present rate, you would see peak temperatures way higher than we have now.

      Luckily the sun sets before we all fry, each and every day.

      G
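      The ceiling implied by the flat-sun figure is easy to verify with the Stefan–Boltzmann law. A one-line sketch, assuming an ideal blackbody (emissivity 1):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp(flux_wm2, emissivity=1.0):
    """Blackbody equilibrium temperature for a given absorbed flux."""
    return (flux_wm2 / (emissivity * SIGMA)) ** 0.25

t_flat_sun = equilibrium_temp(431.5)  # ~295 K: well short of 330 K
```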

      • “If the climate model does not derive from known physical “law” descriptions of the behavior of the elements in the system, then it has little credibility”
        Thanks, George.
        It is fundamental in any mathematical model, as any engineer knows. Given 102 models, all different, it is obvious that none have a grasp on all the fundamentals that drive the system. I happen to believe that it is impossible to develop a valid model regardless of how big the computer happens to be. The problem is not a larger computer.

  6. It is an interesting exercise, but even if we assume the black curve is entirely the result of CO2 concentration, and we know it is not, it must result from concentration and system time lags whereas the red and green curves are instantaneous results.

      • Thanks for your reply. So a linear trend of 1% per year takes 72 years to double (constant thereafter), and it looks like there is still apparent temperature change at 100 years. I would only call that instantaneous if your substantive simulations had a much longer time scale than 100 years; but they take place over a time span of 100 years or so. I still maintain you can’t directly compare the green and red curves to the black one without calculating the green and red using a convolution integral, and the actual time constants involved are not apparent to me.

      • The Transient Climate Response is more or less instantaneous. It’s the “temperature change at the time of CO2 doubling.” The Equilibrium Climate Sensitivity has the long lag time.

      • Thanks again for your response. I see the method here. It will not work the same for all CO2 transients of course. Interesting post.

      • It is worth mentioning that even instantaneous incineration of all known fossil reserves would not produce a single CO2 doubling.

      • There are more than enough fossil fuels to achieve a doubling of CO2. At the current rate of consumption, we’ll reach ~560 ppmv around the end of this century and we’ll still have plenty of coal, oil and gas remaining.

        Current worldwide proved oil reserves equal 30-40 years of current consumption… And proved reserves are only a fraction of the oil in existing fields.

      • The actual measured annual increase of atmospheric CO2 is only about 0.55%, not the 1% often quoted by the IPCC and others. It makes a big difference in the time to double.
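        How much difference the growth rate makes is just compound-interest arithmetic, sketched here with the comment’s 0.55% figure and the oft-quoted 1%:

```python
import math

def years_to_double(annual_growth_rate):
    """Years for a quantity growing at a compound rate r per year to double."""
    return math.log(2) / math.log(1 + annual_growth_rate)

t_one_percent = years_to_double(0.01)       # ~70 years at 1%/yr
t_measured = years_to_double(0.0055)        # ~126 years at 0.55%/yr
```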

  7. “If my methodology is valid, it seems highly probable that the climate is relatively insensitive to atmospheric CO2.”
    In other words, then, the huge volume of liquid water (and related water vapor – up to 4% of the atmosphere) is the main or only driver moderating our atmosphere, keeping it from being too cold or too hot? The Earth is not exactly a greenhouse but more of a giant heat exchanger?

    • Indeed, not just a heat exchanger but a type of heat pipe called a Perkins tube. Water evaporating from the surface takes heat with it, and being latent heat, it no longer registers on a thermometer unless inferred from enthalpy.
      Average annual precipitation accounts for a majority of surface cooling. This enthalpy escapes the surface and skips half the mass of the atmosphere, as the average condensation level for meaningful precipitation is over 18,000 feet.
      Then there is virga – precipitation that evaporates before reaching the ground. This is the nature of many cumulus clouds where they appear to hold a constant level. In fact, they are cooling the lower atmosphere. Sun-powered airplanes – I’m talking about glider pilots – have used these energy transfers to gain altitude.

  8. I would suggest doing this but instead of taking CO2 out of the equation, take the Sun out of the equation.

  9. I’ve done the same type of analysis using as many datapoints as are available. Essentially, taking all the reliable CO2 estimates there are and backfitting the temperature estimate curves to those exact same times.

    You can call this the empirical equilibrium CO2 sensitivity experienced in the past on planet Earth. I would define it as random. Anything between +/- 40C per doubling appears to work. Or, in other words, Zero correlation.

    • I really don’t get the use of the term “equilibrium” with respect to climate. It is inherently a non-equilibrium phenomenon. The major driver is, obviously, the solar influx, which varies by 6.9% every 183.25 days. The second is albedo, which varies according to season (higher in northern hemisphere winter than summer based on surface reflectiveness) and hourly with cloud cover, by as much as 20% (unpredictably). The third is aerosols, from vulcanism, human activity, and non-human biological activity (I don’t know which is greater). We also have heat addition from vulcanism. The lava river from Kilauea now spouting is adding energy to the climate at the same rate as all human energy production, and it isn’t alone. It may stop soon, or continue for a long time. There’s no prediction. But it is sending volumes and volumes of water vapor into the atmosphere, the number one greenhouse gas.

      Where’s this “equilibrium”?

      • I recall reading one study that found both AH and RH above cumulus tropical cloudtops to be essentially 0%. ECS was calculated at about 0.5°, in the absence of the postulated “positive feedback” from stratospheric H2O.

  10. Can we do this test for very low CO2 concentrations, like just after the ice age?

    The impact of an increase in CO2 concentration should be greater at low levels.
    It may be that we are just above the region where CO2 makes any further difference.

    • The subtle variations in pre-industrial CO2 did yield small, but noticeable, temperature changes in the model (see Figure 2).

  11. There are multiple lines of evidence that suggest that temperatures today in the Northern Hemisphere are broadly speaking about the same as they were in the late 1930s/early 1940s.

    If that is so, that means that during the period in which some 95% of all manmade CO2 emissions have taken place, there has been no, or all but no, measurable increase in temperature. That would suggest that climate sensitivity to CO2 is zero or close thereto.

    Of course, that is only the Northern Hemisphere, However:

    1. We have no real data on the Southern Hemisphere. It is mostly ocean, which was and is sparsely sampled, and there is no reliable data going back 50 years. Ditto with land-based thermometers: there is hardly any coverage, especially prior to 1960.

    2. Since we have no reliable data on the Southern Hemisphere, it follows that we have no reliable data on global temperatures.

    3. We are therefore left with only the Northern Hemisphere data set.

    4. It is claimed that CO2 is a well-mixed gas. If so, it ought to be possible to detect its signal just by examining the Northern Hemisphere. We do not need global information to test the CO2 GHE theory.

    5. We know of no process whereby so-called GHGs could heat the globe, but not the Northern Hemisphere.

    I would suggest that, given that on all time scales temperature appears to lead CO2, and that CO2 changes appear to be a response to temperature change, coupled with the fact that the Northern Hemisphere is very probably about the same temperature as it was in the late 1930s/early 1940s, the balance of evidence suggests that CO2 does very little, if anything at all, to control temperatures in the present era.

    • MC and RV,

      The first 100 and maybe 200 ppm of CO2 does have an effect on global temperature. After that, not so much.

      C3 plants however thrive on higher CO2 levels, all the way up to real greenhouse concentrations of 1300 ppm, but then benefit little from further increases. Optimum for earth is thus 800 to 1300 ppm.

      • Yep. C3 plants exhibited clear signs of CO2 starvation during the last Pleistocene glacial stage. 400 ppmv CO2 is a heck of a lot closer to CO2 starvation than it is to the Hot House climates of the Mesozoic Era.

        “Anthropocene” CO2 levels are a lot closer to the C3 plant starvation (Ward et al., 2005) range than they are to most of the prior 540 million years.

      • Yup. When seed plants evolved in the Devonian, CO2 concentration was ten times higher than now.

        When the angiosperm (flowering plant) radiation occurred in the Cretaceous, it was still three times today’s level.

      • Chile is suffering a heat wave and forest fires, but the area involved isn’t a pimple on the posterior of the record-breaking cold over so much of the NH.

  12. The second figure brings about the need to answer an important question. For the most part, the CO2 line is flat until around 1800. So, if CO2 is controlling things now, then what created the MWP or the LIA? CO2 sure could not have.

    CO2 can’t be a wall switch. The CO2 effect can’t just be turned on. It was irrelevant in the year 1000. What makes it relevant now?

    I have posted this earlier related to other topics but I have analyzed the NH data.

    https://1drv.ms/i/s!AkPliAI0REKhgYwpmfYQ3wY0LUQUoA

    Many of the longer period cycles align with periods in the McCracken paper.

    Note the low ECS value. I get just about the same when I do the same with the H4 data.

    Dr. Evans has proposed a much lower sensitivity value and a change to the model architecture. This aligns with that, and perhaps with what you are suggesting: that sensitivity has to be lower.

    • If plant stomata CO2 reconstructions are correct, CO2 wasn’t as flat as the ice cores indicate.

      The ~1,000-yr climate cycle is probably driven by long-term changes in oceanic circulation, which are probably driven by subtle changes in solar activity.

      The point of this exercise was to simply remove the industrial era CO2 rise under a couple of sensitivity scenarios. The result was that the removal of the CO2 rise with a high sensitivity basically drives the Earth’s climate back to the Late Pleistocene. Since this is an extremely unlikely result, it is extremely unlikely that high climate sensitivity values are correct.

      • David

        I was not being critical. I think you have demonstrated what you intended. I agree with you.

        In my career I worked with real hardware. If I had to come up with a model to verify proposed design changes and I had results similar to what we get from climate models, I would have concluded that the model was unsuitable and could not be used for design verification. I would have further concluded that its performance was so bad that I must have missed something or made an error. Remedial actions would be required before it became a useful design tool.

        I don’t understand why people defend what can’t be defended. The models suck. It is that simple but then this comes from someone with a perspective that my efforts must solve real problems via design changes.

        Perhaps I am being too harsh.

      • Charles,

        Considering that the models are trying to simulate something so complex that it defies simulation, I think they are very impressive. The problem isn’t so much in the structure of the GCMs as it is in the input parameters.

        The evidence for a low climate sensitivity to CO2 is overwhelming… Yet they continue to insist on using a high sensitivity and then adding in flux adjustments (like aerosol forcing) to account for the fact that the models always over-predict warming. It strikes me as being analogous to the use of epicycles in the Ptolemaic Solar System to explain retrograde motion.

      • David

        I do have some appreciation for transient models and I fully grasp your defense of what they are trying to do and how complex it is. From that viewpoint, I can appreciate the difficulty.

        Ultimately, as they stand now I think they are unsuitable for policy decisions. They are just going to have to do better before we bet our future on them.

      • Absolutely. The state of the science is way too immature to be the basis of any policy decisions.

      • CM and DM, I studied the climate models in some detail and located at least one source of major bias. To adequately simulate important climate drivers like convection cells requires small, high-resolution grids (<4 km). That’s not a computational problem for a weather model going out a few days, but it is 6-7 orders of magnitude beyond what is computationally tractable for climate models. The finest resolution in CMIP5 was 110 km; typical was 250 km. The NCAR rule of thumb is that doubling resolution (halving the grid sides) means 10x the computation.
        So a lot of critical processes have to be parameterized. These parameterizations are tuned (there are two basic methods) to best hindcast. For CMIP5, the written design was to hindcast from YE2005 back three decades to 1975, in the second required run submission.
        That period raises the attribution problem. The warming from ~1920-1945 is essentially indistinguishable from that of ~1975-2000. Yet AR4 WG1 SPM Figure 8.2 made the point that the earlier period had to be mostly natural variation; there was not enough change in CO2. Natural variation did not stop in 1975. The attribution assumption in the models makes them run hot.
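The "6-7 orders" figure is easy to check against the NCAR rule of thumb quoted above (halving the grid spacing costs roughly 10x the computation); the grid sizes are the ones given in the comment:

```python
import math

def extra_orders(coarse_km, fine_km, cost_per_halving=10.0):
    """Orders of magnitude of extra computation to refine a grid,
    given an assumed cost multiplier per halving of grid spacing."""
    halvings = math.log2(coarse_km / fine_km)
    return halvings * math.log10(cost_per_halving)

print(round(extra_orders(110, 4), 1))  # from the finest CMIP5 grid to 4 km
print(round(extra_orders(250, 4), 1))  # from a typical CMIP5 grid to 4 km
```

Under that rule of thumb, reaching convection-resolving 4 km grids costs roughly 5 to 6 extra orders of magnitude of computation, consistent with the comment's point.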

      • ristvan

        I have read a number of articles on the problems with the models. The grid size you mentioned was one of them. I really can’t even comprehend what might be required from an initialization viewpoint. How much information is required for that purpose.

        I think we had a visualization of the problem just a few years ago with a weather model that uses a finer grid than a climate model. Just 12 hours out, NYC decided to shut down everything, including the subways, because of a forthcoming blizzard. Instead of a blizzard, I think NYC got just a few inches of snow. Pretty normal stuff.

        If a model can miss this badly 12 hours out, why should I rely on a prediction of temperature for the end of this century from a model that has the compromises you noted?

        I leave the models to others. I concentrate on the measurements that we have (if they are reliable and unadjusted) and try to see if I can discern anything that might be informative. I did it that way for 35 years and implemented many design improvements by paying attention to what we measured.

      • CM, for more visualizations of this problem, see either the essay “Models all the way Down” in the ebook Blowing Smoke, or a guest post on models over at WUWT about a year ago. Same messages, slightly different examples.

  13. Very interesting exercise. By the way, Mike Jonas, in his 4-part series “The Mathematics of CO2” that appeared earlier on this site, came to a similar conclusion using the same mathematics that is employed in the general circulation models. When he attempted to hindcast with these equations, he found that CO2 was a bit player, contributing only about 10% to global warming/cooling, with other drivers responsible for the rest. If my memory is correct, Willie Soon calculated about 12% due to CO2 and theorized that the Sun was responsible for most of the rest.

    I would ask any proponents of the CAGW conjecture who happen to read this post to kindly refute this logic. I for one am always willing to learn and be corrected if my reasoning is faulty.

  14. I’m struggling with how the oil/gas thing was/is an actual ‘model’

    As I understand, you are making guesses about what’s underground and then asking the computer ‘model’ to do the sums that tell you if the model (in your mind, not the computer) is in any way sensible.
    The computer is just a very fast calculator, you could do the same with a pencil, paper and maybe some log-tables.
    Is the pencil & paper a ‘model’ because, all the computer is doing is replacing the p & p – so is it a ‘model’?

    So the same applies to forward-looking climate models: they are just guesses at what will happen and can only be deemed sensible when what was expected (guessed) to happen has, or has not, happened.
    Therefore, they are junk.

    Then, as richard v says above, the only real temperature data we have is from the Northern Hemisphere, surprise, surprise: where most everybody lives.
    So, last night in a car park in central Retford, Nottinghamshire, the air temp was 2.5 °C. A 10-minute drive away, 5 miles, at my home in the middle of farmland, the air temp was 1.0 °C.
    How does that sit with ‘Global Temperature’ being what, something like 0.8 above pre-industrial times?

    What do the wild guess models say about that? Also, the farmland I’m surrounded by is mostly arable stubble or roughly ploughed fields awaiting further cultivation and planting. They will have very low albedo.
    What do climate models say about that?

    CO2 and ploughing, cultivation and annual plants.
    How do computer models explain why the stomata on plants’ leaves are on the underside of the leaves. If plant food was falling down from the sky above them, why are the stomata not on the upper-side to best intercept that nourishment?
    So many simple questions and you’re regarded as stupid, ignored or verbally abused if you even ask them……

    • When we drill wells, we log them with a set of instruments that measure various petrophysical parameters of the rocks surrounding the well bore. In the Gulf of Mexico, brine-filled sandstone generally has higher acoustic velocity and higher density than an oil- or gas-filled rock.

      The velocity and density differences often generate an amplitude anomaly (“bright spot”) on reflection seismic profiles. If we have a nearby well that was drilled outside of the amplitude anomaly and encountered a brine-filled sandstone at the same stratigraphic level as the bright spot and it has sonic and density logs, we can mathematically substitute oil and/or gas to generate a synthetic seismic profile and compare this to the real data.

      • David, the above brings out the prospector in me ….
        Interesting model – big change in intercept, but small gradient in both cases, despite having a much bigger PR contrast in the oil case. Not very intuitive…. a pink flag … maybe a red flag. I would be pretty leery drilling that one unless I saw a larger gradient….

      • My eyeball zeroes in on the synthetic gathers traces. The oil gathers shout “PAY!”

        The Intercepts of the oil peak and trough have higher absolute values than the brine peak and trough (higher vertical incidence amplitude). The oil gradients increase in absolute value, while the brine gradients decrease (increasing amplitude with offset). This is a classic Class 3 AVO anomaly, typical of Plio-Pleistocene and many Miocene reservoirs in the Gulf of Mexico.

      • Has this one been drilled or is it more of a “teaching tool” example?
        A more porous wet sand (porosity substitution vs fluid substitution) ought to give a similar intercept response … thus my pink flag.
        Not knocking you or the model though – just fun to discuss ideas when it comes to prospecting

    • @peta

      “Is the pencil & paper a ‘model’ because, all the computer is doing is replacing the p & p – so is it a ‘model’?”

      No. Pencil and paper are the computer. The model is the set of equations; the computer solves the equations using algorithms (mathematical procedures).

      “So the same applies to forward looking climate models – they are just guesses at what will happen”

      Climate models cannot predict what will happen. The climate system is chaotic and inherently unpredictable. Models can possibly draw a realistic probability curve: an interesting mathematical exercise, but its utility for policy making is debatable.

      “How does that sit with ‘Global Temperature’ being what, something like 0.8 above pre-industrial times?”

      The global temperature anomaly is not meaningful for practical applications. Local temperature change is more useful.

      “If plant food was falling down from the sky above them, why are the stomata not on the upper-side to best intercept that nourishment?”

      Plant food (CO2) does not fall from the sky; it’s a well-mixed trace gas. Stomata are on the underside of the leaf to avoid direct exposure to sunlight, which would increase the plant’s water loss via transpiration.

  15. The problem with calculating sensitivity to CO2 is that the forces that regulate CO2 levels on the planet (in the absence of things like human emissions or massive geologic changes) are driven by temperature. Basically, CO2 is a proxy. According to all the information we know about the processes, it would make no difference at all if CO2 was a powerful greenhouse gas or a sort of bizarre, anti-greenhouse gas…so long as the temperatures changed in the same way we would expect CO2 to change in exactly the same way in the ice cores.

  16. David, what if you ran the same model..
    …using temps that are 1 degree warmer in the past
    and 1/2 degree cooler in the present

    …raw, not adjusted

    • Too much work… But I might try to run the CO2 sensitivity models on the transition from the Medieval Warm Period to the Little Ice Age.

  17. Middleton ==> Thanks for using a real Science News cover — we have seen fake magazine covers enough.

  18. On Fig. 3 above, the red trend line indicates the NH temperature trend without a CO2 forcing of 1.3 C per doubling, i.e. subject to natural forcings only (I think I’m right there).
    If so, it looks very like the simulated 20th century temperature trend subject to natural forcings only, based on IPCC AR4:
    http://image.slidesharecdn.com/presentation-on-edf-2005-1220744572581642-8/95/presentation-on-edf-2005-10-728.jpg?cb=1220719642
    The IPCC model central projections, I think, are based on a transient sensitivity of around 3 C per doubling, yet on the above graph, up to 2000, it looks like they assume a sensitivity of around ~1.3 C.

    • It would be natural forcing plus non-CO2 anthropogenic. Although, it’s not based on forcing. It’s a subtraction of modeled CO2 forcing from the temperature reconstruction.

  19. Here is an empirical study that agrees with your findings that CO2 is at best a minor player.
    Humlum et al. (Global and Planetary Change 100 (2013) 51–69)
    The conclusions are:

    There exist a clear phase relationship between changes of atmospheric CO2 and the different global temperature records, whether representing sea surface temperature, surface air temperature, or lower troposphere temperature, with changes in the amount of atmospheric CO2 always lagging behind corresponding changes in temperature.

    (1) The overall global temperature change sequence of events appears to be from 1) the ocean surface to 2) the land surface to 3) the lower troposphere.

    (2) Changes in global atmospheric CO2 are lagging about 11–12 months behind changes in global sea surface temperature.

    (3) Changes in global atmospheric CO2 are lagging 9.5–10 months behind changes in global air surface temperature.

    (4) Changes in global atmospheric CO2 are lagging about 9 months behind changes in global lower troposphere temperature.

    (5) Changes in ocean temperatures appear to explain a substantial part of the observed changes in atmospheric CO2 since January 1980.

    (6) CO2 released from anthropogene sources apparently has little influence on the observed changes in atmospheric CO2, and changes in atmospheric CO2 are not tracking changes in human emissions.

    (7) On the time scale investigated, the overriding effect of large volcanic eruptions appears to be a reduction of atmospheric CO2, presumably due to the dominance of associated cooling effects from clouds associated with volcanic gases/aerosols and volcanic debris.

    (8) Since at least 1980 changes in global temperature, and presumably especially southern ocean temperature, appear to represent a major control on changes in atmospheric CO2.
  20. It appears to me that your methodology doesn’t address the correlation-versus-causation question. Consider the possibility that there exists some as-yet-undiscovered factor “X” that is actually the predominant parameter “driving” temperature. We know in fact that, to some extent:
    A) temperature does increase CO2, (ice record lag, ocean out gas and life respiration to name a few)
    B) human CO2 emissions have contributed to having more carbon available for the carbon cycle than the amount made available by nature.

    So IF CO2 sensitivity were in fact much lower (say, less than ~0.75), then wouldn’t the exercise of raising it higher than it actually is, in an attempt to fit the past temperature record, mask the effect of “X”, sweeping “X” under the rug so that it is never suspected to exist?

    • MM
      I don’t think your point B is correct. Please see the Humlum paper cited in my post just above yours. I think it implies that the human emissions are so small in relation to the natural ones that they are lost in the noise when determining atmospheric CO2 content. This is what Murray Salby has been saying, and Humlum’s findings seem to confirm his analyses.

  21. The bottom line:

    Case 1: Sensitivity is low. CO2 is not a problem.

    Case 2: Sensitivity is high. CO2 has saved us from an ice age. Keep releasing it.

    Case 3: The models are worthless.

  22. I believe the Earth’s thermometer is the oceans. Measuring atmospheric temperature is nearly useless, as it is so strongly influenced by the seas (El Niño/La Niña). Thermal expansion of the seas is a more reliable index. If the sea depth averages 3,500 m (a guess), a +1 C warming of the whole column will raise the level by about 72 cm. A pity that below 4 C water contracts when cooling, so things are more complex… But just thinking: if the extra heat from atmospheric CO2 goes into the upper 350 m, have we observed the 7.2 cm rise per +1 C?
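The 72 cm figure above is consistent with simple thermal expansion; a back-of-envelope sketch assuming a constant volumetric expansion coefficient of about 2.1e-4 per K (in reality it varies strongly with temperature, salinity and pressure), which gives roughly 73 cm for the full column and 7 cm for the top 350 m:

```python
ALPHA = 2.1e-4  # 1/K, assumed mean thermal expansion coefficient of seawater

def expansion_cm(column_depth_m, warming_c):
    """Sea-level rise (cm) if a water column warms uniformly."""
    return column_depth_m * ALPHA * warming_c * 100.0

print(round(expansion_cm(3500, 1.0), 1))  # full 3,500 m column warmed +1 C
print(round(expansion_cm(350, 1.0), 1))   # upper 350 m only warmed +1 C
```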

  23. I have a question. The GHG effect addresses radiative energy, which isn’t the same as heat. Are there experiments that demonstrate that CO2 absorbing 15-micron IR can actually warm the atmosphere? You are, after all, activating only 400 out of every 1,000,000 molecules, or 1 out of every 2,500 molecules. Is there enough CO2 to actually convert that radiative energy into thermal energy that can be measured by a thermometer? IR between 13 and 18 microns, after all, is consistent with a black body at a temperature of about -80 C.

    There are other problems with the models:
    1) They assume a linear vs logarithmic relationship between CO2 and temperature.
    2) They are misspecified. They are X=mY+b instead of Y=mX+b
    3) They are underspecified, leaving out significant variables.

    Climate “Science” on Trial; If Something is Understood, it can be Modeled
    https://co2islife.wordpress.com/2017/02/06/climate-science-on-trial-if-something-is-understood-it-can-be-modeled/

    How to Discuss Global Warming with a “Climate Alarmist.” Scientific Talking Points to Win the Debate.
    https://co2islife.wordpress.com/2017/01/03/how-to-discuss-global-warming-with-a-liberal-the-smoking-gun-files/
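The -80 C figure in the first paragraph of this comment can be checked with Wien's displacement law, taking 15 microns as the wavelength of the blackbody emission peak:

```python
WIEN_B = 2898.0  # Wien displacement constant, micron*kelvin

def blackbody_peak_temp_c(peak_wavelength_um):
    """Temperature (C) of a blackbody whose emission peaks at the given wavelength."""
    return WIEN_B / peak_wavelength_um - 273.15

print(round(blackbody_peak_temp_c(15.0)))  # a body peaking at 15 microns is ~ -80 C
```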

  24. A model as a tool, computer or otherwise, can be very useful.
    Many use models to help them produce something useful and profitable (aircraft, car bumpers, weather forecasts, etc.). When the model doesn’t match reality, they adjust the model to make it a better reflection of reality, or buy another model that does a better job. Their bottom line depends on it.
    A BIG problem in “Climate Science” is that the models themselves and how they are sold have become “the bottom line”. Those willing to buy into them have done so, not for “science” or to have a better reflection of reality, but personal gain, financial or political. Those who buy usually pay with tax-dollars.
    The models don’t work in the real world, but they help achieve their ends.

  25. Sorry, not good enough. I can’t see any predictions of doom coming from this model, so it must be a dud.

  26. David: You won’t be the first geophysicist I’ve taken issue with, nor will you be the last, but please accept that it’s always done in good humour.

    The removal of a 1.35 °C TCR yields reasonable result

    That’s your comment on your figure 2. Sorry, I beg to differ, I don’t think it’s reasonable at all.

    Looking at this from where I’m sitting, it appears that figure 2 demonstrates that IF AND ONLY IF CO2 was the ONLY driver of temperature change, then a TCR of 1.35°C explains most if not all of the 0.5°C warming from 1800 to 2000 (+0.25°C/century)

    Well fine, that makes you a certified lukewarmer. Congratulations, you’ve earned an official T shirt.

    What then explains the 0.4°C warming from 800 to 960 (+0.25°C/century)? Or the 0.4°C warming from 1700 to about 1750 (+0.8°C/century – I’ll accept that this is a blip in the proxy reconstruction) And what explains the 0.5°C cooling from 170 to 320 (-0.33°C/century)? Or the 0.9°C cooling from 960 to 1700 (-0.12°C/century)? And why was that elusive factor or group of factors (sun? oceans? hmm…..) not operative at all after 1800?

    By drawing the conclusion that you do, you’ve essentially locked yourself into a logical improbability “THIS variation was caused by CO2 but THAT variation was caused by something else”. It’s basically the same logical stance taken by alarmists, but with a dose of mathematical rigour to make it a bit less absurd.

    I won’t be joining the “CO2 has no effect on climate” crowd. There is the physics (which I’m not very good at BTW) that suggests CO2 can, to some degree, return a portion of outgoing IR back to the surface. But there are unknown (or very poorly known) interactions and feedbacks with H2O, which hugely complicates things because of albedo-changing clouds and the way its phase changes move heat from the surface to the troposphere or stratosphere. And on and on; it’s all been hashed over many times in these pages. I think it quite likely that the NET effect of CO2 on climate is very small, after all of its interactions are accounted for.

    I agree that being a lukewarmer makes us appear a bit more respectable, a bit less (rabid………?), but to be a lukewarmer, you have to make assumptions like the one I’ve attributed to you, that are hard to justify when you look at paleo-temp data.

    • SR, your comment contains a logic flaw. Please see my extended comment on model attribution above in the thread, then get back with an irrefutable rejoinder, if you can. Details were in the 12.56 pm response. Do try harder. I’m not going to repeat them again, because it gets tiresome after so many posts on the same irrefutable facts. Check them out before responding, please.

    • As I have stated. The purpose of this exercise was just to remove an assumed CO2 effect; not to calibrate all of the forcing mechanisms.

      I called the 1.35 C case “reasonable” because it didn’t yield a crazy result. It resulted in a continuation of the mid-20th century cooling. I don’t think this is necessarily correct; just reasonable. We know that the late 20th century warming began with a shift in the long-term phase of the PDO. This shift wasn’t driven by CO2; so 1.35 C is probably still way too high.

      The 3.5 C case is clearly batschist crazy. It yields a rapid temperature drop which crashes through the lower 95% confidence band, colder than the nadir of the LIA, to glacial interstadial territory.

  27. Maybe an irrelevant aside: this demonstrates the problem with inductive logic (finding putative causes from effects). In essence, we guess at a model and see if it produces a result that matches reality.

    Sherlock Holmes never deduced anything at all. His logic was all inductive.

    To summarise: to solve an inductive problem, we propose a solution and deduce the phenomena from it, and if they fit the data, we say the model is good and accurate.

    Of course that doesn’t prove it is true. Our models say that the sun will rise tomorrow, because it always has. Essentially.

    As the man falling past the thirteenth floor said when asked how he was doing:

    “OK so far!”

    • Earth Science, by its nature, is very inductive. Most of our direct observations are of results, from which we interpret causes.

  28. There is a problem on this site: you keep publishing graphs ending in 1995 or 2000 as if that were “the present,” when the “present” of such records actually means 1950, sixty-seven years ago. Today is February 7, 2017. Time passes by, you know…

    • The graphs were of a climate reconstruction and ice core-derived CO2 estimations. These things, by definition, don’t extend to the present day and the x-axes are in calendar years, not years before present (1950 AD).

  29. Just when I think I’ve figured out how to get my follow-up reply to somebody to appear somewhere in the order of the replies given to that person, it ends up appearing BEFORE the comment that it is supposed to follow. This happened with my attempted follow-up reply to seaice above, which makes the flow of the dialogue screwed up.

  30. When you say that heat energy in the atmosphere is converted to IR, are you talking about thermal kT energy? Because that is certainly not consistent with the heat capacity ratios of gases at room temperature. Most gases, including CO2, would need to be heated to very high temperatures before any thermal energy could excite those molecules to higher vibrational states.

  31. In your conclusion you say: “which indicate rather low TCR and ECS values”.
    I suggest you remove the term “ECS” from this statement.
    There’s nothing in your entire paper that speaks to ECS or indicates anything about it.

  32. Agree it’s related and in the same direction, but what quantitative relationship are you assuming when you say “rather low”?
    The IPCC range for TCR is 1–2.5; for ECS it’s 1.5–4.5.
    If a TCR of 1.35 looks reasonable based on measured data, that’s consistent with the IPCC claim.
    If there’s a linear relationship between the two, then an ECS of about 2.5 is implied.
    Is this what you mean by “rather low”? It’s still much higher than some of the studies suggest (Curry, Forster, Otto, etc.).

    It would be interesting to do the math for the IPCC max TCR of 2.5.
    It looks like it would end up at -0.8 on your graph.
    So if the IPCC max were right, we would be about halfway to an ice age by now.

    Anyway thanks for an interesting and thought provoking paper.

    • The ECS on the IPCC graph is 3.5 C, the TCR is approximately 2 C.

      Lewis & Crok had an ECS of 1.75 C and a TCR of 1.35 C.
      LC14 had 1.33 and 1.72 for 1995-2011.
      Otto had 1.33 and 2.0 for 2000-09.

      I plotted the four values and got ECS = (2.5212*(TCR)) – 1.5455 with R² = 0.9738. It looks like a very linear relationship between TCR and ECS.
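The quoted fit can be reproduced with an ordinary least-squares regression over the four study pairs listed above; reading each pair as (TCR, ECS) for LC14 and Otto is my assumption, since the comment gives the numbers without labels:

```python
# (TCR, ECS) pairs from the comment above
pairs = [(2.00, 3.50),   # IPCC graph
         (1.35, 1.75),   # Lewis & Crok
         (1.33, 1.72),   # LC14, 1995-2011
         (1.33, 2.00)]   # Otto, 2000-09

n = len(pairs)
mx = sum(t for t, _ in pairs) / n
my = sum(e for _, e in pairs) / n
sxy = sum((t - mx) * (e - my) for t, e in pairs)
sxx = sum((t - mx) ** 2 for t, _ in pairs)
syy = sum((e - my) ** 2 for _, e in pairs)

slope = sxy / sxx                    # ~2.5212
intercept = my - slope * mx          # ~-1.5455
r_squared = sxy ** 2 / (sxx * syy)   # ~0.9738
print(round(slope, 4), round(intercept, 4), round(r_squared, 4))
```

This recovers ECS = 2.5212 × TCR − 1.5455 with R² = 0.9738, matching the comment, though with only four points (two sharing the same TCR) the fit is far from robust.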

  33. … geophysics, astronomy, remote sensing… you left out the biggest detective work of all: geology. I have mapped and interpreted Precambrian geology (3+ Gybp to ~0.5 Gybp), regional and ore deposit emplacement, occasionally Paleozoic and Mesozoic sedimentation and volcanology, and for a time the Quaternary (the northward-draining former Missouri R. and Lake Agassiz clays, strand lines and deltas). This is the granddaddy of forensic reasoning.

  34. David Middleton,
    Your model for predicting the temperature response to CO2 is inadequate for the task you are trying to carry out. The heat balance equation yields a solution with a linear relationship between temperature and ln(CO2) ONLY when the CO2 forcing is increasing linearly with time. (Some authors have argued that the relationship should still be approximately linear for any monotonic increase in CO2.) You absolutely cannot apply your linear relationship to a situation of increasing and decreasing CO2 levels. It is equivalent to throwing out the acoustic equations and carrying out your fluid substitution exercise under the assumption that amplitude varies linearly with offset.
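For reference, the standard simplified relations behind the linear temperature-vs-ln(CO2) form this comment is describing; the 5.35 coefficient is the commonly used Myhre et al. (1998) approximation for CO2 forcing, and λ is a sensitivity parameter assumed constant:

```latex
\Delta F \;=\; 5.35\,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}},
\qquad
\Delta T_{\mathrm{eq}} \;=\; \lambda\,\Delta F .
```

If C(t) grows exponentially, then ln C, and hence ΔF, grows linearly in time, and the lagged transient response stays proportional to the forcing. For an arbitrary rise-and-fall CO2 history, that proportionality fails, which is the commenter's point.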

  35. Given that in the literature of global warming climatology “model” is polysemic (has multiple meanings), this article suffers from its failure to inform us of what is meant by the term. Models are of two types. A model of Type 1 supplies no information to a regulatory agency about the outcomes of events, thus being useless for the purpose of regulating Earth’s climate system. A model of Type 2 supplies such information, thus being potentially useful for this purpose.

    Models featuring the alleged “climate sensitivity” are of Type 1. That they are of the useless Type 1 is not widely grasped among bloggers.
