Comment by Cowtan & Jacobs on Lewis & Curry 2018 and Reply: Part 1

From Dr. Judith Curry’s Climate Etc.

Posted on December 16, 2019 by niclewis

By Nic Lewis

A comment on LC18 (recent paper by Lewis and Curry on climate sensitivity) by Cowtan and Jacobs has been published, along with our response.

Introduction

In an earlier article here I discussed the Lewis and Curry (2018) paper “The impact of recent forcing and ocean heat uptake data on estimates of climate sensitivity” (LC18) and set out its results.

The LC18 analysis used a global energy budget model to estimate the planetary equilibrium climate sensitivity (ECS) and transient climate response (TCR). ECS and TCR are estimated from changes (Δ) in global mean surface temperature [T], effective radiative forcing (ERF) [F] and the planetary radiative imbalance[1] [N] between a base and a final period, as:

ECS = F2×CO2 × ΔT / (ΔF – ΔN)  and  TCR = F2×CO2 × ΔT / ΔF

where F2×CO2 is the ERF for a doubling of atmospheric CO2 concentration.
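In code, the two energy-budget estimators are one-liners. A minimal sketch, with illustrative input values assumed for ΔT, ΔF, ΔN and F2×CO2 (round numbers, not the exact LC18 figures):

```python
# Energy-budget estimators for ECS and TCR, as defined above.
# All numerical inputs here are illustrative assumptions, not the
# exact values used in LC18.

F_2xCO2 = 3.8   # W/m^2, assumed ERF for a doubling of CO2
dT = 0.85       # K, assumed change in global mean surface temperature
dF = 2.5        # W/m^2, assumed change in effective radiative forcing
dN = 0.50       # W/m^2, assumed change in planetary radiative imbalance

ECS = F_2xCO2 * dT / (dF - dN)   # equilibrium climate sensitivity
TCR = F_2xCO2 * dT / dF          # transient climate response

print(f"ECS ~ {ECS:.2f} K, TCR ~ {TCR:.2f} K")
```

Note that ECS exceeds TCR whenever ΔN is positive, i.e. whenever the climate system is still taking up heat.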

The main LC18 estimates for ECS and TCR were as per Table 1. The main points of note are that they lie near the bottom end of the IPCC AR5 ‘likely’ ranges for ECS and TCR, and that they are both less uncertain and slightly lower than those given in the predecessor study, Lewis & Curry (2015) (LC15), when using HadCRUT4 global surface temperature data. The LC18 best estimates based on the faster warming infilled Cowtan & Way Had4_krig_v2 temperature dataset are very similar to the HadCRUT4-based results in Lewis & Curry (2015).

Table 1 (based on Table 3 in LC18) Best estimates (medians) and uncertainty ranges for ECS and TCR using the base and final periods indicated. Values in roman type compute the temperature change involved (ΔT) using the HadCRUT4v5 dataset; values in italics compute using the infilled, globally-complete Had4_krig_v2 (Cowtan & Way) dataset. The preferred estimates are shown in bold. Ranges are stated to the nearest 0.05 K. Also shown are the comparable results (using the HadCRUT4v2 dataset) from LC15 for the first two period combinations given in that paper. ECS estimates assume that effective climate sensitivity does not change with time elapsed since imposition of forcing.

Summary of the Comment and Reply

A Comment on LC18 by Kevin Cowtan and Peter Jacobs, and a Reply from myself and Judith Curry, have just been published by Journal of Climate. A copy of the Reply is available here.

The Comment (referred to as CJ20, as it appears in the 1 January 2020 issue) is arguably more a critique of observational sea surface temperature (SST) datasets than of the methods and results of LC18. Its abstract reads as follows:

Lewis and Curry (2018) (hereafter LC18) present a method for estimating the transient climate response (TCR) of the climate system from the temperature change between two time windows – an early baseline period in the 19th century, and a modern period primarily in the 21st century. The results suggest a lower value of TCR than estimates from climate model simulations. Previous studies have identified uncertainty in the historical forcings, the impact of the time evolution of the forcing on temperature response, and observational issues as contributory factors to this disagreement. We investigate a further factor: uncertainty in the bias corrections applied to historical sea surface temperature data. This uncertainty can particularly impact the estimation of variables on decadal timescales, and therefore impact the estimation of TCR using the window method as well as estimates of internal variability. We demonstrate that use of the whole historical record can mitigate the impacts of working with short time windows to some extent, particularly with respect to the early part of the record.

Originally, CJ20 asserted that the base and final periods chosen in LC18 – what they call early and late windows – which were matched as regards volcanic forcing and influence from multidecadal internal variability, led to lower values of TCR (CJ20 did not address the LC18 ECS estimates). They subsequently removed that claim, which the analysis in our submitted Reply disproved. The final version of CJ20 focuses on the possible impact of using windows rather than all the historical data: in particular, the impact – based on comparing warming in CMIP5 (current generation) climate models and in observations – of the choice of varying dates for the windows, and uncertainty in bias corrections to historical SST data. CJ20 focus on use of the HadCRUT4 temperature record, but, as LC18 made clear, it is appropriate to use a globally complete record for comparison with climate model results. We accordingly used only Kevin Cowtan’s infilled version of HadCRUT4, Had4_krig_v2, in our Reply.

The abstract for my and Judith Curry’s Reply to CJ20 reads as follows:

Cowtan and Jacobs assert that the method used by Lewis and Curry in 2018 (LC18) to estimate the climate system’s transient climate response (TCR) from changes between two time windows is less robust – in particular against sea surface temperature bias correction uncertainty – than a method that uses the entire historical record. We demonstrate that TCR estimated using all data from the temperature record is closely in line with that estimated using the LC18 windows, as is the median TCR estimate using all pairs of individual years. We also show that the median TCR estimate from all pairs of decade-plus length windows is closely in line with that estimated using the LC18 windows, and that incorporating window selection uncertainty would make little difference to total uncertainty in TCR estimation. We find that when differences in the evolution of forcing are accounted for, the relationship over time between warming in CMIP5 models and observations is consistent with the relationship between CMIP5 TCR and LC18’s TCR estimate, but fluctuates due to multidecadal internal variability and volcanism. We also show that various other matters raised by Cowtan and Jacobs have negligible implications for TCR estimation in LC18.

In a nutshell, we refuted all points of substance made in CJ20. I plan to deal with the differences between observed and CMIP5 model-simulated historical warming, which formed the basis of CJ20’s numerical analysis, in a subsequent article. In this article, I will elaborate on our refutation of points in the remainder of CJ20.

Window selection related uncertainty

Regarding the claim by CJ20 concerning uncertainty induced by window choice, this is what we had to say in the Reply, having tested the effects of random selection of windows from a decade upwards in length,[2] all of which led to median TCR estimates very close to LC18’s 1.33 °C [= 1.33 K]:

For estimates with the highest (2.0 Wm−2) minimum forcing increase, which are most relevant to LC18’s TCR estimate, the 5–95% TCR uncertainty range arising from random window selection is 1.08–1.54 K, or 1.20–1.59 K using 0.55-scaled volcanic forcing. The width of these ranges – 0.103 and 0.073, respectively, in fractional standard deviation terms[3] – reflects the fact that many of the window combinations involve mismatched influences from internal variability and/or volcanism. These window selection uncertainty ranges do not imply that LC18 underestimated uncertainty in global temperature change: the 1σ fractional uncertainty in LC18’s preferred TCR estimate attributable to temperature change uncertainty (including that from internal variability) alone was 0.103.[4] Moreover, even if no allowance is made for double counting of temperature change uncertainty, estimated overall TCR uncertainty would increase little if window selection uncertainty were added. Adding (in quadrature) the 0.103 or 0.073 1σ fractional uncertainty in TCR from window selection to the 1σ fractional uncertainty of the preferred LC18 TCR estimate would only increase it to 1.13× its original level, or to 1.07× that level if using 0.55-scaled volcanic forcing.[5]

This shows that uncertainty in TCR estimation arising from window selection is minor even if no allowance is made for double counting of temperature uncertainty, and negligible if allowance is made for such double counting.
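The quadrature addition described in the quoted passage is easy to check. This sketch reproduces the 1.13× and 1.07× factors from the stated fractional standard deviations (0.193 for the preferred LC18 TCR estimate, per footnote 5; 0.103 and 0.073 for window selection):

```python
from math import hypot

# Combine the window-selection fractional uncertainty with the
# preferred-estimate fractional uncertainty, in quadrature, and see
# how much the total grows.

base = 0.193                         # 1-sigma fractional sd, LC18 TCR estimate
for window_sd in (0.103, 0.073):     # unscaled / 0.55-scaled volcanic forcing
    factor = hypot(base, window_sd) / base   # quadrature sum vs original
    print(f"window sd {window_sd}: total uncertainty grows by {factor:.2f}x")
```

This matches the Reply's statement: even taking window selection uncertainty at face value, the total 1σ uncertainty grows by only 13% (or 7% with scaled volcanic forcing).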

Using data from the entire historical record

CJ20 propose use of data from the entire historical record. In fact, LC18 tested doing so, by the usual regression method, but found mismatching volcanic influence made estimation sensitive to the scaling factor used for volcanic forcing. Without scaling down volcanic forcing the TCR estimate from regression over the whole historical period is far lower than that from using the windows method. This is what we said in the Reply:

When AR5 volcanic forcing is scaled by 0.55, regression of median annual-mean temperature on forcing over 1850–2016 gives a 1.27 K Had4_krig_v2-based TCR estimate, marginally lower than LC18’s 1.33 K two-window based preferred estimate. Regressing pentadal means (over 1852–2016) significantly improves the fit (to an R2 of 0.92) and gives a TCR estimate of 1.33 K. Using such pentadal-mean regression on each of the 500,000 pairs of samples of temperature and forcing time series gives a 5–95% TCR range of 0.91–1.84 K, marginally lower and narrower than the LC18 preferred estimate range.

So, the results of TCR estimation using data from the entire historical record are closely in line with those using LC18’s window method and chosen windows, provided the volcanic forcing is scaled down as per LC18’s recommendation. However, the uncertainty induced by having to estimate the appropriate volcanic forcing scaling factor arguably makes using data from the full historical record a less satisfactory approach than using the windows method.
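The pentadal-mean regression approach described in the quoted passage can be sketched as follows. The temperature and forcing series here are synthetic stand-ins (an idealised forcing ramp plus noise, generated from an assumed TCR); only the method – form pentadal means, regress temperature on forcing, scale the slope by F2×CO2 – mirrors the Reply's description:

```python
import numpy as np

rng = np.random.default_rng(0)
F_2xCO2 = 3.8        # W/m^2, assumed ERF for a doubling of CO2
true_tcr = 1.33      # K; used only to generate the synthetic data

# Synthetic annual forcing (W/m^2) and temperature (K) series, 1852-2016.
years = np.arange(1852, 2017)                        # 165 years = 33 pentads
forcing = np.linspace(0.0, 2.7, years.size)          # idealised forcing ramp
temp = true_tcr * forcing / F_2xCO2 + rng.normal(0, 0.1, years.size)

# Pentadal (5-year) means damp interannual noise before regressing.
f5 = forcing.reshape(-1, 5).mean(axis=1)
t5 = temp.reshape(-1, 5).mean(axis=1)

slope = np.polyfit(f5, t5, 1)[0]     # K per (W/m^2)
tcr_est = F_2xCO2 * slope            # rescale slope to a TCR estimate
print(f"TCR estimate: {tcr_est:.2f} K")
```

With well-behaved forcing data the recovered estimate lands close to the TCR used to generate the series; the sensitivity to volcanic-forcing scaling discussed above arises precisely because real forcing series are not this well behaved.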

Issues with historical sea surface temperature data

There is indeed significant uncertainty as to the accuracy of the global SST record. However, CJ20 did not show that the LC18 TCR estimates were materially affected by any identified errors in SST bias corrections. Nor did they show that uncertainty in the SST record was greater than that estimated by the providers of the datasets used in LC18.

CJ20 make the point that coverage of the ‘water hemisphere’ was almost non-existent in the 1860s. However, the 1869–82 primary early window used in LC18 avoids the 1860s (save for 1869, when coverage was better), and provides slightly higher coverage in the (land-sparse) southern hemisphere than in the northern hemisphere.

CJ20 also state that nineteenth century temperatures are dependent on large ‘bucket corrections’ to SST observations. However, CJ20 themselves suggest that the change from wooden buckets to poorly insulated canvas buckets, requiring a large bias correction, occurred primarily during 1890–1910. Bucket corrections were relatively small during 1869–82, the LC18 early window.

Possible misestimation of forcings

This is what we wrote in the Reply concerning two forcing estimation issues raised in CJ20:

CJ20 claim that previous studies have identified differences in inferred forcings and in the temperature impact of historical versus transient forcing changes as potential explanatory factors for recent observational energy-budget TCR estimates being lower than average climate model TCR values. None of the three supporting studies that they cite supports either contention.

and

CJ20 claim that comparison of modeled and observed temperatures for late windows starting after 2005 is affected by overestimation of forcings in models. Since LC18 did not make any comparisons of modeled and observed temperatures over the historical period, the only issue of relevance to LC18 is whether it misestimated recent forcing. None of the three supporting studies that CJ20 cite indicate that LC18 misestimated recent forcing.

In fact, a more comprehensive study[6] found, in their CMIP5-specification historical simulations, that since the mid-2000s underestimation of changes in other forcing agents more than counteracted overestimation of changes in solar and volcanic forcing. Moreover, none of the studies cited in CJ20 addressed the real problem of bias in CMIP5 model forcing that already existed several decades ago (due principally to excessive aerosol forcing); none of their analyses started before 1980.

Ocean and air surface temperature in models and observations

In CMIP5 models, near-surface marine air temperature warms more than the ocean surface temperature field (‘tos’). CJ20 state that “Lewis and Curry argue that this field [tos] is not the top layer of the bulk ocean surface temperature” (to which measured SST broadly corresponds). However, this straw man argument, which CJ20 then rebut, was never made in LC18. As the Reply states:

CJ20’s claim that LC18 “argue that this field [tos] is not the top layer of the bulk ocean surface temperature” is incorrect. Rather, LC18 argued that the tas/tos warming difference reflects the model-simulated warming difference between tas and ocean skin temperature, which will warm differently from SST.

There are theoretical reasons for expecting air just above the ocean surface to warm slightly faster than the ocean skin temperature. However, the extent of the difference depends on many factors and is uncertain, as is the difference between the warming rates of SST and of ocean skin temperature. LC18 therefore focused on observational rather than CMIP5 model evidence in this area. We say in the Reply:

LC18 (section 7e) concluded from observational and reanalysis evidence that in the real climate system, tas warmed at most a few per cent more than a blend of tas and tos (model top ocean layer temperature), a substantially smaller difference than that claimed by CJ20. Indeed, the 1979-onwards ERA-interim reanalysis globally-complete surface air temperature record, adjusted for inhomogeneities in their SST source (Simmons et al. 2017), shows slightly lower warming over 1979–2016 than does Had4_krig_v2.

It is also worth noting that in CMIP5 models tas, unlike tos, is a diagnostic rather than a prognostic variable – it is a parameterised extraneous variable, not a variable featuring in the basic model physics.

Conclusion

As our Reply demonstrates, none of CJ20’s criticisms of LC18 stands up to examination. I leave examination of differences between observed and CMIP5 model-simulated historical warming, which formed the basis of CJ20’s numerical analysis, to a subsequent article. Suffice it to say here that such differences, when properly analysed in the light of differences in forcing evolution, are fully consistent with the LC18 TCR estimate.

Nicholas Lewis    December 2019

[1] N is estimated from its counterpart, the rate of climate system heat uptake, which is mainly by the ocean.

[2] Since small inter-window forcing increases provide poor TCR estimation, minimum required inter-window forcing increases, ranging from 1.0 to 2.0 Wm−2, were imposed. (The greater the forcing increase the lower the relative uncertainty, as regards both forcing and the change in temperature that it causes. The windows used for LC18’s main ECS and TCR estimates gave a forcing increase of 2.52 Wm−2.) There were over 11,000 decade plus long window combinations giving a forcing increase of 2.0 Wm−2 or more. For computational tractability, early and late windows were specified to be of equal length. When using LC18’s suggested 0.55 scaling of volcanic forcing the median TCR estimates were even closer to 1.33 K at all levels of required forcing increase, and had lower uncertainty ranges, than when using unscaled volcanic forcing.

[3] So as to be able readily to combine uncertainties, we work with 1 standard deviation fractional uncertainties, here derived by scaling from 17-83% ranges and medians in Table 1.

[4] Scaling from the 5-95% range and median for Had4_krig_v2 ΔT in Table 2 of LC18. If temperature uncertainty alone is incorporated, the fractional uncertainty in TCR equals that in ΔT.

[5] Scaling from the 17-83% range in Table 3 of LC18, giving a fractional standard deviation of 0.193 for the preferred LC18 TCR estimate. Uncertainties are taken to be normally distributed and independent for the purposes of deriving their standard deviations and combining them. Adding in quadrature a fractional standard deviation of 0.103 (0.073) to the original level of 0.193 increases it to 0.219 (0.207).

[6] Outten, S., Thorne, P., Bethke, I. and Seland, Ø., 2015. Investigating the recent apparent hiatus in surface temperature increases: 1. Construction of two 30‐member Earth System Model ensembles. Journal of Geophysical Research: Atmospheres, 120(17), pp.8575-8596.


42 thoughts on “Comment by Cowtan & Jacobs on Lewis & Curry 2018 and Reply: Part 1”

  1. Even 1.0 to 3.0 degrees C is too high an ECS range. I’m with Lindzen and Choi (revised), who found a much cooler range, based upon satellite observations.

    http://www-eaps.mit.edu/faculty/lindzen/236-Lindzen-Choi-2011.pdf

    Lindzen is the greatest living atmospheric physicist, so naturally has been subjected to scurrilous attack by third-rate “climate scientists”, who often aren’t even scientists, let alone climatologists, but GIGO computer gamers.

    For GCMs actually to model clouds, rather than “parameterize” them, would require 100 billion times more computing power than now available.

    • John Tillman: To have the resolution to do internal modeling of storm cells and of the effects SURROUNDING the storms (because of the storms) requires an incredible amount of overlapping redundancy for recursions (The same goes for the various cloud formation regimes….then add in the ocean interactions)

      My back-of-the-envelope estimates based on coupled mechanical/fluid dynamics modeling suggest that our current computing power needs a boost of at least 10^12 times (assuming calculation times of less than a year are required) to achieve the required spatial resolution of critical events. Energy paths through fluids, and the kinetics involved, have a crazy level of complexity – and that is for fluids NOT involved in phase changes, which the atmosphere critically is.

      In any event, as you point out, our current computing power is not even close to being up to the task. It’s laughable.

  2. After many years of reading these types of science articles, I’m now of the opinion that “global energy budget” is a faux metric and cannot be estimated, calculated or measured.

    • Everybody has the right to his own opinion, but in this case it is only an opinion. The energy fluxes can be measured, and they are continuously measured. Based on these observations the energy budget values can be calculated, and the closure of the energy budget is convincing evidence that the energy budget of the Earth is nowadays known to within the accuracy of the observations. In line with these observed energy fluxes is the fact that the infrared flux emitted by the Earth’s surface matches the measured flux. Also, the absorbed incoming shortwave flux approximately equals the outgoing longwave flux.

      Contrarians cannot win the battle of climate change by denying simple scientific facts.

      • Contrarians cannot win the battle of climate change by denying simple scientific facts.

        But they can win by pointing out basic flaws in popularly practiced reasoning, like equating the w/m^2 flux quantity emitted by Earth over the whole sphere to the w/m^2 flux quantity actually received by Earth over the half sphere, and dividing up or adding units that cannot be divided up or added in principle, according to proper dimensional treatment and analysis, and proper understanding of what those units actually mean.

          • I never cease to be amazed by the seriousness accorded to such bunk science. Energy budget, OK, if you can measure it accurately enough. In principle it has physical meaning.

          But how can you keep a straight face while writing equations such as the above, which pretend that you can average the temperatures of very different media (land, sea and air) as though they all had the same relationship to thermal energy content?

          If you have energy on one side, you need energy on the other side. Average temperatures are a chimera with no meaning in physics.

          As soon as you play the “average temperature” game you have abandoned the laws of “basic physics”.

        • @RK

          … basic flaws in popularly practiced reasoning, like equating the w/m^2 flux quantity emitted by Earth over the whole sphere to the w/m^2 flux quantity actually received by Earth over the half sphere …

          Yes, I too believe it is a flaw in reasoning (‘over-simplification’). But it is not a simple flaw.

          Physics is filled with such oversimplifications, such as equating the sine of an angle with the angle itself (sin(θ) ≈ θ). It works, sort of, when the angle is small, which is why it is popularly believed that the period of a clock pendulum is independent of the angle of swing. It is only an approximation, but it is a very useful one that works quite well in the real world.

          So, if you stand on the Earth-facing side of the Sun (in your insulated boots) and look up at the virtually constant spherical angle subtended by the Earth, you will see that about 70% (assuming 30% albedo) of the solar radiance is totally absorbed through that tiny hole in the solar sky. Under the ‘grey-body’ assumption, the geometry of the Earth (sphere or disk) does not affect the fact that 70% of the TOA irradiance is totally absorbed and then totally, eventually (waving my hands), turns into terrestrial radiance and is emitted back into space where it came from.

          It is mathematically true that the surface area of a spherical Earth is exactly 4 times the area of the sunlit disk it presents to the Sun. And that is why geophysicists and others divide the intercepted solar flux by 4, assuming that the solar heat is then uniformly and evenly distributed over the rest of the Earth.

          We say a system is ergodic if the time average of its behavior is the same as the spatial average of its behavior. But that is clearly not the case here. Dividing by 4 spreads the heat evenly over the entire surface spatially; but temporally, on a short time scale, the system is not ergodic, because the temperature fluctuates depending on which side is facing the Sun.
          https://en.wikipedia.org/wiki/Ergodicity

          It is argued that the Earth’s climate is ergodic over a climatic time scale (> 30 y). But, like all approximations, the ergodicity assumption fails to account for differences in latitude. And it certainly doesn’t explain why most of the warming is taking place in the Northern hemisphere, in spite of the fact that CO2 is well mixed over the entire planet.

          Approximation is a necessary and useful facet of physics. But it can be overdone. Reminds me of those “spherical cow” jokes.
          https://en.wikipedia.org/wiki/Spherical_cow.
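Taking the divide-by-4 bookkeeping described in the comment above to its usual textbook conclusion gives the grey-body "effective temperature" of the Earth. A short sketch, assuming the standard round-number solar constant and albedo:

```python
# The divide-by-4 geometry, carried through to the classic grey-body
# effective temperature. S and albedo are standard textbook values,
# assumed here for illustration.

SIGMA = 5.670e-8     # W m^-2 K^-4, Stefan-Boltzmann constant
S = 1361.0           # W/m^2, solar constant at top of atmosphere
albedo = 0.30        # fraction of sunlight reflected

absorbed = S * (1 - albedo) / 4          # absorbed flux spread over the sphere
T_eff = (absorbed / SIGMA) ** 0.25       # invert Stefan-Boltzmann law
print(f"absorbed flux ~ {absorbed:.0f} W/m^2, T_eff ~ {T_eff:.0f} K")  # ~255 K
```

The familiar ~255 K result is a spatial-average abstraction, which is exactly the ergodicity caveat the comment raises: no point on the real Earth need ever be at that temperature.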

      • You have no idea of the complexity of tracking energy paths through a system of coupled moving fluids.

        Modeling the whole Climate at the resolutions required would require computing power that most experts believe we will never have…many orders of magnitude greater than current computing powers.

        The Climate modelers AGREE WITH THIS and thus need to employ “parameterizations”, which are gross estimations of the behaviors of subsystems like cloud formation and storms. THIS fudging isn’t free: the amount of error that it brings along with it is considerably larger than the acknowledged effects of CO2, rendering the models – not worthless, but worthless for the job of making predictions, for which they have shown no skill. (When assessing skill of models, you get no points for predicting the continuation of a preexisting trend, and they are all failing to track changes in the trend.)

      • Antero Ollila : “The energy fluxes can be measured and they are continuously measured.”

        Finally! Now, would you kindly direct me to the charts of these measurements? I want to see these measurements by elevation, by humidity, by moon phase (atmospheric tides), and more. I want to see these values as measured on Mars with its 95% CO2 atmosphere.

        Once you provide access to these charts then I’ll believe you have more than just your opinion to offer.

        • Climate energy budget values are based on averages over at least a year, not on daily values. Please start studying the CERES values.

          • Antero Ollila – Thank you for your reply.

            your claim: “values are based on at least one year-long average values and not on daily values”

            your earlier claim: “The energy fluxes can be measured and they are continuously measured”

            Those claims contradict in my mind.

            I agree with much of the content of your various comments, but to claim that we are measuring ‘back radiation’ in any meaningful way is absurd.

    • The energy the Earth receives from the Sun approximately equals the energy the Earth radiates to outer space. If there is an imbalance over time, it will eventually lead to a warming or a cooling planet.

      The problem is the uncertainty in the measurements.

      CERES performed a flux uncertainty analysis and determined that the CERES instrument calibration was the largest uncertainty at 2% for the SW and 1% for LW.

      Since the downwelling radiation is so close to the upwelling radiation, we can’t even say what the net is because it’s less than the uncertainty.

      The global energy budget can be estimated, calculated, and measured but you have to account for errors.

      The biggest problem I have with the way the science is presented to the public is that it pretends we know things much more accurately than is actually the case.
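The point made in the comment above can be put numerically. A sketch using illustrative round-number fluxes (assumed for illustration, not actual CERES products) together with the quoted 2% SW and 1% LW calibration uncertainties:

```python
from math import hypot

# Illustrative round-number global-mean fluxes (assumed values):
sw_absorbed = 240.0   # W/m^2, absorbed solar radiation
lw_out = 239.3        # W/m^2, outgoing longwave radiation
imbalance = sw_absorbed - lw_out             # net imbalance, ~0.7 W/m^2

# Calibration uncertainties quoted in the comment: 2% SW, 1% LW,
# combined in quadrature assuming independence.
u = hypot(0.02 * sw_absorbed, 0.01 * lw_out)
print(f"imbalance ~ {imbalance:.1f} W/m^2, uncertainty ~ {u:.1f} W/m^2")
```

With these numbers the combined calibration uncertainty (~5 W/m^2) is several times the imbalance itself, which is why the imbalance is in practice inferred from ocean heat content change rather than from differencing the two absolute satellite fluxes.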

      • Maybe the exact-looking figures give an impression of high accuracy, but the scientific papers always show the uncertainty values.

      • CommieB

        Spot on – certainly more accurate and certain than the SW and LW values. At 2% the difference for SW is what, 9 Watts? And the claim is to find an imbalance of less than half of that?

        How does this rate as climate “science”? Or any other kind of science. How they get around this (as is done in the papers discussed above) is to use the uncertainty to calculate the range about the central value and assign a confidence. It is reported as “value difference of x with some confidence”. You mean it is different from the baseline value? “We think so, but it’s hard to say for sure. In fact we can’t say for sure, but probably.”

        Good grief.

  3. Climate sensitivity studies based on empirical temperature data rest on the assumption that the temperature changes are due to CO2 concentration changes only. But how do you know that there are no other forcing elements? This method is an oversimplification because there are other forcing elements, like the sun, which provides 99.97% of the energy of the Earth. This kind of oversimplification cannot explain the Little Ice Age or the warm Viking period around 1000. Evidence includes the melting of the Bering and Mendenhall glaciers in Alaska, exposing the remnants of forests growing in 750–1000.

    • They are also based on the assumption that it is possible to calculate the average temperature (or anomaly) over the entire surface of the Earth to within fractions of a degree for the past 100 years, and indeed that even if we could calculate such a thing, it would have any physical meaning. I disbelieve this claim.

  4. “CJ20”?

    Are they referring to something that’s going to be published next year? I know Cosmopolitan et al. run a publishing business this way, but that doesn’t mean scientists should emulate their habits.

    • Michael: Judith and Nic are probably looking at a “pre-publish” paper, if not the actual paper that is scheduled to be printed. The paper has been submitted and will be printed. They (C&L) have been given the paper that will be published and are commenting on it in reply…. SOP in scientific circles

  5. The latest evidence about the other forcing elements is the temperature pause of the 2000s. The CO2 emissions increased from 264 gigatons carbon (GtC) in 1979 to 404 GtC in 2014: an increase of 49%. It meant the CO2 concentration increased from 337 ppm to 399 ppm. What was the temperature effect? Zero. The conclusion based on this short time period is that the TCS is zero.

    I have carried out climate sensitivity studies based on spectral analyses and climate models including also validation analyses. My result for TCS is 0.6 C degrees.

  6. If there is no correlation between CO2 in the atmosphere and GMST, and changes in emissions do not change the growth rate of atmospheric CO2, and fossil fuel CO2 is about 3% of atmospheric CO2, isn’t it a waste of time and effort to base model calculations of expected warming on expected emissions? It might be nice to know the ECS but we certainly have no way of changing CO2 flux in the atmosphere to any appreciable extent. All this concern over high vs low ECS is predicated on our controlling the CO2 content of the atmosphere and we clearly don’t. Now the Connollys have shown CO2 in the atmosphere has no measurable effect on atmospheric temperature. If they are right and Salby, Harde, and Berry are right then the whole enterprise of determining the ECS is arm waving. A substantial effort should be raised to falsify these findings before spending more on ECS let alone CO2 control policy.

    • DMA, yes, it is a waste of time and effort, unless your intent is to push socialism, wherein the people with more money (no matter how they got it) give money to the people that have less (no matter why they have less). This is a logical extension of the great book written by Hillary Clinton: It takes a Village to Raise a Village Idiot. I need a drink.

  7. The two basic methods to estimate ECS and TCR from real-world data (as opposed to using models) are (1) using the global temperature increase and the increase in atmospheric CO2 and derive the numbers on the assumption that the entire temperature increase was caused by the greenhouse effect and (2) using the radiative imbalance at TOA.

    The problem with (1) is that you need to factor in all the non-GHE effects on climate to try and isolate the GHE. Including changes in the planetary albedo, which mostly depends on cloudiness. Can the cloud cover be measured with sufficient accuracy? I seem to recall reading that a 1 percent change in cloud cover would be enough to counter all the theoretical GHE effects of a CO2 increase.

    Also, it’s fine to talk about volcanoes and SO2 emissions and aerosols, but you have to do quite a lot of guesswork for historical volcanism. And there are factors that probably affect heat transfer that are barely mentioned because no one really knows how they work. Like the Svensmark effect. How about changes in the strength and orientation of the earth’s magnetic dipole?

    The problem with (2), again as far as my limited grasp of the topic goes, is that you are using the small difference between two large numbers. And that difference may be no larger than the uncertainty in either the “radiation in” or “radiation out” measurement.

    And then there’s a concern that low-angle reflection of solar SWIR (e.g. from a calm sea, or an ice cap with a surface that is slick with melt water) might send some “incoming” straight out again without warming the surface. And being low-angle reflection, it would arrive at the satellites that measure “outgoing” at the wrong angle, i.e. almost horizontally, and perhaps not be measured as well as diffuse reflection from clouds or snow etc.

    I have yet to be convinced. After all, these estimates of TCR and ECS are all based on “other things being equal”. And it’s more than conceivable (if you ask me it’s very likely) that we don’t know what all the “other things” even are, let alone being able to measure them and integrate them over time into a global energy budget.

    • You have to start somewhere. The climate sensitivity calculated by Lewis and Curry is non-alarming. It also looks like nobody has found a serious error with their work.

      The alarming climate sensitivities are produced by computer models. It’s a problem for them that mother nature has a different idea about what the climate sensitivity should be.

      When there is an erroneous paradigm, the correction is usually not instant. Scientists will produce results that get closer to the truth over time. Nobody will stand up and deny CAGW. Like Lewis and Curry they will produce results that chip away at its foundations until it quietly goes away. Eventually the alarmists will find no scientists to support them.

      • “When there is an erroneous paradigm, the correction is usually not instant. Scientists will produce results that get closer to the truth over time. Nobody will stand up and deny CAGW. Like Lewis and Curry they will produce results that chip away at its foundations until it quietly goes away. Eventually the alarmists will find no scientists to support them.”

        I think this is the way things will unfold. The estimates of how much warmth CO2 adds to the atmosphere keeps going lower with each new study. Any lower and CO2 becomes a benign gas.

  8. Dr. Roy Spencer, “Global Warming Skepticism for Busy People”, Ch. 1.1, “Not all Science is created equal”:
    “Many scientists claim the diagnosis of the cause of global warming is obvious and can be found in basic physical principles. If basic physical principles can explain all of the global average warming, as the climate consensus claims, then how do we account for the following?
    All of the accumulated warming of the climate system since the 1950s, including the deep oceans, was caused by a global energy imbalance of 1 part in 600; yet modern science does not know, with a precision approaching 1 part in 100, ANY of the natural energy flows in and out of the climate system.
    It is simply assumed that the tiny energy imbalance – and thus warming – was caused by humans.”

    • The amount of energy the sun puts into the ocean stays relatively constant. However, the temperature of the air affects how easily this heat is transferred from the ocean to the atmosphere.
      The warmer the atmosphere gets, the warmer the sea needs to get in order to maintain the same rate of transfer.

    • I understand that the hypothesis is that LW heats the thin surface layer only, which is only millimetres thick. This alters its temperature gradient, thus slowing conductive heat loss from the water below. There were some practical experiments carried out by a N.Z. research ship some years ago that purported to show that LW from cloud cover did this, but I have never seen a numerical theoretical proof of it.

      As for the heat retained by this ‘insulant’ effect getting to the deep oceans, that is normally explained by hand waving, evocation of overturning etc.

      It amazes me that this hypothesis is rarely raised or questioned. Without it, you are right – LW won’t heat the oceans.

  9. In the Science Report of the Third Assessment Report (TAR) they note,

    “In climate research and modeling, we should recognize that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”
