Perspective Needed; Time to Identify Variations in Natural Climate Data that Exceed the Claimed Human CO2 Warming Effect

Guest opinion: Dr. Tim Ball

This article is intended as a starting point for a project that I hope will involve the extensive reach of the Internet and allow input from the traditionally ignored knowledge of the public. It also invites specialists, who would not normally look at the Intergovernmental Panel on Climate Change (IPCC) Reports, to critique what the Reports say about their areas of expertise. Most specialists assume that IPCC scientists rigorously follow scientific methods and procedures. They will likely learn the same lesson as German physicist and meteorologist Klaus-Eckart Puls:

I discovered that much of what the IPCC and the media were telling us was sheer nonsense and was not even supported by any scientific facts and measurements.

Maybe Anthony Watts will consider setting up a file on this web page so people can record their information. The public has been excluded from the climate debate by the usual variety of excuses, all designed to let an elite group control and dictate what is known and understood. You know the list: academic qualifications, peer review, government control, media bias, and so on. That is now coming to an end, as the recent US election and the Brexit vote illustrated.

All the control exercised by the elite at both ends of the political spectrum, and all the bias of the media, failed to control the outcomes because people had access to the Internet and, if not necessarily to the truth, at least exposure to what they were not being told. I am asking anyone who is willing to help to identify data that puts the IPCC's claimed human CO2 impact on global temperature in perspective. This will include data that is omitted, not measured, or that falls within an error range that makes it statistically meaningless.

In a previous article, I wrote that

The Intergovernmental Panel on Climate Change (IPCC) claim with 95 percent certainty that they completed a 5000-piece puzzle using only eleven pieces. The pieces are shown in the Radiative Forcing diagram (Figure 1) from AR5. By their assessment, they have high confidence in only five of these pieces.

 


Figure 1

The focus on eleven pieces was deliberately dictated by the definition of climate change given to them in Article 1 of the UNFCCC, a treaty formalized at the “Earth Summit” in Rio in 1992.

a change of climate which is attributed directly or indirectly to human activity that alters the composition of the global atmosphere and which is in addition to natural climate variability observed over considerable time periods.

The validity of the claim of the impact of the eleven variables is unjustified because of the error range in the data.

The next question is: what do they define as a forcing? Here are the AR5 Glossary definitions:

External forcing External forcing refers to a forcing agent outside the climate system causing a change in the climate system. Volcanic eruptions, solar variations and anthropogenic changes in the composition of the atmosphere and land use change are external forcings. Orbital forcing is also an external forcing as the insolation changes with orbital parameters eccentricity, tilt, and precession of the equinox.

Radiative forcing Radiative forcing is the change in the net, downward minus upward, radiative flux (expressed in W m–2) at the tropopause or top of atmosphere due to a change in an external driver of climate change, such as, for example, a change in the concentration of carbon dioxide or the output of the Sun. Sometimes internal drivers are still treated as forcings even though they result from the alteration in climate, for example aerosol or greenhouse gas changes in paleoclimates. The traditional radiative forcing is computed with all tropospheric properties held fixed at their unperturbed values, and after allowing for stratospheric temperatures, if perturbed, to readjust to radiative-dynamical equilibrium. Radiative forcing is called instantaneous if no change in stratospheric temperature is accounted for. The radiative forcing once rapid adjustments are accounted for is termed the effective radiative forcing. For the purposes of this report, radiative forcing is further defined as the change relative to the year 1750 and, unless otherwise noted, refers to a global and annual average value. Radiative forcing is not to be confused with cloud radiative forcing, which describes an unrelated measure of the impact of clouds on the radiative flux at the top of the atmosphere.

This is a bureaucratic rather than a scientific definition that allows them to focus on, modify, or ignore variables of the climate system as suits their purpose. Surely, the only definition they need is

“a change in any variable within the climate system that causes a net change, regardless of the amount.”

Of course, that is not possible given the narrow restriction to human causes imposed on them. But this is contradicted by their acknowledgment of multiple natural forcings and their selection of only two for consideration. From AR5,

Several natural drivers of climate change operate on multiple time scales. Solar variability takes place at many time scales that include centennial and millennial scales (Helama et al., 2010), as the radiant energy output of the Sun changes. Also, variations in the astronomical alignment of the Sun and the Earth (Milankovitch cycles) induce cyclical changes in RF, but this is substantial only at millennial and longer time scales (see Section 5.2.1.1). Volcanic forcing is highly episodic, but can have dramatic, rapid impacts on climate. No major asteroid impacts occurred during the reference period (1750–2012) and thus this effect is not considered here. This section discusses solar and volcanic forcings, the two dominant natural contributors of climate change since the pre-industrial time.

There is no evidence to support the claim in the last sentence. It is not untoward, given the track record of manipulation and selectivity exposed primarily by the leaked emails of Climategate, to suspect that solar and volcanic forcings are examined for a political reason. Partial proof of that charge is that they only examine solar radiant energy while effectively ignoring Milankovitch and the cosmic ray theory. This is further supported by the fact that they mention and dismiss them with confusing comments. For example,

Also, variations in the astronomical alignment of the Sun and the Earth (Milankovitch cycles) induce cyclical changes in RF, but this is substantial only at millennial and longer time scales.

The Milankovitch Effect causes variation in the amount of radiant energy reaching the Earth all the time. Figure 2 shows the plot of that variation over one million years. The range of variation is approximately 100 Wm2, which far exceeds the 1.68 Wm2 that the IPCC attributes to human CO2. Regarding the Milankovitch Effect, the IPCC state:

Orbital forcing is considered the pacemaker of transitions between glacials and interglacials (high confidence), although there is still no consensus on exactly how the different physical processes influenced by insolation changes interact to influence ice sheet volume (Box 5.2; Section 5.3.2). The different orbital configurations make each glacial and interglacial period unique (Yin and Berger, 2010; Tzedakis et al., 2012a). Multi-millennial trends of temperature, Arctic sea ice and glaciers during the current interglacial period, and specifically the last 2000 years, have been related to orbital forcing (Section 5.5).


Figure 2: Variations in the amount of insolation (incoming solar radiation) at 65°N

The main reason given for not including Milankovitch in the IPCC calculations is the time scale, but that is a canard. There has been a change, albeit small, since 1750 A.D., the start of the IPCC's period of consideration, but it is only one part of a multitude of changes that collectively swamp the human CO2 portion of change.
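The scale mismatch described above can be made concrete with a back-of-the-envelope calculation (a rough sketch only: the ~100 Wm2 figure is a peak-to-peak insolation range at 65°N over orbital time scales, while 1.68 Wm2 is a global annual-average forcing, so this is an order-of-magnitude comparison, not a like-for-like one):

```python
# Rough order-of-magnitude comparison of the two figures quoted in the text.
# Caveat: the Milankovitch number is a peak-to-peak insolation range at 65N
# over orbital time scales, not a global annual-average forcing.
milankovitch_range_wm2 = 100.0  # approx. insolation variation at 65N (Figure 2)
human_co2_forcing_wm2 = 1.68    # AR5 best-estimate forcing attributed to CO2

ratio = milankovitch_range_wm2 / human_co2_forcing_wm2
print(f"Insolation range is ~{ratio:.0f} times the attributed CO2 forcing")
```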

The amount of forcing they attribute to human CO2 is 1.68 Wm2, but the error range is 1.33 to 2.03 Wm2 (Figure 1). The total amount of forcing by humans is 2.29 Wm2, with an error range of 1.13 to 3.33 Wm2. That is a 2.2 Wm2 spread on a total forcing of 2.29 Wm2, or 96% of the total claimed forcing. They then claim, in the final column labelled "Level of confidence," that it is VH (Very High) for CO2 and H (High) for total anthropogenic forcing. This cannot be correct, not least because they assume that CO2 is an evenly mixed atmospheric gas, an assumption contradicted by the data from the OCO-2 satellite.
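The 96% figure quoted above follows directly from the AR5 numbers (a minimal sketch using the values as given in the text):

```python
# Reproducing the error-range arithmetic from Figure 1 (AR5 values as quoted).
total_best = 2.29               # W/m^2, total anthropogenic forcing, best estimate
total_lo, total_hi = 1.13, 3.33  # W/m^2, stated error range

total_range = total_hi - total_lo              # width of the error range
pct_of_best = 100 * total_range / total_best   # range as % of the best estimate

print(f"Range: {total_range:.2f} W/m^2 = {pct_of_best:.0f}% of the best estimate")
```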

The problem is that they produce the data on the annual amount of CO2 humans emit, and they create and code the models that produce these results. They report a gross figure when a net figure is what scientific accuracy requires. The error range is typical of most of the data and conclusions throughout the IPCC Reports.

For example, in the 2001 Physical Science Basis Report they wrote,

So we calculate that since the late 19th or the beginning of the 20th century, up to 2000, global warming has been 0.6 ± 0.2°C.

This is an error range of ±0.2°C, or ±33%, on the 0.6°C increase reportedly produced by Phil Jones; even a politician wouldn't use numbers with that margin in a polling survey. It is not surprising that when Warwick Hughes asked for the data so he could check it, Jones replied on February 21, 2005,

“We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.”

No wonder Jones conveniently lost the data. But as Vincent Gray noted, the data was useless anyway.

The question is: what is the error range not only of the other ten variables, but also of the natural variables? How do those error ranges compare to the IPCC's claimed human CO2 impact of 1.68 Wm2?

Human influence on the climate system is clear. This is evident from the increasing greenhouse gas concentrations in the atmosphere, positive radiative forcing, observed warming, and understanding of the climate system.

Consider that claim against the following statement from AR5 about the uncertainties.

Box 2.1 | Uncertainty in Observational Records

The vast majority of historical (and modern) weather observations were not made explicitly for climate monitoring purposes. Measurements have changed in nature as demands on the data, observing practices and technologies have evolved. These changes almost always alter the characteristics of observational records, changing their mean, their variability or both, such that it is necessary to process the raw measurements before they can be considered useful for assessing the true climate evolution. This is true of all observing techniques that measure physical atmospheric quantities. The uncertainty in observational records encompasses instrumental/recording errors, effects of representation (e.g., exposure, observing frequency or timing), as well as effects due to physical changes in the instrumentation (such as station relocations or new satellites). All further processing steps (transmission, storage, gridding, interpolating, averaging) also have their own particular uncertainties. Because there is no unique, unambiguous, way to identify and account for non-climatic artefacts in the vast majority of records, there must be a degree of uncertainty as to how the climate system has changed. The only exceptions are certain atmospheric composition and flux measurements whose measurements and uncertainties are rigorously tied through an unbroken chain to internationally recognized absolute measurement standards (e.g., the CO2 record at Mauna Loa; Keeling et al., 1976a).

Uncertainty in data set production can result either from the choice of parameters within a particular analytical framework—parametric uncertainty, or from the choice of overall analytical framework—structural uncertainty. Structural uncertainty is best estimated by having multiple independent groups assess the same data using distinct approaches. More analyses assessed now than in AR4 include published estimates of parametric or structural uncertainty. It is important to note that the literature includes a very broad range of approaches. Great care has been taken in comparing the published uncertainty ranges as they almost always do not constitute a like-for-like comparison. In general, studies that account for multiple potential error sources in a rigorous manner yield larger uncertainty ranges. This yields an apparent paradox in interpretation as one might think that smaller uncertainty ranges should indicate a better product. However, in many cases this would be an incorrect inference as the smaller uncertainty range may instead reflect that the published estimate considered only a subset of the plausible sources of uncertainty. Within the time series figures, where this issue would be most acute, such parametric uncertainty estimates are therefore not generally included. Consistent with AR4, HadCRUT4 uncertainties in GMST are included in Figure 2.19, which in addition includes structural uncertainties in GMST.

To conclude, the vast majority of the raw observations used to monitor the state of the climate contain residual non-climatic influences. Removal of these influences cannot be done definitively and neither can the uncertainties be unambiguously assessed. Therefore, care is required in interpreting both data products and their stated uncertainty estimates. Confidence can be built from: redundancy in efforts to create products; data set heritage; and cross-comparisons of variables that would be expected to co-vary for physical reasons, such as LSATs and SSTs around coastlines. Finally, trends are often quoted as a way to synthesize the data into a single number. Uncertainties that arise from such a process and the choice of technique used within this chapter are described in more detail in Box 2.2. (My bold).

They are acknowledging that for a variety of reasons the data for human causes of forcing has severe limitations that result in a very wide error range. We know what that range is for the few variables they use, but what is it for the variables they don’t use?

It is difficult to list all the variables, but Figure 3, an often-used schematic of the complexity of the atmosphere, is a good place to start.


Figure 3

Here are a few examples to get the list going.

Solar radiation: AR5 includes a graph of surface solar radiation (SSR) that they say is the longest available. It shows a range of radiation of approximately 95 to 135 Wm2.

[Graph: the longest available surface solar radiation record, from AR5]

Here is what they say about the record.

The longest observational SSR records, extending back to the 1920s and 1930s at a few sites in Europe, further indicate some brightening during the first half of the 20th century, known as ‘early brightening’ (cf. Figure 2.13) (Ohmura, 2009; Wild, 2009). This suggests that the decline in SSR, at least in Europe, was confined to a period between the 1950s and 1980s.

A number of issues remain, such as the quality and representativeness of some of the SSR data as well as the large-scale significance of the phenomenon (Wild, 2012). The historic radiation records are of variable quality and rigorous quality control is necessary to avoid spurious trends.

The graph generates many questions that are not even considered. For example, how does the graph track against a) the general temperature trend b) low cloud cover c) CO2 and d) water vapor?

Soil moisture is central in the schematic and in the transfer of energy. AR4 says:

Since the TAR, there have been few assessments of the capacity of climate models to simulate observed soil moisture. Despite the tremendous effort to collect and homogenize soil moisture measurements at global scales (Robock et al., 2000), discrepancies between large-scale estimates of observed soil moisture remain.

AR5 says,

There has been a long history of off-line evaluation of land surface schemes, aided more recently by the increasing availability of site-specific data (Friend et al., 2007; Blyth et al., 2010). Throughout this time, representations of the land surface have significantly increased in complexity, allowing the representation of key processes such as links between stomatal conductance and photosynthesis, but at the cost of increasing the number of poorly known internal model parameters. These more sophisticated land surface models are based on physical principles […] At specific data-rich sites, current land surface models still struggle to perform as well as statistical models in predicting year-to-year variations in latent and sensible heat fluxes (Abramowitz et al., 2008) and runoff (Materia et al., 2010).

Either way, they know virtually nothing about soil moisture, yet acknowledge it is important to the entire atmospheric mechanism. In AR4 they admitted that

Unfortunately, the total surface heat and water fluxes (see Supplementary Material, Figure S8.14) are not well observed.

It had not improved by AR5.

Carbon dioxide: This is at the heart of the problem: the forcing caused by human-produced CO2. Figure 4 is directly from the IPCC WGI Report and shows their estimates of the components of the carbon cycle. It is pure fiction. A multitude of examples exists to support this charge, including the fact that there are no actual measurements of any of the numbers in the diagram, except for the Mauna Loa record and the amount of CO2 humans produce. The Keeling family owns the patent for all IPCC CO2 measurements, and Ralph Keeling is a member of the IPCC. Figure 4 shows him in illustrious company.


Figure 4 (Original caption)

Scientists Ralph Keeling, Naomi Oreskes and Lynne Talley all participated in a press conference Friday at The Scripps Institution of Oceanography

The IPCC produce the annual estimate of human CO2 production. Their 2001 Report put human-source CO2 at 6.5 GtC (gigatons of carbon), rising to 7.5 GtC in the 2007 Report. In the FAQ section, they answer the question "How does the IPCC produce its Inventory Guidelines?" as follows.

Utilizing IPCC procedures, nominated experts from around the world draft the reports that are then extensively reviewed twice before approval by the IPCC.

There is another example of why it is all fiction in AR5. They wrote,

During the Holocene (beginning 11,700 years ago) prior to the Industrial Era the fast domain was close to a steady state, as evidenced by the relatively small variations of atmospheric CO2 recorded in ice cores (see Section 6.2), despite small emissions from human-caused changes in land use over the last millennia (Pongratz et al., 2009).

The “relatively small variations” are an artifact of the measurement method. Figure 5 shows 2000 years of CO2 records from ice cores and stomata. A 70-year smoothing curve was applied to the ice core records, and they clearly underestimate the actual atmospheric level, but this was essential to establishing the lower pre-Industrial levels the AGW impact story requires.


Figure 5 (Original caption)


Figure 4

What is amazing is that the diagram, caption (see below), and written commentary follow the IPCC practice of identifying the severe limitations of the data, creating a great contradiction with the claims of certainty given to the public. They can't be accused of not identifying the problems; the deception is in the certainty they present to the public. For example,

The numbers represent the estimated current pool sizes in PgC and the magnitude of the different exchange fluxes in PgC yr–1 averaged over the time-period 2000–2009 (see Section 6.3).

Or, consider this bizarre part of the caption,

Some recent studies (Section 6.3) indicate that this assumption is likely not verified, but global estimates of the Industrial Era perturbation of all these fluxes was not available from peer-reviewed literature.

None of this justifies the comment in the summary to the chapter:

With a very high level of confidence, the increase in CO2 emissions from fossil fuel burning and those arising from land use change are the dominant cause of the observed increase in atmospheric CO2 concentration.

I urge everyone to read the caption that accompanies Figure 4, which follows.

The caption from page 471 of the IPCC Report AR5.

Figure 6.1 | Simplified schematic of the global carbon cycle. Numbers represent reservoir mass, also called ‘carbon stocks’ in PgC (1 PgC = 1015 gC) and annual carbon exchange fluxes (in PgC yr–1). Black numbers and arrows indicate reservoir mass and exchange fluxes estimated for the time prior to the Industrial Era, about 1750 (see Section 6.1.1.1 for references). Fossil fuel reserves are from GEA (2006) and are consistent with numbers used by IPCC WGIII for future scenarios. The sediment storage is a sum of 150 PgC of the organic carbon in the mixed layer (Emerson and Hedges, 1988) and 1600 PgC of the deep-sea CaCO3 sediments available to neutralize fossil fuel CO2 (Archer et al., 1998). Red arrows and numbers indicate annual ‘anthropogenic’ fluxes averaged over the 2000–2009-time period. These fluxes are a perturbation of the carbon cycle during Industrial Era post 1750. These fluxes (red arrows) are: Fossil fuel and cement emissions of CO2 (Section 6.3.1), Net land use change (Section 6.3.2), and the Average atmospheric increase of CO2 in the atmosphere, also called ‘CO2 growth rate’ (Section 6.3). The uptake of anthropogenic CO2 by the ocean and by terrestrial ecosystems, often called ‘carbon sinks’ are the red arrows part of Net land flux and Net ocean flux. Red numbers in the reservoirs denote cumulative changes of anthropogenic carbon over the Industrial Period 1750–2011 (column 2 in Table 6.1). By convention, a positive cumulative change means that a reservoir has gained carbon since 1750. The cumulative change of anthropogenic carbon in the terrestrial reservoir is the sum of carbon cumulatively lost through land use change and carbon accumulated since 1750 in other ecosystems (Table 6.1). Note that the mass balance of the two ocean carbon stocks Surface ocean and Intermediate and deep ocean includes a yearly accumulation of anthropogenic carbon (not shown). Uncertainties are reported as 90% confidence intervals. 
Emission estimates and land and ocean sinks (in red) are from Table 6.1 in Section 6.3. The change of gross terrestrial fluxes (red arrows of Gross photosynthesis and Total respiration and fires) has been estimated from CMIP5 model results (Section 6.4). The change in air–sea exchange fluxes (red arrows of ocean atmosphere gas exchange) have been estimated from the difference in atmospheric partial pressure of CO2 since 1750 (Sarmiento and Gruber, 2006). Individual gross fluxes and their changes since the beginning of the Industrial Era have typical uncertainties of more than 20%, while their differences (Net land flux and Net ocean flux in the figure) are determined from independent measurements with a much higher accuracy (see Section 6.3). Therefore, to achieve an overall balance, the values of the more uncertain gross fluxes have been adjusted so that their difference matches the Net land flux and Net ocean flux estimates. Fluxes from volcanic eruptions, rock weathering (silicates and carbonates weathering reactions resulting into a small uptake of atmospheric CO2), export of carbon from soils to rivers, burial of carbon in freshwater lakes and reservoirs and transport of carbon by rivers to the ocean are all assumed to be pre-industrial fluxes, that is, unchanged during 1750–2011. Some recent studies (Section 6.3) indicate that this assumption is likely not verified, but global estimates of the Industrial Era perturbation of all these fluxes was not available from peer-reviewed literature. The atmospheric inventories have been calculated using a conversion factor of 2.12 PgC per ppm (Prather et al., 2012).
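The conversion factor in the caption's final sentence can be illustrated with a simple calculation (the ppm values below are my own round-number inputs for illustration, not figures from the caption):

```python
# Illustrating the 2.12 PgC-per-ppm conversion factor cited in the caption.
# The ppm values are illustrative round numbers (roughly pre-industrial and
# recent atmospheric CO2), not values taken from the IPCC caption itself.
PGC_PER_PPM = 2.12

preindustrial_ppm = 280.0
recent_ppm = 400.0

delta_pgc = (recent_ppm - preindustrial_ppm) * PGC_PER_PPM
print(f"A {recent_ppm - preindustrial_ppm:.0f} ppm rise corresponds to "
      f"~{delta_pgc:.0f} PgC in the atmospheric inventory")
```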

This is sufficient alone to support my claim that the data is fiction, and IPCC claims totally unjustified, but remember, this is only one segment considering a few variables. Let’s collectively brainstorm to identify the others, compare them with human CO2 impacts and further expose and belittle the greatest deception in history.


99 thoughts on “Perspective Needed; Time to Identify Variations in Natural Climate Data that Exceed the Claimed Human CO2 Warming Effect”

  1. Unfortunately temperature and other climatic data from the past can only be estimated inaccurately and imprecisely. Same as for present climate, although perhaps with wider error bars.

    However, we can determine that for most of the Holocene, and practically all of the prior interglacial, the Eemian, the Earth was globally warmer than now, so the null hypothesis can’t be rejected. Indeed, the hypothesis that nothing the least bit out of the ordinary has occurred in climate since CO2 has been increasing has been repeatedly confirmed.

    In the past 130,000 years, for instance, the climate of southern England has varied from warm enough for hippos to bathe in its southern rivers to its Midlands covered by thick ice sheets. Whatever effect humans have globally isn’t a pimple on the posterior of natural climate change. In some local environments, there might be a detectable human effect.

      • Sorry, but saying that something is improbable and then discounting historical records due to that improbability, while returning to an appeal to authority, isn’t “busting” the null hypothesis. It’s sidestepping it.

        He claims that he has done “an experiment” but all he has done is use a model of statistical probability while claiming he didn’t use a model.

        Additionally, he holds to Arrhenius’ 1896 paper and ignores his much-abated figures from 1904 and 1906, once he started adding the effect of water.

        Sounds like he has a talking point he wants to push.

        Where’s my Nobel?

      • Mosh,

        Have them mail the prize to my alma mater, Southern Connecticut State University, attention: Earth Science Department…

        Lovejoy’s “analysis” addresses neither the natural variability of the Late Holocene climate, nor the abject failure of the computer models. That said, Lovejoy does deserve credit for trying an empirical approach, independent of models and at least paying lip service to natural variability. However, L14 is seriously flawed in at least three ways:

        Nothing in Earth Science is 99% certain.
        A fundamental misunderstanding of Holocene climate variability.
        A totally unscientific and [demonstrably] wrong assessment of equilibrium climate sensitivity.

        […]

        https://wattsupwiththat.com/2014/04/27/a-geological-perspective-on-lovejoys-99-solution/

      • Since we are still several degrees below the known fluctuations of just the last couple thousand years, how could the null possibly be busted?

      • I have a good background in oceanography, a little in meteorology and statistics, but would not have standing in climate science, thank goodness. Nevertheless, there is a commonality.
        It is called problem solving. Sometimes lots of descriptive information is useful, or at least part of it is. What we have looks like elevating favored hypotheses: historically most are wrong, but “this is mine so it must be correct.”

        Papers in too many disciplines often go like this: problem (too much of this, not enough of that, etc.), results, discussion, conclusions, then a solution which suspiciously fits the problem. However, the solution (policy), however derived from the conclusions, is a different matter than the assessment (science). The solution is sometimes even divorced from the information in the paper, and may even be mentioned in the introduction.

        We are failing with problem solving, and the ultimate “bridge collapse” is somebody else’s problem. That is, unless you do what I say. Solutions looking for problems? Close to my home, our local government is developing an $11 million project based on this (emphasis mine): “Indicators over the past decade have pointed to several ‘possible’ factors contributing to this decline.” There is one in there somewhere; they mention ten. A flux chart would look something like Figure 3 above, but less complicated.

        “Close to my home, it leaves an even greater disconnect between science and policy. The Canadian government has axed climate research (my research was unfunded) and shamelessly promoted the dirtiest fuels.” Lovejoy might be believed, but I would like to know more about his understanding of thermodynamic sustainability. Sounds like a solution. I am also unfunded.

      • From the linked Op-Ed: “A single decisive experiment effectively can disprove a scientific hypothesis”

        The last interglacial ended with temperatures dropping 10 degrees with no change in CO2 levels. The data also shows several occasions where temperature rose as CO2 fell, including at least one occasion where temperatures rose by 4 degrees.
        If the null theory had been busted, a Nobel would have been awarded for science instead of politics.

      • I was visiting this link, and it does not sound very clever to say “The proxy predicts with 95 percent certainty that a doubling of CO2 levels in the atmosphere will lead to a warming of 1.9 to 4.2 degrees C.”
        How can we speak of 95% certainty for a warming between 1.9 and 4.2 degrees for a doubling of CO2?
        On the other hand, how can we have so much confidence in Arrhenius’ measurement? As soon as he became an authority on the Nobel Prize Committee, he became de facto shielded against criticism. In any case, there may exist some modern study disputing the figures presented by Svante Arrhenius; his measurements could be proved wrong.

  2. The IPCC has gone far enough down a self-referential rabbit hole it requires an independent audit.

    • It is impossible to audit subjective levels of confidence.

      Whatever assessment is supposed to mean within scientific conduct, an objective assessment should be one that is not influenced by personal feelings or opinions. However, the Guidance Note for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties imposes upon the lead authors the task of assigning levels of confidence to their findings:
      «The AR5 will rely on two metrics for communicating the degree of certainty in key findings:
      1 Confidence in the validity of a finding, based on the type, amount, quality, and consistency of evidence (e.g., mechanistic understanding, theory, data, models, expert judgment) and the degree of agreement. Confidence is expressed qualitatively. «
…
      «A level of confidence is expressed using five qualifiers: “very low,” “low,” “medium,” “high,” and “very high.” It synthesizes the author teams’ judgments about the validity of findings as determined through evaluation of evidence and agreement.»

      Here is a comment on the IPCC practice from the InterAcademy Council review report of the IPCC:
      Climate change assessments; Review of the processes and procedures of the IPCC
      «IPCC authors are tasked to review and synthesize available literature rather than to conduct original research. This limits their ability to formally characterize uncertainty in the assessment reports. As a result, IPCC authors must rely on their subjective assessments of the available literature to construct a best estimate and associated confidence levels.»

      By the way: the IAC review of the IPCC was not independent!

  3. Also see Donna Laframboise’s analyses of IPCC work: a lot of non-peer-reviewed junk from WWF, Greenpeace, and other activist groups.

  4. Rhodes Fairbridge (Australian geologist, educated at Queen’s University in Ontario) looked at the (storms built) ‘Hudson Bay area staircase’ concluding that there is “a 45-yr cycle in beach building that is presumed to be related to the “double Hale” solar cycle.”
    During the last 150 years the average length of solar cycles is 10.75 years, this would make the “double Hale” cycle 43 years.
    Some 5 or 6 years ago I decided to test the Fairbridge hypothesis with the rainfall data recorded daily by the University of Oxford.
    Verdict : affirmative
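The cycle arithmetic above is easy to check; here is a minimal sketch (the 10.75-year average is the figure quoted in this comment, not an independently verified value):

```python
# Average solar (Schwabe) cycle over the last 150 years, as quoted above: 10.75 years.
schwabe_years = 10.75
hale_years = 2 * schwabe_years       # one full magnetic (Hale) cycle: ~21.5 years
double_hale_years = 2 * hale_years   # the "double Hale" cycle
print(double_hale_years)             # 43.0, close to Fairbridge's ~45-year beach cycle
```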


    • Strong correlation with solar activity, but none with CO2, as for example, at the South Pole:

      Dunno if the increase in CO2 at the Pole is from the generators the station uses or not. Generally, CO2 is only about as well-mixed in the air as H2O is.

      On longer time scales, the only correlation is that there is more CO2 after earth warms up:

      Note that essentially all of the Eemian was warmer than the Holocene, including the past century.

      • @vukcevic

        The first statement of Dr. Ball’s article goes thus:

        “This article is intended as a starting point for a project that I hope will involve the extensive reach of the Internet and allow input from the traditionally ignored knowledge of the public.”

        I am one of the members of the traditionally ignored public, yet despite Dr. Ball’s introductory statement, you start posting graphs which are utterly meaningless to me.

        Get a grip, man; you are selling science to scientists, who are a fraction of the general community. How about communicating with the rest of us? Do that and this debate might end rather quickly.

        No one is interested in the minutiae of geekiedom, other than scientific geeks.

      • HotScot – 1:38 pm
        No one is interested in the minutiae of geekiedom, other than scientific geeks.

        Yes, us groundlings need some red meat.

        We also should consider more than just temperature, if WattsUpWithThat is going to host some sort of clearing house of issues, factoids, chronologies, he-said-she-said, etc.

        Everybody has their favorite whipping boys in this, I know I do (-:

        CO2 is the issue at the bottom of the barrel, but the boogeyman is in the claimed effects, and rising sea level is probably the biggest.

      • Hotscot,

        “I am one of the members of the traditionally ignored public, yet despite Dr. Ball’s introductory statement, you start posting graphs which are utterly meaningless to me.”

        So am I, and fortunately you can ask questions of vukcevic . . I did when I didn’t understand something he posted, and he was very gracious in bringing me up to speed on the matter.

    • “Rhodes Fairbridge (Australian geologist, educated at Queen’s University in Ontario)”

      In honour of our 175th Anniversary, the Queen’s University Alumni Association asked campus partners and alumni from around the world to send in videos of themselves singing the Oil Thigh.
      The QUAA received submissions from across campus, across Canada and across the world. (Five continents to be exact.)

      This is the Global Oil Thigh. Published on Oct 11, 2016

      Allan MacRae, Science ’71 (four generations to date at Queen’s)

  5. “Figure 4 is directly from the IPCC WGI Report and shows their estimates of components of the carbon cycle. It is pure fiction.”

    I have no interest in quibbling over the numbers they attribute in their diagram, but the glaring truth is self-evident, even in their representation:

    Carbon Dioxide is the unique singular throttle in the Carbon Cycle of Life.

    Without Carbon Dioxide the Carbon Cycle of Life cannot complete and all life perishes.

    • It would improve the image if the illustration could show how Carbon Dioxide makes up 0.04% of the atmosphere rather than leaving the impression that the entire atmosphere is included in the Carbon Cycle.

      The entire Carbon Cycle necessarily depends on 0.04% of the atmosphere.

  6. A heavy hand was applied to the final report at key points to ensure the alarm message was preserved, while keeping the scientific uncertainty for others to find if they ever bothered. So who would bother when the future of the world is at stake? Not very many is the answer, because it’s a sham to begin with. No one really nitpicks a non-binding, non-legal document that is known to be a political tool. It could have been written by UN peacekeepers in Haiti for that matter.

  7. Why does the Radiative Forcing diagram (Figure 1) from AR5 not include the effects of naturally occurring water vapor (i.e. normal moisture in the air)?

    • That is a good question.
      What IPCC, Working Group I, did not mention in the summary is that:

      The central estimate for the current global energy accumulation is about 0.6 W/m²:
      “considering a global heat storage of 0.6 W/m²”
      Ref.: IPCC AR5 WGI, page 181
      And
      The central estimate for the current total feedback from clouds is also 0.6 W/m².
      Ref.: IPCC AR5 WGI, Figure 7.10 (Cloud feedback parameters)

      If the hypothesized cloud feedback effect is correct, the sum of all other effects must be zero.
      Anyhow, judging by the current global energy accumulation, there seems to be something fishy in the theory propounded by the United Nations climate panel, the IPCC. The central estimates can’t all be right.
      See the full argument here: The hypothesized cloud feedback effect, alone, is comparable to the current global energy accumulation.

      The statement from Roy Spencer in his book “The Great Global Warming Blunder” also seems highly relevant:
      “The insistence of the IPCC and the scientific “consensus” that clouds cannot cause climate variations continues to astound me. All atmospheric scientists know that clouds are controlled by a multitude of factors; my position is that causation between clouds and temperature flows in both directions. In contrast, the IPCC’s position is that clouds can only change in response to temperature change (temperature causes clouds). But neglecting causation in the opposite direction (clouds cause temperature) can lead to large errors in our understanding of how and why the climate system changes, as well as in our diagnosis of how sensitive the climate system is to human influences.”

      • This is correct. Unfortunately, Roy’s book relies on an alternative model that has been falsified three ways. Good basic idea, bad implementation.

      • Still, the hypothesized cloud feedback is supposed to be a feedback to atmospheric temperature. At the same time, clouds themselves have an effect on atmospheric temperature. To me it seems reasonable that clouds alone can cause some variation in global atmospheric temperature.

      • A brief search brought me this article:
        Contributions of Stratospheric Water Vapor to Decadal Changes in the Rate of Global Warming
        “Stratospheric water vapor concentrations decreased by about 10% after the year 2000. Here we show that this acted to slow the rate of increase in global surface temperature over 2000-2009 by about 25% compared to that which would have occurred due only to carbon dioxide and other greenhouse gases. More limited data suggest that stratospheric water vapor probably increased between 1980 and 2000, which would have enhanced the decadal rate of surface warming during the 1990s by about 30% compared to estimates neglecting this change. These findings show that stratospheric water vapor represents an important driver of decadal global surface climate change.”

        That article was about stratospheric water vapor concentrations. How about the rest of the atmosphere?

        The article clearly states that stratospheric water vapor is an important driver.
        That statement supports the relevance of the question asked by usurbrain.

        The IPCC considers volcanoes a source of natural variation, but not water vapour and clouds. That seems inconsistent.

      • ristvan April 12, 2017 at 3:07 pm

        … falsified three ways.

        That reminds me of a picture of St. Thomas Aquinas. link

    • The Warmunists chose to pretend that water vapor is a dependent variable, rather than an independent variable, thus sweeping the entire elephant under the rug. It’s nonsense, of course, but they’ve gotten away with it for decades. There will be consequences.

    • There is at least one Radiative Forcing diagram that has the water vapor and albedo feedbacks in it.

      It was actually made by Trenberth in his infamous series on the “missing energy” – a companion paper to the main one.

      Guess what? The water vapor and albedo feedbacks are even larger than the sum of the direct forcing agents.

      In addition, as soon as the direct and feedback forcings make things warmer at the surface and at the tropopause, the “natural” rate of “radiative feedback” energy emission back to space immediately goes up. We get a “natural negative radiative feedback” which cancels out almost all the direct and feedback forcing.

      This is the only time you will see this diagram.

      • I should add that the “Total net imbalance” at the bottom of this diagram is too high. The measured amount is 0.6 W/m2.

        So why all this fuss about direct forcing being 2.3 W/m2 (2016 values), plus feedbacks on top of that of about 2.6 W/m2 (2016), when only 0.6 W/m2 is actually showing up?

        Mostly in this: 0.56 W/m2, shown here in joules; note also that it has taken a big dip lately.

  8. Well done Dr. Ball. You encourage my scepticism of the myriad climate myths by your direct approach to dealing with the issue.

    I’m not a scientist, in that I don’t have a scientific qualification. However, I have used the ‘scientific method’, unwittingly, for many years, and I find it universally misrepresented. Every baby uses the scientific method to learn about the world around them.

    The term has been misappropriated by elitist scientists; the process has somehow been misrepresented as one exclusive to adults.

    I welcome your desire to include those of us with few, or no recognised qualifications, as observers of fact rather than predictors of doom, or nirvana.

    The Western world has been seduced by the science of prediction, the developing world is subject to the science of reality. My belief is, we should be sticking to what we know, rather than what we think we know.

    What we should all be aware of is that historical events, much like financial history, are no predictors of future performance. I had an online debate today with a dyed-in-the-wool alarmist, a retired Geography schoolteacher. He presented supposedly indisputable evidence that geological records demonstrate CO2 is harmful.

    With the best will in the world to any geologists reading this, I countered him with the statement that geology is little more than educated guesswork. We cannot possibly glean from rocks, sediment, ice cores or tree rings estimates of activity, of anything, that are accurate to better than hundreds of years, and, in the distant past, to better than millennia, with any degree of confidence. His case was 450 million years of change in atmospheric CO2 vs. the change in atmospheric CO2 since the 1960s.

    In 450M years, atmospheric CO2 could have fluctuated wildly within short periods, but never have been detected in the geological records. Since the 1960s we at least have empirical records, albeit dodgy ones.

    Averages are, as average suggests. Average science.

    Finally, I find it bizarre that whilst the green movement has been insisting we all improve the planet’s green footprint, NASA’s study of 30 years of its satellite records demonstrates that the planet is doing just what the greens want: greening, by 14% over the last 30 years. An astonishingly positive endorsement of increased CO2, utterly ignored by them.

    Isn’t this what they wanted? Job done!

    However, I offer a note of caution. The alarmists have campaigned their cause on political lines. Sceptics have campaigned it on scientific lines. To date, the alarmists have been successful.

    We can’t rely on Trump. His recent U-turn on Syria demonstrates he can change his mind to suit the circumstances in an instant. Sceptics need to grow up and recognise that climate change is a political issue, not a scientific one.

    • “The Western world has been seduced by the science of prediction, the developing world is subject to the science of reality.”
      I suppose that is the difference between being reactive and being proactive. I suspect that those who are constantly reacting to events are at a disadvantage to those who predicted the event and acted accordingly.

      This is itself a conundrum because in order to act accordingly one must have predicted accordingly. This is where the scientific method is so very useful. It has its limitations though, while it cannot assure correct predictions it can rather quickly dispense with incorrect ones.

      Science’s ability to self correct is also its greatest strength as even well accepted “incorrect predictions” will eventually fall afoul of contradictory observations.

      BTW, random trial and error is not an application of the scientific method. Even a baby will make some sort of prediction to suggest his next “trial”. The scientific method requires you to make a prediction. First you observe a phenomenon or condition. Then you make a prediction: “If I do this, then I expect this to occur.” The next steps are producing experiments to TRY TO FALSIFY your prediction. That is, try to conduct tests where “I do this and do not get the result that was predicted.” If, after numerous trials and tests conducted by numerous other experimenters, your initial prediction still has not been falsified, you may be on to something.

    • HotScot,

      I think I agree with the gist of what you’re trying to convey, though your use of terms is somewhat problematic . .

      “The Western world has been seduced by the science of prediction, the developing world is subject to the science of reality.”

      The “prediction” of catastrophic global warming is being treated as “settled science”, rather than as a scientific hypothesis, and in that sense the Western world has been seduced by the “science of prediction” . . which is to say, for the most part, by the results of computer model-world “tests”, rather than real-world ones.

      Perhaps; seduced by the “technology of prediction” is a less “triggering” way of putting it, among actual scientists ; )

    • Every baby uses the scientific method to learn about the world around them.

      I beg to differ. I have been blessed, or quite likely cursed, with what seems to be an unusually analytical mind. It is immensely frustrating, mainly because I see so many people making inferences and decisions based on very limited and/or poor analysis of situations and data.

      I believe that generally we tend to use intuition far more often than analysis, and this is probably why the Scientific Method was required, and has become so successful. If we don’t follow it rigorously, we typically tend to fall into traps created by our own faith in our intuition.

      What is even more frustrating is that intuition is often correct! This is especially true with intelligent people, I find. The failure comes when it is then applied to everything, because the less likely failures are accepted as correct.

      Proper analysis takes longer and requires more effort, but it is much better at getting correct answers. The main problems occur when you don’t have enough information. In that case, you need to rely on intuition to get a meaningful result, but that tends to break the scientific method unless you can test those results. I think that has happened in climate science, and it is therefore not science.

  9. One thing I really never understood that seems like an obvious check to the models…

    The assumption is that additional CO2 warms the atmosphere slightly, which results in feedback from increased water vapor, which causes 2x-3x additional warming (yes oversimplified I know but bear with me).

    The variables here for the primary drivers are CO2, absolute humidity, and temperature.

    Shouldn’t we be able to take claimed temperature rise due to CO2, the measured CO2, and find the expected change in absolute humidity? Wouldn’t records of absolute humidity show this increase?

    If no correlation can be found there, wouldn’t the AGW argument drop dead in its tracks? There is a ready supply of relative humidity records out there spanning 100+ years, easily converted to absolute humidity.

    I don’t know the magnitude of water vapor change this requires, so maybe it’s too small to measure reliably anyway, but it would seem to be a fairly straightforward check…

    roland
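For anyone wanting to try the check described above, here is a minimal sketch of the relative-to-absolute humidity conversion, using the standard Magnus approximation for saturation vapour pressure (the formula and constants are textbook values, not taken from this thread):

```python
import math

def absolute_humidity(temp_c, rh_percent):
    """Convert relative humidity (%) at temp_c (deg C) to absolute humidity (g/m^3)."""
    # Saturation vapour pressure in hPa (Magnus formula, good for roughly -45..60 C)
    es_hpa = 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))
    e_pa = (rh_percent / 100.0) * es_hpa * 100.0   # actual vapour pressure in Pa
    R_V = 461.5                                    # gas constant for water vapour, J/(kg K)
    rho_kg_m3 = e_pa / (R_V * (temp_c + 273.15))   # ideal-gas vapour density
    return rho_kg_m3 * 1000.0                      # convert kg/m^3 to g/m^3

# Example: air at 20 C and 50% relative humidity holds roughly 8.6 g of water per m^3
print(round(absolute_humidity(20.0, 50.0), 1))
```

Run over a century of station records, this would turn the archived relative-humidity values into the absolute-humidity series the comment asks about.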

    • I think the CO2 gets excited by certain specific low-frequency photons, while the H2O has other low frequencies associated with it. They do not absorb the same frequencies, so with proper filters we can see how much heat the CO2 molecules are radiating versus the H2O. Even from a land station, appropriate filters can detect the energy of the different components of the atmosphere. A little infrared telescope looking up at the atmosphere can tell us how much energy the CO2 and the H2O are emitting; it only needs sensors for a few frequencies. As CO2 is the minority gas, one can measure the amount of energy at the frequencies associated with the CO2 and subtract this from the total infrared energy. A molecule can absorb a photon from whatever direction it arrives, and within a fraction of a millisecond or less the photon is sent out again in some random direction; it has no preferential direction.
      The experiment is rather simple, and the main frequencies for CO2 molecules number about three or four.
      https://en.wikipedia.org/wiki/Absorption_band

  10. Thanks to clouds, the temperature governor is alive and well on planet earth.

    In real estate appraisals the three most important factors to determine the value of a property are: Location, location, location.

    Likewise, in climate modeling the three most important factors to estimate the future climate on earth are: Clouds, clouds, clouds.

    CO2 is a strong greenhouse gas, second only to water vapor in affecting the climate on earth. If CO2 were to double from pre-industrial times, which it will have done in 50 years or so, global temperatures on earth would increase about 0.9 degree Celsius from pre-industrial times, if that were the only factor affecting the greenhouse effect. This corresponds to a radiative forcing of 4.9 W/m2. But water vapor is a stronger greenhouse gas than CO2, and, this is important, they are not orthogonal as defined in chemometrics; that is, the responses from water vapor and CO2 are not independent, and they are only partly additive. Much more here: https://lenbilen.com/2017/04/10/thanks-to-clouds-the-temperature-governor-is-alive-and-well-on-planet-earth/

    • Strong negative feedback effects also must be considered. The real, complex climate system doesn’t operate according to a simple “forcing” function. As you note, clouds are important, and the GCMs can’t do clouds.

      Increasing H2O in the air is liable to increase cloud cover at some elevations and in some locations, such that under certain circumstances, adding CO2 could have a cooling effect.

      Thus instead of the absurd warming of 3.0 to 4.5 degrees C imagined in the anti-science fantasy of the IPCC, the actual warming, if any, is much more liable to fall between 0.0 and 1.5 degrees C, with 1.5 to 3.0 not totally counted out, at least regionally.

      The no-feedback effect, as determined in controlled lab situations rather than in the chaotic climate system, is, for comparison’s sake, about 1.2 degrees C. If net feedbacks, positive and negative, cancel each other out, that would be the expected ECS. The IPCC idiotically looks only at presumed radiative forcing from CO2 and increased H2O, without considering evaporative cooling, clouds blocking sunlight and a host of other important factors.

      IPCC’s cr!minal cabal of conclusion for policy maker fakers should be locked up and the key thrown away.

      • Strong negative feedbacks are pretty much guaranteed to exist. The proof of that is the fact that past warm periods such as the Roman and Medieval warm periods did not result in runaway warming.

        I truly cannot understand why this obvious fact is not used to challenge climate ‘science’.

    • I’m sorry, but I don’t understand what the term “strong” means in this context. My understanding is that water vapour forms around 95% of all greenhouse gases, and CO2 forms around 3%. Does the volume of water vapour give it more ‘strength’? Or is CO2, molecule for molecule, something less than 30 times more potent as a greenhouse gas?

  11. I don’t think such a database of large natural variation helps. Two reasons. 1. We already have one, ranging in time scales from glacial/interglacial, to MWP/LIA, to an apparent ~60-70 year Arctic oscillation (Akasofu 2010, Wyatt and Curry stadium wave)… 2. The CAGW debate is less about science and more about politics. Warminists’ minds are made up and cannot be confused by facts.

    • Ristvan, so the evidence for a global warm period comparable with the modern warm period is ……..?

      Careful: the original hypothesis of the MWP being much more than just a European phenomenon goes back to studies in the 1970s based on….. tree rings. Do you trust them?

      • Bill,

        The global extent of the MWP rests upon abundant, overwhelming evidence from every possible paleoproxy.

        Even back in Lamb (1965), worldwide data from botany (not just tree rings but pollen and preserved plant remains), historical document research and meteorology were combined with records indicating prevailing temperature and rainfall in England around AD 1200 to 1600. Since then this conclusion has only been confirmed by new observations from every continent and ocean.

      • BH, surely you jest. In the NH, the Vikings had many church burials in what is now permafrost. OK, your side says not global. Then explain the concurrent mineral (ikaite) evidence from the Antarctic Peninsula. You lose on global, irrefutable facts. Do raise your game. This was pathetic.

    • ristvan,

      “2. The CAGW debate is less about science and more about politics. Warminists’ minds are made up and cannot be confused by facts.”

      Sure, but the public’s minds are not, as evidenced by the consistent polling of “Climate change” at or near the bottom of long lists of potential concerns, it seems to me. Screw the “warminists” minds, I believe Mr. Ball is advocating . . taking it to the people . .

      “1. We already have one ranging in time scales from glacial/interglacial, to MWP/LIA, to an apparent ~60-70 year Arctic oscillation (Akasofu 2010, Wyatt and Curry stadium wave)…”

      Make that more evident and understandable for John and Susy Q, so to speak, ourselves (rather than fighting through the mass media); that, I think, is what Mr. Ball suggests we attempt.

  12. Well, I’ll take up Dr. Ball’s challenge, and start by shooting down a climate myth perpetrated … by Dr. Ball in this post, namely that Milankovitch cycles “lead to changes in irradiance reaching the Earth” by about 100 W per square metre. If you look at the graph he cites as evidence for this claim, you will see that the y-axis is marked “irradiance at 65 degrees North”, i.e. the irradiance reaching, not “the Earth”, but that specific parallel of latitude. It would be nice if Dr. Ball provided a reference for this graph, so we could get some context. The only reference he gives is to a paper by Berger that doesn’t contain this graph. Milankovitch oscillations greatly affect the distribution of irradiance across the Earth, while hardly affecting total irradiance at all. So if irradiance increases by 100 W per square metre at 65 degrees N, it is decreased by an amount very close to this value at 65 degrees S.

    • Bill H, in terms of the Earth’s total received solar energy, this is fairly steady annually, and the amount hardly changes on astronomical timescales. So there is some very limited truth in what you say. But the Earth’s surface is very different at 65 deg north compared with the surface at 65 deg south. The response of the climate system to these changes in the distribution of insolation is not the same in each hemisphere. In other words, the effect on climate of maximum insolation at 65 N is not matched by the same effect at 65 S half a cycle later. I would also point out that during an insolation maximum at 65 N, the concurrent minimum at 65 S does not act as a counterbalance such that there is zero net global effect on the climate. There definitely is a cyclical effect on climate that accompanies seasonal changes in insolation at 65 N. Dr Ball is entirely correct in pointing this out.

    • Changes in the orbit also impact how long the planet stays in those regions where it receives less energy from the sun vs. those regions where it receives more.

  13. There’s no question that much within the various AR’s has misrepresented the scientific truth. The reason is simple and it’s the conflict of interest that arises as the IPCC requires CAGW to support its existence yet has become the arbiter of what is and what is not climate science by what they publish in their reports. If the IPCC were to acknowledge the scientific truth, they would prove themselves out of existence. Does anyone really think that an entrenched bureaucracy potentially controlling trillions of dollars would promote ideas that falsify their need to exist?

    They certainly blew it with feedback, where strong positive feedback was prominently portrayed as the theoretical basis for CAGW. If indeed the next W/m^2 of forcing increased surface emissions by the 4.4 W/m^2 or so claimed by the IPCC, the 240 W/m^2 of solar forcing that preceded it must contribute at least 1056 W/m^2 to the surface emissions, corresponding to a surface temperature close to the boiling point of water.

    One thing that they will never produce is a plot of sensitivity vs. surface emissions (equivalent BB emissions at the surface temperature). There is no reasonable function that can end at the current total surface emissions and the 4.4 W/m^2 per W/m^2 of forcing claimed by the IPCC unless the sensitivity is highly negative at low temperatures. They obfuscate this by expressing the sensitivity as 0.8 C per W/m^2, but when the average temperature of 288 K is increased by 0.8 C, the required surface emissions increase by about 4.4 W/m^2, which must be replenished by ‘feedback + forcing’, or else the surface cools. One W/m^2 of forcing cannot result in 3.4 W/m^2 of feedback producing a surface-emissions increase of 4.4 W/m^2, unless Bode’s infinite, implicit supply of Joules powering the gain is actually present, which of course it is not.
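The 4.4 W/m^2 figure in the paragraph above follows directly from the Stefan-Boltzmann law, and is easy to verify:

```python
# Check of the feedback arithmetic above: warming a 288 K blackbody surface by
# 0.8 C raises its emission by roughly 4.4 W/m^2 (Stefan-Boltzmann law).
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def bb_emission(t_kelvin):
    """Blackbody emission in W/m^2 at temperature t_kelvin."""
    return SIGMA * t_kelvin ** 4

delta = bb_emission(288.0 + 0.8) - bb_emission(288.0)
print(round(delta, 2))  # about 4.36 W/m^2, i.e. the "about 4.4" quoted above
```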

    Another thing they cannot do is connect the dots between a world without an atmosphere, which is governed strictly by the SB Law, and one with an atmosphere, which at the claimed sensitivity either violates the SB Law or violates COE.

  14. Curiosity drove the cats, now they are all over the shop.
    Good luck herding them.

    Nothing a proper education can’t remedy.
    It’ll take time though.
    They are still curious.

  15. Is this the real reason the space shuttle program has stopped ?
    July 15, 2015
    “Every time the space shuttle is launched, 250 tons of hydrochloric acid is released into the air. With each launch, .25% of the ozone is destroyed. So far, the space shuttle has de­stroyed 10% of the ozone.”
    http://projectcensored.org/4-nasa-space-shuttles-destroy-the-ozone-shield/

    And why the Concorde was decommissioned ?
    ” Recent publications by two physicists from Columbia University, U.S.A., are of vital significance with regard to the question of pollution of the upper atmosphere by the exhaust gases of supersonic transports such as the Concorde.”
    http://jonjayray.tripod.com/concord.html

    “Based on data collected since the 1950s, scientists have determined that ozone levels were relatively stable until the late 1970s. Severe depletion over the Antarctic has been occurring since 1979 and a general downturn in global ozone levels has been observed since the late 1970s.”
    http://www.ozonedepletion.info/education/part3/ozonesources.html

    • The math in the first paragraph didn’t seem to add up. At 0.25% of the ozone destroyed with each shuttle mission, it would take only 40 missions to get to 10%, but there were more shuttle launches than that. It turns out the linked article from 2015 references an old Soviet claim and other articles written in 1990. There were a total of 135 shuttle missions, which would mean 33% of the ozone layer was destroyed by the program.

      The linked article explains that it wasn’t just the shuttle program that caused a problem, but that all solid rocket engines deplete the ozone layer, with each Delta rocket contributing 80% as much damage as a shuttle, and the Titan and Ariane V also near the top of the list. The Delta and Titan have both come in a variety of sizes, but each family has over 300 launches, with the Ariane V contributing another 91. That could very easily translate to a total of 300 launches averaging 50% of the destruction of a shuttle mission – totaling another 33% of the world’s ozone.

      That’s two thirds of the world’s ozone destroyed without even including other varieties of solid rockets. Either the majority of the world’s ozone was destroyed without anybody noticing, or the Soviets (who mainly stuck with liquid fuel designs) just wanted to find a way to discredit the shuttle and western rocket programs. Either way, you should check the basic math in your sources.
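The sanity check above takes only a few lines (the 0.25%-per-launch figure is the claim being tested, not an endorsed number):

```python
# Checking the per-launch ozone figures quoted in the linked article:
# 0.25% of the ozone layer destroyed per shuttle launch, 135 missions in total.
per_launch_pct = 0.25
missions_for_10pct = 10.0 / per_launch_pct          # missions needed to reach the claimed 10%
total_shuttle_missions = 135                        # shuttle program, 1981-2011
implied_loss_pct = total_shuttle_missions * per_launch_pct
print(missions_for_10pct, implied_loss_pct)         # 40.0 33.75
```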

  16. Ok, I have a couple things I wonder about.
    1. Deciduous trees. They soak up the sun in summer and let the sun shine through in winter. Along with this is farmland, with much exposed soil, that soaks up more sunlight in the winter than it does in the summer. Natural and man-made moderators of temperature.

    2. Volcanism. Volcanoes that erupt ash into the atmosphere can have an overall cooling effect. Undersea volcanoes that eject volcano stuff into the water would have a longer-lasting effect, warming the seawater and making it dirty and able to soak up more heat from the sun.

    Stuff like this?

  17. Go for it, Tim! Tired of the Natural Climate Change
    Deniers repeating their Endless BullS hit!

  18. Dr. Ball. Good work, but you need to proof-read it and clean it up a bit.

    For example, two figure 4’s (one after figure 5)

    and this, apparently hanging sentence

    ” The IPCC argue that the Milankovitch Effect………..” (ellipses are mine)

    One thing that really bothers me is the way that “they” calculate global average temperatures from erratically (and in many areas, very sparsely) distributed surface weather stations. Ignoring the fact that global average temperature really doesn’t communicate much useful information, and ignoring adjustments and homogenizations for now – it’s the gridding process that can result in major distortions of reality. Gridding is something we do a lot of in the mineral exploration game with geophysical and geochemical data. We almost never use it to produce averages, but to show trends that we can attribute possible real-world meaning to.

    Gridding involves dividing your project area into equal-sized “cells” and using one or another mathematical technique to place a number inside every cell, whether those cells have data points in them or not. It’s normal to specify a maximum distance from the nearest data point to extend the gridding process, because it becomes painfully obvious that the further you get from actual data points, the more unrealistic the numbers created in the cells are. What usually happens is that, as you approach an area with no data, if the last data point has a higher value than the next-to-last data point, the upward trend between those last two data points continues upwards into the data-free area, producing a totally spurious high.

    When your project area is the whole earth, and your need for an “average” temperature means that every cell has to be filled with a value, you can’t stop the gridding when you get into data-free areas, so all those areas with very sparse weather stations (i.e. most of the globe outside western Europe and North America) are filled with numbers that are really nothing but artifacts of the gridding process.

    And that is the point of what I’m trying to say: by changing the gridding parameters, you can radically affect the resulting distribution of cell values in sparse-data areas. In the exploration game, you tend to choose parameters that give patterns that you think best represent the geological patterns that you’re trying to elucidate. But in climate biz – you have to wonder.

    There are different gridding methods and I won’t go into them because I don’t understand the mathematics that well (actually, I don’t understand it at all!). The point is, gridding is the hidden step between the surface weather station data and the “final product” – the global average temperature that they keep waving in our faces. The details never seem to be disclosed, and minor tweaking of the input parameters can produce significant changes in the final numbers. And when “they” quote temperatures to 3 decimal places of a degree Celsius to get “the warmest April 12th ever”, to what extent is their tweaking going to affect the end result? It might even be unconscious tweaking.

    Perhaps there is a standard method that all climateers use. I wonder? Someone may know. But it’s a part of cli-sci that should be scrutinized (by someone who knows more than I do – not a particularly challenging threshold, eh?).
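    The parameter sensitivity described above is easy to demonstrate. Below is a minimal one-dimensional sketch of inverse-distance-weighted gridding – one common technique among many, with entirely invented station values – showing that a data-free cell still receives a number, and that the number moves when the weighting power changes:

```python
import numpy as np

def idw_grid(station_x, station_vals, cell_x, power=2.0, max_dist=None):
    """Fill grid cells by inverse-distance weighting of station values.

    If max_dist is given, cells farther than max_dist from the nearest
    station are left empty (NaN) instead of being extrapolated."""
    out = np.full(len(cell_x), np.nan)
    for i, x in enumerate(cell_x):
        d = np.abs(station_x - x)
        if max_dist is not None and d.min() > max_dist:
            continue  # data-free cell: leave empty
        w = 1.0 / np.maximum(d, 1e-9) ** power
        out[i] = np.sum(w * station_vals) / np.sum(w)
    return out

# Three stations on a 1000 km transect, with a 600 km data gap at the right end.
stations = np.array([0.0, 100.0, 400.0])
temps = np.array([10.0, 12.0, 15.0])      # invented temperature values
cells = np.linspace(0.0, 1000.0, 11)      # 100 km grid cells

filled = idw_grid(stations, temps, cells)                 # every cell gets a number
limited = idw_grid(stations, temps, cells, max_dist=200.0)
```

With `max_dist` set, the cells far from any station stay empty (NaN); without it, every cell is filled, and rerunning with `power=1.0` gives a different value in the farthest cell even though no data changed – which is the commenter’s point about tweaking gridding parameters.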

  19. The figure (4) indicating the energy balance at the earth’s surface is missing an important component.

    Internally generated heat from radioactive decay in the natural nuclear reactor that we all live on top of is not accounted for.

    See https://www.nature.com/articles/srep37740 (and many more papers out there).

    I know this has been discussed by the IPCC and dismissed as “too small to be measured”. Probably this is not the correct answer. The deep ocean floor would have a much lower temperature if the internal heat were not active. It’s not “residual” heat from the earth’s creation; it is constantly generated heat from below.

    Consider what happens to this internally generated heat when the earth is subjected to gravitational changes.

    • the energy balance at the earth’s surface is missing an important component.
      =============
      Yes, there is no accounting made for potential energy.

      If you neither add nor remove energy from a gas, its temperature decreases as its potential energy increases, and rises as its potential energy decreases, without any change in the total energy of the gas.

      Convection does this all the time: it warms the surface while cooling the atmosphere, without adding or subtracting any energy from the system. The force behind the convection is the differential heating provided by the sun and the rotation of the earth.

      But the reason convection changes the air temperature has nothing to do with energy added or removed; the energy remains the same, only its form changes. The net effect of convection is to create a lapse rate, which makes the surface 32.5 C warmer than it would be otherwise, without any need for GHGs.

  20. The center of mass of the troposphere is at about 5 km. The lapse rate is 6.5 C/km.

    5km x 6.5 C/km = 32.5 C.

    The so-called 33 C radiative GHG effect does not exist, because 32.5 C is already accounted for. It is a result of the conversion between potential energy and kinetic energy during convection.

    Potential energy plus kinetic energy is a constant. Kinetic energy raises temperature, while potential energy does not raise temperature. Thus the lower troposphere is warmer due to convection, as potential energy is converted into kinetic energy, and the upper troposphere is cooler as kinetic energy is converted into potential energy.
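    The arithmetic in the comment above is just the product of its two assumed figures (the 5 km center-of-mass height and the 6.5 C/km lapse rate are the commenter’s round numbers, not measurements):

```python
# The commenter's assumed figures, not measured values.
center_of_mass_km = 5.0      # claimed height of the troposphere's center of mass
lapse_rate = 6.5             # C per km, average environmental lapse rate

surface_excess = center_of_mass_km * lapse_rate
print(surface_excess)        # 32.5
```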

  21. Dr. Ball, where is the conversion between Potential Energy and Kinetic Energy accounted for in Figure 3?

    The problem is that Figure 3 appears to make no allowance for warming/cooling that does not increase/decrease energy. Instead, Figure 3 assumes that warming reflects a net increase in energy, without considering that warming can result from a change of form of energy, without any change in net energy.

    For example, if you decrease the potential energy of a gas while increasing its kinetic energy, the gas will get warmer without any change in overall energy. And similarly, if you decrease the kinetic energy of a gas while increasing its potential energy, the gas will get cooler without any change in net energy.

    Thus, unless Figure 3 accounts for conversion between energy types without any change in total energy, the results are going to be misleading.

  22. Where does Figure 3 account for Potential Energy? There is a box for “Temperature”, which is kinetic energy, but it appears that potential energy is simply ignored.

  23. What would happen if there was no vertical circulation in the Troposphere? The lapse rate would disappear and the troposphere would become all the same temperature top to bottom as a result of conduction. The bottom of the troposphere would cool by 32.5 C along with the surface, and the upper troposphere would warm by a similar amount, after correction for density.

    There is no radiation or increased energy required to explain why the surface is warmer due to convection than it would be otherwise. The warming is a result of the conversion of potential energy into kinetic energy, without any change in the overall energy.

    The problem with Figure 3 is that it fails to account for potential energy. Thus all temperature change is thought to require a “driver” to add or remove energy. However, you can change the temperature of a gas without any change in its overall energy, simply by convection: as the gas moves up it cools, and as it moves down it warms, without any significant change in its total energy.

  24. The GCMs have minimal natural variability. In the absence of an external driver, they basically show a stable climate. That doesn’t match reality at any time-frame, now or 500 years ago.

    I’m currently intrigued by the Stadium Wave as a major source of internal variability, with a quasi-stable period of about 60 years. Marcia Wyatt originated the Stadium Wave theory, and it was the basis of her PhD dissertation. From the dissertation abstract:

    ===
    The atmospheric, lagged-oceanic teleconnection sequence, as identified in the original study, comprises:
    -AMO → +AT → +NAO → +NINO3.4 → +NPO/+PDO → +ALPI → +NHT → +AMO.
    Lags of two to eight years temporally separate these regionally diverse nodal links.
    ===

    The dissertation was completed in 2012. Newer research has added additional nodal links. The research seems to be advanced most recently by Kravtsov and Tsonis, who were part of the PhD advisory team.

    Judith Curry has been a joint author on some of the supporting papers and an outspoken advocate of the theory.

  25. We have all been told about the IPCC. The IPCC uses peer-reviewed papers only, they say.

    The IPCC’s confidence has grown to 95% that more than 50% of the warming since the mid-twentieth century is human-caused.

    John Cook has done that fabulous report on the peer-reviewed papers.

    “(1) Explicitly endorses and quantifies AGW as 50+%”

    https://skepticalscience.com/tcp.php?t=search&s=a&a=&c=&e=1&yf=1991&yt=2011

    That is 65 papers out of 4,011, or 1.6%, according to John Cook et al.

    Just imagine how confident they would be if it got to 3% of the papers.
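    The percentage quoted above follows directly from the two counts in the linked search (taking the commenter’s figures at face value):

```python
# Counts as quoted in the comment above (the commenter's figures).
endorse_and_quantify = 65   # papers explicitly endorsing and quantifying AGW as 50+%
total_rated = 4011          # total papers in the comment's count

share = 100.0 * endorse_and_quantify / total_rated
print(round(share, 1))      # 1.6
```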

  26. too funny..

    All these years and now Ball calls on skeptics to come up with ideas of natural forcing.

  27. Where I live in Lincoln, UK, was close to the maximum extent of the last glaciation, and it is an interesting question to me why it was so cold then but not now. The standard answer is that it is due to Milankovitch cycles, but some use the tilt of the Earth alone in determining relative changes between summer and winter at various latitudes and ignore the curvature of the Earth itself: if the inclination of the Earth with respect to the Sun is decreased, summers would be much cooler but winters not so much warmer. The Earth’s curvature increases as you near the poles; within the Arctic Circle there are six months of sunlight and six months of night, and the curvature only really starts to matter once you go further north than where I am in Lincoln. How the surface temperature of the Earth changes in response to changes in solar radiation is very complicated, and all latitudes need to be considered, not just 65 degrees north. The standard explanation seems to imply that we were warm and were then swallowed up by a massive glacier coming from the north – that our role in the glaciations was passive. But even though at my latitude the solar radiation is always positive at the yearly solar minimum, that does not mean the surface temperature must be above freezing then; it does not even follow that the entire year needs to be above freezing if solar radiation were low enough. So the mid-latitudes are not passive.
    The surface temperature is also affected by height above sea level. During this geological period we have witnessed tectonic plate movements which created the Alps and the Himalayas, for instance. Glaciations could be seen as periods of rapid erosion, in which the rate of erosion exceeds the uplift and cooling created by colliding tectonic plates, leading us back again to the warmer world we have seen in the past.
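    The point that all latitudes, not just 65°N, need considering can be illustrated with the standard daily-mean top-of-atmosphere insolation formula. The sketch below (the chosen latitudes and the present-day solar constant are illustrative assumptions) shows that at the December solstice a mid-latitude site such as Lincoln (~53°N) still receives positive insolation, while 80°N receives none:

```python
import math

S0 = 1361.0  # W/m^2, present-day solar constant (held fixed here)

def daily_mean_insolation(lat_deg, decl_deg):
    """Daily-mean top-of-atmosphere insolation (W/m^2) from the standard
    formula Q = (S0/pi) * (h0*sin(lat)*sin(decl) + cos(lat)*cos(decl)*sin(h0)),
    where h0 is the sunrise hour angle."""
    lat, decl = math.radians(lat_deg), math.radians(decl_deg)
    cos_h0 = -math.tan(lat) * math.tan(decl)
    if cos_h0 <= -1.0:
        h0 = math.pi        # polar day: sun never sets
    elif cos_h0 >= 1.0:
        h0 = 0.0            # polar night: sun never rises
    else:
        h0 = math.acos(cos_h0)
    return (S0 / math.pi) * (h0 * math.sin(lat) * math.sin(decl)
                             + math.cos(lat) * math.cos(decl) * math.sin(h0))

# Daily-mean insolation at the December solstice (declination -23.44 deg):
for lat in (0, 53, 65, 80):   # 53 N is roughly Lincoln, UK
    print(lat, round(daily_mean_insolation(lat, -23.44), 1))
```

This ignores orbital eccentricity and atmospheric effects, but it is enough to show how sharply the seasonal minimum varies with latitude.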

  28. Greenhouse gas theory implies that water vapour raises surface temperature by 33ºC.
    Water vapour reduces the gravitational lapse rate of 9.8ºC/km to 6.5ºC/km on average.
    This results in an increase of potential temperature with height of 3.3ºC/km, compared to the gravitational/dry lapse rate, along which potential temperature does not change when air is moved up or down adiabatically.
    It is water vapour’s ability to radiate warmth from warmer to cooler levels in the atmosphere that reduces the difference between the extremes at either end of the thermal gradient.
    This results in a cooling of surface air by 16.5ºC and a warming at the tropopause by the same amount around the average of -18ºC at 5km high.
    An overall shrinkage of the thermal gradient by 33ºC.

    See this from Monthly Weather Review 1933..
    https://docs.lib.noaa.gov/rescue/mwr/061/mwr-061-03-0061.pdf
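    The chain of figures in the comment above reduces to three lines of arithmetic (all inputs are the comment’s own assumed values, not measurements):

```python
# All inputs are the comment's assumed values.
dry_lapse = 9.8              # C/km, gravitational (dry-adiabatic) lapse rate
env_lapse = 6.5              # C/km, average environmental lapse rate
mid_troposphere_km = 5.0     # height of the -18 C level in the comment

theta_gradient = dry_lapse - env_lapse            # potential-temperature increase with height
half_shift = theta_gradient * mid_troposphere_km  # surface cooling / tropopause warming
total_shrinkage = 2.0 * half_shift                # overall shrinkage of the thermal gradient
print(round(theta_gradient, 1), round(half_shift, 1), round(total_shrinkage, 1))  # 3.3 16.5 33.0
```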

  29. The jet stream over the Atlantic is looking interesting over the next 7 days.
    Because it may show how the splitting of the jet stream could lead to cooling. There will be a run of cold air and low pressure moving over the northern Atlantic and northern Europe. But because of this splitting of the jet, there will also be a run of cool air, with areas of low pressure causing cloud cover and rain, tracking along the Azores area of the Atlantic. I have noticed that this splitting of the jet allows colder air and cloud cover to extend further to the south than would otherwise be the case. The jet stream behaving this way looks like a very good climate cooler to me.

  30. Tim Ball:

    You say

    I am asking anyone who wants to identify data that puts the IPCC claimed human CO2 impact on global temperature in perspective.

    I could cite much, but I choose to nominate the data on cloud cover measured from satellite data by Pinker et al..
    (ref. Pinker, R. T., B. Zhang, and E. G. Dutton (2005), Do satellites detect trends in surface solar radiation?, Science, 308(5723), 850– 854.)

    Good records of cloud cover are very short because cloud cover is measured by satellites that were not launched until the mid-1980s. But it appears that cloudiness decreased markedly between the mid-1980s and late-1990s.

    Over that recent period of less than two decades, the Earth’s reflectivity decreased to the extent that if there were a constant solar irradiance then the reduced cloudiness provided an extra surface warming of 5 to 10 W/sq metre. This is a lot of warming. It is between two and four times the entire warming estimated to have been caused by the build-up of human-caused greenhouse gases in the atmosphere since the industrial revolution. (The UN’s Intergovernmental Panel on Climate Change says that since the industrial revolution, the build-up of human-caused greenhouse gases in the atmosphere has had a warming effect of only 2.4 W/sq metre).

    Richard
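    The “between two and four times” claim above follows from dividing the quoted forcing figures (both taken from the comment, not re-derived here):

```python
# Figures quoted in the comment above (W/m^2), not independently derived.
cloud_forcing_low, cloud_forcing_high = 5.0, 10.0  # claimed extra surface warming from reduced cloud
ghg_forcing = 2.4                                  # IPCC post-industrial GHG forcing figure cited

ratio_low = cloud_forcing_low / ghg_forcing
ratio_high = cloud_forcing_high / ghg_forcing
print(round(ratio_low, 1), round(ratio_high, 1))   # 2.1 4.2
```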

    • See this Figure ( Original from Humlum) from
      http://climatesense-norpag.blogspot.com/2017/02/the-coming-cooling-usefully-accurate_17.html

      Fig. 11: Tropical cloud cover and global air temperature (29)
      The global millennial temperature rising trend seen in Fig. 11 (29), from 1984 to the peak and trend-inversion point in the Hadcrut3 data at 2003/4, is the inverse correlate of the fall in tropical cloud cover from 1984 to the millennial trend change at 2002. The lags in these trends behind the solar activity peak at 1991 (Fig. 10) are 12 and 11 years respectively. These correlations suggest possible teleconnections between the GCR flux, clouds and global temperatures.

      • Norman Page:

        Yes. It really is a ‘bummer’ for advocates of the notion that changes to radiative forcing have direct effect on global temperature. That is why – as I said – I chose to nominate the data on cloud cover measured from satellite data by Pinker et al..

        Richard

  31. Climate is controlled by natural cycles. Earth is just past the 2004 (+/-) peak of a millennial cycle, and the current cooling trend will likely continue until the next Little Ice Age minimum at about 2650. See my Energy and Environment paper at http://journals.sagepub.com/doi/full/10.1177/0958305X16686488
    and an earlier accessible blog version at http://climatesense-norpag.blogspot.com/2017/02/the-coming-cooling-usefully-accurate_17.html
    Here is an excerpt for convenience:
    “Collins et al 2006 (3) discuss the implications for interpreting variations in forcing and response across the multi-model ensemble of coupled Atmosphere-Ocean General Circulation Models (AOGCMs) used in the IPCC AR4 report. Because of the complexity of the processes included in these models, it is necessary to parameterize or simplify these processes. The lack of observational or theoretical constraints has resulted in a diversity of parameterizations for many forcing components of the climate system. Different AOGCMs have different atmospheric profiles. The calculations in the Collins paper omit the effects of stratospheric thermal adjustment to forcing derived by using fixed dynamical heating. The inter-comparison is based upon calculations of the instantaneous changes in clear-sky fluxes when concentrations of the well-mixed greenhouse gases are perturbed. While the relevant quantity for climate change is all-sky forcing, the introduction of clouds greatly complicates the inter-comparison exercise, and therefore clouds are omitted from the Collins RTMIP (The Radiative Transfer Model Inter-comparison Project) study. Collins states: “in many cases, there are substantial discrepancies among the AOGCMs and between the AOGCMs and LBL codes.” Collins concludes: “The reasonable accuracy of AOGCM forcings at Top of Model and the significant biases at the surface together imply that the effects of increased WMGHGs on the radiative convergence of the atmosphere are not accurately simulated.”
    For the atmosphere as a whole, therefore, cloud processes, including convection and its interaction with boundary layer and larger-scale circulation, remain major sources of uncertainty, which propagate through the coupled climate system. Various approaches to improve the precision of multi-model projections have been explored, but there is still no agreed strategy for weighting the projections from different models based on their historical performance, so there is no direct means of translating quantitative measures of past performance into confident statements about the fidelity of future climate projections. The use of a multi-model ensemble in the IPCC assessment reports is an attempt to characterize the impact of parameterization uncertainty on climate change predictions. The shortcomings in the modeling methods, and in the resulting estimates of confidence levels, make no allowance for these uncertainties in the models. In fact, the average of a multi-model ensemble has no physical correlate in the real world.

    The IPCC AR4 SPM report section 8.6 deals with forcing, feedbacks and climate sensitivity. It recognizes the shortcomings of the models. Section 8.6.4 concludes in paragraph 4 (4): “Moreover it is not yet clear which tests are critical for constraining the future projections, consequently a set of model metrics that might be used to narrow the range of plausible climate change feedbacks and climate sensitivity has yet to be developed”

    What could be clearer? The IPCC itself said in 2007 that it doesn’t even know what metrics to put into the models to test their reliability. That is, it doesn’t know what future temperatures will be and therefore can’t calculate the climate sensitivity to CO2. This also begs the further question of what erroneous assumptions (e.g., that CO2 is the main climate driver) went into the “plausible” models to be tested anyway. The IPCC itself has now recognized this uncertainty in estimating CS – the AR5 SPM says in Footnote 16, page 16 (5): “No best estimate for equilibrium climate sensitivity can now be given because of a lack of agreement on values across assessed lines of evidence and studies.” Paradoxically, the claim is still made that the UNFCCC Agenda 21 actions can dial up a desired temperature by controlling CO2 levels. This is cognitive dissonance so extreme as to be irrational. There is no empirical evidence which requires that anthropogenic CO2 has any significant effect on global temperatures.

    The climate model forecasts, on which the entire Catastrophic Anthropogenic Global Warming meme rests, are structured with no regard to the natural 60+/- year and, more importantly, 1,000-year periodicities that are so obvious in the temperature record. The modelers’ approach is simply a scientific disaster and lacks even average common sense. It is exactly like taking the temperature trend from, say, February to July and projecting it ahead linearly for 20 years beyond an inversion point. The models are generally back-tuned for less than 150 years when the relevant time scale is millennial. The radiative forcings shown in Fig. 1 reflect the past assumptions. The IPCC future temperature projections depend in addition on the Representative Concentration Pathways (RCPs) chosen for analysis. The RCPs depend on highly speculative scenarios, principally population and energy source and price forecasts, dreamt up by sundry sources. The cost/benefit analysis of actions taken to limit CO2 levels depends on the discount rate used and allowances made, if any, for the future positive economic effects of CO2 production on agriculture and of fossil fuel based energy production. The structural uncertainties inherent in this phase of the temperature projections are clearly so large, especially when added to the uncertainties of the science already discussed, that the outcomes provide no basis for action or even rational discussion by government policymakers. The IPCC range of ECS estimates reflects merely the predilections of the modellers – a classic case of “Weapons of Math Destruction” (6).

    Harrison and Stainforth 2009 say (7): “Reductionism argues that deterministic approaches to science and positivist views of causation are the appropriate methodologies for exploring complex, multivariate systems where the behavior of a complex system can be deduced from the fundamental reductionist understanding. Rather, large complex systems may be better understood, and perhaps only understood, in terms of observed, emergent behavior. The practical implication is that there exist system behaviors and structures that are not amenable to explanation or prediction by reductionist methodologies. The search for objective constraints with which to reduce the uncertainty in regional predictions has proven elusive. The problem of equifinality – that different model structures and different parameter sets of a model can produce similar observed behavior of the system under study – has rarely been addressed.” A new forecasting paradigm is required.

    • Bruckner8:

      Nobody has done ALL this work already. That is why Ball’s suggestion of a page that would collate all contributions to “this work” deserves consideration.

      Richard

  32. The assumption of “The traditional radiative forcing is computed with all tropospheric properties held fixed at their unperturbed values..” is absurd.

    Think about it… lapse rate? Then atmospheric boundaries? The primary reason for the distinct atmospheric layers is the heat transfer rate in each mode. For example, the troposphere has dominant convective heat transfer, and most of that is latent, as water vapor performs work on the permanent-gas component.

  33. The IPCC quantifies natural forcing (NAT) and natural variability from 1951 to 2010 as zero, with an uncertainty range of 0.1 °C. That amount of natural variation is by far enough to explain the early 20th century warming. The IPCC doesn’t have a clue, but is still very confident:

    “Figure TS.10 | Assessed likely ranges (whiskers) and their midpoints (bars) for warming trends over the 1951–2010 period due to well-mixed greenhouse gases (GHG), anthropogenic forcings (ANT), anthropogenic forcings other than well-mixed greenhouse gases (OA), natural forcings (NAT) and internal variability….”

    “The instrumental record shows a pronounced warming during the first half of the 20th century. Consistent with AR4, it is assessed that the early 20th century warming is very unlikely to be due to internal variability alone. It remains difficult to quantify the contributions to this early century warming from internal variability, natural forcing and anthropogenic forcing, due to forcing and response uncertainties and incomplete observational coverage.” (IPCC, AR5, WGI, page 66)

  34. Stratospheric changes in ozone concentrations, caused by both anthropogenic and volcanic processes, seem to be a big unknown in climate science as to how much additional UV is reaching the surface and its effect on global temperatures. Some volcanoes seem likely to have a long-term warming effect on surface temperatures.

    • Ozone’s half-life in the troposphere is only a few seconds. Its formation is endothermic while its dissociation is exothermic. The UV forming it is but a tiny fraction of total insolation. Ozone and its changes in the stratosphere don’t add up to measurable warming.

      The surface of the earth is cooled by evaporating water. This disrupts the lapse rate and reduces the normally insulative effect of the troposphere. Yes, clouds also increase albedo, but this is secondary to the heat transfer via mass transit.

      Integrate precipitation over the surface. Then compute the enthalpy change required to evaporate that mass of water. Compare to the total insolation energy.
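      The back-of-envelope calculation proposed above can be sketched as follows, using round-number global averages (~1 m/yr of precipitation, a latent heat of 2.26 MJ/kg); these inputs are approximations chosen for scale, not measured values:

```python
# Round-number global averages (assumed, for scale only).
earth_area = 5.1e14            # m^2, surface area of the earth
precip_m_per_yr = 1.0          # global-mean precipitation, ~1 m/yr
water_density = 1000.0         # kg/m^3
latent_heat = 2.26e6           # J/kg, latent heat of vaporization
seconds_per_year = 3.156e7
mean_insolation = 340.0        # W/m^2, global-mean top-of-atmosphere insolation

mass_per_yr = earth_area * precip_m_per_yr * water_density     # kg of rain per year
energy_per_yr = mass_per_yr * latent_heat                      # J needed to evaporate it
latent_flux = energy_per_yr / (earth_area * seconds_per_year)  # W/m^2, time- and area-averaged
print(round(latent_flux, 1), round(latent_flux / mean_insolation, 2))
```

The resulting latent-heat flux of roughly 70 W/m² is about a fifth of the ~340 W/m² global-mean insolation – the same order as the latent-heat term in published surface energy budgets.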

  35. Is there any effort being made to try to break things down to the point where an explanation can be given to a group of regular people? I have a scientific degree and can figure things out and explain things to my friends. I have found that when I get into a discussion with others, I can quickly sort them into two groups by asking them to point to any predictions made in the last 25 years or so which have actually come to pass. Those who immediately lose their minds are the Kool-Aid drinkers, and everyone else is worth continuing to have a discussion with. If there were a video or article, or a set of either, which laid out the case with specific references, then others without scientific degrees would also be able to hold their own in discussions.

  36. Why did glaciers around the world start to retreat in 1850? 100 years BEFORE the IPCC says CO2 could have been the cause? Isn’t the advance of glaciers worldwide prior to 1850 proof that the Little Ice Age was global? Isn’t the retreat of glaciers since 1850 proof that natural climate change is MUCH LARGER than the IPCC estimates?

    https://www.cs.mcgill.ca/~rwest/link-suggestion/wpcd_2008-09_augmented/wp/r/Retreat_of_glaciers_since_1850.htm

    Retreat of glaciers since 1850
    2008/9 Schools Wikipedia Selection. Related subjects: Climate and the Weather

    The retreat of glaciers since 1850, worldwide and rapid, affects the availability of fresh water for irrigation and domestic use, mountain recreation, animals and plants that depend on glacier-melt, and in the longer term, the level of the oceans. Studied by glaciologists, the temporal coincidence of glacier retreat with the measured increase of atmospheric greenhouse gases is often cited as an evidentiary underpinning of anthropogenic (human-caused) global warming. Mid-latitude mountain ranges such as the Himalayas, Alps, Rocky Mountains, Cascade Range, and the southern Andes, as well as isolated tropical summits such as Mount Kilimanjaro in Africa, are showing some of the largest proportionate glacial loss.
    The Little Ice Age was a period from about 1550 to 1850 when the world experienced relatively cooler temperatures compared to the present. Subsequently, until about 1940, glaciers around the world retreated as the climate warmed. Glacial retreat slowed and even reversed, in many cases, between 1950 and 1980 as a slight global cooling occurred. However, since 1980 a significant global warming has led to glacier retreat becoming increasingly rapid and ubiquitous, so much so that some glaciers have disappeared altogether, and the existence of a great number of the remaining glaciers of the world is threatened. In locations such as the Andes of South America and Himalayas in Asia, the demise of glaciers in these regions will have potential impact on water supplies. The retreat of mountain glaciers, notably in western North America, Asia, the Alps, Indonesia and Africa, and tropical and subtropical regions of South America, has been used to provide qualitative evidence for the rise in global temperatures since the late 19th century. The recent substantial retreat and an acceleration of the rate of retreat since 1995 of a number of key outlet glaciers of the Greenland and West Antarctic ice sheets, may foreshadow a rise in sea level, having a potentially dramatic effect on coastal regions worldwide.

  37. Excerpts from the following post:
    All that really matters [in this analysis] is that CO2 lags temperature at ALL measured time scales and does not lead it, which is what I understand the modern data records to indicate on the multi-decadal time scale and the ice core records on a much longer time scale.

    It also does not mean that increasing atmospheric CO2 has no impact on global temperature; rather it means that this impact is quite small.

    What we see in the modern data record is the Net Effect = (ECO2S minus ECS). I suspect that we have enough information to make a rational estimate to bound these numbers, and ECS will be very low. My guess is that ECS is so small as to be practically insignificant.
    Regards, Allan
    https://wattsupwiththat.com/2017/01/24/apocalypse-cancelled-sorry-no-ticket-refunds/comment-page-1/#comment-2406538
    [excerpts]
    I have stated since January 2008 that:
    “Atmospheric CO2 lags temperature by ~9 months in the modern data record and also by ~~800 years in the ice core record, on a longer time scale.”
    {In my shorthand, ~ means approximately and ~~ means very approximately, or ~squared).
    It is possible that the causative mechanisms for this “TemperatureLead-CO2Lag” relationship are largely similar or largely different, although I suspect that both physical processes (ocean solution/exsolution) and biological processes (photosynthesis/decay and other biological processes) play a greater or lesser role at different time scales.
    All that really matters is that CO2 lags temperature at ALL measured time scales and does not lead it, which is what I understand the modern data records to indicate on the multi-decadal time scale and the ice core records on a much longer time scale.
    This does NOT mean that temperature is the only (or even the primary) driver of increasing atmospheric CO2. Other drivers of CO2 could include deforestation, fossil fuel combustion, etc. but that does not matter for this analysis, because the ONLY signal that is apparent in the data is the LAG of CO2 after temperature.
    It also does not mean that increasing atmospheric CO2 has no impact on global temperature; rather it means that this impact is quite small.
    I conclude that temperature, at ALL measured time scales, drives CO2 much more than CO2 drives temperature.
    Precedence studies are commonly employed in other fields, including science, technology and economics.
    Does climate sensitivity to increasing atmospheric CO2 (“ECS” and similar parameters) actually exist in reality, and if so, how can we estimate it? The problem as I see it is that precedence analyses prove that CO2 LAGS temperature at all measured time scales*. Therefore, the impact of CO2 changes on Earth temperature (ECS) is LESS THAN the impact of temperature change on CO2 (ECO2S).
    What we see in the modern data record is the Net Effect = (ECO2S minus ECS). I suspect that we have enough information to make a rational estimate to bound these numbers, and ECS will be very low. My guess is that ECS is so small as to be practically insignificant.
    Regards, Allan
    *References:
    1. MacRae, 2008
    http://icecap.us/images/uploads/CO2vsTMacRae.pdf
    Fig. 1
    https://www.facebook.com/photo.php?fbid=1200189820058578&set=a.1012901982120697.1073741826.100002027142240&type=3&theater
    Fig. 3
    https://www.facebook.com/photo.php?fbid=1200190153391878&set=a.1012901982120697.1073741826.100002027142240&type=3&theater
    2. http://www.woodfortrees.org/plot/esrl-co2/from:1979/mean:12/derivative/plot/uah5/from:1979/scale:0.22/offset:0.14
    3. Humlum et al, January 2013
    http://www.sciencedirect.com/science/article/pii/S0921818112001658
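    The precedence (lead–lag) analysis described above can be illustrated on synthetic data. This is a sketch of the method only – the series below are constructed with a built-in 9-month lag and are not real temperature or CO2 records:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly series: a smoothed "temperature" t, and a CO2
# rate-of-change series built as t delayed by 9 months plus noise.
# These are stand-ins for illustration, not real records.
n, true_lag = 480, 9
white = rng.standard_normal(n + 11)
t = np.convolve(white, np.ones(12) / 12, mode="valid")  # 12-month smoothing
dco2 = np.empty(n)
dco2[true_lag:] = t[:-true_lag]
dco2[:true_lag] = t[0]
dco2 += 0.05 * rng.standard_normal(n)

def best_lag(leader, follower, max_lag=24):
    """Return the lag (in samples) that maximizes the correlation of
    `follower` against `leader` shifted earlier in time."""
    corrs = [np.corrcoef(leader[:n - k], follower[k:])[0, 1]
             for k in range(max_lag + 1)]
    return int(np.argmax(corrs))

print(best_lag(t, dco2))  # recovers the built-in 9-month lag here
```

On real series one would detrend and deseasonalize first; the point is only that a simple lagged-correlation scan recovers a known lead–lag structure.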

  38. “Water vapour in the air, as is well known, decreases the diurnal temperature range at the surface by decreasing the maximum and increasing the minimum temperatures. The maximum temperature is decreased largely (1) by the depletion of solar energy in passing through the water vapour, and (2) by the evaporation of surface moisture. The minimum is increased (1) by the absorption of ground radiation, and (2) by the condensation of water vapour to form dew or frost.
    At levels well above the surface in an atmosphere which is fairly uniformly humid throughout, water vapour presumably should serve to increase the diurnal range because moist air is a much better absorber and radiator than dry air.”

    from the March 1933 Monthly Weather Review…
    https://docs.lib.noaa.gov/rescue/mwr/061/mwr-061-03-0061.pdf

    Most of the night time accumulation of CO2 at the surface is consumed by plants in the first few hours of daylight…..
    http://joannenova.com.au/2013/09/plants-suck-half-the-co2-out-of-the-air-around-them-before-lunchtime-each-day/
