A TSI-Driven (solar) Climate Model

Guest essay by Jeff Patterson

Temperature versus CO2

Greenhouse gas theory predicts a linear relationship between the logarithm of CO2 atmospheric concentration and the resultant temperature anomaly. Figure 1 is a scattergram comparing the Hadcrut4 temperature record to historical CO2 concentrations.

image

Figure 1a – Hadcrut4 temperature anomaly vs. CO2 concentration (logarithmic x-scale); (b) Same as Figure 1 with Gaussian filtering (r=4) applied to temperature data

UPDATE: Thanks to an alert commenter, this graph has now been updated with post-2013 data to the present:

fig1 updated

Figure 1a – Hadcrut4 temperature anomaly vs. CO2 concentration (logarithmic x-scale); (b) Same as Figure 1 with Gaussian filtering (r=4) applied to temperature data

At first glance Figure 1a appears to confirm the theoretical log-linear relationship. However, if Gaussian filtering is applied to the temperature data to remove the unrelated high-frequency variability, a different picture emerges.
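The article does not show the filtering step; here is a minimal, self-contained numpy sketch of Gaussian smoothing (the kernel radius r plays the role of the r=4 used for Figure 1b; the function and variable names are my own):

```python
import numpy as np

def gaussian_smooth(x, r=4.0):
    """Smooth a 1-D series with a normalized, truncated Gaussian kernel."""
    half = int(4 * r)                     # truncate the kernel at ~4 sigma
    t = np.arange(-half, half + 1)
    k = np.exp(-0.5 * (t / r) ** 2)
    k /= k.sum()                          # normalize so the series mean is preserved
    return np.convolve(x, k, mode="same")

# Noisy trend: smoothing suppresses the high-frequency variability
rng = np.random.default_rng(0)
raw = np.linspace(0.0, 1.0, 500) + 0.2 * rng.standard_normal(500)
smooth = gaussian_smooth(raw, r=4)
```

Larger r removes more of the short-term variability, which is exactly the effect visible going from Figure 1a to 1b.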

Figure 1b contradicts the assertion of a direct relationship between CO2 and global temperature. Three regions are apparent where temperatures are flat to falling while CO2 concentrations are rising substantially. Also, a near step-change in temperature occurred while CO2 remained nearly constant at about 310 ppm. The recent global warming hiatus is clearly evident in the flattening of the curve above 380 ppm. These regions of anti-correlation were noted by Professor Judith Curry in her recent testimony before the Senate Subcommittee on Space, Science and Competitiveness:[6]

If the warming since 1950 was caused by humans, what caused the warming during the period 1910 –1945? The period 1910-1945 comprises over 40% of the warming since 1900, but is associated with only 10% of the carbon dioxide increase since 1900. Clearly, human emissions of greenhouse gases played little role in causing this early warming. The mid-century period of slight cooling from 1945 to 1975 – referred to as the ‘grand hiatus’, also has not been satisfactorily explained.

A much better correlation exists between atmospheric CO2 concentration and the variation in total solar irradiance (TSI). Figure 2 shows the TSI reconstruction due to Krivova et al.[2]

image

Figure 2- (a) TSI reconstruction (Krivova); (b) The input driving time series u(t)

When the TSI time series is exponentially smoothed and lagged by 37 years, a near-perfect fit to the CO2 record is exhibited (Figure 3).

image

Figure 3- Logarithmic CO2 concentration vs. lagged and exponentially smoothed variation in TSI (a = .001; G = 6.65e-3; t = 37)

Note that while, in general, correlation does not imply causation, here there is no ambiguity as to cause and effect: the atmospheric concentration of CO2 cannot affect the sunspot number from which the TSI record is reconstructed.

This apparent relationship between TSI and CO2 concentration can be represented schematically by the system shown in Figure 4. As used here, a system is a black box that transforms some input driving function into some output we can measure. The mathematical equation that describes the input-to-output transformation is called the system transfer function. The transfer function of the system in Figure 4 is a low-pass filter whose output is delayed by the lag td1. The driving input u(t) is the demeaned TSI reconstruction shown in Figure 2b. The output v(t) is the time series shown in Figure 3a (blue curve), which closely approximates the measured CO2 concentration (Figure 3a, yellow curve).

image

Figure 4- Laplacian representation of the TSI-to-CO2 concentration transfer function
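Read as a standard block diagram (integrator 1/s with dissipation feedback a1, gain G and delay td1), Figure 4 implies the first-order transfer function below; this is my reading of the diagram, not an equation given in the text:

```latex
V(s) = \frac{G\,e^{-s\,t_{d1}}}{s + a_1}\,U(s)
```

i.e. a single-pole low-pass response combined with a pure delay.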

In Figure 4, the block labeled 1/s is the Laplacian representation of a pure integration. Along with the dissipation feedback factor a1 it forms what system engineers call a “leaky integrator”. It is mathematically equivalent to the exponential smoothing function often used in time series analysis. The block labeled td1 is the time lag and G is a scaling factor to handle the unit conversion.
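The claimed equivalence is easy to check numerically. A sketch, assuming a forward-Euler discretization of the leaky integrator (my assumption; the article does not specify one): scaling the integrator output by a reproduces exponential smoothing with smoothing constant alpha = a.

```python
import numpy as np

def leaky_integrator(u, a):
    """Discrete 1/s with dissipation feedback: v[i] = (1-a)*v[i-1] + u[i]."""
    v = np.zeros(len(u))
    for i in range(1, len(u)):
        v[i] = (1 - a) * v[i - 1] + u[i]
    return v

def exp_smooth(u, alpha):
    """Classic exponential smoothing: s[i] = (1-alpha)*s[i-1] + alpha*u[i]."""
    s = np.zeros(len(u))
    for i in range(1, len(u)):
        s[i] = (1 - alpha) * s[i - 1] + alpha * u[i]
    return s

rng = np.random.default_rng(0)
u = rng.standard_normal(500)
v = leaky_integrator(u, 0.05)
s = exp_smooth(u, 0.05)
# a * (leaky integrator output) follows exactly the exponential-smoothing recursion
```

Multiplying the leaky-integrator recursion through by a shows the two recursions are identical, which is the sense in which the block of Figure 4 "is" an exponential smoother up to a gain.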

In a plausible physical interpretation of the system, the dissipative integrator models the ocean heat content, which accumulates variations in TSI: warming when TSI rises above some equilibrium value and cooling when it falls below. As the ocean warms, CO2 becomes less soluble in it, resulting in out-gassing of CO2 to the atmosphere.

The fidelity with which this model replicates the observed atmospheric CO2 concentration has significant implications for attributing the source of the rise in CO2 (and, by inference, the rise in global temperature) observed since 1880. There is no statistically significant signal of an anthropogenic contribution in the residual plotted in Figure 3c. Thus the entirety of the observed post-industrial rise in atmospheric CO2 concentration can be directly attributed to the variation in TSI, the only forcing applied to the system, whose output accounts for 99.5% (r² = .995) of the observational record.

How, then, does this naturally occurring CO2 impact global temperature? To explore this we will develop a system model which, when combined with the CO2-generating system of Figure 4, can replicate the decadal-scale global temperature record with impressive accuracy.

Researchers have long noted the relationship between TSI and global mean temperature.[5] We hypothesize that this too is due to the lagged accumulation of oceanic heat content, the delay being perhaps the transit time of the thermohaline circulation. A system model that implements this hypothesis is shown in Figure 5.

image

Figure 5- System model

As before, the model parameters are the dissipation factor a2, which determines the energy discharge rate; the input offset constant Ci, representing the equilibrium TSI value; the scaling constants G1 and G2, which convert their inputs to a contributive ΔT; and the time lag td2. The output offset Co represents the unknown initial system state and is set to center the modeled output on the arbitrarily chosen zero point of the Hadcrut4 temperature anomaly. It has no impact on the residual variance, which is assumed zero-mean.

The driving function u(t) is again the variation in solar irradiance (Figure 2b). The second input function v(t) is the output of the model of Figure 4 which was shown to closely approximate the logarithmic CO2 concentration. Thus the combined system has a single input u(t) and a single output- the predicted temperature anomaly Ta(t). Once the two systems are combined the CO2 concentration becomes an internal node of the composite system.

Y(t) represents other internal and external contributors to the global temperature anomaly, i.e. the natural variability of the climate system. The goal is to find the system parameter values that minimize the variance of Y(t) on a decadal time scale.

Natural Variability

Natural variability is a catch-all phrase encompassing variations in the observed temperature record that cannot be explained and therefore cannot be modeled. It includes components on many different time scales. Some are due to the complex internal dynamics of the climate system and to random variations, and some to the effects of feedbacks and other forcing agents (clouds, aerosols, water vapor, etc.) about which there is great uncertainty.

When creating a system model it is important to avoid the temptation to sweep too much under the rug of natural variation. On the other hand, in order to accurately estimate the system parameters affecting the longer-term temperature trends, it is helpful to remove as much of the short-term, noise-like components as practicable, especially since these unrelated short-term variations are of the same order of magnitude as the effect we are trying to analyze. The removal of these short-term spurious components is referred to as data denoising. Denoising must be carried out with the time scale of interest in mind, to ensure that significant contributors are not discarded. Many techniques are available for this purpose, but most assume that the underlying process that produced the observed data exhibits stochastic stationarity, in essence a requirement that the process parameters remain constant over the observation interval. As we show in the next section, the climate system is not even weak-sense stationary, but rather cyclostationary.

Autocorrelation

Autocorrelation is a measure of how closely a lagged version of a time series resembles the unlagged data. In a memoryless system, correlation falls abruptly to zero with increasing lag. In systems with memory, the correlation decreases gradually. Figure 6a shows the autocorrelation function (ACF) of the linearly detrended, unfiltered Hadcrut4 global temperature record. Instead of the correlation gradually decreasing, we see that it cycles up and down in a quasi-periodic fashion. A system that exhibits this characteristic is said to be cyclostationary. Despite the nomenclature, a cyclostationary process is not stationary, even in the weak sense.
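As an illustration (my own minimal implementation, not the article's code), the sample ACF of a synthetic 70-year cycle shows exactly this quasi-periodic signature:

```python
import numpy as np

def acf(x, max_lag):
    """Sample (biased) autocorrelation of a 1-D series out to max_lag."""
    x = np.asarray(x, float) - np.mean(x)
    c0 = x @ x
    return np.array([x[: len(x) - k] @ x[k:] / c0 for k in range(max_lag + 1)])

t = np.arange(700)
cycle = np.sin(2 * np.pi * t / 70)   # a synthetic 70-"year" oscillation
r = acf(cycle, 140)
# r cycles up and down: near +1 at lags 0, 70 and 140, near -1 at lags 35 and 105
```

For a memoryless (white) series the same function would drop to roughly zero for every nonzero lag.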

image

Figure 6- (a) Autocorrelation function of linearly detrended Hadcrut4, (b) Power spectral density

With linear detrending, significant correlation appears at two lags, 70 years and 140 years. However, the positions of the correlation peaks are highly dependent on the order of the detrending polynomial.

Power spectral density (the spectrum) is the discrete Fourier transform of the ACF; it is plotted in Figure 6b. It shows significant periodicity at 71 and 169 years, but again the extracted period varies with the order of the detrending polynomial (linear, parabolic, cubic, etc.) and also, slightly, with the data endpoints selected.
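The same periodicity extraction can be sketched with a plain periodogram; |DFT|² of the detrended series is equivalent, up to normalization, to the DFT of the ACF (Wiener-Khinchin). Synthetic data with a known 70-year cycle, illustrative only:

```python
import numpy as np

n = 700
t = np.arange(n)
x = 0.005 * t + np.sin(2 * np.pi * t / 70)   # linear trend + 70-"year" cycle

# Linearly detrend, then locate the spectral peak
x = x - np.polyval(np.polyfit(t, x, 1), t)
psd = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(n, d=1.0)
k = 1 + int(np.argmax(psd[1:]))              # skip the zero-frequency bin
period = 1.0 / freqs[k]                      # recovers the 70-year period
```

With real data the detrending choice shifts power among the low-frequency bins, which is why the extracted period depends on the polynomial order.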

Denoising the Data

From the above it is apparent that we cannot assume a particular trend shape to reliably isolate the "main" decadal-scale climatic features we hope to model. Nor can we assume the period of the oscillatory component(s) remains fixed over the entire record. This makes denoising a challenge. However, a technique has been developed [1] for denoising data that makes no assumptions regarding the stationarity of the time record; it combines wavelet analysis with principal component analysis to isolate quasi-periodic components. A single parameter (wavelet order) determines the time scale of the retained data. The implementation used here is the wden function in Matlab™ [8]. The denoised data, using a level-4 wavelet as described in [1], is plotted as the yellow curve in Figure 7.
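Matlab's wden is not shown here; as a rough, self-contained illustration of the idea (a crude Haar-wavelet stand-in with hard thresholding, not the multivariate wavelet-plus-PCA method of [1]):

```python
import numpy as np

def haar_denoise(x, level=4):
    """Crude stand-in for Matlab's wden: Haar wavelet decomposition,
    hard-thresholding of the detail coefficients, reconstruction."""
    x = np.asarray(x, float)
    n = len(x)
    m = 1 << (n - 1).bit_length()              # pad length: next power of two
    approx = np.concatenate([x, np.full(m - n, x[-1])])
    details = []
    for _ in range(level):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)
        details.append(d)
        approx = a
    sigma = np.median(np.abs(details[0])) / 0.6745   # noise level estimate
    thr = sigma * np.sqrt(2 * np.log(m))             # universal threshold
    details = [np.where(np.abs(d) > thr, d, 0.0) for d in details]
    for d in reversed(details):
        up = np.empty(2 * len(approx))
        up[0::2] = (approx + d) / np.sqrt(2)
        up[1::2] = (approx - d) / np.sqrt(2)
        approx = up
    return approx[:n]

# Demo: a slow cycle buried in noise
rng = np.random.default_rng(0)
t = np.arange(512)
truth = np.sin(2 * np.pi * t / 512)
noisy = truth + 0.1 * rng.standard_normal(512)
denoised = haar_denoise(noisy, level=4)
```

The level parameter plays the same role as in wden: it sets the shortest time scale that survives the thresholding.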

image

Figure 7-Hadcrut4 with wavelet denoising

The resulting denoised temperature profile is nearly identical to that derived by other means (Singular Spectrum Analysis, Harmonic Decomposition, Principal Component Analysis, Loess Filtering, Windowed Regression etc.)

Figure 8a compares the autocorrelation of the denoised data (red) to that of the raw data (blue). We see that the denoising process has not materially affected the stochastic properties over the time scales of interest. The narrowness of the central lobe of the residual ACF (Figure 8b) shows that we have not removed any temperature component related to the climate system memory.

image

Figure 8- (a) ACF of the denoised data (original in blue); (b) ACF of the residual

The denoised data (Figure 7) shows a long-term trend and a quasi-periodic oscillatory component. Taking the first difference of the denoised data (Figure 9) shows how the trend (i.e. the instantaneous slope) has evolved over time.

image

Figure 9- Instantaneous slope estimate from the first difference of the denoised Hadcrut4 record

There are several interesting things of note in Figure 9. The period is relatively stable while the amplitude of the oscillation grows slightly. The trend peaked at .23 ⁰C/decade circa 1994 and has been decreasing since; it currently stands at .036 ⁰C/decade. Note also that the mean slope is non-zero (.05 ⁰C/decade) and that the trend itself trends upward with time. This implies the presence of a system integration, as otherwise the differentiation would remove the trend of the trend.
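The slope estimate is just a scaled first difference; a toy reconstruction on synthetic data (illustrative numbers, not the Hadcrut4 record):

```python
import numpy as np

# Annual series: a 0.05 C/decade linear trend plus a 70-year cycle
t = np.arange(140)
temp = 0.005 * t + 0.1 * np.sin(2 * np.pi * t / 70)

# First difference ~ instantaneous slope in C/year; x10 gives C/decade
slope = 10.0 * np.diff(temp)
mean_trend = slope.mean()   # close to the built-in 0.05 C/decade
```

The cycle averages out of the mean of the differences, leaving the underlying trend, just as the non-zero mean slope does in Figure 9.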

A time series trend does not necessarily foretell how things will evolve in the future. The trend estimated from Figure 9 in 1892 would have predicted cooling at a rate of .6 degrees per century, while just 35 years later it would have predicted 1.5 degrees per century of warming. Both projections would have been wildly off base. Nor is there justification in assuming the long-term trend to be some regression on the slope. Without knowledge of the underlying system, one has no basis on which to decide the proper form of the regression. Is the long-term trend of the trend linear? Perhaps, but it might just as plausibly be a section of a low-frequency sine wave or a complementary exponential, or perhaps it is just integrated noise giving the illusion of a trend. To sort things out we need to approximate the system which produced the data. For this purpose we will use the model shown in Figure 5 above.

Model Parametrization

As noted, the composite system comprises two sub-systems. The first (Figure 4) replicates the atmospheric CO2, whose effect on temperature is assumed linear with scaling factor G1. The parameters of the first system were set to give a best-fit match to the observational CO2 record (see Figure 3).

The remaining parameters were optimized using a three-step process. First, the dissipation factor a2 and time delay td2 were optimized to minimize the least-squares error (LSE) of the model output ACF as compared to the ACF of the denoised data (Figure 10, lower left), using a numerical method [7] guaranteed to find the global minimum. In this step the output and target ACFs are both calculated from the demeaned rather than detrended data. This eliminates the dependence on the regression slope and, since the ACF is independent of scaling and offset, allows the routine to optimize these parameters independently. In the second step, the scaling factors G1, G2 are found by minimizing the residual LSE using the parameters found in step one. Finally, the input offset Ci is found by solving the boundary condition to eliminate the non-physical temperature discontinuity. The best-fit parameters are shown in Table 1. The results (Figure 10) correlate well with the observational time series (r = .984).
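Step one can be sketched on synthetic data. This toy version substitutes a simple grid search for the Nelder-Mead routine of [7] (my substitution), fitting the dissipation factor of a leaky integrator so that its output ACF matches a target ACF:

```python
import numpy as np

def leaky(u, a):
    """Discrete leaky integrator (assumed forward-Euler form)."""
    v = np.zeros(len(u))
    for i in range(1, len(u)):
        v[i] = (1 - a) * v[i - 1] + u[i]
    return v

def acf(x, m):
    """Sample autocorrelation out to lag m."""
    x = x - x.mean()
    c0 = x @ x
    return np.array([x[: len(x) - k] @ x[k:] / c0 for k in range(m + 1)])

rng = np.random.default_rng(1)
u = rng.standard_normal(2000)
target = acf(leaky(u, 0.05), 50)   # stand-in for the "observed" ACF; true a = 0.05

# Minimize the least-squares error between model-output ACF and target ACF
grid = np.linspace(0.01, 0.2, 96)
errs = [np.sum((acf(leaky(u, a), 50) - target) ** 2) for a in grid]
a_best = grid[int(np.argmin(errs))]
```

Because the ACF is insensitive to scaling and offset, this step pins down only the dissipation factor, mirroring the decoupling described above.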

image

Figure 10- Modeled results versus observation


 

Dissipation Factor   a1   .006
Dissipation Factor   a2   .051
Scaling Parameter    G1   .0176
Scaling Parameter    G2   .0549
CO2 Lag (years)      td1  37
TSI Lag (years)      td2  84
Input Offset (W/m2)  Ci   -.045
Output Offset (K)    Co   .545

Table 1- Best fit model parameters

The error residual (upper right) remains within the specified data uncertainty (±.1 ⁰C) over virtually all of the 165-year observation interval. The model output replicates most of the oscillatory component that heretofore has been attributed to the so-called Atlantic Multidecadal Oscillation (AMO). As shown in the detailed plots of Figure 11, the model output aligns closely in time with all of the major breakpoints in the slope of the observational data, and replicates the decadal-scale trends of the record (the exception being a 10-year period beginning in 1965), including the recent hiatus and the so-called ‘grand hiatus’ of 1945-1975.

image

Figure 11- Modeled versus Hadcrut4 (detailed)

Figure 12 plots the scaled second difference of the denoised data against the model residual. The high degree of correlation suggests an internal feedback sensitive to the second derivative of temperature. That such an internal dynamic can be derived from the modeled output provides further evidence of the model’s validity. Further investigation of an enhanced model that includes this dynamic will be undertaken.

image

Figure 12- Scaled, second difference of the denoised Hadcrut4 temperature anomaly (gold) vs. model residual

 

Climate Sensitivity to CO2

The transient climate sensitivity to CO2 atmospheric concentration can be obtained from the model by running the simulation with G2 set to zero, giving the contribution to the temperature anomaly from CO2 alone (Figure 13a).

image

Figure 13- Contribution to temperature anomaly due to CO2 (left); Regression on CO2 concentration (right)

A linear regression on the modeled temperature anomaly (with G2 = 0) versus the logarithmic CO2 concentration (Figure 13b) shows a best-fit slope of 1.85, yielding an estimated transient climate sensitivity to doubled CO2 of 1.28 ⁰C. Note, however, that even assuming the model is relevant, the issue of climate sensitivity is moot unless and until an anthropogenic contribution to the CO2 concentration becomes detectable.
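The arithmetic behind the 1.28 ⁰C figure works out if the regression slope is taken against the natural logarithm of CO2 (my inference from the numbers; the article does not state the log base):

```python
import math

slope = 1.85                  # best-fit slope vs. ln(CO2), Figure 13b
tcr = slope * math.log(2.0)   # warming per CO2 doubling, in degrees C
# round(tcr, 2) -> 1.28
```

A doubling adds ln(2) to the natural log of concentration, so the per-doubling response is simply slope times ln(2).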

Discussion

These results are in line with general GHG theory, which postulates CO2 as a significant contributor to the post-industrial warming, but are in direct contradiction to the notion that human emissions have thus far contributed significantly to the observed concentration. In addition, the derived TCR implies a mechanism that reduces the climate sensitivity to CO2 to a value below the theoretical no-feedback forcing, i.e. the feedback appears to be negative. Other inferences are that the observed cyclostationarity is inherent in the TSI variation and not a climate-system dynamic (because a single-pole response cannot produce an oscillatory component), and that, at least over the short instrumental time period, the climate system as a whole can be modeled as a linear, time-invariant system, albeit with significant time lag.

In a broader context, these results may contain clues to the underlying climate dynamics that those with expertise in these systems should find valuable if they are willing to set aside preconceived notions as to the underlying cause. This model, like all models, is nothing more than an executable hypothesis and as Professor Feynman points out, all scientific hypotheses start with a guess. The execution of a hypothesis, either by solving the equations in closed form or by running a computer simulation is never to be confused with an experiment. Rather a simulation provides the predicted ramifications of the hypothesis which falsify the hypothesis if the predictions do not match empirical observations.

An estimate of future TSI is required in order for this model to predict how global temperature will evolve. Some models of this are in development by others, and I hope to provide a detailed projection in a future article. In the meantime, due to the inherent system lag, we can get a rough idea over the short term. TSI peaked in the early 80s, so we should expect CO2 concentrations to peak some 37 years later, i.e. a few years from now. Near the start of the next decade, CO2 forcing will dominate, and thus we would expect temperatures to flatten and begin to fall as this forcing decreases. Between now and then we should expect a modest increase. This no doubt will be heralded as proof that AGW is back and that drastic measures are required to stave off the looming catastrophe.

Comment on Model Parametrization

It is important to understand the difference between curve fitting and model parametrization. The output of a model is the convolution of its input and the model’s impulse response, which means that the output at any given point in time depends on all prior inputs, each of which is shaped the same way by the model parameter under consideration. This is illustrated in Figure 14. The input u(t) has been decomposed into individual pulses and the system response to each pulse plotted individually. Each input pulse causes a step response that decays at a rate determined by the dissipation rate, set to .05 on the left and .005 on the right. The output at any point is the sum of each of these curves, shown in the lower panels. The gain factor G simply scales the result and does not affect the correlation with the target function. Thus, unlike polynomial regression, it is not possible to fit an arbitrary output curve given a specified forcing function u(t). In the models of Figures 4 and 5 it is only the dissipation factor (and, to a small extent in the early output, the input constant) that determines the functional “shape” of the output. The scaling, offset and delay do not affect correlation and so are not degrees of freedom in the classical sense.
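This superposition argument can be checked directly: the recursive leaky-integrator output equals the convolution of the input with a decaying impulse response, and only the dissipation factor sets the decay shape. A minimal sketch with a single input pulse, using the two dissipation values from Figure 14:

```python
import numpy as np

def leaky(u, a):
    """Leaky integrator: each input pulse launches a decaying step response."""
    v = np.zeros(len(u))
    for i in range(1, len(u)):
        v[i] = (1 - a) * v[i - 1] + u[i]
    return v

n = 400
u = np.zeros(n)
u[10] = 1.0                              # a single input pulse
fast = leaky(u, 0.05)                    # decays quickly  (left panels)
slow = leaky(u, 0.005)                   # decays slowly   (right panels)

# Same output via explicit convolution with the impulse response (1-a)**k
h = (1 - 0.05) ** np.arange(n)
conv = np.convolve(u, h)[:n]
```

For a general input the output is the sum of one such decaying response per input sample, which is exactly the superposition plotted in Figure 14.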

Figure14

Figure 14- Illustration of convolution for a=.05 (left) and .005 (right)


References:

1) Aminghafari, M.; Cheze, N.; Poggi, J-M. (2006), “Multivariate de-noising using wavelets and principal component analysis,” Computational Statistics & Data Analysis, 50, pp. 2381–2398.

2) N.A. Krivova, L.E.A. Vieira, S.K. Solanki (2010).Journal of Geophysical Research: Space Physics, Volume 115, Issue A12, CiteID A12112. DOI:10.1029/2010JA015431

3) Ball, W. T.; Unruh, Y. C.; Krivova, N. A.; Solanki, S.; Wenzler, T.; Mortlock, D. J.; Jaffe, A. H. (2012) Astronomy & Astrophysics, 541, id.A27. DOI:10.1051/0004-6361/201118702

4) K. L. Yeo, N. A. Krivova, S. K. Solanki, and K. H. Glassmeier (2014) Astronomy & Astrophysics, 570, A85, DOI: 10.1051/0004-6361/201423628

5) For a summary of many of the correlations between TSI and climate that have been investigated see The Solar Evidence (http://appinsys.com/globalwarming/gw_part6_solarevidence.htm)

6) STATEMENT TO THE SUBCOMMITTEE ON SPACE, SCIENCEAND COMPETITIVENESS OF THE UNITED STATES SENATE; Hearing on “Data or Dogma? Promoting Open Inquiry in the Debate Over the Magnitude of Human Impact on Climate Change”; Judith A. Curry, Georgia Institute of Technology

7) See Numerical Optimization from Wolfram. In particular, the NMinimize function using the “NelderMead” method.

8) See wden from MathWorks Matlab™ documentation.

Data:

Hadcrut4 global temperature series:

Available at https://climexp.knmi.nl/data/ihadcrut4_ns_avg_00_1850:2015.dat

Krivova TSI reconstruction:

Available at http://lasp.colorado.edu/home/sorce/files/2011/09/TSI_TIM_Reconstruction.txt

CO2 data

Available at http://climexp.knmi.nl/data/ico2_log.dat

566 thoughts on “A TSI-Driven (solar) Climate Model”

  1. In support of your results:

    1. changes in atmospheric CO2 not related to the rate of emissions
    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2642639

    2. rate of warming not related to the rate of emissions
    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2662870

    3. rate of ocean acidification not related to the rate of fossil fuel emissions
    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2669930

    4. uncertainty in natural flows too high to detect fossil fuel emissions in the IPCC carbon budget
    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2654191

    5. the much-hyped correlation between cumulative emissions and surface temperature is spurious
    http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2725743

    • Jamal,

      Sorry, but that doesn’t hold. Your first reference shows the following sentence:
      A statistically significant correlation between annual anthropogenic CO2 emissions and the annual rate of accumulation of CO2 in the atmosphere over a 53-year sample period from 1959-2011 is likely to be spurious because it vanishes when the two series are detrended.

      By detrending you simply removed the influence of human emissions out of the equation! The remainder is only the noise caused by the influence of fast temperature variations on (tropical) vegetation, with a high correlation but hardly any (even negative!) influence on the CO2 trend: vegetation is an increasing net sink for CO2…

      • “Sorry, but that doesn’t hold.”
        Whether it holds or not depends completely on the variance of the uncorrelated components. The figures below compare two identical exponential functions with different added Brownian noise (so that the variance keeps pace with the exponential). The adjusted ANOVA stats are given for sigma (the std dev of the noise prior to integration) = .01, .05, .1 and 1. The top of each panel shows the two functions and their scattergram. The bottom panel shows the same for the linearly detrended data.

        As you can see, the correlation survives detrending for sigma < .1. By eyeball, the CO2 vs. emissions regression looks a lot closer to the first plot than the last.

        The first paper is interesting. It confirms two results presented here: no correlation between human emissions and CO2 concentrations, and the slope of the regression of CO2 vs. temperature (he gets 1.88, I get 1.85).

      • Jeff Patterson,

        The correlation between temperature and the CO2 rate of change survives detrending because most of the trend is not caused by temperature (it has some influence, but very limited), while temperature causes almost all of the variability in the CO2 rate of change. That is mainly the reaction of tropical plants to ocean temperatures (El Niño) and volcanic events (Pinatubo). That vegetation is responsible for most of the variability is visible in the opposite CO2 and δ13C changes:

        If the oceans were responsible, CO2 and δ13C changes would parallel each other.

        The problem for the trend is that vegetation is a proven net -increasing- sink for CO2 since at least 1990:
        http://science.sciencemag.org/content/287/5462/2467.short

        That doesn’t prove that humans are the main cause, but it definitively proves that the trend and the variability around the trend are caused by different processes: the variability is certainly heavily influenced by temperature, but the trend may or may not be caused by temperature, and in any case by a different process than what caused most of the variability.

        The variability is also peanuts compared to the trend, even with an overblown 5 ppmv/K short term reaction of CO2 to temperature variability in Wood for Trees: +/- 1.5 ppmv around a trend of 70+ ppmv 1959-2012.

        Human emissions show very little variability, not even detectable with the current accuracy in the atmosphere after detrending, and thus show no correlation with the variability in the rate of change. Detrending the CO2 rate of change thus effectively removes any influence of human emissions…

        As human emissions are about twice what remains in the atmosphere and fit all observations, there is little doubt about what is cause and effect in this case…

      • @Ferdinand Engelbeen February 10, 2016 at 9:54 am “but temperature causes almost all of the variability in the CO2 rate of change.”

        I disagree. First let’s examine the reasons for the de-correlations shown in fig1 of the original post, repeated in the upper left panel of the plot below.

        Comparing the two graphs in the top row, we conclude that the large de-correlation near x=1.06 in the upper left panel can be attributed to TSI variance, which has been removed in the right-hand graph by subtracting the modeled TSI contribution from the de-noised Hadcrut4 data. The lower right graph plots the modeled temperature anomaly vs. the post-lag CO2 concentration converted to power density, assuming 3.7 W/m2 for 2x CO2. We conclude that the oscillatory de-correlation that remains in the upper right is due to the 11-year CO2 forcing lag, which has been removed in the final plot (lower right).

        Now let’s look at the rate of change in the anomaly vs rate of change in CO2, again with the 11 year lag removed

        This shows clearly that on this multi-year time scale 1) temperature lags CO2 (remember the lag has been removed in the plot by shifting it by 11 years) and 2) temperature follows the change in CO2 forcing immediately when the forcing arrives (after an 11-year delay). This would appear to contradict the long CO2 residency time meme.

        It could be that on the shorter time scales you are looking at temperature can cause variance but on longer time scales it doesn’t appear to be so.

        Regards,
        JP

    • Human emissions growth fell (went negative) at the time of the global financial crisis (GFC), but the global growth rate of atmospheric CO2 concentration remained positive. Human emissions do not drive atmospheric levels of CO2.

      The IPCC provides a conversion factor that enables a direct comparison in ppm when given GtC:

      2.12 GtC yr–1 = 1 ppm

      From:
      7.3.2 The Contemporary Carbon Budget – IPCC
      https://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch7s7-3-1-3.html

      Emissions data source:

      Annual Global Carbon Emissions
      https://www.co2.earth/global-co2-emissions

      Growth declined from 2008 to 2009 due to the global financial
      crisis (GFC) driven recession:

      2008 9.666 GtC
      2009 9.567 GtC
      Change: -0.099 GtC = -0.047 ppm (-0.099 GtC ÷ 2.12 GtC per ppm)

      This human emission change can then be directly compared to the global
      atmospheric level change from ESRL:

      Annual Mean Global Carbon Dioxide Growth Rates
      http://www.esrl.noaa.gov/gmd/ccgg/trends/global.html

      2006 1.74
      2007 2.11
      2008 1.77
      2009 1.67
      2010 2.39
      2011 1.69

      2009 -0.047 ppm – human emissions declined
      2009 +1.67 ppm – global levels increased

      Negative growth cannot drive positive growth.

      • “2009 -0.047 ppm – human emissions declined
        2009 +1.67 ppm – global levels increased”

        Elementary fallacy here. The second figure is a growth rate, ppm/year. The first is a change of growth rate, ppm/year/year. The figure that corresponds to 1.67 is 9.567/2.12 = 4.51 ppm/year. It’s bigger because of the airborne fraction.
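The unit bookkeeping at issue can be laid out explicitly, using the 2.12 GtC = 1 ppm conversion quoted upthread (a sketch of the arithmetic only):

```python
GTC_PER_PPM = 2.12   # IPCC AR4 conversion: 2.12 GtC corresponds to 1 ppm

emissions_2008 = 9.666 / GTC_PER_PPM       # flow in 2008, ppm/year (~4.56)
emissions_2009 = 9.567 / GTC_PER_PPM       # flow in 2009, ppm/year (~4.51)
change = emissions_2009 - emissions_2008   # change of flow, ppm/year/year

# The flow (~4.51 ppm/yr) is the quantity comparable to the 1.67 ppm/yr
# atmospheric rise; the change of flow (~-0.047 ppm/yr/yr) is not.
```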

      • richardcfromnz,

        As Nick Stokes already said: the growth rate of human emissions declined and so did the growth rate of CO2 in the atmosphere, but both still were positive, so CO2 still increased in the atmosphere, albeit at a slower pace:

      • Nick Stokes

        >”Elementary fallacy here. The second figure is a growth, ppm/year. The first is a change of growth rate, ppm/year/year”

        No. The first is the human emissions growth rate (-0.047 ppm/year) with respect to 2008, i.e. the rate of growth for the 2009 year with 2008 gross flow as the base year. It is simply the difference in gross flows.

        The second is the total atmosphere growth rate (+1.67 ppm/year) for the 2009 year as per ESRL (see below). Again, it is simply the difference in gross totals.

        >”The figure that corresponds to 1.67 is 9.567/2.12=4.51 ppm”

        No. 4.51 ppm is the gross flow of human emissions in 2009. Growth (or decline) is a change in the gross flow. There was negative growth in human emissions in 2009 (-0.047 ppm). Yes there was a positive gross flow in 2009 but it was LESS than the gross flow in 2008.

        The total atmospheric CO2 level increased as a result of all the combined gross flows including human emissions:

        2008 384.78
        2009 386.29
        Change +1.51 ppm

        ESRL calculates the growth rate:

        “The annual mean rate of growth of CO2 in a given year is the difference in concentration between the end of December and the start of January of that year”

        January 2009 is the base month for the 2009 growth rate (+1.67 ppm), which is a difference in gross amounts January/December. This differs slightly from the growth rate where 2008 is the base year (+1.51 ppm).

        So a restatement would be simply in terms of changes from a 2008 base:

        2009 -0.047 ppm – human emissions declined in respect to 2008.
        2009 +1.51 ppm – global CO2 levels increased in respect to 2008.

        In respect to 2008, it is impossible for human emissions to have produced the following increase in global CO2 levels in 2009 if human emissions were the sole cause of the increase. To have done so, human emissions would have to have increased by +1.51 ppm but they didn’t, they decreased -0.047 ppm. Obviously gross flow(s) other than human emissions caused the total increase from 2008 to 2009.

      • Ferdinand Engelbeen

        >”As Nick Stokes already said:”

        Nick has his wires crossed. See my reply to him above.

        “….the growth rate of human emissions declined and so did the growth rate of CO2 in the atmosphere, but both still were positive”

        No, both were NOT still positive. You are confusing gross flow with growth rate (or change in gross flow).

        My example is in respect to 2008. It is the GROSS FLOW in human emissions that was still positive in 2009, but the GROWTH RATE with respect to 2008 was negative. I repeat from my reply to Nick:

        In respect to 2008, it is impossible for human emissions to have been the sole cause of the observed increase in global CO2 levels in 2009. To have been so, human emissions would have had to increase by +1.51 ppm, but they didn’t; they decreased by 0.047 ppm. Obviously gross flow(s) other than human emissions caused the total increase from 2008 to 2009.

      • richardcfromnz,

        You are comparing the changes in the second derivative of CO2 emissions with the first derivative of the increase in the atmosphere… If you use the right dimensions, it may be clear what is going on.

        Emissions:
        2008: 9.666 GtC/year = 4.56 ppmv/year
        2009: 9.567 GtC/year = 4.52 ppmv/year
        growth rate change: -0.047 ppmv/year/year

        Increase in the atmosphere:
        2008: 1.77 ppmv/year
        2009: 1.67 ppmv/year
        growth rate change: -0.1 ppmv/year/year

        If you compare emissions per year with increase per year in the atmosphere (the “airborne fraction”):
        2008: 38.8%
        2009: 36.9%

        Conclusion: the growth rate change in the atmosphere and the airborne fraction both are more negative in 2009 compared to 2008 than the growth rate change of human emissions…
        That doesn’t say much in itself, as the growth rate in the atmosphere is heavily influenced by temperature fluctuations, which influences the CO2 uptake rate by oceans and especially by vegetation…
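The dimensional comparison above can be reproduced directly. A sketch using only the figures quoted; a small rounding difference (37.0% here vs. 36.9% above) comes from intermediate rounding in the comment:

```python
# Emissions and atmospheric increase in the same units (ppmv/year),
# plus the airborne fraction for each year.
GTC_PER_PPMV = 2.12  # assumed conversion factor

emissions_gtc = {2008: 9.666, 2009: 9.567}  # GtC/year
atm_increase = {2008: 1.77, 2009: 1.67}     # ppmv/year

for year in (2008, 2009):
    em_ppmv = emissions_gtc[year] / GTC_PER_PPMV
    airborne = atm_increase[year] / em_ppmv
    print(f"{year}: emissions {em_ppmv:.2f} ppmv/yr, airborne fraction {airborne:.1%}")
```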

      • If it is not known why the LIA ended, and what powered temperature rise ever since and at what a degree (at least to 1900s, or perhaps 1950s), all the calculations of the CO2 forcing etc. are just a simple waste of time.

      • “No. 9.567 Gtons is the gross flow of human emissions in 2009”
        Yes, it’s the amount of extra carbon that humans added to the air that year. And 3.54 Gtons (=1.67*2.12) is the amount by which C in the air increased. Those are the comparable figures.

      • Nick Stokes says:

        …it’s the amount of extra carbon that humans added to the air that year.

        Thanks for the good news, Nick! More CO2 is better in our CO2-starved atmosphere. The biosphere heartily approves.

    • In a nutshell it means that global warming is caused by an increase in the energy coming from the sun (who’d a thunk) acting indirectly through increased GHG being released naturally, most likely from the ocean and that thus far, human emissions have had no detectable impact on the climate.

      • Jeff Patterson
        February 8, 2016 at 4:14 pm

        In a nutshell it means that global warming is caused by an increase in the energy coming from the sun (who’d a thunk) acting indirectly through increased GHG being released naturally, most likely from the ocean and that thus far, human emissions have had no detectable impact on the climate.
        —————————————-
        Hello Jeff.
        A very interesting post of yours…
        But I think, in my opinion and understanding, that you have over-reached with your interpretation.
        Don’t misunderstand me: your approach is very helpful, as far as I can tell, in helping to show that the CO2 concentration increase has been due solely to natural CO2 emissions, mainly from the oceans.
        But nevertheless your interpretation suffers from the problem that it cannot hold for other instances and periods, like the LIA for example, or for much longer periods of time.

        The main premise that holds your whole argument together (the main link in the chain) is that the sun can warm the oceans over the long, climatic term.
        That is the same flawed premise Nick Stokes used as the baseline in one of his arguments against Lord Monckton.
        Neither the paleoclimate data nor the latest modern climate data nor the GCMs support, confirm or validate it; the paleoclimate data and the GCMs actually contradict it.
        A very weak link indeed.

        What you actually most likely have proved there, which is very amazing in its own terms, is that the CO2 concentration pattern (at least in its increase) clearly holds and reflects the TSI signal for the period in question, meaning that the CO2 emission is purely natural and coming from the oceans. On short timescales (minutes, hours, even days or weeks) TSI will most probably affect the SST and therefore force the pattern of CO2 emissions to carry the TSI signal. But remember, TSI cannot influence the heat content of the oceans in the long term and thereby cause the CO2 emissions to increase; there is a lot standing against that. TSI does not cause or control the warming of the oceans on climatic timescales, short or long. It simply affects the pattern of the emissions path or trend; it does not cause it.

        Nevertheless, to me it seems that you most likely have proved that, for the period in question, the overall CO2 emissions and the CO2 concentration are purely natural. To me this much seems very promising to hold up.

        Hopefully this helps and hopefully at least you understand my point made, regardless of finding it acceptable or not.

        Please do not be another Nick Stokes in this, stating complete fallacies as undeniable truths and basic facts.

        Thank you..:)

        cheers

      • Patterson: YES, there is most likely a cause-and-effect chain of the sun warming the oceans, which then release CO2, which the trees and plants eat up, giving us these wonderful ‘interglacials’, which happen to be the NORMAL climate of the past.

        Then the sun energy output drops and so does the CO2 and it is drier and colder and huge glaciers form mainly over Canada and parts of Europe and we have Ice Ages. This is logical, clean, clear cause and effect.

      • I would say that global warming (if it occurs at all) is caused NOT by how much TSI is coming from the sun (yes, it varies over the year due to Earth’s orbit, but an annual average value is computed for the whole year), but by changes (increases) in how much of that TSI gets captured by the earth rather than returned to space by reflection from clouds (60% global cloud cover), scattering by the atmosphere (the blue sky), and reflection from the earth’s surface.

        None of your fancy “feedback” system diagrams covers the feedback due to solar energy absorbed by the oceans (mainly) modulating the cloud cover through evaporation and precipitation. There is no need to invoke any minor GHG change such as possible ocean outgassing of CO2, when the direct feedback due to water is so obvious.

        And such direct feedback to the TSI attenuator (clouds) can easily take care of any fluctuations in solar output, that affect TSI.

        Satellite measurements of TSI variation over solar cycles amount to only about 0.1% of the mean TSI value, which works out to about 72 milli-deg C of change in the black-body-equivalent Earth temperature. That is before the cloud feedback takes over control.
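The ~72 milli-deg figure follows from the Stefan-Boltzmann relation: emitted power goes as T^4, so a fractional change in TSI maps to one quarter that fractional change in temperature. A quick check, assuming (as the comment's number implies) a ~288 K base temperature:

```python
# S ∝ T^4 for a blackbody, so dS/S = 4 * dT/T, i.e. dT = T * (dS/S) / 4.
T = 288.0          # K; ~288 K reproduces the comment's 72 mK figure
                   # (the ~255 K blackbody-equivalent Earth would give ~64 mK)
dS_over_S = 0.001  # ~0.1% TSI variation over a solar cycle

dT = T * dS_over_S / 4.0
print(f"equivalent temperature change: {dT * 1000:.0f} mK")  # 72 mK
```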

        Any CO2 changes simply result in a slightly different water feedback signal.

        G

      • Jeff Patterson,

        A little late to the party, and already 270 reactions…

        As usual, there are 101 mathematically possible causes for the recent increase in CO2, but there is only one that fits all observations: human emissions.

        Correlation is not causation. Indeed, in this case many items are going up together with CO2. Solar activity and ocean heat/temperature is only one of many. Mathematically strong, observationally wrong:

        – The ocean surface temperature may have warmed by about 1°C since the LIA. That gives an increase of ~16 ppmv in dynamic equilibrium with the atmosphere. That is all. That is the change in solubility of CO2 in warmer seawater per Henry’s law, confirmed by over 3 million seawater samples over several decades.

        That makes that from the 110 ppmv CO2 increase since ~1850, maximum 16 ppmv comes from the ocean warming and near 100 ppmv from human emissions, which were over 200 ppmv in the same time span.

        – Further, the increase can’t be from the oceans, as the 13C/12C ratio in the oceans (0-1 per mil δ13C) is (much) too high. Any extra release of CO2 from the oceans would give an increase of the 13C/12C ratio in the atmosphere, but we see a firm decrease, in direct proportion to human emissions: -6.4 per mil δ13C pre-industrial to below -8 per mil today…

        Last but not least, the many million samples over time show that the oceans are a net sink for CO2, not a source. See the compilation made by Feely e.a.:
        http://www.pmel.noaa.gov/pubs/outstand/feel2331/exchange.shtml
        and following pages…
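The partition being argued here is simple arithmetic on the quoted figures. A sketch; the ~16 ppmv/°C Henry's-law shift and ~1°C warming are the comment's estimates, not derived here:

```python
# Partition of the ~110 ppmv CO2 rise since ~1850, per the comment's figures.
henrys_law_shift = 16.0  # ppmv per °C change in ocean-atmosphere equilibrium
warming = 1.0            # °C of ocean surface warming since the LIA (estimate)
total_rise = 110.0       # ppmv rise since ~1850

ocean_part = henrys_law_shift * warming
remainder = total_rise - ocean_part
print(f"ocean warming: ~{ocean_part:.0f} ppmv, remainder: ~{remainder:.0f} ppmv")
# versus cumulative human emissions of over 200 ppmv in the same span
```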

      • Ferdinand Engelbeen @ February 9, 2016 at 11:08 am

        “The ocean surface temperature may have been warming with about 1°C since the LIA. That gives an increase of ~16 ppmv in dynamic equilibrium with the atmosphere. That is all.”

        It isn’t, because this is a dynamic flow, and throttling the egress of CO2 within the THC causes an imbalance which produces continuous accumulation in the surface system.

        “Further, the increase can’t be from the oceans, as the 13C/12C ratio in the oceans (0-1 per mil δ13C) is (much) too high.”

        There are many potential explanations for the isotope ratio.

        “Last but not least, the many million samples over time show that the oceans are a net sink for CO2, not a source.”

        Studies which begin with assumptions tend to confirm those assumptions. But, there is no guarantee that these tallies are exhaustive.

        I do not have time to spar with you today, and regular denizens are no doubt already familiar with our epic battles here, Ferdinand, so I will let it go at that.

      • Bart,

        Indeed we have been there many times…

        Dynamic: a lot of CO2 (~40 GtC/year) is released by the upwelling waters near the equator and absorbed by the downwelling waters near the poles.
        The dynamic equilibrium between the ocean surface and the atmosphere changes by about 16 ppmv/°C, no matter whether that is static, for a sample in a closed bottle, or dynamic, over the global oceans. That is a matter of the (area-weighted) average pCO2 difference between the ocean surface and the atmosphere.

        There is no “throttling” of CO2 in the surface waters anywhere, as any temperature change only changes the pCO2 of the surface waters locally with 16 μatm/°C. An increase of ~16 ppmv in the atmosphere fully restores the outflow of CO2 into the deep oceans at the THC downwelling area after a 1°C increase in local temperature of the surface waters.

        “There are many potential explanations for the isotope ratio”

        No, there are none that change the sign: adding something with a higher 13C/12C level to an atmosphere with a lower one must raise the ratio, not lower it. It is like adding an acid to a solution and expecting the pH to go up…

        “Studies which begin with assumptions tend to confirm those assumptions. But, there is no guarantee that these tallies are exhaustive.”

        Observations are what they are. pCO2 measurements of the ocean waters (surface and down to 2000 m depth) were already done in the ’30s of last century, long before any climate change hype.
        Except for the upwelling zones, almost all of the ocean surfaces are net sinks for CO2 over a full year. See:
        http://www.pmel.noaa.gov/pubs/outstand/feel2331/maps.shtml
        and next section.
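The isotope sign argument can be illustrated with a simple two-reservoir mixing calculation. A sketch; the -28 per mil fossil-fuel value is a typical literature figure assumed for contrast, not taken from the comment:

```python
# delta13C (per mil) of the atmosphere after adding CO2 from a source.
def mix(delta_atm, mass_atm, delta_src, mass_src):
    """Mass-weighted mixing of two carbon reservoirs' delta13C signatures."""
    return (delta_atm * mass_atm + delta_src * mass_src) / (mass_atm + mass_src)

atm = -6.4      # per mil, pre-industrial atmosphere (from the comment)
ocean = 0.5     # per mil, ocean surface (0 to +1 range, from the comment)
fossil = -28.0  # per mil, typical fossil carbon (assumed illustrative value)

# Add 10% extra CO2 from each candidate source:
print(f"ocean source:  {mix(atm, 1.0, ocean, 0.1):.2f} per mil (ratio rises)")
print(f"fossil source: {mix(atm, 1.0, fossil, 0.1):.2f} per mil (ratio falls)")
```

An ocean source would push atmospheric δ13C up toward 0 per mil, whereas the observed record falls toward -8 per mil, which is the sign argument made above.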

    • Well when x goes up, so does log(x).

      So if it were a log relationship, you couldn’t have temperature going down while CO2 goes up at some times, and both moving the same way at others.

      So it certainly isn’t a log relationship in practice. Isn’t in theory either because the captured photons don’t stay dead.

      Beer’s absorption law presupposes that captured radiant energy (absorbed) doesn’t propagate any further, so it doesn’t apply to materials which re-radiate (even if at “heat” wavelengths).

      It might be non-linear, if they are even related at all; but it certainly is not logarithmic.

      A logarithm is a very precisely defined mathematical function; not just some bent line.

      g

    • @Tom, 3:58 pm Feb 8. I am in the same boat, Tom, but the one graph that stood out was Fig. 9, where the time span between the highs and lows seems (to me anyway) to be lengthening. The bottom of the graph at 1850 to the next one at 1895 appears to be 45 years; the next low point shows a 65-year difference (1895-1960); and from 1960 to the next one seems to imply a 70- to 80-year difference. @Jeff, is there something there? Does this mean there is a slowdown in the cycle due to the sun slowing down a bit? As I said, I am not a math guy, so can you try to simplify this a bit? Thanks, Tobias.

    • @whitten – Your point regarding time scales is well taken. The underlying assumptions of this model are linearity and time invariance, neither of which is valid for the climate system over the long term. Over the instrumental period covered here the assumptions seem to be validated by the results, but I should have been clear that the model’s utility is limited by them.

      Regarding the Little Ice Age, I’ve backcast the model against the CET time series which, while not global, is the only one I know of that provides real data back to the 17th century. It holds up remarkably well back to about 1710, prior to which the spin-up period of the simulation doesn’t provide valid output.

  2. Unfortunately, the TSI reconstruction in Figure 2 is probably not correct, as it is based on the now-obsolete Hoyt & Schatten Group Sunspot Number. A modern reconstruction based on the revised Group Numbers [and the reconstruction of the magnetic field in the solar wind] looks more like this:

    The reconstruction finds support from an unexpected source: The Waldmeier Effect:

    http://www.leif.org/research/The-Waldmeier-Effect.pdf

    • Unfortunately, the TSI reconstruction in Figure 2 is probably not correct

      Should be easy for Jeff to enter the revised TSI reconstruction in his model; looking forward to the result.

    • I’ve re-run with your tsi time series. Not sure how to post an image in a comment here but you can see it at https://montpeliermonologs.wordpress.com/2016/02/09/re-run-with-updated-tsi-values/

      I optimized manually (the optimizer takes about 8 hours to run), so this should be considered preliminary. Some initial observations:
      The residual is slightly larger and more periodic.
      The biggest change, aside from the expected scaling, was the move of the CO2 lag value from 37 years to 3 years, which seems more likely.
      There is more TSI ripple in the output, but the ripples actually time-align pretty well with the raw (not denoised) data (lower right).

      I’ll look at the CO2 correlation later today

      • Take the TSI series from 1700 to today and reverse it, so that you use 2015’s value for 1700, 2014’s for 1701, and so on. Then repeat your analysis. Show us what you get.

      • “No matter what you put in, you always get the desired result.”

        Seriously? The transfer function is a single pole low pass filter with scaling and some lag. You think such a system can create an arbitrary output with a random input??

      • @Jeff Patterson – just cut and paste the link to the image on a line by itself, and WordPress will do the rest … like this …

        a2 = .059 (.051)
        g1 = .041 (.018)
        g2 = .061 (.055)
        d1 = 3 years (37 years)
        d2 = 73 years (84 years)
        Ci = -.041 degs

        Cheers,
        Tom

      • @”Take the TSI series from 1700 to today and reverse it, so that you use 2015’s value for 1700, 2014’s for 1701, and so on. Then repeat your analysis. Show us what you get.”

        Here’s the reversed, demeaned TSI series

        Running the simulation with the (manually) optimized values from the original series we get:

        We get the expected negative correlation.

        Step one of the optimization (fit to ACF) works just fine because it is insensitive to scale including a scale factor of -1!

        but at this point the residual is not so hot :)

        Step 2 (minimize the residual) fails. The optimizer can find no solution where the correlation is positive. It dutifully sets both scaling factors to zero and calls it a day (but at least it finds the “solution” quickly :)

        Removing the step 1 restriction, I let the optimizer have control over all parameters (I cheat here a little and use a method not guaranteed to find a global minimum; it takes too long with all parameters).

      • @lsvalgaard “Take the TSI series from 1700 to today and reverse it, so that you use 2015’s value for 1700, 2014’s for 1701, and so on. Then repeat your analysis. Show us what you get.”

        Here’s the reversed TSI series used below.

        Using the parameters from the prior optimization, correlation is negative as expected.

        First step of the optimization (match the ACFs) works because it is insensitive to scaling, including a scaling of -1:

        The fit though is not too hot :)

        Let’s skip the first step and give the optimizer control of all parameters; best fit shown (not guaranteed to be a global minimum – that method takes too long):

        …and it took a TSI lag of 147 years to do that well and ‘course now the acf is all horked up

        Convinced?

      • Our records really only begin around 1700. Before that, the data is extremely poor; so poor that Wolf didn’t dare assign a sunspot number to each year. Nothing magical about that.

      • You still don’t do it right. Let me be a bit more pedestrian:
        For the year 2015 use the value for 1700
        For the year 2014 use the value for 1701
        For the year 2013 use the value for 1702
        For the year 2012 use the value for 1703
        For the year 2011 use the value for 1704

        For the year XXXX use the value for 3715-XXXX
        ….
        For the year 1702 use the value for 2013
        For the year 1701 use the value for 2014
        For the year 1700 use the value for 2015

        start the integration in 1700. I don’t think values from 400 years ago are useful.
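The mapping spelled out above is a straightforward index reversal. A minimal sketch, with a placeholder series standing in for the real TSI values:

```python
# For each year Y in 1700..2015, use the TSI value from year 3715 - Y.
years = range(1700, 2016)
tsi = {y: float(y) for y in years}  # placeholder; substitute real TSI values

reversed_tsi = {y: tsi[3715 - y] for y in years}

assert reversed_tsi[2015] == tsi[1700]  # 2015 takes 1700's value
assert reversed_tsi[1700] == tsi[2015]  # 1700 takes 2015's value
```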

      • It is a small coincidence, related to the fact that the sunspot cycles have on the average about the same length ~11 years. For the next test we’ll of course shift the series a bit, etc. My point is that since there is no trend over the years 1700-2015, there will be no trend in the source function. If you need to, you can replicate the 315 years as many times in the past as you need to get a stationary state.

      • No need to be pedestrian. The plot I posted at 3:12 above is the series reversed from 1700 (yellow) plotted against the unreversed series from 1611 or whatever. I haven’t simulated it but it is so highly correlated to the original that I won’t bother.

    • From the previous thread about the Solar Dynamo:
      lsvalgaard February 1, 2016 at 12:43 pm
      “Yes, all solar cycles look alike as far as solar activity is concerned. By that I mean that if shown a picture of solar activity [the solar disk] from a given day you can’t tell which cycle it is from.”

      I would hazard to say that if shown a picture of meteorologic activity of the Earth disk from a given day you can’t tell which cycle it is from either. The earth climate system is remarkably stable also.

    • Earlier I posted some preliminary results with the correct TSI series, which I had attempted to optimize manually. The optimized results with the correct TSI series are much better. The 1965-1975 divergence I got with the old series is no longer there.

      • I certainly am anxious to get the correct data, but looking at your plot I’m not sure it will affect things much. The differences seem to be mostly in the peak values, not in the timing of the break points in slope, which are most determinative. The new data would most likely result in a different scaling parameter, but since the ACF is invariant to scale it should not affect the fit (and who knows, it may improve the 1965-1975 divergence).

      • I’m not sure
        Then become sure by actually doing it [and your original analysis – rather than with the integral straw man] with the corrected data. Otherwise it is just hand waving.

      • You have to at least get the physics right – temperature varies as the integral of TSI. Then address the issue of how closely sunspot counts approximate TSI.

      • Not at all. As usual, you like to inject a poison pill. The physics is not right. The integral is always increasing towards infinity as time goes on. The integral of the difference between the mean and the series is always zero. If you use something other than the mean to subtract from the series, then that value becomes a free parameter that you can vary until you get the desired curve fit. No physics involved.

      • lsvalgaard,
        Which “series”? I referred to Stockwell integrating TSI to get actual temperatures, as necessitated by the heat capacity of the ocean, land and atmosphere.
        The integral increases towards infinity IF it is always positive.
        BUT a physically realistic solar insolation (TSI), bounded by black-body T^4 radiation to the ~4 K temperature of space, is ALWAYS bounded for the reasonably foreseeable future (e.g., the next million years, before the sun decays into a red giant).
        (The integral only becomes unbounded if you use the always-positive surrogate of sunspot counts.)
        Using the actual physics of albedo (surface absorptivity/emissivity and cloud reflectivity) does not give a “free parameter”. The challenge is to get reasonable models of the rest of that physics to model reality.

      • The integral only becomes unbounded if you use the always positive surrogate of sunspot count
        Which you say you do. You do not mention the all-important free parameter you subtract before integrating, nor over how long the integral is taken, so you are just curve fitting without physics.

      • About sunspots: way back when the Kitt Peak Solar Observatory was built (it was the first to focus only on our local star), the best way to see whether the sun was active or quiet was via sunspots. We have a much longer record of sunspot activity thanks to astronomers, starting with Galileo, tracking this off and on (mostly on after 1850), but now we have much better space-based observations, as well as those from ground observatories, and we can see many more kinds of solar activity.

        Generally speaking, when the sun is quiet the climate changes here on our planet, and when it is very active our planet, as well as the other planets, reflects this by warming up. So, to see whether it will be warmer or cooler, we track solar activity.

        Now… thanks to humans, during this slowdown in solar irradiation we don’t see CO2 dropping, because we are burning stuff. But it is too early to tell whether the oceans will soak up even human CO2 levels; I think we will find out if we slide into another Little Ice Age.

      • “The integral is always increasing towards infinity as time goes on.”
        The system modeled here contains _no_ pure integration. The impulse response is a decaying exponential, not a step.

      • You miss the point: sunspot numbers and TSI are always positive so the integral will always increase with time, unless you constrain it with yet another free parameter.

      • But Leif, if the offset is a step into a pure integrator, the output is a linear ramp, the slope of which, as you note, is a free parameter. But the step response of the system here is not a ramp; it’s a decaying exponential. It’s a transient whose impact on the output dies away. The TSI series starts something like two hundred years prior to the start of the fit comparison in 1850. The transient is long gone by then for the alphas we’re modeling.
        JP
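The integrator-versus-filter distinction under discussion can be seen numerically. A sketch with an illustrative alpha, not the fitted model parameter: for a constant positive input (a sunspot series with a nonzero mean, say), a pure integrator ramps without bound while a single-pole low-pass filter settles at a finite level.

```python
alpha = 0.05  # filter pole; illustrative value only
u = 1.0       # constant positive input

integ = 0.0
lp = 0.0
for _ in range(2000):
    integ += u              # pure integration: unbounded ramp
    lp += alpha * (u - lp)  # leaky integration: decaying-exponential transient

print(f"pure integrator after 2000 steps: {integ:.0f}")  # still climbing
print(f"low-pass filter after 2000 steps: {lp:.3f}")     # settled near 1.000
```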

  3. Thanks for the work, Jeff. Unfortunately, bad news … your chosen TSI reconstruction is based on the old sunspot numbers, and if you use the new sunspot numbers your whole claim completely falls apart. The new numbers are available below. I encourage you to redo your work using the correct numbers.

    My best, and your post is appreciated even though it’s incorrect.

    w.

    daily

    monthly

    yearly

      • IIRC it’s mainly an increase of ~20% before 1947, or something similar. There’s a global rescaling which is just a matter of definition and would not change the outcome. Dr. S will probably give the details.

    • Stop…… you all seem to have forgotten something: what about the exaggerated and accumulating anthropogenic CO2 contribution to the system…? (Sarc…)

    • The ‘new numbers’ distort past data! Way back in 1960, the sunspot data were a lot thinner and more casually accumulated. So today, thanks to modern observations, we see much more ‘activity’, and like ALL the major data accumulations, all of this is post-1975 or so.

      The fact here is obvious: when the sun is quiet, temperatures on earth fall. When it is very active, temperatures rise. This is due to the sun being a hot object which is the main reason our planet hasn’t frozen solid like more distant bodies in orbit around this star.

      • No, this is not correct. Sunspots today are still counted with the same type of telescopes used 180 years ago [even including the very same physical telescopes that Rudolf Wolf used since the 1850s and used in Zurich until 1981 and used by Friedli today]:

    • @Willis – I posted some manually optimized results above with the correct TSI series. Overnight I ran the optimizer and the results are much better (surprise). With the correct series the divergence from 1950-1970 is gone. I really would like to somehow re-submit the whole article, or at least post an update, rather than have it buried in the comments with the incorrect data in the main post. Advice?
      JP

      • Thanks, Jeff. As I pointed out above, the “correct” TSI series still has a totally model based and unverified trend in it. The authors of the study themselves say that the trend is “speculative” and that it may be zero.

        It is that speculative trend which forms the backbone of your study. This brings up the old rule called GIGO. We have no observational evidence that the TSI has increased as is claimed by the authors of the TSI reconstruction. Why should we pay any attention to their claims?

        Next, I see that you have diagnosed an 84-year parameter in a 165-year dataset … setting Nyquist aside, in natural datasets I am very, very cautious about ascribing a repeating cycle unless I have four full cycles to look at … and even then I’ve been fooled. So the maximum length cycle I’d put any weight on would be 165 / 4 = about forty years or so. Beyond that it is speculation.

        Next, I hadn’t discussed parameters because we were working on data. I have huge problems with 8-parameter models. I suppose it is time to reprise Freeman Dyson’s visit with Enrico Fermi.

        Then [Fermi] delivered his verdict in a quiet, even voice. “There are two ways of doing calculations in theoretical physics”, he said. “One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and selfconsistent mathematical formalism. You have neither.”

        I was slightly stunned, but ventured to ask him why he did not consider the pseudoscalar meson theory to be a selfconsistent mathematical formalism. He replied, “Quantum electrodynamics is a good theory because the forces are weak, and when the formalism is ambiguous we have a clear physical picture to guide us. With the pseudoscalar meson theory there is no physical picture, and the forces are so strong that nothing converges. To reach your calculated results, you had to introduce arbitrary cut-off procedures that are not based either on solid physics or on solid mathematics.”

        In desperation I asked Fermi whether he was not impressed by the agreement between our calculated numbers and his measured numbers. He replied, “How many arbitrary parameters did you use for your calculations?” I thought for a moment about our cut-off procedures and said, “Four.” He said, “I remember my friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.”

        With that, the conversation was over. I thanked Fermi for his time and trouble, and sadly took the next bus back to Ithaca to tell the bad news to the students.

        Fermi’s objection to the lack of a “clear physical picture” is important in your case. I have no idea, for example, how there might be a 37 year lag between changes in the sun and resulting changes in surface temperatures.

        Most importantly, please note the point at which Fermi talked about parameters. It was when Dyson asked if Fermi was impressed by the agreement between Dyson’s model and the observations. Fermi was not impressed.

        By the same token, you must understand that I am not impressed in the slightest by your correlations and your matches between your model and the observations. In fact, with eight tunable parameters, I’d be impressed only if you could NOT match your model to reality.

        I know you’ve put lots of time and work into this, and the good news is that I’m sure that you’ve learned heaps in the process.

        But fitting arbitrarily chosen data to an arbitrarily chosen dataset using an arbitrarily chosen transformation function in an eight-parameter model is meaningless.

        It didn’t impress Fermi, it doesn’t impress me, it won’t impress anyone paying attention. I cannot advise you strongly enough to take the temperance pledge, rid yourself of chimeric indices and surds, and abjure such intoxicating multi-parameter methods …

        Finally, I know that my tone is sometimes rougher than I intend. So please take all of this in the intended spirit, which is to support you in making the best use of your time.

        Regards,

        w.

        CODA:

        Yet what are all such gaieties to me
          Whose thoughts are full of indices and surds?
             x² + 7x + 53
                = 11 / 3

        Part of a poem by Lewis Carroll

      • Willis,

        IMO there are clear physical pictures behind solar activity and climate. Variations in the sun’s irradiance and magnetism are demonstrably linked to changes in the climate systems of its planets. This is especially true of earth.

        For instance, in Dr. S’s UV data at February 10, 2016 at 9:01 am, the warming and cooling cycles observed since the end of the LIA are clearly visible. Taking the time integral of sets of three or so of the sequentially higher or lower solar cycles produces a pretty good fit for the mid-19th century warming, late 19th to early 20th century cooling, early 20th century warming, mid-20th century cooling, late 20th century warming and present cooling. There is an anomalous cycle in the mid-20th century, so the fit isn’t perfect. But the effect of UV flux on ozone and seawater heating isn’t the only solar parameter that matters.

        Small fluctuations over decades add up to observable changes.

      • IMO there are clear physical pictures behind solar activity and climate. Variations in the sun’s irradiance and magnetism are demonstrably linked to changes in the climate systems of its planets
        The physics tells us that the effects are less than 0.1C and such changes have not been demonstrated as they are buried in the noise.

      • Gloateus Maximus February 11, 2016 at 2:19 pm

        Willis,

        IMO there are clear physical pictures behind solar activity and climate.

        Thanks, Gloateus, but that’s not responsive to what I said. I said there was no clear physical picture behind his theorized 8-parameter model.

        w.

But there is physics behind TSI variations as a rough approximation of the major drivers of climate change. Connecting those with the model parameters should be possible.

For instance, Kepler used Tycho’s observations of the orbit of Mars to conclude that its curve fit an ellipse rather than a circle. He didn’t have a good physical model to explain this curve-fitting exercise. It took Newton a whole mathematical book to provide and demonstrate one, based upon his theory of universal gravitation, later refined by Einstein, an upgrade recently reconfirmed.

      • Maybe this will help:

On the left the optimizer was run pretending we’re in 1976. The CO2 signal is just barely detectable at this point. On the right, the simulation is run to the present with the same parameters found in the training run. You can consider the post-1976 blue curve to be the prediction we would have made in 1976 based on the model. It was never off by more than 0.1 deg.

Parameters (training / final):
alpha -> 0.0356 / 0.0365
g1 -> 0.0543 / 0.0510
G -> 0.770 / 0.788
tau1 -> 11 / 11
tau2 -> 80 / 81

        What would it take beyond this to convince you there is something here worth looking at?
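The train-then-predict procedure described here is ordinary out-of-sample validation, and it can be sketched independently of the specific model. The sketch below uses synthetic data and a stand-in linear fit; the 1976 cutoff is the only detail taken from the comment, everything else is an illustrative assumption.

```python
import numpy as np

# Minimal sketch of the validation scheme described above: fit on data up
# to a cutoff year, freeze the parameters, then score the frozen model on
# the held-out later years. The linear "model" and synthetic series are
# stand-ins, not the author's TSI model or data.
rng = np.random.default_rng(0)
years = np.arange(1880, 2016)
truth = 0.007 * (years - 1880) + rng.normal(0.0, 0.05, len(years))

train = years <= 1976
coef = np.polyfit(years[train], truth[train], 1)        # fit pre-1976 only
pred = np.polyval(coef, years)                          # predict all years
max_err = np.max(np.abs(pred[~train] - truth[~train]))  # out-of-sample error
```

If the post-cutoff error stays within the noise level of the training residuals, the frozen parameters have genuine predictive skill rather than mere in-sample fit.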

With the large difference between the ‘new’ TSI and the ‘old’, you should find large differences in the result. If you do not, that signals that you are not doing what you think you are doing.

The differences are quite substantial and much improved. The bias in the old data caused too much of the total forcing to be attributed to TSI. With the new, bias-free data, the 2xCO2 TCS is substantially higher. With the old data, training had to go to 1995 to get a reliable prediction. Now we are getting excellent results from 1976 and I haven’t yet gone any farther back. Also, the divergence circa 1965 is gone.

        I want to thank you for setting me straight on the correct data to use. You’ve been immensely helpful!
        JP

:). As I told Willis, I’d like to retract it or post an errata, but I’m not sure how that’s done around here. At some point, though, I’d like to show you the CO2 vs. TSI curve I get with the new series. It looks the same as before after 1900, but before that it falls off a cliff. Do you have confidence in the TSI data from say 1800-1900?

[To make an edit (to an original thread posting) or a retraction (to a comment or reply):
(1) Be absolutely clear about what is to be edited, retracted, or changed (line number, paragraph, date-time group id, etc.).
(2) Be absolutely clear about what the corrected words, paragraph, or graphic should be.

(3) Once the change is clear, and on approval and review, the original words are either lined through (the usual and preferred way with a thread header) or replaced within [sq brackets]. .mod]

      • Do you have confidence in the TSI data from say 1800-1900?
Yes, because we have EUV data to back it up. Before 1700, I am not so sure that we understand what goes on. You could make the argument that with few visible sunspots to drag down TSI during the Maunder Minimum, TSI might have been higher rather than lower. We don’t know.

      • With the old data, training had to go to 1995 to get a reliable prediction. Now we are getting excellent results from 1976 and I haven’t yet gone any farther back
This sounds very suspicious to me. The limits on what data to use should be set before the analysis, not adjusted until things ‘look good’.

Agreed. I attributed the issue to a signal-to-noise problem (which it was, if you consider a spurious bias in the input signal as noise). My original target was 1976 because I read somewhere that this is the year mentioned by the IPCC as when theory said CO2 forcing should become detectable. When that failed with the old series, I moved the end date up until I could get decent parametric stability.

Thanks Willis. Just to make sure we’re on the same page, I thought your comments re trend were referencing the G. Kopp, N. Krivova, C.J. Wu study which Leif and you warned me off of. The results posted above used the SSN series Leif pointed me to, after applying his conversion (to TSI) formula. I’m assuming that’s the best we can hope for.

Re: 85 years. It’s actually the 405-year TSI record that’s of import. I left-pad the CO2 record to match, assuming a constant 285 ppm prior to 1732. Errors in this assumption have a negligible effect by 1850, when the comparison starts.

Re # of parameters: Calling it an eight-parameter model (setting aside the fact that it’s a convolution) ignores that there are four different functions being matched independently (autocorrelation of model vs. denoised temperature, CO2 vs. TSI, T vs. TSI, and the boundary condition). But in any case, I’ve abandoned for now the CO2 generation system and am driving the second input with the actual CO2 time series directly. The CO2 vs. TSI correlation falls apart prior to 1900 with the “correct” TSI series, and you have all convinced me that it’s probably non-physical anyway, so I’ve set that puzzle aside for the time being and am retracting the CO2 vs. TSI hypothesis. I’ve also eliminated the TSI input offset parameter, as the optimizer comes up with zero to 4 decimal places anyway, and I’ve set the CO2 scaler to 5.4, assuming 3.7 W/m^2 for a zero-feedback doubling of CO2. So, long story short, the above results were optimized on five parameters: the TSI dissipation factor (alpha), the TSI scaler, the output sensitivity scaler which converts the forcing sum to delta T, and the two lags. Of these, the two lags and alpha are optimized against the acf, and the output sensitivity to minimize the residual LSE. In the end, it’s not as if setting the pole position of a single-pole low-pass (via alpha) can add artifacts that aren’t in the input signal. After that it’s just time shifts and scales. And then there’s figure 12…
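To make the five-parameter structure concrete, here is one plausible reading in code: a single-pole low-pass on TSI (pole set by a dissipation factor alpha), a 5.4·ln(C/C0) CO2 forcing, two lags, and input/output scalers. The wiring follows the description above, but the function names and the exact order of operations are my assumptions, not the author's actual code.

```python
import numpy as np

def leaky_integrator(x, alpha):
    """Single-pole low-pass (the 'TSI dissipation' stage):
    y[n] = (1 - alpha) * y[n-1] + alpha * x[n]."""
    y = np.zeros(len(x))
    for n in range(1, len(x)):
        y[n] = (1.0 - alpha) * y[n - 1] + alpha * x[n]
    return y

def lag(x, k):
    """Delay a series by k samples, holding the first value (no wrap-around)."""
    if k <= 0:
        return np.asarray(x, dtype=float).copy()
    return np.concatenate([np.full(k, x[0]), x[:-k]])

def model_delta_t(tsi, co2, alpha, g_tsi, g_out, tau1, tau2):
    """Hypothetical assembly of the five optimized parameters: low-passed,
    lagged TSI forcing plus log-CO2 forcing, scaled to a temperature anomaly."""
    f_tsi = g_tsi * leaky_integrator(tsi - np.mean(tsi), alpha)
    f_co2 = 5.4 * np.log(np.asarray(co2) / co2[0])  # 5.4*ln(2) ~ 3.7 W/m^2 per doubling
    return g_out * (lag(f_tsi, tau1) + lag(f_co2, tau2))
```

A single-pole low-pass cannot manufacture spectral content absent from its input, which is the point made above about alpha; the remaining operations are just time shifts and scalings.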

I’ll see your Fermi and raise a Kepler. He took Tycho Brahe’s data on the positions of the planets and derived the laws of planetary motion by trial and error. He finally found the equations that fit the data (he almost had it at one point, but Mars was off by a few arc-minutes, so he rejected that solution), and when he did, many said “but it doesn’t fit what we think we know”, and in fact we still don’t know _why_ two masses attract (or why a mass bends the space-time continuum, if you’d rather). He let the data and the fit and the predictions it made speak for themselves. Most of the great advances in science have come that way. Someone notices a pattern (a correlation) and finds an equation that fits. From the equation come predictions that are either confirmed or falsified by observation. If confirmation happens and the equations don’t fit into the current understanding of the way things are, something’s got to give. Substitute for “equation” in the above the system under consideration (which easily enough can be converted to differential-equation form) and you’ll understand how I see the matter. I think climate science, more than any field I’ve ever encountered, suffers from the hubris of thinking they’ve got the physics all figured out.

All that said (whew), I’m not claiming the model is correct (even in the sense that no model is correct). I’m sharing an interesting correlation that deserves the attention of someone who can figure out why the model fits the data so well.

      Thanks for the ear. I can’t begin to tell you how much admiration I have for your work.
      Best
      JP

  4. “The recent global warming hiatus is clearly evident in the flattening of the curve above 380 ppm.”

Fig 1a seems to have data only to 2013. And the “flattening” is clearly affected by the endpoint treatment, since it goes right to the end. I suspect there is a reflective boundary treatment which forces a zero gradient at the end. IOW, if the data was trending up, it is smoothed as if the trend were about to reverse. That is a purely arbitrary assumption.

Fig 7 doesn’t look at all flattened, although there is still an issue of how it is smoothed to the end. The default for Matlab’s wden is “sym”, or symmetric padding, which again enforces a zero gradient at the end.

Gaussian filtering does not suffer from the latency issues of a causal FIR and indeed gives results right to the end.

The denoised data from wden was only used as the target for the optimizer. In any case, if you follow the Matlab link in the references you’ll find examples which show no end-point effects. And the slope data in figure 9 show the present trend is near zero.
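For reference, a symmetric Gaussian smoother of the kind being discussed can be written in a few lines. This is a sketch, not Mathematica's actual GaussianFilter; the sigma = radius/2 default and the two padding modes ("fixed" holds the end values, reflective mirrors them) are my assumptions about what such packages do.

```python
import numpy as np

def gaussian_smooth(x, radius, sigma=None, padding="fixed"):
    """Zero-phase Gaussian smoother with explicit end padding.
    A symmetric kernel has no phase lag, but the last `radius` outputs
    necessarily depend on whatever the padding invents for the future."""
    if sigma is None:
        sigma = radius / 2.0                      # assumed default
    k = np.arange(-radius, radius + 1)
    w = np.exp(-k**2 / (2.0 * sigma**2))
    w /= w.sum()                                  # unit DC gain
    if padding == "fixed":                        # hold the end values
        xp = np.concatenate([np.full(radius, x[0]), x, np.full(radius, x[-1])])
    else:                                         # reflect about the ends
        xp = np.concatenate([x[radius:0:-1], x, x[-2:-radius - 2:-1]])
    return np.convolve(xp, w, mode="valid")
```

On a rising series, both padding choices pull the final smoothed values below the data, which is exactly the endpoint damping debated in this thread.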

      • Jeff, gaussian can also be done by FIR, you should probably specify what you are doing.

        BTW , I like the all engineering approach. Real filters instead of running averages. The ‘leaky integrator’ or exponential is the same as negative feedback, eg Planck +/- some also rans. Not sure I agree with all you’ve done but I like the method.

      • Mike says:
        February 8, 2016 at 4:31 pm

        Jeff, gaussian can also be done by FIR, you should probably specify what you are doing.

        BTW , I like the all engineering approach. Real filters instead of running averages.

        A running average is a FIR filter.
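Both points are right, and worth making concrete: a running mean is an FIR filter with N equal taps, but its frequency response is the sinc-like Dirichlet kernel, which dips negative between its nulls, so some frequency components come out sign-inverted. A short check, using generic DSP arithmetic rather than anything from the post:

```python
import numpy as np

# A running mean IS an FIR filter: N equal taps of 1/N. Its frequency
# response H(f) = sum_k (1/N) exp(-2j*pi*f*k) has nulls at f = m/N and,
# after removing the linear-phase term, goes negative between them --
# the usual objection to smoothing with running averages.
N = 12
boxcar = np.full(N, 1.0 / N)

def freq_response(taps, freqs):
    """Evaluate H(f) at normalized frequencies (cycles per sample)."""
    k = np.arange(len(taps))
    return np.array([np.sum(taps * np.exp(-2j * np.pi * f * k)) for f in freqs])

def amplitude(f):
    """Zero-phase amplitude: strip exp(-j*pi*f*(N-1)) to expose sign changes."""
    return (freq_response(boxcar, [f])[0] * np.exp(1j * np.pi * f * (N - 1))).real
```

Here amplitude(1.5 / N) is negative: a sinusoid with an 8-sample period passes through a 12-point running mean inverted, something a Gaussian kernel (whose response stays non-negative) never does.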

    • “I suspect there is a reflective boundary treatment which forces zero gradient at the end. ”

No, the exponential convolution is not a symmetric kernel. It does run up to the last data point. I missed what length Gaussian he was using; that may explain 2013.

It isn’t exponential; it is Gaussian and symmetric. A 5-point filter would explain ending at 2013, but I don’t believe it would achieve that degree of smoothing. Anyway, it’s not a reason to omit the unsmoothed data for those years in Fig 1a.

    • “Fig1a seems to be have data only to 2013.”
      Good catch. This plot was from some earlier work and I neglected to update it with the latest data. I sent a revised plot, hopefully Anthony will indulge me and include it. BTW, The Gaussian filter is the Mathematica implementation with radius 4, fixed padding.

      • Jeff,
        Thanks for the details. I assume “fixed” means padding the future with the final value. That can also damp the final trend, although I think if you end with 2015, the end trend will shift to sharply upward.

  5. An estimate of the future TSI is required in order for this model to predict how global temperature will evolve.

    Our simple orbital resonance model successfully replicates 4000 years of Steinhilber et al’s 10Be based solar reconstruction:

    And predicts this out to 2100

    Details available on request.

      • From Fig 1 annotation of your linked document: “As of Nov 2015, the south has exceeded the 2010 level, suggesting that Cycle 25 would be no weaker than 24.”

        From Fig 2: “The evolution is highly N-S asymmetric.”

        Do you think the ~4:1 asymmetry may result in a lower Cycle 25 SSN outcome, disproportionate to dipole strength? The butterfly diagram doesn’t seem to indicate that north will strengthen much further and reduce the asymmetry.

We measure the dipole moment as the difference between North and South, which removes the asymmetry. As always it is difficult to predict the future, but it seems highly unlikely that the cycle will be as small as Tallbloke predicts, because the magnetic flux is already there. We can even see it in the North on its way to the pole. Look at the blue flux in the second Figure.

      • Sure, but it appears patchy and not very dense. Maybe that’s just a red/blue perception difference. Thanks.

      • ” the south has exceeded the 2010 level”

Didn’t the same type of precursor-based predictions overestimate the strength of cycle 24?

    • tallbloke February 8, 2016 at 4:16 pm

      … Details available on request. …

      Thanks, tallbloke. I hereby request the details, which I assume would include a link to your data and code.

      w.

      PS: 10Be is a very poor proxy for solar strength. See e.g. A COMPARISON OF NEW CALCULATIONS OF THE YEARLY 10Be PRODUCTION IN THE EARTHS POLAR ATMOSPHERE BY COSMIC RAYS WITH YEARLY 10Be MEASUREMENTS IN MULTIPLE GREENLAND ICE CORES BETWEEN 1939 AND 1994 – A TROUBLING LACK OF CONCORDANCE by W.R. Webber , P.R. Higbie and C.W. Webber for one look at the reasons why. As the title suggests, the 10Be records don’t agree with each other, much less with the sun. Heck, the 10Be records don’t show even a trace of any ~11-year cycles …

      • It is now more and more accepted that the climate [e.g. circulation] has a large influence on the 10Be record from a given site, as large or larger than the solar influence.

      • “Earth is in no great peril from the extra cosmic rays. The planet’s atmosphere and magnetic field combine to form a formidable shield against space radiation, protecting humans on the surface. Indeed, we’ve weathered storms much worse than this. Hundreds of years ago, cosmic ray fluxes were at least 200% higher than they are now. Researchers know this because when cosmic rays hit the atmosphere, they produce an isotope of beryllium, 10Be, which is preserved in polar ice. By examining ice cores, it is possible to estimate cosmic ray fluxes more than a thousand years into the past. Even with the recent surge, cosmic rays today are much weaker than they have been at times in the past millennium.”

        “The space era has so far experienced a time of relatively low cosmic ray activity,” says Mewaldt. “We may now be returning to levels typical of past centuries.”
        http://www.nasa.gov/topics/solarsystem/features/ray_surge.html

    • Thanks for publishing my comment and for the replies. 10Be is a good proxy for Solar according to the data I use, which are the internationally accepted sunspot numbers from SIDC rather than Leif’s version, and Ken McCracken’s 10Be data.

      Leif says: “it seems highly unlikely that the cycle will be as small as Tallbloke predicts, because the magnetic flux is already there.”

      Just to be clear, our model predicts sunspot numbers, not magnetic flux. They are usually closely correlated, but when solar activity is anomalously low, as during the Maunder, Dalton, and Current deep minima, there is likely to be more of a disparity between TSI and sunspot numbers. Time will tell.

      Willis says: Thanks, tallbloke. I hereby request the details, which I assume would include a link to your data and code.

      The model we use is specified in R.J. Salvador’s 2013 PRP paper. If Jeff drops a comment at the talkshop, I’ll email him some data he can test his model with. There is no ‘code’, just an excel spreadsheet and planetary orbital data as specified. I assume Willis can drive excel better than Phil Jones can.

      • the internationally accepted sunspot numbers from SIDC rather than Leif’s version
You are a bit behind the curve, as SIDC has accepted my [and others’] work on this; see http://www.sidc.be/silso/home and the papers
        http://www.leif.org/research/Revisiting-the-Sunspot-Number.pdf [long]
        http://www.leif.org/research/Revision-of-the-Sunspot-Number.pdf [short]
McCracken has also seen the light and has revised his data, as shown on Slide 21 of
        http://www.leif.org/research/The-Waldmeier-Effect.pdf

      • Thanks for the links Leif. I agreed with you that there is a Waldmeier effect some time ago. It doesn’t affect our model-data correlation much in any case.

      • Tallbloke: “10Be is a good proxy for Solar according to the data I use, which are the internationally accepted sunspot numbers from SIDC rather than Leif’s version, and Ken McCracken’s 10Be data.”

From the figure you posted above, it does appear that McCracken 10Be and SIDC sunspot number are related, but there are periods of time in that figure that leave me skeptical about how good a proxy it would be. Shortly after 1800, for example, there’s a 10Be peak that is an SSN trough. Likewise, right before 1900, there’s an SSN peak that corresponds to a 10Be trough. I’m open to being convinced, though. Could you please plot the 10Be record against the SSN record? Likewise, could you please plot the actual SSN and the 10Be-derived SSN on the same graph? Thanks!

Jimmy says: Shortly after 1800, for example, there’s a 10Be peak that is an SSN trough. Likewise right before 1900, there’s an SSN peak that corresponds to a 10Be trough.

        Yes. As I said earlier, the relationship between sunspot numbers and TSI doesn’t seem so good when the Sun is in a deep minimum, as it was at the times you mention. The TSI is what affects the amount of 10Be synthesized in Earth’s atmosphere.

The 10Be curve is inverted to make a solar proxy, as they are inversely correlated. So the ‘peak’ after 1800 is actually less 10Be, not more. This might indicate that the Sun was magnetically very active, with lots of flares blowing cosmic rays away from the inner solar system, even though there were few sunspots at that time. It’s going to be very interesting to watch what happens over the next two decades as the current deep minimum progresses towards its nadir in 2035.

Thanks much for that info, tallbloke. However, I’m still unable to track down the source of the data in your graphic. You label it “McCracken 10Be” in your graphic above, but your link to Salvador’s paper makes no mention of 10Be.

        Following McCracken down the rabbit hole, I find his 2004 paper to be the most cited. It is entitled “A phenomenological study of the long-term cosmic ray modulation, 850–1958 AD”, and is available here. However, in that study he is using a 22-year running average of the data, viz:

        [16] The South Pole data in Figure 1b were derived from 7 to 8 year ice samples [Raisbeck et al., 1990], and as a consequence those original data contained a ±6% pseudonoise due to unresolved 11-year solar cycle variations in the data [McCracken, 2003, 2004], which obscures many of the short- and long-term variations in the data. Sampling theory [e.g., Haddad and Parsons, 1991] shows that averaging three successive samples to yield a ∼22 year average attenuates this pseudonoise to ±1.5%, and this is insignificant compared with other sources of random variability in the data. Each 7–8 year ice sample represents the superposition of the 11 and 22-year variations upon longer-term variations, which may, nevertheless, change significantly from one 7-year sample to the next (e.g., the ∼45% decrease during the interval 1700–1739 in Figures 1 and 4). For this reason the 22-year averages were computed centered on each of the 7–8 year samples (i.e., running means), yielding estimates of the underlying long-term variations at 7–8 year intervals.

YIKES! That method is guaranteed to badly munge the data. You should never use an 11-year or 22-year running mean on sunspot data; it distorts it beyond recognition. And they’ve made it even stranger, using an overlapping 22-year average “centered” on a moving target of a seven- or eight-year original sampling. I can’t even begin to imagine what that does to the data.
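The 11-year-cycle point can be verified in a few lines of arithmetic: a 22-year mean spans exactly two solar cycles, so an idealized 11-year sinusoid averages to zero identically, leaving only the slow background. A sketch with synthetic data (not McCracken's actual series):

```python
import numpy as np

# Idealized demonstration: a 22-year running mean spans two full 11-year
# cycles, so the cycle sums to exactly zero in every window and only the
# slow secular drift survives. Synthetic data, purely illustrative.
years = np.arange(300)
cycle = np.sin(2 * np.pi * years / 11.0)    # idealized 11-yr solar cycle
trend = 0.002 * years                        # slow background drift
signal = cycle + trend

window = 22
smoothed = np.convolve(signal, np.full(window, 1.0 / window), mode="valid")
# smoothed[i] is the mean of signal[i:i+22]; the cycle term vanishes.
```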

But in any case, that is clearly NOT the “McCracken 10Be” data you refer to in your graphic, because a 22-year running average of the data won’t ever show an 11-year cycle. In addition, their Figure 1 shows the Dye 3 and South Pole 10Be data, so at this point I have no clue which 10Be dataset you are using.

        This is why I generally request a link to the data as well as to the study. A link to the actual 10Be data used in your graphic would be much appreciated.

        Regarding the model you linked to, it is a multi-parameter fitted model. The author says:

        This model is simply four interacting waves, but they are modulated to create an infinite possibility for sunspot formation.
        The basic frequencies in years are:
        – a VEJ frequency of 22.14 (varying),
        – a VEJ frequency of 19.528 (varying and forming a beat frequency of 165.5 with 22.14),
        – Jupiter–Saturn synodic frequency of 19.858,
        – one-quarter Uranus orbital frequency equal to 21.005,
– two modulating frequencies of 178.8 and 1253 (forming a beat frequency of 208 yr).
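As a quick arithmetic check on the quoted list (these are periods in years, despite being labelled frequencies): two periods P1 and P2 beat with period 1/|1/P1 − 1/P2|, which reproduces the 165.5-yr figure and comes out near 208.6 yr for the second pair.

```python
# Check the beat periods quoted above: two periods P1, P2 combine with a
# beat period of 1 / |1/P1 - 1/P2|.
def beat_period(p1, p2):
    return 1.0 / abs(1.0 / p1 - 1.0 / p2)

b1 = beat_period(19.528, 22.14)    # ~165.5 yr, matching the quoted value
b2 = beat_period(178.8, 1253.0)    # ~208.6 yr, close to the quoted 208
```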

        Seriously? Even the author says the model can be tweaked to fit anything. Of course he says it in a nicer and more sophisticated manner, he says the model can be “modulated to create an infinite possibility for sunspot formation” …

        He’s tweaked it to fit the historical sunspots … and we are supposed to be impressed by that? And the results are even less impressive.

I’m sorry, but as a sunspot forecasting method (red line), I can only laugh. It was out by more than 50% on the cycle peaking around 2001. But it was just getting started. Then it forecast a teeny, tiny six-year(!) cycle peaking in 2007, which in the event was actually not even a peak but a trough in the sunspots. That’s so bad it’s not even wrong.

        w.

      • Thanks for the reply Tallbloke!

        You said “the relationship between sunspot numbers and TSI doesn’t seem so good when the Sun is in a deep minimum”.

Pardon my ignorance, but how does one go about validating the hypothesis that the relationship between SSN and TSI falls apart during deep minima?

      • “Since the half-life of Be-10 is longer, by many orders of magnitude, than its atmospheric residence time, its only sink is deposition onto the surface of the Earth. As tropospheric Be-10 concentrations are very small compared to stratospheric concentrations, the flux of Be-10 from the troposphere into the stratosphere is negligible compared to the in situ production from GCR. At steady state, the entire middle atmospheric production is balanced by the stratospheric flux of Be-10 into the troposphere where it is removed. The residence time of Be-10 in the middle atmosphere, calculated as the ratio of its burden to its production, is slightly more than 11 months. For the troposphere, there are two sources, the GCR-driven production and the stratospheric flux, and hence the residence time has to be calculated with respect to the total source. The residence time is then about 9 d (compared to 23 d if calculated with respect to only the tropospheric production).”
        http://www.tellusb.net/index.php/tellusb/article/view/28582

Why would cycle 25 be wrong? Perhaps you are thinking of cycle 22. The best projection for cycle 23 at the time was exactly like 22. No one can argue that the long-duration quiet after cycle 22 didn’t catch many by surprise. Further, the current cycle’s activity is half that of cycle 22. Simply comparing the butterfly diagrams, it looks very similar to the decay in the early 1900s.

I would say that modern solar physics isn’t paying enough attention to recorded history. Or long term solar cycles. If the current decline in solar activity is real, it will be very interesting indeed to see if climate is affected. The difference between the actual record and the shrill claims that it is “the hottest year ever” will be the biggest problem.

Because there are a lot of factors that go into solar activity, this current downturn in solar activity was anticipated in the 1970s, when we had that cold spell, based on previous solar activity, which can be reproduced. The current low count and duration of cycle 24 is roughly on time. Nobody expected a direct relationship between temperature and the initial decline in solar activity. The duration of solar quiet, repeated lengths of quiet, the peaks of activity, ocean heat content, lunar and planetary positions (the list goes on) were all thought to be contributing factors in climate.

In any event, global warming is nowhere near as bad as global cooling. We came pretty close to food rationing during the 1970s. And it wasn’t as cold as the next few decades are supposed to get if the solar model stands. We haven’t done a thing to prepare for global cooling. Is this going to be like the projection for solar cycle 23: whoops, we were wrong? Here’s the thing about a major cooling: it doesn’t matter where you are or how much money you have; it’ll be completely random whether you survive or not. Many will not. Poor or no planning ensures it.

      • I would say that modern solar physics isn’t paying enough attention to recorded history. Or long term solar cycles
        And you would be wrong. We care very much about this and the issue is under intense research and scrutiny.

So that you know, I understand the divide. I know the difference between seeing a shape in a cloud or on Mars and definitive science. Yes, that is a dog, and that looks like a dog but isn’t. AGW says I’m seeing the shape of a dog and not the dog itself.

Until proven otherwise, as in a pronounced downturn in solar activity, I’ll stand by the original research that solar activity is a major player in climate. Unfortunately, we don’t have a control earth where we can control the variables. Certainly solar activity is not the only variable. I find it strange that I am willing to say that and AGW is not. Whatever happens politically I have little or no control over. I do think the current path that the mainstream scientific community is pursuing is wrong. The disaster that AGW is claiming from warming will be far worse from cooling. Time will tell.

There was/is a lot of research into solar activity and climate before AGW. It’s all over the place. It was also talked about extensively in the ham radio world. There are a lot of detailed records. The connection between solar activity and climate was fairly evident. Like I said, it depends on whether you are looking at a real dog or one in the clouds, or pyramids on Mars. You’ve seen the canals, right? Or the greening of the Martian NH? More of a green tint, wouldn’t you say? Of course it doesn’t do that anymore, and hasn’t in quite a while. It must be because we have better telescopes, or because we can now photograph it in color.

Time will tell. You tell me: if it starts to get really cold in the next decade, what will happen to AGW? So far none of the predictions put forth by AGW have occurred. Scientifically speaking, if the predictions don’t match the observed result, it’s not a valid theory. It doesn’t work that way in any other science. I can accurately predict the outcome of a chemical reaction. I can accurately predict a moon shot with a slide rule. What do you think the problem is with climate science? I am a reasonable person; if AGW were correct, I would defend it. Over the last 20 years it has fallen down. I’m pretty sure the real temperature has fallen. It is definitely not as warm as it should be.

        I fully anticipate that politically AGW will prevail in this argument.

        I hope that in my lifetime and in the far foreseeable future, that the world does not see another LIA.

I would also like to point out that, in view of poorly understood cooling in the past, there could be other major factors. A dust cloud in orbit around the sun, for instance. Additionally, there is the possibility that the sun expels a quantity of super-cooled gas. Evidently they can extract super-cooled gas from nuclear reactors. I am not ruling out other factors. Solar activity could just be a huge coincidence that coincides with another event. What I am certain of is that without an explanation of the recent past cooling and warming, AGW is a very flawed theory. To make matters worse, it seems that the data is being tampered with. The chain of custody from 1997 to now is suspect. They’ve adjusted the records to indicate that it is warmer now than in 1997. If that’s the case, what was the actual GAT in 1997? I have no way of knowing. They compared temperatures in 1997 based on previously adjusted temperatures, then adjusted them again now. The original records are in a landfill. I don’t understand that. At the physics museum in Princeton, they keep everything they possibly can.

I suspect that somebody knows what is in a deep, dark basement. In my view a lot of this is smoke and mirrors. (I’ve actually seen a show with smoke and mirrors.) If the agenda is to kill a lot of people off, AGW will definitely work.

      • lsvalgaard February 10, 2016 at 8:44 am

        I’ll stand by the original research that solar activity is a major player in climate

        What ‘original research’?

        rishrac February 10, 2016 at 12:56 pm

        There was/is a lot of research into solar activity and climate before AGW. It’s all over the place. It was also talked about extensively in the ham radio world. There are a lot of detailed records. The connection between solar activity and climate was fairly evident. …

        rishrac, when a scientist asks “What original research”, he is asking for links or citations to the research itself. All you’ve done is to repeat in varied forms your claim that there’s original research out there … but where?

        If you could provide a link to whatever it is that you are calling the “original research”, that would move the conversation forwards.

        w.

      • Whatever I put up would be redundant. In fact some of the information here exceeds what was produced in the 1970’s.

As far as predictions of solar cycles go, I printed them off and for a long time had them plastered to the wall, which caused a lot of snide comments (such a denier). I took them down when the prediction for cycle 23 looked just like 22, double peaks and all.
        ” we are questioning your ability to make decisions based on science “…
        ” solar cycles have absolutely no correlation to climate”

Are there 2 scientific communities in this country? Yes, the prediction was for a weaker cycle 24 AFTER cycle 23 failed to reach the level of 22. The year-long quiet period was definitely not even thought of.

The prediction was done following a procedure suggested in our 1978 paper, and a row of successful predictions since 1978. Nothing to do with cycle 23. Since 23 was smaller than 22, and because an even smaller 24 would presage a long cycle 23, this was not a big surprise [to me at least]. You can find a short explanation here: http://www.leif.org/research/swsc130003p.pdf

So you thought in 1978 that the cycles would begin to weaken starting with solar cycle 23? I couldn’t say for certain. It could have been 2 or 3 cycles beyond cycle 23, or who knows how long it would last. However, I still think that a prolonged downturn in solar activity, whether direct or coincidental, has a direct influence on climate. The current downturn is related to several longer-term cycles that ended at roughly the same time, with cycle 22/23. In my mind, it is a huge concern.

TSI may be constant, but an orbiting dust cloud that remained in a geostationary orbit for a while, filtering 10% of sunlight, would make all these ideas useless. Of course it depends on position; if it were behind the earth, reflecting, you’d have a condition of dawn through the night.

Do you remember the great tree-ring debate? Did you know about the paper published in 1976 about isotopic data in tree rings, before, during or after the width of tree rings was determined to correlate with temperature? And I was shocked that Penn State peer-reviewed it (the tree-ring width, not the isotopic data in the tree rings). So was that information material to the debate about global warming or not? The data from the tree-ring isotopes supports the warming and cooling periods, but is in direct opposition to AGW. It also supports the case for solar activity, unless of course the isotopes caused the sun to change. It appears from recent news releases that anything is possible with global warming.

  6. “Gaussian filtration does not suffer from the latency issues of a FIR”
    Really? How do you implement it other than as an FIR? Are you just using smoothts with a ‘g’ flag?

    Any process that smooths to the end must make some assumption of what the future holds. It looks to me as if, as with wden, the assumption is symmetric – ie if it was going up, it is about to turn down.

    • But again, the denoised data was only used as the optimization target. Any endpoint effects would have only minor effects on the best-fit parameters. This is supported by how well the modeled results match the raw data which was not used in the parametric fit.

      • “But again, the denoised data was only used as the optimization target.”

        No, you use it to make direct deductions from the plot. As in:
        “Figure 1b contradicts the assertion of a direct relationship between CO2 and global temperature. Three regions are apparent where temperatures are flat to falling while CO2 concentrations are rising substantially.”

        Two of those regions are at the ends, where you have padded with reflected data forcing zero endpoint trend.

      • Agreed, there’s no magic wand here. This is one of the problems with pre-packaged stats tools: it’s easy to use them without knowing what they do. Anyone programming a filter will know what it does.

        If the Gaussian is running to the end there is a problem. One of the silly things these kinds of packages do is start screwing around with the data without telling you. They “assume” you’d like some padding and reflection because you “need” the result to go to the end … and as we all know the end justifies the means.

        This sort of stuff is written for econometrics where that kind of crap gets a free pass. Not so good in science.

        [Well, then, do the means between the ends justify padding the means of the ends? .mod]

      • “gaussian filtered (r=4, fixed padding)”

        What is a “radius”? This is presumably a 3-sigma low-pass Gaussian; what is sigma?

        Does fixed padding mean duplication of the last data point, i.e. it will end up flat?

      • Yes, this is known as a Gaussian blur in image processing. It produces a misty, soft look. Thanks

        Maybe you could run it with “Padding->None” to eliminate questions of it flattening the ends.

      • Maybe you could run it with “Padding->None”
        No, padding can’t be avoided with a centered filter. That would probably just pad with zeros, which would probably be much worse.

      • Nick Stokes

        Maybe you could run it with “Padding->None”
        No, padding can’t be avoided with a centered filter. That would probably just pad with zeros, which would probably be much worse.

        Of course you can avoid padding, you just don’t do it!! You end up with a shorter time series. R probably won’t like that and will try to silently extend your data, but there is nothing in the maths of FIR filters which requires that to happen.

        If the doc says “Padding->None” I would expect exactly that not “Padding->Zeroes”

        If you can’t trust your software package to do what you ask, do it yourself. Gaussian is just a weighted mean. Anyone with a brain and two working fingers should be able to code that.
        https://climategrog.wordpress.com/2013/12/08/gaussian-low-pass-script/

      • Mike,
        If you want to smooth to the end with a symmetric filter, as done here, you must have padding. If you leave it out, you are using an unsymmetric filter near the end. Not a Gaussian, and not centered.
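The trade-off Nick and Mike are arguing about is easy to make concrete. Below is a minimal Python sketch (an assumed illustration, not the smoothing code actually used in the post): a centered Gaussian FIR either shortens the series when unpadded, or must invent data at the ends, here by reflection, which forces the smooth flat at the endpoints.

```python
import numpy as np

def gaussian_kernel(sigma, truncate=3.0):
    """Symmetric Gaussian weights truncated at +/- truncate*sigma, normalized to sum to 1."""
    radius = int(truncate * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    w = np.exp(-0.5 * (x / sigma) ** 2)
    return w / w.sum()

def gauss_smooth(y, sigma, padding=None):
    """Centered Gaussian FIR. padding=None drops the ends (shorter output);
    padding='reflect' mirrors the data so the smooth runs to the ends."""
    w = gaussian_kernel(sigma)
    r = len(w) // 2
    if padding is None:
        return np.convolve(y, w, mode='valid')      # len(y) - 2*r points
    y_pad = np.pad(y, r, mode=padding)              # e.g. mode='reflect'
    return np.convolve(y_pad, w, mode='valid')      # len(y) points
```

On a pure trend the unpadded version reproduces the interior exactly, while reflection pulls the endpoints toward flatness, which is exactly the "forced zero endpoint trend" objection raised above.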

      • How do you decide which fluctuations are “signal fluctuations”, i.e. actual real signal values that aren’t all the same, and which fluctuations are “noise” i.e. fluctuations in the measured values that are the result of random processes not related to the signal at all ??

        I understand how noise can be removed from repetitive signals by processing multiple repetitions of the same signal, but that doesn’t work for an entirely unknown signal which only occurs once.

        I’m not aware of ANY climate / weather data signal, which ever happens more than once.

        Only in the mathematical world does anything happen more than once.

        G

    • Nick Stokes February 8, 2016 at 4:36 pm

      Any process that smooths to the end must make some assumption of what the future holds. It looks to me as if, as with wden, the assumption is symmetric – ie if it was going up, it is about to turn down

      Thanks, Nick. Actually, it is possible to make an accurate estimate of the end-point uncertainty by examining simulated end-points throughout the body of the data. This allows us to select between competing smoothing algorithms for any given dataset. See my paper here for details of the method.

      Best regards,

      w.
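The method Willis describes can be sketched roughly as follows. This is a toy Python illustration under assumed choices (a simple reflect-padded moving average stands in for whatever smoother is being tested), not the actual procedure from his paper: pretend the series ended at interior points and measure how far the smoother's endpoint lands from the later, full-data smooth.

```python
import numpy as np

def smooth(y, n=11):
    """Simple centered moving average with reflected padding; a stand-in
    for any symmetric smoother that runs to the ends of the data."""
    w = np.ones(n) / n
    r = n // 2
    return np.convolve(np.pad(y, r, mode='reflect'), w, mode='valid')

def endpoint_errors(y, n_trials=50, min_len=40):
    """For interior points, pretend the series ended there: smooth the
    truncated series and record how far its final smoothed value lands
    from the full-series smooth at the same index."""
    full = smooth(y)
    cuts = np.linspace(min_len, len(y) - 1, n_trials).astype(int)
    return np.array([smooth(y[:k + 1])[-1] - full[k] for k in cuts])
```

The spread of these simulated endpoint errors gives an empirical uncertainty for the smoother's most recent value, and lets competing smoothing algorithms be ranked on the same dataset.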

      • This is interesting. It is a linearised model that will describe relatively small variations in temperature, pCO2 etc.
        In reply to whiten, major excursions of climate such as ice ages may be driven by non-linearities, and so this model would not be applicable.

  7. Good analysis. This type of analysis was carried out in the 70s & 80s after the WMO manual on climate change was brought out in 1966. I [IMD, Pune] also followed them, along with Dr. B. Parthasarathy from IITM/Pune. But there we used raw, uncorrupted precipitation data.

    In the present article the analysis was carried out using the global average temperature anomaly which was corrupted-mutilated. Also, the data covers only 20-25% of the globe. The temperature anomaly consists of several components, mostly local-regional in nature. Even the greenhouse effect is not global in nature but it is also local-regional in nature. Prior to 1960s both the temperature and greenhouse gases were measured at very few locations. Thus, with the time the network changed. The greenhouse composition changed with the time.

    Weather is interactive with climate system and general circulation patterns. Sunspot cycles and cycles in global and net [balance] radiation show a clear cut 11 year and its multiple cycles.
    With all these, the fittings show exactly one to one relation. How???

    Dr. S. Jeevananda Reddy

    • Thank you Tony & DR SJR. Try with raw data and back out the CO2 to Temperature tampering.

      And here is the smoking gun of fraud. The adjustments being made correlate almost perfectly to the rise in atmospheric CO2. The data is being tampered with to match greenhouse gas warming theory.

      NASA US temperatures are based on NOAA USHCN (United States Historical Climatology Network) data. The graph below shows the average of their measured temperatures in blue, and the average of their “adjusted” temperatures in red. The entire US cooling trend over the past century is due to data tampering by NOAA and NASA.

      http://realclimatescience.com/2016/01/the-history-of-nasanoaa-temperature-corruption/

      • DD More: thanks. There is a much more effective way of tampering with data than making adjustments. If you are doing an experiment with a group of subjects, you may adjust data, but you can also send home those subjects who do not behave as you wish. They have become invisible.

        GHCN data base. Period 1940-1969: on duty at the start 4266 stations, included 6074, dropped 696. Period 1970-1999: on duty at the start 9644 (of course), included 2918, dropped 9434.

        Have a look at human beings: a new-born baby does not have a life-history. You can select newcomers as to geography but not on the basis of future behavior. Precisely for that reason you will find no inclusion effects in the GHCN base beyond geography. However, you will find an impressive relationship between drop-out risk and life history: the more a station’s time series deviates from the series of neighboring stations, the higher the risk. Thousands of stations did not disappear at random but were dropped by people who knew those histories. It does not matter how sophisticated your models are if you are using this kind of data.

      • I’m interested in your first graph of adjustments versus CO2.

        The Mauna Loa CO2 data started in the IGY of 1957/58 at a value of 315 ppm CO2.

        So where does your data from 295 ppm up to 315 ppm come from?? Or just what CO2 data source are you using, since it clearly isn’t Mauna Loa ??

        G

  8. When the TSI time series is exponentially smoothed and lagged by 37 years,

    why would one do that?

    The exponential treatment is not just a “smoother” for visual convenience, it implies a negative feedback and introduces a lag in its own right (though much smaller than your 37 years).

    Why would SSN drive CO2? I can see logic in temperature driving CO2 (rate of change of CO2 is proportional to temperature), and possibly in hypothesising SST as an exponential response to SSN, but the way it is presented does not seem to be saying that.

    Why does SSN drive CO2?

    • From the post:
      “In a plausible physical interpretation of the system, the dissipative integrator models the ocean heat content which accumulates variations in TSI; warming when it rises above some equilibrium value and cooling when it falls below. As the ocean warms it becomes less soluble to CO2 resulting in out-gassing of CO2 to the atmosphere.”
      I refer to exponential smoothing only as shorthand for the model implemented in figure 4 which includes the integration and feedback you point to.

      • Thanks Jeff.

        What does the power spectrum of the residuals look like? By eye, I’d say there is a strong 60y and 30y component and probably the circa 9y peak seen in figure 6.

      • Jeff Patterson February 8, 2016 at 5:36 pm
        From the post:

        “In a plausible physical interpretation of the system, the dissipative integrator models the ocean heat content which accumulates variations in TSI; warming when it rises above some equilibrium value and cooling when it falls below. As the ocean warms it becomes less soluble to CO2 resulting in out-gassing of CO2 to the atmosphere.”

        Interesting, Jeff.

        Mmmm … that has a bit too much what-if and not enough numbers for my plausibleometer. My immediate concern is that on a global average basis the ocean temperature doesn’t vary much. Even at the surface, where it varies the most, the global average temperature of the ocean only changed by less than one degree C over the 20th century.

        The ice core records indicate an increase in CO2 of about 16 ppmv per degree C of warming. This number is in general agreement with Henry’s Law. With less than a degree of warming, that is on the order of a 10 ppmv thermally driven change in CO2 over an entire century … not enough to explain much of anything.

        Regards,

        w.

      • “The ice core records indicate an increase in CO2 of about 16 ppmv per degree C of warming.”
        Indeed. There is also the problem that there is a perfectly satisfactory reason why there is now about 230 Gtons of extra C in the air – we have burnt about 400 Gtons. And while you might try to argue that TSI driving CO2 from the sea is a better explanation (can’t see how), you then have to explain where the 400 Gtons went, if not into the sea. It’s a lot to hide.

      • “””””…..
        Willis Eschenbach

        February 8, 2016 at 10:22 pm

        Jeff Patterson February 8, 2016 at 5:36 pm
        From the post: …..”””””

        I’m in complete agreement with Willis’s cautionary reticence on this.

        As I have described several times at WUWT, the ocean Temperature gradient versus depth (at least above the thermocline), and the CO2 solubility versus Temperature, results in a CO2 pumping to the ocean depths that keeps the near surface waters depleted of CO2 relative to the surface interface Henry’s Law equilibrium value.

        Consequently “small” surface Temperature increases, such as Willis mentions, do NOT necessarily result in ANY outgassing of CO2. Larger Temperature increases probably do; but small ones simply reduce the amount of depletion that exists at the surface.

        I don’t believe that the ocean/atmosphere interface is in any way involved in a net zero-sum game of CO2 ping pong. It is a net one-way transport of atmospheric CO2 to the ocean depths, from whence it likely never sees the light of day again; well, not in our lifetimes anyway.

        G

    • What does ” exponentially smoothed ” mean, since exponential is just the inverse of
      ” logarithmic “, and both of those are precisely defined and well-known mathematical functions??

      I see flashing red flags, whenever I see either of those words anywhere near a story relating to weather or climate; or almost anything else in the real universe.

      G

  9. “Greenhouse gas theory predicts a linear relationship between the logarithm of CO2 atmospheric concentration and the resultant temperature anomaly. ”

    err NO.

    Theory predicts a linear relationship between the log c02 and the resultant FORCING!!!
    look at the fricking formula

    The temperature is the result of ALL FORCINGS..

    In short. There are many sources of forcings: positive and negative. C02 is one forcing.

    • “Theory predicts a linear relationship between the log c02 and the resultant FORCING!!!”
      Huh? 5.4 log(CO2/C0) _is_ the forcing. The relationship in question is between that and temperature. Are you saying feedbacks explain the significant regions of non-correlation pointed to by Prof. Curry?

      • Jeff Patterson February 8, 2016 at 5:14 pm

        “Theory predicts a linear relationship between the log c02 and the resultant FORCING!!!”

        Huh? 5.4 log(CO2/C0) _is_ the forcing.

        Jeff, Steven’s claim is that in the long-term evolution of the climate, there is a linear (more precisely lagged linear) relationship between the totality of all forcings and the temperature, and NOT between just CO2 forcing and the temperature.

        He is right. The relevant linear formula is not the one you gave, but

        ∆T = λ ∆F

        where ∆ is the “change in” operator, T is temperature, F is forcing, and lambda (λ) is climate sensitivity. And as Steven points out, the F is NOT just CO2 forcing, it is the sum of all the forcings.

        Me, I think both claims are wrong, because at a thermal steady state like the earth is in (± 0.1% change in temperature over the 20th century), changes in temperature are regulated by emergent phenomena, not by forcing … but that’s just me.

        w.
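Willis's ∆T = λ ∆F bookkeeping can be put in a worked example. All numbers below are assumed for illustration (the 5.35 ln(C/C0) simplified CO2 forcing expression and a sensitivity of 0.8 K per W/m², roughly 3 K per doubling); neither Willis nor Steven asserts these particular values.

```python
import math

# Linear(ised) forcing-response bookkeeping: temperature responds to the
# SUM of forcings, not to CO2 forcing alone. Illustrative values only.
lam = 0.8                                  # K per (W/m^2), assumed
dF_co2 = 5.35 * math.log(400.0 / 280.0)    # simplified CO2 forcing, ~1.91 W/m^2
dF_other = -0.5                            # e.g. net aerosol forcing, assumed
dT = lam * (dF_co2 + dF_other)             # response to the total forcing
```

The point of the example: with a plausible negative forcing in the mix, the temperature response is visibly smaller than the CO2-only term would suggest, which is why a scatter of temperature against log CO2 alone need not be linear.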

      • Willis Eschenbach,

        Our present ‘thermal steady state of the 20th century’ was thanks to a more active sun. During the Little Ice Age, a number of astronomers (using the new-fangled ‘telescope’ Galileo invented) thought he was hallucinating when he claimed to have seen ‘spots’ on the pristine sun. Co-incidental to this was the fact that the Little Ice Age was a ‘steady state’ set at a cooler level and these two events were considered ‘normal’.

        Indeed, others were furious with Galileo for saying the sun was besmirched with spots and they wished to never see any spots and when spots returned regularly in the second half of the 19th century, this was disturbing news to everyone who worried about what these spots meant.

        This worry has been translated into hysteria in the 20th and even more so, early 21st century which is sad to me. We have little perspective when it comes to the climate/solar relationship due to having direct access to incoming information only the last few decades.

      • @Willis Eschenbach “The F is NOT just CO2 forcing, it is the sum of all the forcings.”

        Sure, but that’s what figure 5 represents. The final summation is summing forcings. I’ve simply pushed the final gain block that converts forcing to temperature through the summer and lumped it in with G1 and G2. I’ve probably confused everyone with my labeling, but I wanted to show what I simulated, not the physical equivalent. This of course assumes linearity – and the associative law of arithmetic :)

      • EMS,

        To pick a nit, Galileo didn’t invent the telescope, but was among the first to train one of his own devising on the heavens.

      • “””””….. 20th century), changes in temperature are regulated by emergent phenomena, not by forcing … but that’s just me. …..”””””

        Willis is quite wrong on this !

        That is NOT just Willis; it IS ME too !!

        G

    • “Greenhouse gas theory predicts a linear relationship between the logarithm of CO2 atmospheric concentration and the resultant temperature anomaly. “

      Quite untrue. I wish people here would reference stuff. As Mosh says, it’s a linear relation to a forcing. That’s a flux. The temperature response is gradual; relates if anything to the integral of flux.

      • Nick Stokes February 8, 2016 at 6:54 pm

        The temperature response is gradual; relates if anything to the integral of flux.

        Thanks, Nick. According to the climate models, temperature can be modeled very accurately as a lagged linear response to the forcing. Whether this is true of the real world is open to much question. I say it is not true for a regulated system at or near thermal steady state, such as the earth.

        Regards,

        w.

      • Also as to Willis comment above ” February 8, 2016 at 10:44 pm ”

        I’m under the impression relative to the climate models that ALL of them have one fatal problem.

        That is they do not (NONE of them) accurately model the Temperature. Which is no surprise, since none of them is a model of any real planet.

        That’s why I don’t believe ANY of the climate models.

        g

  10. There is no statistically significant signal of an anthropogenic contribution to the residual plotted Figure 3c. Thus the entirety of the observed post-industrial rise in atmospheric CO2 concentration can be directly attributed to the variation in TSI,

    I don’t see the justification for this “attribution”, just because you can fit a curve with arbitrary parameters and a huge arbitrary lag. You have done a curve fit; you have not shown attribution. That would require at least a hand-waving account of partial pressures and some chemistry.

    Not saying it’s wrong but I see no justification for calling this attribution.

    • Not really a curve fit (see the final section on parameterization). The convolutional nature of the time domain response drastically limits the degrees of freedom. One way to look at it is to imagine you were doing the fit in the frequency domain (where convolutions become multiplications) and you have two parameters + a scaling constant to fit the complex spectrum. Good luck!

      • “””””….. You don’t run across r^2=.995 too often in nature. …..”””””

        Seems like a pretty good reason to be skeptical of the conjecture.

        G

      • Jeff Patterson:

        You don’t run across r^2=.995 too often in nature.

        Human emissions do even better: R^2 of 0.9977 (*):

        The difference: human emissions are very plausible as cause and fit all observations, while TSI/temperature/ocean releases violate several observations…

        (*)
        CO2 data before 1959 from ice cores, from 1959 on from Mauna Loa.
        CO2 emissions from fossil fuel sales (taxes!) and burning efficiency.

      • @Jeff Patterson,

        CO2 levels are measured data, with reasonable accuracy: +/- 1.2 ppmv, resolution less than 10 years in ice cores, very accurate +/- 0.2 ppmv since 1959 with daily to hourly resolution.
        CO2 emissions are from sales inventories, which were collected from sales by the financial departments in the past, because of tax revenues. Nowadays specifically from normalized inventories of sales. Accuracy +1/-0.5 ppmv in the past decades, probably more underestimated than overestimated due to human nature to avoid taxes…

        The average remaining CO2 (the “airborne fraction”) is around 50-55%, the rest is absorbed by vegetation, ocean surface and deep oceans. All three main natural sinks/sources are net absorbers for CO2. There is simply no room for any substantial net release from oceans or vegetation.

        Your “source” is far more questionable: data, smoothing, lag,… Just curve fitting, which is mathematically correct but lacks any physical explanation that is confirmed by observations. To the contrary: the observations show that your theory is wrong…

      • @Ferdinand Engelbeen

        See my reply to you above https://wattsupwiththat.com/2016/02/08/a-tsi-driven-solar-climate-model/#comment-2141555. First, the referenced paper finds a much lower correlation (r2=.973) than you show and secondly, provides pretty convincing evidence that the correlation is spurious as it doesn’t survive detrending.

        The process outlined here is standard system identification techniques. Throwing mud does not constitute a valid critique. If you have objections to the method, state them clearly and provide a mathematical basis for your rationale.

      • My criticism is that your model claims very high correlation between two data series (TSI and Temps) that are both wrong. I don’t think that two wrongs makes a right. You can test your model and method on something we [hopefully] could agree is wrong [TSI in reverse time-order]. Please do as this kind of sensitivity testing is an important part of doing science. If I were a referee of your paper [if submitted] I would recommend and insist that such analysis be done. The expected result should be that there is no correlation.

      • @lsvalgaard February 10, 2016 at 9:15 am My criticism is that your model claims very high correlation between two data

        I re-did the analysis with the TSI you claim is right and got similar results, but you know that.

        The bootstrap you suggest is on my todo list. The optimizer takes a day to run and I haven’t had time yet.
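The reversed-driver sensitivity test Leif proposes can be sketched with a toy fit. Everything here is a stand-in (a simple exponential smoother, an arbitrary lag, and an r² of a linear fit), not Jeff's actual optimizer; the point is only the shape of the test.

```python
import numpy as np

def lagged_fit_r2(x, y, lag, smooth_n=11):
    """Toy stand-in for the model fit: exponentially smooth the driver x,
    lag it by `lag` samples, and report r^2 of its correlation with y."""
    alpha = 2.0 / (smooth_n + 1)          # assumed smoothing constant
    s = np.empty(len(x))
    s[0] = x[0]
    for i in range(1, len(x)):
        s[i] = alpha * x[i] + (1.0 - alpha) * s[i - 1]
    r = np.corrcoef(s[:-lag], y[lag:])[0, 1]
    return r * r

# Leif's proposed falsification: refit with the driver reversed in time,
# e.g. compare lagged_fit_r2(tsi, temp, lag=37) against
# lagged_fit_r2(tsi[::-1], temp, lag=37). A physically meaningful
# relationship should collapse under reversal; a flexible curve fit may not.
```

(`tsi` and `temp` above are placeholders for the actual series; the expected result, as Leif says, is that the reversed fit shows no correlation.)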

      • Jeff Patterson:

        First, the referenced paper finds a much lower correlation (r2=.973) than you show and secondly, provides pretty convincing evidence that the correlation is spurious as it doesn’t survive detrending.

        Jamal shows the correlation between the derivatives of CO2 emissions and increase in the atmosphere, I do compare accumulated emissions with accumulation in the atmosphere… The latter is extremely high, much better than with temperature:

        and

        Where a short term temperature change of half the scale has hardly any influence on CO2 levels, but the full trend would give +80 ppmv? Even without any temperature increase in the past 1.5 decades, still increasing CO2?

        The process outlined here is standard system identification techniques.

        One needs to take into account the limits of any method used to identify cause and effect.
        In this case one has two causes of what happens with CO2 in the atmosphere: temperature with a lot of variability and little trend and human emissions with twice the observed trend in the atmosphere and little variability.
        If you choose an identification based on variability, you will simply miss the cause…

        My math is very rusty, but as a practical engineer I have been involved in troubleshooting a lot of chemical processes, mostly finding the cause of trouble much faster by eliminating the impossible causes than by looking at the possible causes…

        Temperature is a small cause at not more than 16 ppmv/°C for the oceans and negative for vegetation. Both are impossible causes for the bulk of the 110 ppmv increase since the LIA…

    • We have documented instances of “curve fitting” if you will or number fidgetation, that has matched well known “data” or values of very accurately known real universe numbers; like as close as theory matching measured data to eight significant figures.

      Those theories were total hogwash; sheer balderdash, with absolutely no information from the real physical universe anywhere in their derivation. So those theories were completely wrong, although received significant acceptance when first published.

      So yes; without justification for attribution, you CAN say it is wrong.

      Getting the right value for the wrong reason, is no better than getting the right value for no reason at all.

      G

  11. Did you try creating a model based on a subset of the data, and see if you get the same 37 year lag? This looks like just curve fitting.

  12. BREAKING NEWS!

    The greenhouse conjecture about CO2 has now been smashed by experiments.

    It has been discovered that 390W/m^2 of solar radiation plus radiation from the colder atmosphere is not enough to explain the mean surface temperature of 288K, because we now know by experiment that the Earth’s surface is not a flat blackbody upon which the Sun shines uniformly night and day from equator to pole, as would be required to get 288K.

    Sadly (for those whose income depends on the old 20th century false assumption by James Hansen) we now realize that the Earth is spherical and thus receives variable flux which, even if it did have a mean of 390W/m^2, would only produce a mean temperature of less than 5°C. This comes from the understanding (never apparently known in climatology circles) that the Stefan-Boltzmann calculation makes temperature proportional only to the fourth root of flux. Hence all the high (well above average) flux in the tropics isn’t pulling its weight, as it doesn’t drag the mean temperature up in proportion to its contribution to the mean flux.

    It’s not hard to understand, except by those with pecuniary interests in not understanding. In fact we shouldn’t add the cold atmospheric “back” radiation, but never mind. You may read about other experiments with centrifugal force and sealed cylinders of gas showing temperature gradients if you search for them.
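The fourth-root point is straightforward to check numerically. The fluxes below are invented round numbers chosen to share a 390 W/m² mean; this is a sketch of the averaging effect only, not an endorsement of the comment's broader claims.

```python
# For the same MEAN absorbed flux, a surface with variable flux has a
# lower mean blackbody temperature than one heated uniformly, because
# T goes as the fourth root of flux. Illustrative numbers only.
SIGMA = 5.67e-8                       # Stefan-Boltzmann constant, W/m^2/K^4

def bb_temp(flux):
    """Blackbody temperature (K) for a given absorbed flux (W/m^2)."""
    return (flux / SIGMA) ** 0.25

t_uniform = bb_temp(390.0)                           # ~288 K
t_variable = 0.5 * (bb_temp(700.0) + bb_temp(80.0))  # same 390 W/m^2 mean flux
# t_variable is ~24 K lower: high-flux regions do not "pull their weight".
```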

    • I really do appreciate Jeff’s hard systems engineering analytic to this problem of CO2 and temperature.

      But Dr Alex, you hit it on the head with, “except by those with pecuniary interests in not understanding”.

      The Free World and science desperately need regime change in Washington DC to sweep out the dishonest “pecuniary interests”.

      Then initiate outside independent data investigations at NASA/GISS, NOAA/NCEI, LLNL, etc, and watch the number of staff resignations, retirements, and 5th Amendment pleadings soar, while computer hard drives crash and get mutilated.

      Then actual science can return to Climate Science practiced at NOAA, NASA, DOE. Then UKMO, Aussie BOM can follow.

  13. Simply run a correlation between temperature and CO2 over the past 50 to 150 years, and run a correlation between CO2 and temperature before that time. My bet is the relationship between CO2 and temperature, both before and after man-made CO2, remains basically non-existent.

  14. Jeff, let me recommend the following:

    The Impact of the Revised Sunspot Record on Solar Irradiance Reconstructions
    G. Kopp, N. Krivova, C.J. Wu

    However, I’d be quite cautious about their conclusions. Their prior reconstruction followed the sunspot cycle very closely, as one would assume given the very high measured modern correlation between TSI and sunspots.

    Their new reconstruction, on the other hand, is not a whole lot different from their previous one … but is quite different from the new sunspot record. I can see nothing in their paper to explain this difference, so it makes me … mmm … well, “inquisitive” is a good word.

    w.

    • Thanks for the link. I’m a bit confused though. The header on the reconstruction data file I used reads

      ; TSI Reconstruction from IPCC AR5 (based on Krivova et al., JGR, 2010, Ball et al., A&A, 2012, & Yeo et al., A&A, 2014)
      ; Offset -0.2529 W/m^2 to match SORCE/TIM absolute value
      ; from Kopp & Lean (GRL, 2011)
      ; Extended using SORCE/TIM annual averages from 2003 onward
      ; Computed by Greg Kopp using TIM V.17 on Mon Jun 22 13:07:38 2015

      This is mostly Greek to me, but it looks like they are using a recent model. Is the one you are pointing to more recent than this?

      • The Svalgaard readjustment of the pre-1947 portion has only recently been accepted by SIDC. He has been pushing it for a number of years.

      • Jeff, that is from 2012. The paper I linked to is from 2015. Krivova et al. used the old sunspot numbers in the study you quote.

        Best regards,

        w.

    • Figure 5 of the Kopp et al. paper:

      shows a very large difference.

      Now, the rest of the paper is a somewhat desperate attempt to ‘save’ their old models and everything [grants, students, etc] depending on them.

  15. BTW, all these FIR filters, exponential “lags”, and spectral analyses REQUIRE and assume a linear system. You cannot apply these techniques to a bastard mix of land air and SST data.

    Land and sea have very different specific heat capacity and can not be meaningfully added or combined in a linear way as is done when taking a global mean.

    https://climategrog.wordpress.com/land-sea-ddt/

    You should probably just use SST .

    • “all REQUIRE and assume a linear system”
      What does that even mean? The filter is just a linear operator applied to a series of numbers. There’s no assumption about “linearity” of the data. Is the signal from a radio station “linear”? Soprano and bass guitar? Is an image “linear”?

      • EM waves, acoustic waves and thermal energy are linear systems, so they can be analysed using linear techniques. Temperatures do not add (unless they represent equivalent volumes of the same medium, in which case they can be taken as a proxy of energy, which is additive).

        Land and sea temps are not additive, and the mean global temp derived from them is already physically meaningless as a quantity in terms of energy budget, which, since this is about TSI causing the warming, is what he’s doing.

        One could *define* climate sensitivity as a dRad vs dT relationship with a linearised T^4 feedback over a small range. But since CS is not the same for land and sea either, that is a fudge as well.

      • The filter is just a linear operator applied to a series of numbers. There’s no assumption about “linearity” of the data.

        The maths does not have any knowledge of the physical system but that does not mean that you can apply it without considering whether it makes sense for the physical system.

        Approximating CO2 forcing as logarithmic, you cannot take the average of ten ppmv concentrations and then take the log to find the ‘average’ forcing. You need to take the logs first and then take the mean.

        If you calculate the frequency spectrum of wind speed and wind speed squared you don’t get the same result , so which is correct? They can’t both be right.
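The log-averaging point above is easy to check numerically. The concentration series below is invented purely for illustration, and the 5.35 ln(C/C0) forcing expression is the usual simplified approximation.

```python
import math

# The order of nonlinear operations matters: averaging the concentrations
# and then taking the log is not the same as averaging the log-forcings.
conc = [300.0, 320.0, 340.0, 360.0, 380.0,
        400.0, 420.0, 440.0, 460.0, 480.0]   # assumed ppmv values
C0 = 280.0                                   # reference concentration

log_of_mean = 5.35 * math.log(sum(conc) / len(conc) / C0)
mean_of_logs = sum(5.35 * math.log(c / C0) for c in conc) / len(conc)
# log_of_mean slightly overstates the mean forcing: log is concave, so by
# Jensen's inequality the log of the mean exceeds the mean of the logs.
```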

  16. The 37y lag is a bit of a cheat unless you can give some clear reason for doing it (rather than a post hoc excuse). It effectively shifts the problematic post-1970 decline in TSI off the end of the temperature record.

    Sort of ‘hide the decline’ trick ;)

  17. Hi Jeff,

    You mentioned the 37 year lag which you used to correlate TSI and CO2, and then you mention various periodicities in your spectral analysis. Do you have any thoughts on what would cause the lag?

    I was looking at the PDO and AMO, as the cyclicity is within the ballpark of both the 37 yr lag and the periodicities that you mention.
    -Sun heats up ocean
    -ocean circulates
    -colder CO2 laden water rises and warms
    -releases CO2 to atmosphere

    Very thought provoking,

    Thank You,

    John R.

    • Yes, I think it is something like that. One additional rank speculation comes from the fact that the dissipation factor models energy that is decoupled from the climate. It seems plausible that subducted equatorial water warmed by UV becomes decoupled because IR is absorbed by the opacity of the water above. It cools as it transits poleward but the heat is transferred to the ocean depths.

      • I have an issue with the 37 year lag too. Unless you can really explain a mechanism that makes it relevant, I think you really just have a random good fit. That is even a non-symmetric fit 2+ solar cycles away. Either that or you found that missing ocean heat. And if you have a full cycle PDO/AMO, that would seem to negate any mechanism? I think you want to focus on why 37 years comes up to point to how this fits to better explain the mechanism.

      • @Mike “Could you explain what you mean by decoupled?”
        Imagine an impulsive increase in the TSI. If a=0, this would result in a step change in forcing at the integrator output. For 0<|a|<1, i.e. a passive, dissipative system, the response is a decaying exponential (see fig. 14). The difference between a step and the real output represents energy that entered the system but did not result in an increase in temperature. That's what I mean by decoupled.
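A minimal discrete-time sketch of this leaky-integrator behaviour (the coefficient and update rule here are my illustrative stand-ins, not the model's actual parameters):

```python
def leaky_integrator(x, a):
    """y[n] = (1 - a) * y[n-1] + x[n]: a = 0 is a pure integrator,
    0 < a < 1 a passive, dissipative system."""
    y, out = 0.0, []
    for xn in x:
        y = (1.0 - a) * y + xn
        out.append(y)
    return out

impulse = [1.0] + [0.0] * 9

step_like = leaky_integrator(impulse, a=0.0)   # holds at 1.0: a step change
decaying = leaky_integrator(impulse, a=0.2)    # decaying exponential response

# Energy that entered the system but did not persist as temperature
decoupled = [s - d for s, d in zip(step_like, decaying)]
```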

      • @Mike,
        The system Greg describes in the paper you linked (thanks) is identical to the leaky integrator used here. In time-domain form it is described by the differential equation dX/dt = -k*X, where k is a constant of proportionality. Its operation is easier to see in the Laplacian domain shown in my figures. The integrator (the 1/s block), having infinite DC gain, holds the mean value of its input at zero at equilibrium. Input perturbations cause a deviation, and the response will be a return to equilibrium at a rate determined by the dissipation (relaxation) time constant. The time constant in Greg’s paper is much shorter than that derived here.

        One of the advantages of using the ACF as the target function is that it is itself a convolution-like operator on the input forcing with the system response. Thus “events” in the data not related to either the input or the system response get pushed into the residual (where they belong :>). This helps to keep the optimization from trying to follow short-time-scale events like Greg is showing. You can see this in the detail plot. The huge temperature spike in 1878 was not TSI related and so ends up in the residual, because it had virtually no impact on the ACF.
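The claim that an isolated spike barely moves the ACF can be illustrated with synthetic data (an AR(1) series standing in for the temperature record; all parameters are mine, purely illustrative):

```python
import random

def acf(x, max_lag):
    """Biased sample autocorrelation function, normalised so acf[0] = 1."""
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    return [sum((x[i] - mu) * (x[i + k] - mu) for i in range(n - k)) / (n * var)
            for k in range(max_lag + 1)]

random.seed(1)
x = [0.0]
for _ in range(999):                       # AR(1) series, a stand-in for the record
    x.append(0.9 * x[-1] + random.gauss(0.0, 0.1))

spiked = list(x)
spiked[500] += 1.0                         # an isolated 1878-style spike

# The spike changes the ACF only slightly: it lands in the residual instead
max_diff = max(abs(a - b) for a, b in zip(acf(x, 10), acf(spiked, 10)))
```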

      • Does it matter at all to anyone, that only 25% of a black body spectrum energy is at wavelengths below the spectrum peak (say 500 nm for the sun (on a wavelength scale)), and only 1% of the BB energy is at less than half the spectrum peak wavelength (250 nm).

        So when you talk of equatorial water heated by UV, you are talking about a small part of a small part of TSI that can even make it through the equatorial atmosphere in the UV, and reach the ocean surface, where it gets absorbed at least as quickly as does the longer wavelength visible spectrum (red).

        Not a lot of deep heating going on by UV in the equatorial oceans. Solar spectrum absorption in ocean water is very well documented and available everywhere. Just look in “The Infrared Handbook”.

        How do equatorial UV heated waters get subducted ?

        g

  18. BTW. You have missed one number from your parameter list: the SSN average you subtracted. This is in fact just an additional constant. That makes it a 9 parameter model. None of which seem to be empirically based.

    I suspect you could do as well with three harmonic functions (requiring three params each); periods would likely be 9.1, 22, and 60 years.

  19. but are in direct contradiction….

    I’ve always found it odd…..that when you plot a CO2 graph over a temp graph
    …if you shift the CO2 graph to the right…you can easily show that CO2 follows temps

  20. And the Lunar climate Model!

    Oh well. Our NASA, NOAA, East Anglia, Met Office et al. climate models ignore the Moon!

    Thanks to the Moon we have Plate Tectonics!

    Ha ha [pruned] Climate Geneses !

    Ha ha

    [Cut out that kind of language. .mod]

  21. I’ve been thinking so hard about the post here that my fork stuck to my forehead and stayed there until I yanked it off. Electromagnetism is a wonderful thing.

      • Sometimes I wonder if the ancients knew a lot more than we usually give them credit for concerning their gods and their descriptions of them. Our ignorance of our ignorance is probably, one day, going to be a reason for a big laugh for our descendants… Hopefully.

  22. OT
    Recently I have noticed an odd pattern, via Weather Underground. It is very cold for the tropics right now. The lows in real time are in the low 50s °F, the highs in the mid-70s, and this is reported as such in real time. It also corresponds to the thermometer. However, when reported in the almanac, each low and high is given as 3 °F higher, in spite of the media's earlier contemporaneous reporting of the first set.
    Just thought I would mention it.

    • This ultimate corruption of science–cooked book “observations”–has been noted here before. “Adjusting” the “data” after being reported just wasn’t working well enough to support “climate” Lysenkoism. I hope that someone is keeping clean books somewhere, for the benefit of future generations of real meteorologists, climatologists and atmospheric scientists.

  23. So here’s a little high school physics lesson.

    The typical atmospheric heat balance is figured in watts per square meter, W/m^2. Technically this is not a heat balance, but a power flux balance. A watt is a power unit, not energy, i.e. 3.412 Btu/h. So using power flux really confuses the issue. Is that power flux spread out over a spherical surface or a circular surface? And over 24 hours for a complete rotation? Does it consider day time only when the sun is shining or does it consider the negative outward power flux during the night or is a total net? Pretty confusing, huh, but that’s their point.

    What counts is the energy input to the system. Energy is Btu or kilo Joules. Energy, heat, work same-same.
    So let’s assume a spherical surface and 24 hours. 340 W/m^2 ToA (top of atmosphere) equals 1.43E19 Btu of energy. CO2’s 2 W/m^2 of RF equals 8.39E16 Btu of energy. That’s 0.6% of ToA. Third or fourth decimal point, lost in the magnitudes and uncertainties of the major fluxes.
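The quoted figures can be reproduced to within rounding. A quick sketch, using standard values for Earth's radius and the Btu conversion (my inputs, not from the comment):

```python
import math

R_EARTH = 6.371e6       # m, mean Earth radius (standard value)
J_PER_BTU = 1055.06     # J per Btu (standard value)

area = 4 * math.pi * R_EARTH ** 2          # full spherical surface, m^2
day = 24 * 3600                            # seconds in 24 hours

toa_btu = 340.0 * area * day / J_PER_BTU   # energy at 340 W/m^2 for one day
co2_btu = 2.0 * area * day / J_PER_BTU     # energy at 2 W/m^2 RF for one day
fraction = co2_btu / toa_btu               # = 2/340, about 0.6 %
```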

    The heat capacity of air is 0.24 Btu/lb-°F. That means that 1 Btu will raise 0.24 pound of air 1 °F. Air is a terrible heat transfer medium.

    The heat capacity of water is 1.0 Btu/lb-°F. That means that 1 Btu will raise 1 pound of water about 1 °F. Water does a much better job of moving heat, which is why it is so popular in industrial applications.

    You have seen those pictures of the water vapor clouds over power plants. Water is being used to condense the steam turbine exhaust and release it to the atmosphere. Why water? Because evaporating a pound of water absorbs 1,000 Btu. Wow!!!!! This is how evaporative swamp coolers so popular in the southwest cool the air.

    IPCC says the added radiative forcing caused by the CO2 increase between 1750 and 2011 is about 2.0 W/m^2. The same IPCC AR5 report says clouds have an RF of -20 W/m^2. That’s ten times as much cooling as CO2’s 261 years of heating. And by IPCC’s own admission in AR5 TS.6, they don’t really understand how the water vapor cycle works.

    Bottom line:
    1) Anthropogenic CO2 is trivial
    2) CO2’s RF is trivial
    3) IPCC’s GCMs can’t model a system as complex as the atmosphere.

    • IPCC’s GCMs can’t model anything because we don’t understand evaporation, convection, condensation or precipitation well enough to model it. end of.

      The rest is just: “if you can’t dazzle ’em with science, baffle them with BS”.

    • “The heat capacity of air is 0.24 Btu/lb-°F. That means that 1 Btu will raise 0.24 pound of air 1 °F. Air is a terrible heat transfer medium.” Nearly right: 1 Btu will raise a pound of air 4.167 °F. So much for High School Physics!

      • Wait, wait! That’s not correct either. 1 Btu isn’t going to raise more pounds to a higher temp.

        0.24 Btu/lb-F

        1 pound increases 0.24 F OR 0.24 pound increases 1 F

        OK, I think that’s correct now.

      • What high school [did] you go to, and when, that [used] Rod, Stone, Fortnight system of units ??

      • Ok, ok, ok.

        Sometimes those of us who wander are actually, really lost. I know how to do this if not explain it well.
        Air heat capacity=0.24 Btu/(lb-℉)
        1.0 Btu*(lb-℉)/(0.24 Btu)
        4.167 (lb- ℉)
        With an energy input of 1.0 Btu, 1.0 lb of air will have a temperature increase of 4.167 °F.
        With an energy input of 1.0 Btu, 2.0 lb of air will have a temperature increase of 2.084 °F.
        Etc.
        With an energy input of 1.0 Btu, 4.167 lb of air will have a temperature increase of 1.0 °F.
        With an energy input of 1.0 Btu, 2.084 lb of air will have a temperature increase of 2.0 °F.
        Etc.
        Consider 1.0 watt or 3.412 Btu/h.
        One watt will raise 1.0 lb of air 14.22 °F (3.412*4.167) in one hour.
        One watt will raise 14.22 lb of air 1.0 °F in one hour.
        So, up for more R&C.
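The arithmetic above all reduces to ΔT = Q/(m·c). A minimal sketch reproducing the quoted numbers:

```python
def delta_t(q_btu, mass_lb, cp):
    """Temperature rise in °F from q_btu of energy: dT = Q / (m * c)."""
    return q_btu / (mass_lb * cp)

CP_AIR = 0.24      # Btu/(lb-°F)
CP_WATER = 1.0     # Btu/(lb-°F)

a = delta_t(1.0, 1.0, CP_AIR)      # 1 Btu into 1 lb of air: ~4.167 °F
b = delta_t(1.0, 4.167, CP_AIR)    # 1 Btu into 4.167 lb of air: ~1.0 °F
c = delta_t(3.412, 1.0, CP_AIR)    # one watt for one hour into 1 lb: ~14.22 °F
w = delta_t(1.0, 1.0, CP_WATER)    # 1 Btu into 1 lb of water: 1.0 °F
```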

  24. Leif’s Law: Relative Sunspot Number=Delta East Component=SQRT Solar EUV=SQRT F10.7 flux.

    Water is well known to respond to EUV and Microwaves. Not so much to visible light:

    The correlation between CO2

    • The EUV is on the RIGHT side of the red graph and can be seen to vanish as the wavenumber goes up. The energy in the microwaves is completely negligible. The total accumulated energy of all the radio waves observed by all our instruments and telescopes since the beginning of radio astronomy in the 1930s is less than the kinetic energy of a single snowflake falling to the ground.

      • Right on Dr. S. We have a veritable avalanche of radio noise. Do solar Physicists wear hard hats in the lab to protect against injury from incoming radio photons ??

        G

        I guess the coefficient of absorption in sea water doesn’t have any units. cm^-1 doesn’t work. Maybe it’s m^-1.

        Also Radiant Intensity is W/sr, so graph is NOT “Surface Solar Intensity”.

        Also the units they give are units of ” spectral irradiance “; not units of irradiance, or intensity. And they are bastard units to boot, since they give mW/m^2/nm (wavelength) rather than mW/m^2/wavenumber. If they want to use /nm of wavelength for the spectral increment, they should use wavelength for the horizontal axis, and NOT wave number.
        That sea water absorption peak at 3 microns wavelength gives a 1/e penetration depth of less than 1.2 microns (water depth).

        Otherwise water absorption graph not too bad. Solar spectrum all garbled. Not a lot of nm of spectrum out there in the UV.

  25. and TSI follows from the time derivative per Murry Salby. It explains nothing more than ocean outgassing per Henry’s law and whatever increment of biological respiration.

    The logarithmic diminution is approximate and pertains to a wide variety of transition intensity. Much depends on how far we have already progressed in the overall logarithmic curve.

    You tell me…

    • Thanx Gymno for that CO2 rattlechart. Completely wonderful, although I am going to have to read up on what all exactly all that racket consists of.

      Never seen that picture before . Not much music from symmetric stretch

      G

    • Gymnosperm:

      TSI follows from the time derivative per Murry Salby. It explains nothing more than ocean outgassing per Henry’s law and whatever increment of biological respiration.

      ???
      As far as I remember, Dr. Salby integrated temperature to calculate the increase in CO2. But a temperature step increase, per Henry’s law, only increases CO2 asymptotically to a new value; the integral is not against an arbitrary baseline, it is towards a new level at about 16 ppmv/°C, and the rate of increase decreases over time…
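Ferdinand's point, that a temperature step moves CO2 asymptotically toward a new equilibrium rather than integrating without bound, can be sketched with a simple relaxation model (the 16 ppmv/°C figure is from the comment; the time constant is my illustrative assumption):

```python
import math

K_HENRY = 16.0    # ppmv per °C, the figure quoted above
TAU = 10.0        # relaxation time in years -- my illustrative assumption

def excess_co2(dT, t):
    """Excess CO2 (ppmv) t years after a step temperature increase of dT °C."""
    return K_HENRY * dT * (1.0 - math.exp(-t / TAU))

early = excess_co2(1.0, 5.0)     # part-way toward the new equilibrium
late = excess_co2(1.0, 100.0)    # essentially at the 16 ppmv ceiling
```

The increase rate decays over time, exactly as the comment describes; it never integrates without limit.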

      • Thanks Ferdinand, I was thinking TSI in that context could be the driver of the ocean warming that causes the well known time dependency of CO2 in the ice and benthic cores.
        Leif has kindly pointed out that the EUV and radio fluxes are “snowflakes” and my own graphic shows the otherwise very poor correspondence between the surface solar spectrum intensity and the absorptive properties of liquid water. Likely my idea is wrong.

        My general sense is that the physical basis of Murry Salby’s CO2 rate-of-change work is the same as the ice core temperature dependency…

  26. Jeff, here’s a précis of the problem that I have with the Krivova TSI reconstruction. They use an entirely simulation-based method for estimating the long-term change in TSI, what they call the “background component”, which is responsible for their claimed increase in TSI over time. The problem is that people seem to be unaware of the bolded part:

    According to simulations of eruption, transport, and accumulation of magnetic flux on the Sun’s surface since 1617 using NRL’s magnetic flux transport model including variable meridional flow, a small accumulation of total magnetic flux and possibly the rate of emergence of small, magnetic bipolar (“ephemeral”) regions on the quiet Sun can produce a net increase in facular brightness (Wang, Lean, and Sheeley, 2005). The resulting modeled increase in TSI from the Maunder Minimum to the present-day quiet Sun is about 0.04 % (see estimates by Lean et al., 2005). Since this background component is speculative, the associated uncertainty in the reconstructed TSI on these time scales is equal to the magnitude of the adopted background component itself.

    Ibid.

    In other words, the authors clearly state that their claimed “background component”, meaning the trend in the reconstruction, is only “speculative”, and they say it might actually be zero.

    Apart from that, what they are measuring is a residual accumulation in modeled flow, what they call “a small accumulation of total magnetic flux”. This is trouble. Of all parts of a model, the residual accumulations are the least trustworthy—they can easily result from some tiny overlooked factor. This is particularly true since the net change in TSI is only 0.04% … which would mean that their model would have to be accurate to a few parts in ten thousand. Doubtful.

    As a result, I place little to no weight on their claimed TSI reconstruction. Hey, I might be wrong, or Leif might not agree, but that’s how I read the tea leaves.

    Regards, and again, thanks for all of your work on the question.

    w.

    • Furthermore, the assumed background is just proportional to the 11-year running mean of the sunspot number so if there is no long-term upward trend in sunspots, there will be no upward trend in the background, regardless of their speculation.

  27. Jeff: Interesting analysis. However, there is one big problem. The units on the vertical axis of Figures 1 and 2 are not arbitrary – they can’t be multiplied by a “scaling” factor. Climate is a physics problem – conservation of energy – not signal processing.

    For TSI, the units are W/m2 – an energy flux. For GHGs, your units are the logarithm of the change in CO2. Calculations based on absorption coefficients measured in the laboratory and applied to the atmosphere indicate that each doubling of CO2 is equivalent to an inward flux increase of about 3.7 W/m2. Furthermore, that downward flux increase is applied to the entire surface of the planet (4*Pi*r^2), whereas the earth intercepts only Pi*r^2 of TSI. So, when changes in TSI are converted into global forcing, they must be divided by 4.

    You show a 2 W/m2 change in TSI since the Dalton minimum, which is a forcing of +0.5 W/m2. We’ve seen a rise in CO2 equivalent to 2.3 W/m2; more than 3 W/m2 for all GHGs, but offset to some extent by aerosols. Conservation of energy demands that W/m2 of increased inward SWR from the sun and W/m2 of reduced outward LWR be treated equivalently. In that case, the warming effect from changes in GHGs far dominates the warming effect from the change in TSI, especially in the second half of the 20th century.
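The geometric conversion described above, intercepted disc versus radiating sphere, is a factor of 4. A minimal sketch (the optional albedo term is my addition, not part of the comment's bare divide-by-4):

```python
def tsi_to_forcing(d_tsi, albedo=0.0):
    """Convert a TSI change (W/m^2) to a global-mean forcing: the Earth
    intercepts pi*r^2 of sunlight but the flux is averaged over 4*pi*r^2.
    albedo is an optional extra; the comment's bare conversion uses 0."""
    return d_tsi * (1.0 - albedo) / 4.0

bare = tsi_to_forcing(2.0)                      # 0.5 W/m^2, as in the comment
with_albedo = tsi_to_forcing(2.0, albedo=0.3)   # ~0.35 W/m^2 if reflection counts
```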

    If the planet behaved like a blackbody, heat capacity (traditionally J/K, here J/m2/K) is the final conversion factor needed to calculate warming from forcing (energy fluxes). Heat capacity depends on how deep heat is convected into the ocean. Seasonal warming and cooling penetrate roughly the top 50 meters, making temperature respond over months as if a simple 30 m mixed layer were present. For longer periods, there is no simple way to account for heat transfer deeper into the ocean. That is why AOGCMs are needed.

    Since feedbacks exist, one needs an additional factor, climate sensitivity. However, feedbacks arise from physics; they don’t have arbitrary values either.

    The Pause and other fluctuations require no explanation, because GMST is subject to deterministic chaos or internal/unforced variability. Chaotic fluctuations in ocean currents that exchange heat between the ocean surface and the cooler deep ocean produce changes in GMST without forcing from the outside. ENSO is one of these fluctuations. See a short clear article by Lorenz: “Chaos, Spontaneous Climatic Variation, and Detection of the Greenhouse Effect.”

    http://www.sciencedirect.com/science/article/pii/B9780444883513500350

    • Frank – I agree that changes in the direct energy flux from TSI are outweighed by the direct energy flux from GHGs. But direct energy flux is not necessarily the whole equation. To arrive at their very much higher ECS (Equilibrium Climate Sensitivity) than can be explained by GHG energy flux, the IPCC et al tap into some rather spurious “feedbacks”, IOW indirect effects. But the possibility of solar indirect effects (other than from the energy flux itself) were too nonchalantly dismissed. I think that there is a lot more solar influence yet to emerge from the woodwork (or from wherever).

      • I suspect that the small variation in UV, which creates an order of magnitude change in depth of the ionosphere, can result in other changes. Despite the tenuous nature of the ionosphere, it can’t be penetrated by a photon without a collision.

    • @Frank – Thanks for the post. Way back in college I learned that power density integrated over surface area and over time (by accumulation in the ocean) gives Joules. It’s a big area over a long time, so small increments add up. If that energy is transported to cooler climes and re-coupled to the surface, it seems plausible that it could have the small effect on temperature we’re talking about.

      The model/observation match, the alignment of breakpoints and slopes, the nearly identical ACF all seem compelling. As a systems guy, Figure 12 is most startling (and yet no one has commented on it). How can it be accidental that the residual just happens to match the second derivative of the raw data? It is very curious, and if it were me, I’d want to know the whys and hows. Likewise the lags involved. It is amazing how quickly the correlation falls apart as the lag is moved from the optimum value. Likewise the dissipation factor. I’ve explored the optimization surface and it’s smooth as a bowl, unlike problems I’ve worked on where many local minima can fool you.

      I’ll certainly defer to the experts here on climate dynamics as it’s not my area of expertise. It seems to me though that ignoring the cumulative effect when looking at TSI sensitivity is straining gnats and swallowing camels.

      Cheers,
      JP
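The accumulation argument, power density × area × time giving Joules, can be sketched with round illustrative numbers (none of these values come from the model itself):

```python
OCEAN_AREA = 3.6e14              # m^2, rough global ocean area
SECONDS_PER_YEAR = 365.25 * 86400

flux = 0.5                       # W/m^2, a small sustained imbalance (illustrative)
years = 10

joules = flux * OCEAN_AREA * years * SECONDS_PER_YEAR   # ~5.7e22 J accumulated

# Spread through, say, the top 700 m of ocean, the temperature change is tiny:
mass = OCEAN_AREA * 700 * 1000   # kg, with seawater density ~1000 kg/m^3
dT = joules / (mass * 4000)      # K, with c_p ~4000 J/(kg K)
```

A small flux accumulates to an enormous number of Joules, yet diluted through depth it yields only hundredths of a degree, which is the sense in which the heat can be "decoupled" until re-surfaced.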

      • Jeff Patterson – “How can it be accidental that the residual just happens to match the second derivative of the raw data?” The answer is as given by Mike [a different Mike!] in comment https://wattsupwiththat.com/2016/02/08/a-tsi-driven-solar-climate-model/#comment-2140359 : “The ‘wiggle’ is on top of a steady rate of change which could arguably be anthropogenic”. The CO2 rate of change is steady enough over multi-year periods that it doesn’t show up in the second derivative. Believe me, I have done a fair amount of work on this since seeing Frank Lansner’s first graph, and although it may seem counter-intuitive, the very obvious effect at an annual level really does have little impact at a multi-year level.

      • Formatting stuffed up, end italics after “raw data?”, start again before “The ‘Wiggle'”.

        george e. smith – point taken but it doesn’t alter the argument.

      • @Mike Jonas “The CO2 rate of change is steady enough over multi-year periods that it doesn’t show up in the second derivative. ”

        You have it backwards. The second derivative _does_ show up (and to a scaling constant matches the residual). That’s the point.

      • Sorry, I meant first derivative (or just the data for that matter). Obviously it has to actually be there, in order to be seen in the second derivative, but much larger stuff is removed as you go from first to second, making the wiggles nicely visible. I suggest you test it, to see if it makes sense. Here’s the graph for CO2, and again the graph for delta CO2 – it isn’t at all easy to see in the CO2 what is easily seen in the delta CO2:

      • @Mike Jonas – You’re still not following me. Figure 12 _predicts_ the following modification to the model would almost cancel the residual shown in figure 10.

        I’ve been a bit more explicit in the model used, showing the common gain block G that converts forcing to temperature. The s^2 block in the feedback path is the Laplacian form of the second derivative operation.

        Adding this block and adjusting K would reduce the residual error to the difference between the two plots of figure 12. When a completely empirical model reveals clues like this about the internal dynamics of the system, it’s usually a very good sign that you’re on the right track.

  28. The Greenhouse Theory predicts junk! One look at areas of high humidity against comparable areas with low humidity should have seen off this pseudo science decades ago. Observation of the planet Venus or the moon Titan should also have relegated climatologists to the Astrology level of respect!

    • Talking of Astrology, they also have plenty of charts and wonderful levels of detail and calculations to explain why you have a mole on your left nipple. Which is how I view all of this post and the discussion in the comments following it.

    • “The Greenhouse Theory predicts junk! One look at areas of high humidity against comparable areas with low humidity should have seen off this pseudo science decades ago. ”

      I suggest you study the hydrological cycle …. and basic meteorology would help too.

  29. Since I’ve been working on essentially the same ideas presented here, and constructed a working solar supersensitivity accumulation model, named for David Stockwell’s basic ideas, all the issues here are very familiar. As a climate empiricist and an electronics system designer, I can appreciate Jeff’s formal systems approach, and most definitely agree with his explanation for what happened with modern warming being solar driven through heat accumulation in the ocean.

    Reality check. What caused last year’s high temps? You say El Nino, I say TSI (they are connected). TSI peaked a year ago, and was the highest last year since the peak of solar cycle #23, which predated the SORCE TSI data, http://lasp.colorado.edu/data/sorce/tsi_data/daily/sorce_tsi_L3_c24h_latest.txt

    Year TSI
    2015 1361.4321
    2014 1361.3966
    2013 1361.3587
    2016 1361.2829
    2012 1361.2413
    2011 1361.0752
    2003 1361.0292
    2004 1360.9192
    2010 1360.8027
    2005 1360.7518
    2006 1360.6735
    2007 1360.5710
    2009 1360.5565
    2008 1360.5382

    Using Leif’s TSI equation from above, the calculated TSI values differ from the actual yearly TSI shown above by considerably more than I consider acceptable. At the highest actual TSI values, the calculated TSI results are off by -30% to +20% of the total TSI variation between the min and max values of the above list. For last year, Leif’s model was off by -30%.

    20-30% is far too much error when considering an accumulation model, where errors are cumulative! That model is too simple IMHO.

    Another thing, if we use http://www.sidc.be/silso/DATA/SN_y_tot_V2.0.txt, the annual sunspot number for last year was far less than it was in 2014, 69.7 to 113.3, but TSI was higher in 2015! Go figure.

    2003 1361.0291 99.3
    2004 1360.9192 65.3
    2005 1360.7518 45.8
    2006 1360.6734 24.7
    2007 1360.5709 12.6
    2008 1360.5382 4.2
    2009 1360.5564 4.8
    2010 1360.8026 24.9
    2011 1361.0752 80.8
    2012 1361.2413 84.5
    2013 1361.3587 94
    2014 1361.3966 113.3
    2015 1361.4320 69.7

    I’m also not so sure TSI was as high during the late 1940’s and late 1950’s as either the Kopp or Svalgaard reconstructions indicate, unless there’s solar IMF and/or terrestrial GMF data that support it.

    The reason for this question stems from the very clear SORCE TSI vs F10.7cm flux relationship since Feb 2003, where all days with F10.7cm observed flux above 165 sfu averaged TSI of 1361.1000 or less, and there were no TSI days with values over 1361.100 on F10.7cm flux days over 185 sfu, with progressively higher values of F10.7cm flux correlating to ever lower TSI.

    The years 1947-49 and 1956-1960 included eight years of high F10.7cm flux near or above those very levels, which, if correlated to post-2003 SORCE TSI, would imply TSI as low as 1359.5 for all values of F10.7 above 205 sfu.

    Did we have eight years of really low TSI or really high TSI during these years with high average F10.7?
    ftp://ftp.geolab.nrcan.gc.ca/data/solar_flux/monthly_averages/solflux_monthly_average.txt

    Year F10.7cm
    1947 215
    1948 174
    1949 177

    1956 183
    1957 232
    1958 232
    1959 210
    1960 162

    If there is any solar or geomagnetic data that shows those years were in fact years with higher MF, IMF, or GMF levels than anytime during 2003-2015, then I’d say high TSI. And if not? Low TSI. Next stop, geomagnetic data.

    • The entire reason the Kitt Peak Solar Observatory was built was to finally get regular, sun-specific data focused on sunspot activity. So before the 1960s, sunspot observation was spotty. This is why we can’t do perfect cause-and-effect studies: direct observation was lacking, and before the proper satellites were set in motion we couldn’t see the sun at all at night, of course, so sunspot numbers depended on other observatories on the other side of the planet being alert during nights at Kitt Peak.

      • emsnews, please listen to what Leif (lsvalgaard) has to say. Like your Kitt Peak claim, many of your claims about sunspots are simply incorrect. As Leif pointed out elsewhere in this thread, sunspots are studied today with the same type of instrument, and in some cases the very same instruments, used nearly two centuries ago, specifically so that the counts can be compared.

        w.

        PS—lsvalgaard is Leif Svalgaard. He is one of the few participants here to have a scientific effect named after him, in his case regarding the sun’s effect on the earth. Just sayin’ … he knows his stuff, has over 250 research papers, I’ve learned heaps from him, and the opportunity is there for you to do the same.

    • TSI in solar cycle 24 is indeed anomalous. This is something we are actively investigating at the moment. Here is the evidence for that:

      We compare five TSI series (ACRIM3, SORCE, PMOD, RMIB, TCTE), adjusted to match SORCE up through 2008 [necessary because there are small systematic differences between them], with five solar indices (new sunspot number SN, sunspot areas SA, group number GN, Magnesium II UV MGII, F10.7 flux) scaled to the SN scale and matched to their cycle 23 values. As you can see, everything matches in SC23, but the TSIs are too high in SC24. We believe this is correct, and not just problems with the data. The Sun may be telling us that we are entering a new regime.

      • For me, this is the focus of my interest. Global climate will do what it will do, being far less predictive and dare I say, understood, than the Sun. TSI has me sitting on the edge of my seat with hot popcorn at the ready.

      • So if the “hottest years on record” (not counting 2015) are 2014, 2010, 2013, 2005 and 2009 then certainly it is not TSI or SSN who done it.

      • If TSI diverged to the upside of SN in a scant 24-year period, then we must conclude that SN cannot always be an accurate historic proxy for TSI all of the time. Or to put it another way, it would be foolish to assume we were fortunate enough to live in the only period where the two diverged. Is it not also logical to conclude that the TSI could possibly diverge to the downside of SN?

        Does this not, at the very least, open the door to the possibility that the “it’s the Sun, stupid” crowd may be on to something, inasmuch as we now know TSI has diverged in an era when superior instrumentation was available? Of course, owing to the uncertainties of reconstructions, would you not also agree that even your improved pre-19th-century reconstructions are likely to be less accurate than contemporary observations?

        Therefore, owing to the uncertainties of historic reconstructions coupled with the contemporary divergence, we cannot rule out the possibility that the MWP and LIA were caused by shifts in TSI that cannot be gauged with historic reconstructions.

      • Indeed, but we can only go with the data we have.
        One could also speculate that during the little ice age [Maunder Minimum] TSI was much higher than today, since there were no darker spots to drag it down.
        The reason we believe the reconstruction is good [at least back to the 1740s] is that the EUV shows that it is.

      • lsvalgaard February 9, 2016 at 9:35 am

        “Well, weather is not climate…”

        Understood but isn’t that the point? The 20 years covered by the graph show TSI doing what it does within a very small range yet in that time frame and variance you either have the hottest years or the pause, which ever side of the fence one sits on. It doesn’t appear that either position is affected by TSI.

      • Fascinating, Leif, thanks for posting that. Shows both how much and how little we know, at the same time.

        Regards,

        w.

      • Dr. S, 2 questions:
        TSI as graphed refers to Total Solar Irradiance?
        Do you have data that show Total Spectral Irradiance and any variation there of?
        3 for the price of 2: Does it matter?

      • The Spectral Irradiance is much harder to get and there is still a lot of debate [most of it useless] about whose observations are the ‘best’ or even close to the truth. The EUV back to 1740s shows that the EUV just follows the sunspot Group Number, and it would then be a stretch to claim that the spectral outside of EUV behave otherwise. But if one is grasping for straws, perhaps there are some straws [or even a straw man] to be had here.

  30. A system that exhibits this characteristic is said to be cyclostationary. Despite the nomenclature, a cyclostationary process is not stationary, even in the weak sense.

    Whenever I have seen the term cyclostationary used, it has been used to indicate a system whose statistical properties are not constant (as in stationary) but which vary cyclically with time. A stationary noise process modulated by a sinusoid is a simple example.
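The example given, stationary noise modulated by a sinusoid, is easy to construct; a minimal sketch:

```python
import math
import random

random.seed(0)
N, period = 5000, 50

# Stationary white noise modulated by a sinusoid: the instantaneous variance
# varies periodically with time, so the process is cyclostationary,
# but it is not stationary even in the weak sense.
x = [random.gauss(0.0, 1.0) * math.sin(2 * math.pi * n / period) for n in range(N)]

# Sample variance near the modulator's peaks vs. at its zero crossings:
peaks = [x[n] ** 2 for n in range(12, N, period)]
nulls = [x[n] ** 2 for n in range(0, N, period)]
var_peak = sum(peaks) / len(peaks)   # close to 1
var_null = sum(nulls) / len(nulls)   # essentially 0
```

The time-varying variance is exactly the "statistical properties vary cyclically" behaviour described above.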

  31. Long-term climate natural variability can be judged against a number of proxies.
    CET is the longest reasonable-quality temperature record. Some may argue that the CET is only a regional set of data. In my view that is a far better starting point than either the too-short and questionable so-called ‘global data’ or any quasi-data obtained from various much longer periods of unreliable proxies.

    From the above two graphs, those genuinely interested in the causes of natural variability can make a number of valuable observations; there is no need for my biased views to attempt to sway you in any direction.
    I have finally decided to write a long due article with all information regarding N. Atlantic tectonics, so it can be fully scrutinised.

    • Great news Vuk’. Can I just suggest that you get a decent LP filter if you are still using running averages. They can invert peaks and troughs and will likely decorrelate any correlation that is there.
      https://climategrog.wordpress.com/2013/12/08/gaussian-low-pass-script/
      If you are not already aware of the problem I suggest you have a look here:
      https://climategrog.wordpress.com/2013/05/19/triple-running-mean-filters/

      Look at what the classic 13mo RM used by SIDC does to the timing of peaks in the current cycle:
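      The peak-inversion problem with running means can be shown in a few lines (a hedged sketch using NumPy; the 8-month test signal and the Gaussian width are illustrative choices, not taken from the linked scripts): the 13-month boxcar has negative lobes in its frequency response, so some components come out sign-flipped, while an all-positive Gaussian kernel only attenuates them.

```python
import numpy as np

n = 480                            # 40 years of monthly data
t = np.arange(n)
x = np.sin(2 * np.pi * t / 8)      # an 8-month cycle, inside the "smoothed" band

rm = np.convolve(x, np.ones(13) / 13, mode='same')   # classic 13-month running mean

k = np.exp(-0.5 * (np.arange(-10, 11) / 4.0) ** 2)   # Gaussian kernel, all-positive
g = np.convolve(x, k / k.sum(), mode='same')

# Compare away from the edges: the running mean flips this component's sign
# (negative frequency-response lobes); the Gaussian only attenuates it.
sl = slice(20, -20)
print(np.corrcoef(x[sl], rm[sl])[0, 1])   # ~ -1: peaks become troughs
print(np.corrcoef(x[sl], g[sl])[0, 1])    # ~ +1: same shape, smaller amplitude
```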

      • Hi Mike
        I have good LPF and HPF filters, modified so you can drive to the last data point if you wish (I know about the data-end limitations)

        The CET is the LP-filter output. Tectonics is not filtered for the simple reason that, if filtered, the good coincidence with sunspot max values during the last few decades is lost.

      • Nice ! I mentioned that because a while ago you were posting stuff with RM filters. Glad you took the hint.

        ” good coincidence with sunspot max values during the last few decades is lost.”

        Probably the end-fill algo. You cannot infill the future; sometimes that kind of thing guesses right, sometimes not. The problem is, you never know until the future arrives, so what it shows is not much help. I don’t see much point in that kind of trick.

      • None of the data in the graphs are perfect. I would say that the ‘tectonics’ is less dubious than the others.
        Although the Althing has had no real power since 1600 (your lot took it away), it still functioned and was well aware of, and recorded, what was going on in their small island.

        Since 1900 the CET has shown a gradually increasing delay, likely caused by a slowing down of the subpolar gyre (possibly) causing N. Atlantic SST warming. Applying an appropriate correction brings the CET back into line with the tectonics.

      • Agree, both of us are pushing a personal agenda: yours is to show that the sun is the same in the 18th, 19th and 20th centuries, hence climate change has nothing to do with it. In my case the agenda is more basic: look at the data and show what might be hiding in there.
        Nothing odd about ocean currents speeding up or slowing down; a number of researchers have noted both oscillations and a slowdown in the subpolar gyre
        ( see link )
        The North Atlantic’s Subpolar gyre is the engine of the heat transport across the North Atlantic Ocean. This is a region of the intense ocean – atmosphere interaction. Cold winds remove the surface heat at rates of several hundred watts per square meter, resulting in deep water convection. These changes in turn affect the strength and character of the Atlantic thermohaline circulation (THC) and the horizontal flow of the upper ocean, thereby altering the oceanic poleward heat transport and the distribution of sea surface temperature (SST).

  32. This post brings us back to this again:

    http://joannenova.com.au/2015/01/is-the-sun-driving-ozone-and-changing-the-climate/

    Where changes in TSI (very small) serve as a proxy for the real culprit (much larger), which is the change in the mix of wavelengths and particles from the sun. That mix affects the balance of the ozone creation / destruction process differently at different heights and latitudes, thereby altering the gradient of tropopause height between equator and poles, which drives changes in global cloudiness.

    • Good point Stephen. However, volcanoes have had a much stronger effect on stratospheric ozone in the last 50 years. Unfortunately the last two major events were close in timing with the solar cycles and were about ten years apart. This is an open door to misattribution if based on simplistic and defective multivariate regression (or arbitrary tweaking of model “parameters”).

      TLS gives us the clue as to the real cause.


      https://climategrog.wordpress.com/uah_tls_365d/

      • The bottom line of that investigation is that changes to the chemical composition of the stratosphere due to major volcanoes were the cause of the late 20th c. warming that got everyone crapping themselves.

        But before the IPCC accept that the long term effect of major volcanoes is warming and not cooling, there will be snowflakes settling in the underworld.

      • I should have emphasised that TLS tends to be the opposite of tropo temperature, note the initial warming of TLS at each event.

        Inverting TLS and comparing to SST:

      • Mike, I accept the short term effects of volcanos. My hypothesis deals with the longer term climate shifts such as from MWP to LIA and LIA to date.

        I discuss the various levels in the atmosphere here:

        http://www.newclimatemodel.com/must-read-co2-or-sun-which-one-really-controls-earths-surface-temperatures/

        and mention volcanic effects in the process.

        One advantage of my approach is that it sidesteps all Leif’s objections about TSI by placing the ‘blame’ with the much larger solar wavelength / particle variations and operating via chemical interactions creating or destroying ozone rather than involving the energy of the relevant wavelengths and particles.

      • Stephen, right at the top of that article you say: ” They do not appear to affect the background trend.”

        This is typical of the kind of false conclusions that one comes to when drawing straight lines through everything. This is probably the biggest problem in climatology. Everything is a “trend”.

        Far from “not affecting the trend” they ARE the trend, if you insist on fitting one. That is made clear by my TLS graph above. That’s the same data you are using but with a low-pass filter. The effect of the volcanoes becomes apparent: a 0.5K drop after each event. There is no “trend”. As you note it is flat after 1995.

        This mindset pervades climatology and is nothing more or less than an a priori ASSUMPTION that there is a dominant linear “trend” due to AGW plus “noise” which will average out, i.e. they are only looking for what they “know” to be the answer before they start looking. The biased method leads to confirmation bias.

        Even that is mistaken, since if there is a random element to climate ‘forcings’ it will be a radiative term of which T(t) is the integral. The integral of white noise is a random walk, and trends in a random walk are meaningless.

        After 30y of intense effort they have not even got the basics right.

        I would suggest you remove the straight line from your graph and look at the data. If you are onto the ozone connection you’re ahead of the field, but drop the talk of “trends” and stop misleading your own eyes by drawing lines on the data. They are a mental trap, reflecting unstated assumptions which impose interpretations on the observer. Look at the data first.

      • stephen

        The fact that the change in background trend to flat occurred in the stratosphere a few years before the change in background trend to flat in the troposphere is probably due to the thermal inertia of the oceans.

        Yes, the oceans basically rule the troposphere; not so for TLS. However, you cannot meaningfully add the two. Temperatures are not additive quantities across different media, especially the global oceans and the rarefied stratosphere. That’s a definite no-no.
        http://climategrog.wordpress.com/land-sea-ddt/

      • That graph says it all: the stratosphere is 1 K cooler today than in 1980, which implies more solar energy is entering the troposphere and/or the surface today than before those eruptions. Until that energy imbalance is accounted for, there can be no meaningful analysis of climate metrics associated with solar forcings from the troposphere all the way down to the surface.

    • Mike,

      The effects of volcanoes don’t give rise to the observed 1000 to 1500 year climate cycling such as Roman Warm Period, Dark Ages, MWP, LIA and current warm period so far as I can see. That periodicity is much better correlated to solar variations as per Jeff’s head post.

      Thus there is a solar induced background trend (albeit irregular) underlying volcanic influences.

      As regards the Temperature of the Lower Stratosphere (TLS) and the temperature of the Troposphere I did make it clear that they appear to vary in opposite sign. I have not ‘added the two’.

      Climate change on all time scales is primarily the global air circulation response to top down solar effects above the poles and bottom up oceanic effects at the equator.

      Volcanoes just disrupt the pattern temporarily.

      • The effects of volcanoes don’t give rise to the observed 1000 to 1500 year climate cycling such as Roman Warm Period, Dark Ages, MWP, LIA and current warm period so far as I can see.

        Which is why you should never extrapolate way outside the data! The recent effects may even be due to volcanoes flushing out anthropogenic pollution and increasing the transparency of the stratosphere.

        I only regard that as being relevant to the late 20th c., when it happened. But since that is what everyone is getting excited about, it’s the most relevant to the discussion.

        I have not ‘added the two’.

        I’m afraid you did: fig 3, “temperature of the troposphere and stratosphere”.
        Single line: you added them, one way or another.

      • I didn’t add the two. The changes are of equal and opposite sign so they were subtracted to leave a zero net change.

        The extrapolation beyond the data that you object to is reasonable if the temperatures in the stratosphere are solar induced via ozone reactions.

        Essentially, an active sun reduces ozone above 45km and towards the poles and a quiet sun increases it which is contrary to current climatology.

        I only have a problem if it turns out that ozone above 45km is NOT increasing at a time of active sun. The data shows that it did increase from 2004 to 2007 whilst the sun was quiet. I have not yet seen any updated data.

      • I didn’t add the two. The changes are of equal and opposite sign so they were subtracted to leave a zero net change.

        If they were equal and opposite they would cancel when you add, not when you subtract, but hey, add, subtract, it’s the same thing. The tropo is regulated by the oceans; the stratos is rarefied gas. Adding, subtracting, averaging, whatever: no way, Jose.

        Would you try to ‘average’ two data records, one in deg F and the other in deg C?
        If you did not read it above, I suggest you do so:

        The average of an apple and an orange is a fruit salad !

        https://climategrog.wordpress.com/land-sea-ddt/

  33. Great job Jeff, wish I’d done it! A few thoughts:

    1. You might care to compare your residuals to some (delayed? integrated?) function of El Niño, which I believe dominates temperature anomalies on shorter timescales than your 4-Wavelet average, at frequencies below pure noise.

    2. I’m guessing, but I suspect your wavelet analysis will give a different smoothed curve [over all time] as a function of the end point. It would be fantastic if your derived physical parameters (TCS, CO2 evolution function) had been stable over time. Consider, for example, a chart which shows what these parameters would have been if you had done the calculation in, say, every year for which you have at least 100 years’ worth of prior data, so a series from 1951, ’52, etc. This would give some grounds to believe that the answers might be the same when you calculate them next year, and also give some error bars on what next year’s answers might be.

    3. If the Wavelet function is in any way predictive, and you use a Sunspot/TSI model, might you get into the temperature forecasting business?

    4. I’m nervous about the ‘No Anthropogenic Contribution to CO2’. The long-term (say 30-year) average growth in CO2 is a stubborn match for about half the similarly long-term CO2 emissions. I don’t know how you square this with what you’ve done.

    Again, well done and good luck with getting this peer reviewed :-)!

    R.
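    The rolling re-estimation suggested in point 2 can be sketched on synthetic data (assuming NumPy; the series and its 0.005/year trend are made up purely for illustration): re-fit the parameter on every expanding window ending in 1951, 1952, …, and check how much the estimate wanders.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1850, 2016)
series = 0.005 * (years - 1850) + 0.2 * rng.standard_normal(years.size)

# Re-fit the trend in every year from 1951 on, each time using all the data
# available up to that point, and see how much the estimate wanders.
slopes = [np.polyfit(years[: end - 1850 + 1], series[: end - 1850 + 1], 1)[0]
          for end in range(1951, 2016)]
print(min(slopes), max(slopes))   # a stable parameter stays near the true 0.005
```

A parameter that stays put as the end point advances is (weak) evidence the fit reflects something physical rather than the vagaries of the sample.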

  34. A blind man goes shopping.
    He wants to buy a bottle of sparkling (carbonated) water.
    On the shelf in the shop are bottles of still water right next to bottles of carbonated water.

    He knows to pick up the one he wants because it feels warmer than the one he doesn’t want.
    right?

    • I love the blind man analogy for climatologists. They buy the bottle closest to the window, whose light they cannot see.

    • No. If the bottles have been on the shelf for an extended period of time they are in thermal equilibrium with the immediate surroundings – since they share surroundings they are the same temperature.

      I assume you’re being ironic – but there are many irony-deprived here.

      • No? No what?

        The incident radiation is part of their environment and will contribute to their equilibrium temperature. There are more physics-deprived than irony-deprived at times. It usually starts with a reference to thermodynamics.

  35. Jeff Patterson – I am having a little difficulty understanding your article, because it uses a whole heap of complicated stuff. Maybe because I learned my maths back in the 1950s and 60s I feel very uncomfortable if something that should be simple needs a complicated explanation. Anyway, the bit I’m most concerned about is this : “When the TSI time series is exponentially smoothed and lagged by 37 years, a near-perfect fit is exhibited (Figure 3) [to the logarithmic CO2 concentration].”.

    Now to my mind – correct me if I’m completely misunderstanding you – the TSI time series exponentially smoothed is a form of TSI integral which you are interpreting as delivering a temperature change which in turn delivers a change in CO2. In Figure 3b, you show a direct linear relationship between TSI variation and ΔCO2 (that’s delta CO2 if the ‘delta’ doesn’t display properly). Now because you’ve got a lot of complicated stuff, I can’t be sure about this, but it seems that you are really arriving at a strong relationship between temperature and ΔCO2, from which you infer that it is TSI which is in fact driving CO2.

    It was Frank Lansner, I think, who first alerted WUWT, a long time ago, to the relationship between temperature and ΔCO2, which is easily seen here:

    I suspect that it is this relationship that you have found.

    But it does not mean that temperature drives CO2. It only means that temperature drives a bit of wiggle in the CO2. Basically, what I think you have done is some very complicated model fitting that removed first-order data to reveal second-order data, and then mistakenly taken the second-order data as first-order. In other words, your ‘proof’ that TSI drives CO2 is incorrect.

    As always, I’m happy to be proved wrong, but if you do prove me wrong please can you keep it simple.

    • I think you are correct. The ‘wiggle’ is on top of a steady rate of change which could arguably be anthropogenic.

      https://climategrog.wordpress.com/ddt_co2_sst/

      As with most relaxation mechanisms where there is a linear negative restoring “force” or feedback, the magnitude of the effect reduces with increasing period. See discussion here
      https://climategrog.wordpress.com/d2dt2_co2_ddt_sst-2/

      This can all be characterised as the kind of exponential convolution that is being called an exponential ‘lag’ here. It is one way of calculating the feedback-inclusive response of such a system to any input ‘forcing’.
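      A minimal sketch of that exponential ‘lag’ (assuming NumPy; tau and the step input are arbitrary illustrative choices): the recursion below is the discrete response of a linear relaxation system dy/dt = (x − y)/tau, which is equivalent to convolving the input with a normalized decaying-exponential kernel.

```python
import numpy as np

def exp_lag(x, tau):
    """Discrete response of dy/dt = (x - y)/tau: exponential smoothing,
    i.e. convolution with a normalized decaying-exponential kernel."""
    y = np.empty(len(x))
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = y[i - 1] + (x[i] - y[i - 1]) / tau
    return y

# A step 'forcing' relaxes toward the new level with time constant tau:
x = np.concatenate([np.zeros(10), np.ones(100)])
y = exp_lag(x, tau=20.0)
print(y[30])    # one time constant after the step: roughly 1 - 1/e
print(y[-1])    # long after the step: close to 1
```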

    • This may make it clearer how that works, it’s just weighted mean:
      https://climategrog.wordpress.com/2013/03/03/scripts/

      It is helpful to see how the orthogonal (rate-of-change) relationship dominates the HF response and how this slowly slides to being in phase at very long periods.
      https://climategrog.wordpress.com/lin_feedback_coeffs/

      That is reflected in the halving of the ratios found for inter-annual d/dt(CO2) when going to inter-decadal. The question is what it is at the centennial scale.

    • I share your misgivings. It looks to me like an exercise in curve fitting.

      John von Neumann famously said:

      “With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.”

      By this he meant that one should not be impressed when a complex model fits a data set well. With enough parameters, you can fit any data set.

      It turns out you can literally fit an elephant with four parameters if you allow the parameters to be complex numbers. link

      The author supplies the elephant and the python code that generated it.
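      Von Neumann’s point is easy to reproduce without the elephant (a sketch assuming NumPy; the random ‘record’ stands in for any data set): a nested least-squares fit with more and more Fourier terms always reduces the residual, whatever the data actually are.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 120)
y = rng.standard_normal(120).cumsum()      # an arbitrary wiggly "record"

def fourier_fit_rms(n_terms):
    """Least-squares fit of a constant plus n_terms sin/cos pairs;
    returns the RMS residual of the fit."""
    cols = [np.ones_like(t)]
    for k in range(1, n_terms + 1):
        cols += [np.cos(2 * np.pi * k * t), np.sin(2 * np.pi * k * t)]
    A = np.column_stack(cols)
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return np.sqrt(np.mean(resid ** 2))

# More parameters always fit "better", regardless of what the data mean:
for n in (1, 4, 16):
    print(n, fourier_fit_rms(n))
```

A shrinking residual here says nothing about the model being right; that is exactly the trap von Neumann was warning about.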

      • The system I’ve shown is a simple low-pass filter. The parameter of import is the dissipation constant, which sets the cut-off frequency of its one and only pole. The offset and scaling parameters do not affect the ACF. So if we drive the composite system with the actual CO2 data (instead of the approximation from system 1), there are two parameters, a and td, which can affect the ACF match seen in the lower left panel of figure 10.

    • “It only means that temperature drives a bit of wiggle in the CO2.”
      Wiggles are relative to the time scales involved. The plot shows wiggles that are apparent on a 12-month scale (the interval over which he averaged). If small changes in temperature can cause wiggles over a short period of time, then a trend in temperature over a long period of time can drive a trend in ΔCO2.
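      That integration argument can be sketched numerically (assuming NumPy; the trend, wiggle and sensitivity k below are illustrative numbers only, not fitted values): if d(CO2)/dt tracks temperature, a short-period wiggle in T produces only a small wiggle in CO2, while a slow trend in T accumulates into a large CO2 rise.

```python
import numpy as np

t = np.arange(600) / 12.0                          # 50 years of monthly samples
temp = 0.01 * t + 0.1 * np.sin(2 * np.pi * t / 3)  # slow trend plus a ~3-year wiggle

# If d(CO2)/dt is proportional to temperature (k in ppm/year per kelvin),
# both the wiggle and the trend in T integrate into the CO2 record:
k = 8.0
co2 = 280.0 + np.cumsum(k * temp) / 12.0           # crude monthly integration

rise = co2[-1] - co2[0]                            # dominated by the trend in T
wiggle = np.max(np.abs(co2 - np.polyval(np.polyfit(t, co2, 2), t)))
print(rise, wiggle)   # the integrated trend dwarfs the integrated wiggle
```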

  36. lsvalgaard February 8, 2016 at 7:27 pm
    “It is now more and more accepted that the climate [e.g. circulation] has a large influence on the 10Be record from a given site, as large or larger than the solar influence.”

    There is a good correlation between 10Be and solar data up to about 1880, but afterwards it completely fails.
    https://wattsupwiththat.com/2016/02/08/a-tsi-driven-solar-climate-model/comment-page-1/#comment-2140301
    There could be a number of reasons:

    – The onset of industrialisation in the N. Hemisphere drove too many particles into the Arctic area, affecting precipitation and 10Be nucleation.

    – 10Be and solar data (either or both) adjusted (bidirectional feedback by the data adjusters of both variables) so the correlation looks good.

  37. lsvalgaard said:

    “There is no long-term evidence for a changing ‘mix’ of solar output.”

    There is evidence of large changes (up to 20%) in the UV/EUV, and the particles change too. We have no long-term evidence of the changing mix, but we do have observations over the past 60 years showing poleward zonal jets when the sun is active and wavy meridional jets when the sun is inactive.

    The only means to achieve that is to alter the gradient of tropopause height from equator to poles and tropopause height is ozone related.

    The ozone creation / destruction process is sensitive to wavelengths and particles from the sun.

      • Thanks for that admirable historical survey.

        This is from 2008, so might be outdated, but its NRL authors, perhaps colleagues of your acquaintance, find such evidence, while also listing questions requiring further inquiry.

        http://solar.physics.montana.edu/SVECSE2008/pdf/floyd_svecse.pdf

        They write:

        Solar UV and Earth’s Climate

        Climate and weather data shows connections to solar activity,
        e.g. QBO, NAO, and SST.

        Models show possible solar UV connections to dynamical
        changes descending from the stratosphere to the troposphere.

        Cosmogenic isotopes show correlations to climate over the
        past two millennia, independent of Milankovich (orbital and
        terrestrial attitude) changes.

        Solar causal connections to climate are poorly understood.
        Solar UV variation is a leading candidate.

      • I don’t limit my hypothesis to UV alone hence the reference to the entire mix of particles and wavelengths.

        It has been observed that the solar effect on ozone amounts (however caused) is reversed above 45 km, and that air from that height descends into the stratosphere above the poles in the polar vortices.

        That is sufficient to alter tropopause heights above the poles and thus alter the gradient of tropopause height between equator and poles so as to produce the observed shifts in the jets and climate zones.

        The observed shift in the last 15 years is the opposite to that predicted by the CO2 theory.

      • I don’t limit my hypothesis to UV alone hence the reference to the entire mix of particles and wavelengths.
        The data shows that that mix has not changed at least back to 1845.

      • lsvalgaard
        February 9, 2016 at 9:05 am

        I think it has. UV varies a lot more than does TSI in general, with demonstrable effects both on the upper atmosphere and sea surface.

        We know how UV has varied over the past 250 years.
        To say that UV ‘varies a lot more’ is to say that the loose change in Bill Gates’s pockets varies a lot more than his total worth. Completely irrelevant.

      • Dr. S,

        IMO the variance is highly relevant, because the climatic effects of more UV relative to visible and IR light are pronounced. The higher energy radiation, for instance, affects ozone levels, while the longer wavelengths don’t.

      • None of that matters, as we have shown that the long-term variation of UV is just like that of TSI, i.e. no upwards trend since 1700. Furthermore, the topic here is the “TSI-driven climate model”, not the UV straw man.

      • True that the topic is TSI, and I can’t evaluate the validity or lack thereof of the post. But IMO, UV isn’t a straw man. Whether long-term UV varies to the same extent as observed since SORCE would be nice to know, but what is known is that it has varied a lot recently and that demonstrable climatic effects follow from that fact.

      • No, that is not known. The UV varies over a solar cycle, so any putative effect would mean that there would be a solar cycle variation of climate. Climate is defined as weather over 30 years which washes out an 11-yr variation, so the only thing of interest is whether there is a long-term variation of UV. We know from observations that there has not been any such variation since the 1740s.

      • But even during the few solar cycles for which UV variance has been directly observed, there are differences. Thus, should it be that three or more solar cycles in a row produced higher than average UV flux, climate would be affected.

        http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20110023422.pdf

        Abstract.

        Characterization of temporal and spectral variations in solar ultraviolet irradiance over a solar cycle is essential for understanding the forcing of Earth’s atmosphere and climate. Satellite measurements of solar UV variability for solar cycles 21, 22, and 23 show consistent solar cycle irradiance changes at key wavelengths (e.g. 205 nm, 250 nm) within instrumental uncertainties. All historical data sets also show the same relative spectral dependence for both short-term (rotational) and long-term (solar cycle) variations. Empirical solar irradiance models also produce long-term solar UV variations that agree well with observational data. Recent UV irradiance data from the…SORCE, …SIM and…SOLSTICE instruments covering the declining phase of Cycle 23 present a different picture of long-term solar variations from previous results. (Cont.)

      • But even during the few solar cycles for which UV variance has been directly observed, there are differences. Thus, should it be that three or more solar cycles in a row produced higher than average UV flux, climate would be affected.

        Here is the UV flux for 16 cycles in a row:

        Nothing special, just following the sunspot cycle.
        Here is the peer-reviewed paper on which the above curve is based.
        We use direct measurements of the EUV taken with a very large and sensitive instrument: the Earth itself.

      • As I said before, good work, which I enjoyed reading. But IMO, besides the question of what happens at Grand Minima, there is the issue of how reliable a reconstruction for AD 1740-1830 can be (an interval that includes the Dalton Minimum), or indeed for the period after 1830 until the late 20th century. I’d like to see error bars on the 18th and early 19th century reconstruction.

    • Anyway, you don’t know how the mix has changed since 1845. Your work does not cover the entire range of particle and wavelength variations as they affect the ozone creation / destruction balance differentially at different heights and latitudes over time.

      Nor does mine but what I do have is clear observational evidence that latitudinal jet stream and climate zone shifting does correlate with variations in solar activity subject to variable lags related to the interplay between the internal oscillations in each ocean basin.

      Something is changing the gradient of tropopause height between equator and poles so as to allow such latitudinal shifting and since ozone creates the tropopause by reversing the lapse rate slope it is clear that whatever it is operates via the ozone creation / destruction process.

      • Anyway, you don’t know how the mix has changed since 1845. Your work does not cover the entire range of particle and wavelength variations
        We know how the solar wind speed, magnetic field, and density have varied. We know how F10.7, EUV, and UV have varied. If there is some magic sauce we don’t know about, tell me.

  38. HadCRUT is worse than worthless. The so-called “Grand Hiatus” of c. 1945-75 was in fact a Great Cooling. The station “data” gatekeepers have made the extent of this cooling disappear, but using global temperature observations as reported by NCAR back in the late ’70s, the big chill was pronounced, in fact to such a degree that scientists then worried that the next Big Ice Age was looming over the northern horizon.

    CAGW thus is easily falsified, indeed it was born falsified, since the response of Planet Earth to rapidly rising CO2 for the first 32 postwar years was to grow pronouncedly colder. Then, from 1978 to c. 1996, natural warming happened to coincide with still rising CO2 levels, but since then global average T, in so far as it can be measured, has stayed flat to cooled, despite still monotonously increasing CO2 levels. The current El Nino might temporarily change the slope of the past 20 years from slightly down to slightly up, but still nowhere near the warming predicted by GIGO computer models.

    • I agree with
      Gloateus Maximus on February 9, 2016 at 5:25 am

      https://wattsupwiththat.com/2016/01/27/wind-farm-study-finally-recognizes-that-all-is-not-well-with-wind-power/comment-page-1/#comment-2131348

      [excerpt]

      To be precise, the threat alleged by the global warming alarmists is from catastrophic manmade global WARMING (“CAGW”) and that hypothesis was effectively falsified by the natural global cooling that occurred from ~1940 to ~1975, at the same time that atmospheric CO2 strongly increased.

      Fossil fuel combustion increased strongly after about 1940, and since then there was global cooling from ~1940 to ~1975, global warming from ~1975 to ~1996, and relatively flat global temperatures since then (with a few El Nino and La Nina upward and downward spikes). This so-called “Pause” is now almost 20 years in duration, almost as long as the previous warming period. The correlation of global temperature with increasing atmospheric CO2 has been negative, positive and near-zero, each for periods of ~20 to ~30 years.

      This so-called climate sensitivity to CO2 (“ECS”) has been greatly exaggerated by the warmists in their climate computer models – in fact, if ECS exists in the practical sense, it is so small as to be insignificant – less than 1C and probably much less. That means that the alleged global warming crisis is a fiction – in reality, it does not exist.

      The warmists have responded by “adjusting” the temperature data record to exaggerate global warming. Here is one USA dataset, before and after adjustments:

      [end of excerpt]

      *************

      A few more points:

      https://wattsupwiththat.com/2015/06/13/presentation-of-evidence-suggesting-temperature-drives-atmospheric-co2-more-than-co2-drives-temperature/

      Observations and Conclusions:

      1. Temperature, among other factors, drives atmospheric CO2 much more than CO2 drives temperature. The rate of change dCO2/dt is closely correlated with temperature, and thus atmospheric CO2 LAGS temperature by ~9 months in the modern data record.

      2. CO2 also lags temperature by ~800 years in the ice core record, on a longer time scale.

      3. Atmospheric CO2 lags temperature at all measured time scales.

      4. CO2 is the feedstock for carbon-based life on Earth, and Earth’s atmosphere and oceans are clearly CO2-deficient. CO2 abatement and sequestration schemes are nonsense.

      5. Based on the evidence, Earth’s climate is insensitive to increased atmospheric CO2 – there is no global warming crisis.

      6. Recent global warming was natural and irregularly cyclical – the next climate phase following the ~20 year pause will probably be global cooling, starting by ~2020 or sooner.

      7. Adaptation is clearly the best approach to deal with the moderate global warming and cooling experienced in recent centuries.

      8. Cool and cold weather kills many more people than warm or hot weather, even in warm climates. There are about 100,000 Excess Winter Deaths every year in the USA and about 10,000 in Canada.

      9. Green energy schemes have needlessly driven up energy costs, reduced electrical grid reliability and contributed to increased winter mortality, which especially targets the elderly and the poor.

      10. Cheap, abundant, reliable energy is the lifeblood of modern society. When politicians fool with energy systems, real people suffer and die. That is the tragic legacy of false global warming alarmism.

      Allan MacRae, Calgary, June 12, 2015

      • People are smart and well-informed when they agree with you.

        IMO there are such things as GHGs, and CO2 is a distant second among them to H2O. I’m also of the opinion that much of the allegedly observed increase in CO2 since c. AD 1850 is real and man-made, although some has also occurred “naturally” thanks to the warming since the end of the LIA. My WAG is around 70-100 ppm from human activity and 20-50 ppm from natural warming of the oceans.

        But in the real world of climate, the GHE from rising CO2 is clearly swamped by prompt and longer-term negative feedbacks from other effects, for an ECS possibly even lower than the approximately one degree C experimentally derived without feedbacks, positive or negative. CO2 is thus more an effect than a cause of warming.

      • Good to have your comment Allan.

        The 9-month lag you showed is a quarter cycle of the dominant short-term periodicity of about 3 years: the orthogonal response to surface warming, i.e. outgassing. This sits on top of a steady rise of about 1.5 ppmv/year.

        That can be estimated as 8ppm/year/kelvin for inter-annual variation.
        https://climategrog.wordpress.com/d2dt2_co2_ddt_sst-2/

        2.8 / 0.7 from the long-term averages gives about 4 ppm/year/kelvin as the inter-decadal ratio. That is about half the inter-annual value, and it will include outgassing and residual anthropogenic emissions not absorbed by the biosphere.

        I don’t think we have enough accurate data to go back beyond that.

        The 800-year delay is probably more to do with deep-ocean overturn and equilibration by diffusion than with the temp / CO2 relationship itself.

      • Mike: I would venture to say, in light of the greening of the planet and the more recent discovery that plankton, too, are increasing (contrary to the opposite belief of the faithful), that uptake by biological activity will increase the sequestration of CO2 at least modestly exponentially. A green fringe around the Sahel will promote an inner concentric green fringe, etc.

        Re plankton, the White Cliffs of Dover are a thick deposit of these creatures’ shells over an extensive area, and there are other such deposits around the world, formed during the Cretaceous when the ocean was warmer than now and CO2 was about four times what it is today (so much for acidification).

        This old earth will lap up all the CO2 you can make – fossil fuel CO2 growth will still be below the Cretaceous level a thousand years from now – indeed, even longer because fossil fuel will be exhausted before that time. Why isn’t this more common knowledge?

      • Thank you Mike,

        I suggest that the seasonal impact of photosynthesis/degradation on the larger Northern Hemisphere landmass is probably a more significant driver of atmospheric CO2 annual variation than ocean solution/exsolution – the annual amplitude of atmospheric CO2 is about 16ppm at Barrow, Alaska and near-zero at the South Pole.

        Best, Allan

        https://wattsupwiththat.com/2015/10/24/water-vapour-the-big-wet-elephant-in-the-room/#comment-2057587

        [excerpt]

        It is interesting to note, however, that the natural seasonal variation in atmospheric CO2 ranges up to ~16ppm in the far North, whereas the annual increase in atmospheric CO2 is only ~2ppm. This reality tends to weaken the “material balance argument”, imo. This seasonal ‘sawtooth’ of CO2 is primarily driven by the Northern Hemisphere landmass, which is much greater in area than that of the Southern Hemisphere. CO2 falls during the NH summer due primarily to land-based photosynthesis, and rises in the late fall, winter and early spring as biomass degrades.

        There is also likely to be significant CO2 solution and exsolution from the oceans.

        See the excellent animation at http://svs.gsfc.nasa.gov/vis/a000000/a003500/a003562/carbonDioxideSequence2002_2008_at15fps.mp4

      • Thank you Gary,

        You may find this discussion of interest:

        https://wattsupwiththat.com/2016/01/30/carbon-and-carbonate/comment-page-1/#comment-2133597

        “THE BIG WHIMPER”

        Damned coccolithophores – they’ll be the death of us all.

        I posted the following musings, starting on 30Jan2009.

        My question: Am I correct in saying the following, and if so, approximately when will it happen?

        “During an Ice Age, atmospheric CO2 concentrations drop to very low levels due to solution in cold oceans, etc. Below a certain atmospheric CO2 concentration, terrestrial photosynthesis slows and shuts down. I suppose life in the oceans can carry on but terrestrial life is done.

        So when will this happen – in the next Ice Age a few thousand years hence, or the one after that ~100,000 years later, or the one after that?

        In geologic time, we are talking the blink of an eye before terrestrial life on Earth ceases due to CO2 starvation.”

        Regards, Allan

        [excerpt]

        I wrote the following on this subject on 18Dec2014, posted on Icecap.us:

        On Climate Science, Global Cooling, Ice Ages and Geo-Engineering:
        [excerpt]

        Furthermore, increased atmospheric CO2 from whatever cause is clearly beneficial to humanity and the environment. Earth’s atmosphere is clearly CO2 deficient and continues to decline over geological time. In fact, atmospheric CO2 at this time is too low, dangerously low for the longer term survival of carbon-based life on Earth.

        More Ice Ages, which are inevitable unless geo-engineering can prevent them, will cause atmospheric CO2 concentrations on Earth to decline to the point where photosynthesis slows and ultimately ceases. This would devastate the descendants of most current [terrestrial] life on Earth, which is carbon-based and to which, I suggest, we have a significant moral obligation.

        Atmospheric and dissolved oceanic CO2 is the feedstock for all carbon-based life on Earth. More CO2 is better. Within reasonable limits, a lot more CO2 is a lot better.

        As a devoted fan of carbon-based life on Earth, I feel it is my duty to advocate on our behalf. To be clear, I am not prejudiced against non-carbon-based life forms, but I really do not know any of them well enough to form an opinion. They could be very nice. :-)

        Best, Allan

        https://wattsupwiththat.com/2009/01/30/co2-temperatures-and-ice-ages/#comment-79524

        [excerpts from my post of 2009]

        Questions and meanderings:

        A. According to para.1 above:

        During Ice ages, does almost all plant life die out as a result of some combination of lower temperatures and CO2 levels that fell below 200ppm (para. 2 above)? If not, why not? [updated revision – perhaps 150ppm not 200ppm?]

        When all life on Earth comes to an end, will it be because CO2 permanently falls below 200ppm as it is permanently sequestered in carbonate rocks, hydrocarbons, coals, etc.?

        Since life on Earth is likely to end due to a lack of CO2, should we be paying energy companies to burn fossil fuels to increase atmospheric CO2, instead of fining them due to the false belief that they cause global warming?

        Could T.S. Eliot have been thinking about CO2 starvation when he wrote:
        “This is the way the world ends
        Not with a bang but a whimper.”

        Regards, Allan :-)

  39. First, let me express my appreciation for another informative solar post. Any time that lsvalgaard is engaged means everyone benefits.
    And to Jeff, thank you for your efforts in making this an excellent post. I hope you knew about the gauntlet you were going to face. This is a tough but astute crowd.

  40. “There are several interesting things of note in Figure 8. The period is relatively stable while the amplitude of the oscillation is growing slightly. The trend maxed out at .23 ⁰C /decade circa 1994 and has been decreasing since. It currently stands at .036 ⁰C /decade. Note also that the mean slope is non-zero (.05 ⁰C /decade) and the trend itself trends upward with time. This implies the presence of a system integration as otherwise the differentiation would remove the trend of the trend.”

    Thanks Jeff, nice to read an engineer’s analysis – the rigor shows. Engineers always have to deliver in the real world and can’t hand wave away problems or they might kill somebody! You can see you came to the right site to publish, too. The best in the world pop in here to criticize, offer advice and data. Peer review here has no peers!

    I see you have already had some advice on how the temperature record has been fiddled by the CAGW grant seekers. I think this factor may be responsible for ‘amplitude of the oscillation growing slightly’. It was changed systematically to cool the past and warm the future to make the trend more congruent with CO2 growth. It would be most instructive to see the analysis repeated using the raw data. Indeed, your analysis and some others I’ve seen on other aspects of climate may be useful in forensic analysis of what has been done to the climate series.

    • I don’t see any a priori reason for the circa 60y periodicity to be of constant amplitude, though it is interesting that Jevrejeva’s sea level rise has its amplitude *decreasing*, so you may be right about data “corrections”.

      In any case this should not be done on a land+sea average which is meaningless as a calorimeter for incoming radiation. That is the biggest con.

      The world is warming, and land temps are more volatile due to lower heat capacity, so averaging the two will inflate the warming. The usual 30/70% geographical area weighting implicitly assumes equal heat capacity and is not valid in this context.

  41. “Greenhouse gas theory predicts a linear relationship between the logarithm of CO2 atmospheric concentration and the resultant temperature anomaly”

    No it doesn’t!!

    Wrong on the first line.

    • Wrong on your part.

      Look at GCMs. They all predict such a linear relationship, since that relationship is assumed and programmed into them.

      • Gloateus Maximus.

        But the model runs themselves never produce a straight line in the temperature anomaly, do they? The curve looks quite ratty at the best of times.

        If they do, then please post a link to the evidence.

        So you are wrong.

      • They look pretty darn linear to me, so IMO you are wrong:

        Linearity, however, is not just in the eye of the beholder. Any statistical analysis you care to apply to the model average, or to any one of the models, would find them linear functions.

      • dbstealey.

        “The difference is that Harry Twinotter isn’t pretending.”

        Oh look, one of Anfony’s pet attack dogs has come out for a sniff. Woof!

    • “These regions of anti-correlation were pointed to by Professor Judith Curry in her recent testimony before the Senate Subcommittee on Space, Science and Competitiveness:”

      Dr Curry knows full well the likely explanations. Why she pretends to be ignorant is anyone’s guess.

      The simple answer is internal variability of the climate system. A more complicated answer is varying amounts of heat subducted into the deep ocean, aerosols due to pollution and volcanos etc.

      • Dr Curry pointed out in her testimony that there was an equal rise in temperature at the beginning of the 20th c. and that there was no explanation for this.

        If you have one, let’s hear it. I’ll have a word with Judith and see whether we can get you a place at the next Senate hearing!

      • HT says:

        Dr Curry knows full well the likely explanations. Why she pretends to be ignorant is anyone’s guess.

        The difference is that Harry Twinotter isn’t pretending.

      • Mike.

        “Dr Curry pointed out in here testimony that there was an equal rise in temperature at the beginning of the 20th c. and that there was no explanation for this.”

        Equal rise? Reference please.

    • ” The execution of a hypothesis, either by solving the equations in closed form or by running a computer simulation is never to be confused with an experiment. ”

      This is wrong too. A computer simulation can be a perfectly good experiment. This is a better definition of an experiment:

      “An experiment is a procedure carried out to verify, refute, or validate a hypothesis”.

      • So if you have a hypothesis about the behaviour of the model, you can test it by messing around with the model. Don’t confuse this with an experiment in the real world, which is the usual meaning of the word.

        But yes you can do an experiment to study what a MODEL does and see whether it matches observational evidence.

      • Jeff Patterson.

        “Hint: for a given input the output is predetermined.”

        No, it isn’t. It is clear you do not understand stochastic computer models.

      • “No, it isn’t. It is clear you do not understand stochastic computer models.”
        A computer is a finite state machine. If you claim such a device can produce information you need a refresher on basic information theory.
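A trivial sketch of that point (the model and its names here are made up for illustration): a “stochastic” model running on a computer draws from a pseudo-random stream that is fully determined by its seed, so for a given input the output is predetermined, bit for bit.

```python
import random

def stochastic_model(seed, n=5):
    """Toy 'stochastic' model: n Gaussian draws from a seeded PRNG."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

run1 = stochastic_model(42)
run2 = stochastic_model(42)
assert run1 == run2   # same seed (input) -> identical output, every time
```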

  42. 37 years is too long a lag time for a process with a 10 year periodicity. Did you mean 37 months? I found 36 months when I did this exercise several years ago.

    • The periodicity of the solar variation has no bearing on the possible time constants and lags in the earth system. They are generally a function of the system itself, not the forcing.

  43. Jeff,

    The instruments that measure CO2 concentration report 2 sig figs with an error of +/- 0.01ppb, I presume.

    I am skeptical that the anomaly in the temp data set has such a low error band as a percentage of the anomaly.

    If my skepticism is warranted, wouldn’t the first graph be a solid rectangle of blue, indicating no relationship, once the uncertainty of the data set is shown?

    Shouldn’t X-Y plots, log or otherwise, show the error band?

    Is the anomaly actually known to the degree of precision indicated by the height of the dot?

  44. Congratulations on presenting your work in a mathematically coherent manner. It’s very refreshing. (Or in what at least largely is a mathematically coherent manner; I would have to do some digging before I could determine whether the de-noising stuff hangs together.)

    On the conceptual level, I have one big-picture problem: the optimization based on measured CO2 concentration. The overall model that results from combining Figs. 4 and 5 obtains a scalar response (temperature) from a scalar stimulus (total solar insolation); there are no other variables. I don’t have a sense of the computational cost, but, if that wasn’t a factor, it isn’t clear why you didn’t just base the optimization of all parameters on only the resultant relationship between those scalars, rather than use measured CO2 concentration for some of them.

    That would seem to be a mathematical question that deserves an answer on its own.

    In one sense, though, that’s not just a mathematical question; I’m also wondering if it doesn’t also point to a conceptual inconsistency. The model treats temperature as dependent only on insolation: to the extent that CO2 is a factor, that factor is only the component of CO2 that is dictated by insolation, i.e., that is independent of man’s activity. Yet, unless you are denying that there is any significant anthropogenic component, i.e., are dismissing the various compelling arguments made at this site by Ferdinand Engelbeen, you are also using the anthropogenic component to arrive at optimal parameter values. That seems logically inconsistent.

    Whatever the case may be, I again congratulate you on your post’s mathematical clarity. I don’t see a lot of that.

  45. I like the model and the approach to deriving it. I hope you submit it to a peer-reviewed journal and are successful in getting it published. I look forward to future comparisons of the model to conditional model results, that is conditional on the observed TSI data, but with parameters unchanged in the mean time.

    I think this is disingenuous: Thus, unlike polynomial regression, it is not possible to fit an arbitrary output curve given a specified forcing function, u(t). In the models of Figures 4 and 5 it is only the dissipation factor (and, to a small extent in the early output, the input constant) which determines the functional “shape” of the output. The scaling, offset and delay do not affect correlation and so are not degrees of freedom in the classical sense.

    All of the constants in the model, as well as the functions chosen for the modeling, are dependent on the studies of the data and model that are already available, and have been chosen to provide a good fit of the model to those extant data. They ought to be regarded as “degrees of freedom”, even though not in the “classical sense”. The only difference between these and the classical degrees of freedom is that these can not be counted.

  46. My main criticism of this analysis:

    1. In a 150-year record you have no hope of discerning 100-year cycles. You can barely discern 60-80 year cycles. Nyquist says you need at least two periods; in practice, with overlapping 60-80 year cycles, you need about 5 cycles. Yes, you show cycles there, but I’m not sure if they are real or an artifact.

    2. Edge effects when filtering. Please, just throw away the edge data so that you or others aren’t tempted to interpret invalid data. You are predicting data in the future and the past that you don’t have. It doesn’t matter whether you use extend, mirror, zero pad or whatever. I have found that the LEAST variance in error (given Monte Carlo simulation of pink noise signals) is the reflection method. The extend method looks tempting (has a low mean error) but there can be huge outliers when there’s a large slew at the ends of the signals being analyzed. Best would be to not filter at all. The Mark 1 eyeball is pretty good at picking out the signal from the noise, you don’t need to help it all that much.

    3. Combining (1) and (2) above in the frequency domain: trying to find low-frequency signals below 2-5 periods per the length of the data set is basically trying to guess beyond the lower edge of the measurable frequencies. You’re just extrapolating. It’s just as bad a Nyquist violation as trying to guess the high-frequency signals beyond samplerate/2.

    4. As noted above, if you integrate any signal that doesn’t cross zero you get an increasing trend. The temperature happens to be going up. False correlation is likely the real result here.

    5. You made an assertion that two signals were correlated, but you didn’t do a Monte Carlo analysis to see if your correlation rises above a noise floor. See this paper here for an example of how to do that. Executive summary is that you must test against noise whose spectrum matches that of your original signal. In this paper they manage to discern ENSO in the SST, but everything else is noise. Note how the noise floor goes up with decreasing frequencies…

    http://paos.colorado.edu/research/wavelets/bams_79_01_0061.pdf

    Peter
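Point 2 above can be illustrated directly. This sketch uses SciPy’s `gaussian_filter1d` padding modes as stand-ins for the padding strategies discussed (the trending series and its parameters are made up): zero padding visibly drags the smoothed endpoint of a trending signal, while interior samples are unaffected by the choice of padding.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Trending noisy series: the trend guarantees a large "slew" at the ends,
# which is where the padding choice matters most.
rng = np.random.default_rng(0)
t = np.arange(150)
x = 0.01 * t + rng.normal(0.0, 0.2, t.size)

sm_reflect = gaussian_filter1d(x, sigma=4, mode='reflect')          # reflection method
sm_nearest = gaussian_filter1d(x, sigma=4, mode='nearest')          # extend the end value
sm_zero = gaussian_filter1d(x, sigma=4, mode='constant', cval=0.0)  # zero padding

# Interior points (farther than ~4*sigma from an edge) are identical under
# every mode; the endpoint of the zero-padded version is dragged toward 0.
interior_diff = abs(sm_zero[75] - sm_reflect[75])
end_diff = abs(sm_zero[-1] - sm_reflect[-1])
```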

    • I don’t use FFT but a specially designed filter I used for some years (looking at analogue and frequency modulation signals)
      CET is strongly influenced by the ‘next door’ N. Atlantic temperatures, and yet the CET has no 60 year component. Here I looked at AMO (1880-2014) and the CET (1700-2014) data.

      My interpretation of this is that the AMO is most likely subject to the same two components as the CET (53 and 67 years) but it could be that the ocean averages them out. I am not aware of anyone showing AMO except with one periodicity in the range around 60 years. AFAIK the source of it has not been identified as yet.

    • Thanks for the critique.
      1) There is no cycle discerning here nor claim of periodicity. There are no FFT’s involved so no worries about Nyquist.
      2) The edge effect of a 4 point Gaussian filter is small (and IMHO the very best that can be achieved) and was only used in figure 1. Figure 8 says all there is to say about any deleterious effects of the denoising algorithm. It is statistically transparent except for the BW reduction (the impulsive central peaks are gone)
      3) N/A. See above
      4) But the signal does cross zero, over and over as it is demeaned (see figure 2b)
      5) “Executive summary is that you must test against noise whose spectrum matches that of your original signal.”
      If you have a stochastic process that can replicate the TSI spectrum I’m all ears. :)

      Cheers,
      JP
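On point 5 and the reply above: a stochastic process that replicates any given spectrum (TSI’s included) does exist, and it is the standard tool for this kind of significance test: phase-randomized surrogates. Keep the FFT amplitudes of the series, scramble the phases, and invert. A sketch, with a made-up series standing in for TSI:

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Surrogate series with the same amplitude spectrum as x."""
    X = np.fft.rfft(x - x.mean())
    phases = rng.uniform(0.0, 2.0 * np.pi, X.size)
    phases[0] = 0.0    # keep the DC bin real
    phases[-1] = 0.0   # keep the Nyquist bin real (even-length series)
    Xs = np.abs(X) * np.exp(1j * phases)
    return np.fft.irfft(Xs, n=x.size) + x.mean()

# Made-up series purely for illustration.
rng = np.random.default_rng(4)
x = np.sin(np.linspace(0.0, 40.0, 512)) + rng.normal(0.0, 0.3, 512)
surrogate = phase_randomized_surrogate(x, rng)

# An ensemble of such surrogates gives the Monte Carlo noise floor against
# which an apparent correlation can be tested.
```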

  47. Folks, none of the details or the math about smoothing, fitting, etc matters if the input data [TSI] is wrong. Since there is good evidence that the TSI used by Jeff is wrong [even if pushed by the IPCC – as it is] the whole issue is moot, except perhaps to show the opposite conclusion that the influence of TSI is minor and hard to even detect.

      • Same basic shape as before but smaller variations.

        How does mere scaling down of the same old solar variability remove correlations with observed climate variations ?

        All the troughs still coincide with cool spells.

        Anyway, TSI is a mere proxy for other aspects of solar variation which are sufficient to affect the climate system.

      • All the troughs still coincide with cool spells.
        Solar activity now is on par with what it was 100 years ago, and 200 years ago, and 300 years ago so we are in a ‘trough’, yet current temps are ‘the highest evah’…

      • You aren’t allowing for the delay caused by oceanic thermal inertia and current surface temperatures are skewed by poor recording quality plus thus far unjustified ‘adjustments’ that cool the past and warm the present.

        The satellite records do appear to show that the late 90s was the thermal peak arising from the run of several active cycles during the late 20th century.

        Give it time.

      • I’ll try to post the results of running with Leif’s TSI model, assuming I can just embed an html image tag. Here goes…

        Assuming that worked..

        We get a pretty decent match to the raw data albeit the correlation is slightly lower. The residual exhibits a strong periodicity meaning a lower AMO signal is present in Leif’s reconstruction.

        The biggest parametric change aside from the expected scaling change was to move the CO2 lag value from 37 years to 3 years which seems a more realistic figure. There is more TSI ripple in the modeled output but they actually time align pretty well with the raw (not denoised) data (lower right).

        Leif’s TSI model
        It is not really MY model, but the result of modelling TSI using the new sunspot series by Kopp et al.
        So, what you are saying is that even if you use a TSI that shows no trend since 1700, it does not make any significant difference. Does this not simply say that the climate is not driven by TSI?
        The crucial test would be to use TSI since 1700 in reverse, as I suggested. If you do that, there should not be any valid TSI-signal in your result. If there still is, then the variation of TSI doesn’t matter.

    • Folks, none of the details or the math about smoothing, fitting, etc matters if the input data [TSI] is wrong.

      Oh, I agree that’s a huge problem too. But even if he corrects the TSI he’s still going to have the other problems.

      The TSI problem is a matter of wrong data. The problems I and others pointed out are methodology problems. You have to fix both to get a proper analysis.

      Actually, it’s not fixable. We don’t have a long enough history of temperature data to discern the underlying cycles accurately. Humans have a real hard time with “we don’t know, have to wait”. But that, IMHO, is exactly where we are at. I note this applies to correlations with C02, TSI, whatever you care to look at whose cycles are longer than about 30 years.

      Peter

      • I have in mind all processes that impinge upon the ozone creation / destruction process.

        I don’t think anyone has a complete grip on that at present but there is a critical diagnostic indicator in that from 2004 to 2007 ozone increased above 45km (in the mesosphere) at a time of quiet sun whereas conventional climatology proposes decreased ozone at all levels when the sun is quiet.

        The mesosphere supplies the flow of descending air into the polar vortices so that is an observation of critical significance as recognised by Joanna Haigh:

        http://www.nature.com/nature/journal/v467/n7316/full/nature09426.html

        “our findings raise the possibility that the effects of solar variability on temperature throughout the atmosphere may be contrary to current expectations”

        but not contrary to my hypothesis :)

        I have the only hypothesis that accommodates her observations.

      • I know, I know: whatever data comes to light, whatever flaws are found, whatever happens, NOTHING will EVER falsify your hypothesis as pure hand waving cannot be assailed in any way.

  48. I would advise that, instead of conjecturing a first order model, you actually deconvolve the data to find a more precise form of the transfer function. I would recommend using Wiener deconvolution with the FFT:

    https://en.wikipedia.org/wiki/Wiener_deconvolution

    I like to transform to the time domain to get an estimated impulse response. The impulse response naturally grows more uncertain as the number of data points available to estimate the correlation becomes smaller. You can often identify a transition region where the impulse response becomes less coherent, and the estimate becomes dominated by noise.

    A tapered window applied to the data then eliminates the noise-dominated portion, and inverse transforming back to the frequency domain then produces an estimate of the transfer function smoothed by the transform of the window response. A rational transfer function can then be fitted, and the coefficients used to create a filter network representing it.
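As a concreteness check, here is a toy NumPy version of that procedure (a sketch, not the commenter’s actual code): a known first-order exponential impulse response is identified from noisy input/output records by Wiener deconvolution, with a constant noise-to-signal regularizer standing in for a measured noise spectrum.

```python
import numpy as np

# Synthesize a first-order (exponential) impulse response and a random
# forcing record; all constants below are illustrative assumptions.
rng = np.random.default_rng(1)
n, tau = 4096, 20.0
h_true = np.exp(-np.arange(n) / tau) / tau

u = rng.normal(size=n)                                    # input (forcing)
y = np.fft.ifft(np.fft.fft(u) * np.fft.fft(h_true)).real  # circular convolution
y += 0.01 * rng.normal(size=n)                            # measurement noise

# Wiener deconvolution: H_est = Suy / (Suu + regularizer).
U, Y = np.fft.fft(u), np.fft.fft(y)
reg = 1e-4 * np.mean(np.abs(U) ** 2)                      # assumed noise level
H_est = (np.conj(U) * Y) / (np.abs(U) ** 2 + reg)
h_est = np.fft.ifft(H_est).real                           # estimated impulse response

# The early, well-determined taps track the true exponential; the tail is
# noise-dominated -- the "transition region" described above.
err = np.max(np.abs(h_est[:100] - h_true[:100]))
```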

    • Wiener Deconvolution is an interesting approach. I’ll have to play with this.

      The problem is going to be that to apply a Fourier Transform, you are assuming the entire length of the record repeats ad infinitum. It doesn’t. We don’t have that much data. Nobody has that much data; it’s an inherent limit of signal-analysis math.

      To solve this issue you have to window the data to ensure there are no edge effects. (E.g. if the signal ends on 0.0 and starts on 1.0, you have a bogus step function spewing energy all over your spectrum!)

      So be sure to window the data properly. You will find, if you use a proper window, that those signals whose periods are longer than sample_length/2 go away. Possibly some shorter periods, depending on the phase of the signal in the sample window. Which is probably a good thing, according to Mr. Nyquist. I have found by experiment that you can’t reliably discern signals longer than sample_length/5.

      Edge effects in the time domain AND the frequency domain must be taken into account. Your best bet is to throw away those endpoints at the appropriate step in the analysis. This will be unsatisfactory because, due to the limited duration of the temperature record, that’s exactly the data you are interested in. Don’t succumb to that temptation. You don’t have enough data.

      The satellite record will have enough data in AD 2139 (160 years from 1979) to see all the ocean cycles for two periods. Let’s hope some of us live that long. My guess is we’ll die from succumbing to cold or heat as energy will become too expensive to heat or cool our homes properly.

      Peter
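The windowing point can be seen in a few lines (the numbers here are arbitrary, chosen only to show the mechanism): a sinusoid with a non-integer number of cycles in the record starts and ends on different values, and the implied step leaks energy across the whole spectrum; a Hann window suppresses the leakage.

```python
import numpy as np

# A sinusoid with a non-integer number of cycles (10.37, an arbitrary choice)
# "starts and ends on different values"; the implied step leaks energy
# across the spectrum.  A Hann window suppresses that leakage.
n = 1024
t = np.arange(n)
x = np.sin(2 * np.pi * 10.37 * t / n)

raw = np.abs(np.fft.rfft(x))                  # rectangular (no) window
win = np.abs(np.fft.rfft(x * np.hanning(n)))  # Hann window

# Leakage far from the signal peak near bin 10: the unwindowed spectrum
# carries orders of magnitude more spurious energy out there.
leak_raw = raw[100:].max()
leak_win = win[100:].max()
```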

      • “The problem is going to be that to apply an Fourier Transform, you are assuming the entire length of the record repeats ad infinitum.”

        You can look at it that way, if you consider the FFT to produce the coefficients of a Fourier Series. But, there is another way of looking at it as the samples of the Fourier Transform, smoothed by convolution with the sinc response of the finite data window. That sinc response convolution is the manifestation of the “edge effects” of which you speak.

        Since the Wiener deconvolution is operating on estimates of the cross spectral density and the power spectral density, the convolution is with a sinc^2 function, and edge effects are reduced. But, bias in the estimate is increased. There is always a tradeoff between bias and variance in statistical Fourier analysis.

        But, if your data set is much longer than the lowest frequency you are trying to resolve, satisfactory results are often obtained. You choose 1/5th as being the limit of resolution. I think that is not unreasonable, but it very much depends on the data, and its statistical properties, and 1/5th may be overly conservative in some cases.

      • I see little evidence that these data indicate cyclostationarity. Your autocorrelation estimate is just that – an estimate. And, it naturally becomes less accurate the longer the lag period, because you have fewer and fewer effectively independent data available to compare to one another. You really cannot rely on the fact that the peaks do not seem to be decreasing. This is likely just a statistical artifact.

        It might elucidate my point to share a small analysis of SSN I did several years ago. Here is a plot of the PSD I estimated. The peaks occurring at periods of 10, 10.8, 11.8, and 131 years indicated that the SSN data are the rectified measurements of a quasi-periodic process with central periods of 20 and 23.6 years.

        A two-mode model for the system is here. It assumes two lightly damped processes with natural frequencies 2pi/20 and 2pi/23.6 driven by wideband random noise – in actual fact, all that is required is that the input driver be effectively uniform in spectral density near the peaks, but this makes it very easy to model.

        With this model, I was able to produce data sets which look quite similar to the actual SSN in a qualitative manner, as here, and here.

        The next obvious step would have been to implement a Kalman Filter, train it on the historical data, and use that to propagate the predicted activity forward. This would produce not only an estimate of future activity, but error bars for it from the covariance propagation. But, this is not my day job, so I never took that step.

        The point is, what you are seeing is probably not cyclostationary, but just a standard 2nd order response with light damping. That’s pretty much what one should expect from a physical model of the climate system, whereby reservoirs of energy are alternatingly charged and depleted, in accordance with boundary conditions set by the configuration of the oceans and continents.
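The two-mode model described above is easy to reproduce in outline. The sketch below (damping ratio, noise level, and integrator are my assumptions, not the original analysis) drives two lightly damped oscillators with common white noise and rectifies the sum to mimic an SSN-like series:

```python
import numpy as np

# Two lightly damped second-order modes (natural periods 20 and 23.6 years)
# driven by common white noise, summed and rectified.
rng = np.random.default_rng(2)
dt = 0.1                  # years per step
n = int(600 / dt)         # 600 simulated years
zeta = 0.02               # light damping (assumed)

def damped_mode(period, forcing):
    """Integrate x'' + 2*zeta*w*x' + w^2*x = f with semi-implicit Euler."""
    w = 2.0 * np.pi / period
    x = v = 0.0
    out = np.empty(forcing.size)
    for i, f in enumerate(forcing):
        v += (f - 2.0 * zeta * w * v - w * w * x) * dt
        x += v * dt
        out[i] = x
    return out

forcing = rng.normal(size=n)
signal = damped_mode(20.0, forcing) + damped_mode(23.6, forcing)
ssn_like = np.abs(signal)   # "rectified" quasi-periodic series
```

The spectrum of `signal` is dominated by the two resonances, so its dominant period falls in the 20-24 year band even though the forcing is pure noise.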

      • but it very much depends on the data, and its statistical properties, and 1/5th may be overly conservative in some cases.

        Oh I agree, and that 1/5 was based on sinc-function smear of edge effects, not sinc^2, as you point out for cross spectral density. Maybe with this method we can get closer to the theoretical limit of 1/2. I’ll have to do some numerical testing of this, when I get some free time.

      • With this model, I was able to produce data sets which look quite similar to the actual SSN in a qualitative manner, as here, and here.

        I think you might have something that can generate a Monte Carlo simulation allowing hypothesis testing against the null hypothesis of “it’s just noise”. This answers Mr Patterson’s reply above about “let me know when you’ve something random that produces sunspots”…

  49. If TSI has a measurable effect on surface temps, you would expect to see small bumps in the temperature record corresponding to the SSN. Which, I think you can:

    • Those ‘bumps’ are probably 9.1y lunar bumps, or at best a mix of the two. Until there is some serious assessment of the longer lunar periodicity, any solar effect will be compromised, confused, and will inconveniently disappear or go through phase inversions and be written off.

      count the post warm bumps and estimate the period.

      Try plotting the following; it produces a 58-year modulation envelope:
      p1=9.1;p2=10.8;
      cos(2*pi*x/p1)+cos(2*pi*x/p2)
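For anyone wanting to try it, here is a runnable version of that snippet in Python/NumPy; the ~58-year envelope is just the beat period 1/(1/p1 - 1/p2) of the two cosines:

```python
import numpy as np

# Beat of a 9.1 y (lunar) and a 10.8 y (solar) cosine, per the snippet above.
p1, p2 = 9.1, 10.8
envelope_period = 1.0 / (1.0 / p1 - 1.0 / p2)   # ~57.8 years

t = np.arange(0.0, 300.0, 0.1)                  # years
y = np.cos(2 * np.pi * t / p1) + np.cos(2 * np.pi * t / p2)

# Equivalent product form: a fast carrier amplitude-modulated by a slow
# envelope 2*cos(pi*t*(1/p1 - 1/p2)), whose magnitude repeats every
# envelope_period years.
```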

      • I played around with that very relationship here once upon a time. Though HS was very gracious calling it “A new theory”, I really just saw it as an interesting possibility. I have not developed it any farther, but it’s nice to see someone thinking along the same lines.

    • Indeed. And, there are those who have shown that integrating the SSN produces a curve very similar to the temperature record.

      The difficult part to explain is why it should be an integral response, and what long term mechanisms would exist to dissipate energy in the long run to keep that integral from diverging to infinity.
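One standard resolution of that puzzle, offered here only as a sketch with made-up parameters: replace the pure integral with a leaky integrator (a first-order lag, dT/dt = k*F - T/tau). On timescales short compared with tau it behaves like an integral of the forcing, while the leak term supplies exactly the long-run dissipation that keeps the response from diverging.

```python
import numpy as np

# Leaky integrator dT/dt = k*F - T/tau, stepped with a 1-year Euler step.
# k, tau, and the zero-mean forcing are illustrative assumptions only.
dt = 1.0          # years
tau = 30.0        # relaxation (dissipation) time, assumed
k = 1.0

rng = np.random.default_rng(3)
forcing = rng.normal(size=2000)       # zero-mean "activity" anomaly

T = np.empty(forcing.size)
acc = 0.0
for i, f in enumerate(forcing):
    acc += dt * (k * f - acc / tau)   # integration with leakage
    T[i] = acc

# For comparison: a pure integral of the same forcing is a random walk whose
# excursions keep growing, while the leaky version stays bounded near zero.
pure = np.cumsum(forcing)
```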

      • IMO the time integral makes perfect sense, given the heat capacity of the oceans. The longer the sun shines with greater intensity, especially in the higher energy bands, the more the oceans will warm and the longer the effect will last into intervals of declining solar output, to say nothing of magnetic effects.

      • Yes but SSN is zero based. TSI goes up and down around some evolving mean, which earth quite happily dissipates back to space or we wouldn’t be here. Why would it therefore not be a simple integral response of sorts? Probably because the oceans are huge and Willis is correct. Emergent phenomena serve to modify that and the actual response is governed. Good luck picking that apart in a manner that doesn’t amount to mere curve fitting, water is wonderful stuff and there’s lots of it!

        Furthermore, does TSI vary abruptly under some conditions we’ve not measured yet? e.g. shortly after the sun starts to exhibit pronounced hemispherical disparity and enters a new regime, which if I’m not mistaken is what Leif is alluding to above. Take a look at the recent N-S SSN record and related conjecture about the Maunder period.

      • “TSI goes up and down around some evolving mean, which earth quite happily dissipates back to space or we wouldn’t be here.”

        True in the long run, but the system has huge inertia (and probably hysteresis as well, given the chaotic dynamics) so in the short term the mean can wander. Besides, what is described here merely delays the dissipation back to space (although some gets lost along the way).
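One hedged way to make the "integral that does not diverge" idea above concrete is a leaky integrator: the response accumulates the forcing but also relaxes back toward equilibrium. The numbers below are purely illustrative, not fitted to anything:

```python
import numpy as np

def leaky_integrator(forcing, k=0.01, tau=50.0, dt=1.0):
    """T[n+1] = T[n] + dt*(k*F[n] - T[n]/tau).

    k and tau are illustrative stand-ins, not physical values:
    k   -- sensitivity (response per unit forcing per year)
    tau -- relaxation time in years; this 'leak' is the dissipation
           term that keeps the integral from diverging.
    """
    T = np.zeros(len(forcing) + 1)
    for n, F in enumerate(forcing):
        T[n+1] = T[n] + dt*(k*F - T[n]/tau)
    return T

# A constant positive anomaly does NOT diverge: T settles at k*tau*F.
T = leaky_integrator(np.full(2000, 1.0))
print(T[-1])   # approaches 0.01 * 50 * 1.0 = 0.5
```

With tau → infinity this reduces to the pure (divergent) integral; any finite tau bounds the response.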

    • Actually, I can account for all of the bumps with a 15 year cycle and a 40 year cycle. It looks like the solar energy is absorbed into the oceans and lasts for two 15 year cycles before it is no longer seen in the “bumps”. The effect is that a solar cycle impacts temperature when it is happening, and then 15, 30, and 40 years later. The 40 year cycle seems to have the biggest effect, probably because it coincides with the apparent 30 year cycle. Here is a picture of the solar cycle shifted by 40 years. I wonder if that could cause the 37 year delay found by OP.

    • Moreover, if Hansen and followers hadn’t pushed the mid 30s to mid 40s temperatures down, raised the 1998 one up to make it a new record and then continued with this ‘buoyancy’ of recent temperatures, the green line might be following the SSNs back down again – this is where the decline in the tree ring proxy was famously hidden by Mike’s Nature trick – maybe these trees are smarter than dendrochronologists.

  50. This paper stimulated a good discussion, and I want to thank the discussants: L. Svalgaard, W. Eschenbach, especially; and thank Jeff Patterson for informative and responsive answers. As they say of the editorial process, I think that the reviewers made suggestions that improve the paper.

  51. “In addition, the derived TCR implies a mechanism that reduces the climate sensitivity to CO2 to a value below the theoretical non-feedback forcing, i.e. the feedback appears to be negative”

    This is a bit like saying that “the existence of a sunflower suggests that the sun had been rising and setting in a specific location for a very long time”

    Feedback systems are the great thorn in the side of modelers. So many don’t grasp the concept that we live in a wonderful thermometer. A full appreciation of this requires an interest and studies in both the natural and physical sciences. There is no better window into the future than a sedimentary outcrop – when one learns how to read it.

    What would Earth’s climate be should there be no flora and fauna?

    Feedbacks (in many forms) in relation to elevating CO2 were always involved. The only question remaining is: what are they – and their impact? Get to work, there’s lots of it :-)

    Thanks to all contributors to this fascinating topic

    [Thermometer? or “thermostat” ? .mod]

  52. This post makes the common mistake of analyzing too short a period. I am really amazed at the ability of the science community at large, including even most of the skeptics who post here, to ignore the obvious millennial cycle which peaked about 2003. Far from being a “wicked” problem, if you use simple common sense, forecasting the timing and likely amplitude of the longer wave trends is reasonably simple. See http://climatesense-norpag.blogspot.com/2015/08/the-epistemology-of-climate-forecasting.html
    and
    http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html

    • For a putative 1000-year cycle to state that it peaked in 2003 is silly. And ‘common sense’ is not so common. Especially not in this context. And ‘simple common sense’ is likely to be ‘simplistic common sense’.

      • But the CET spectral composition does have a response at 100+ years, at least in the winter.

        In the winter, when the insolation is low, the Arctic geomagnetic response to CMEs comes to mind. CET summers have a near-zero-inclination trend line, while the winters are at 0.4C/century, i.e. all warming in the CET is the result of the rise in winter temperatures (spring and autumn are almost the arithmetic average of summer & winter).

      • Maybe I can assist Norman.

        There appears to have been a peak around 2003 but it may not be the final peak of the 1000 to 1500 year cycle.

        It does seem a bit early to have reached the peak of that cycle so maybe the sun will pick up again in cycle 25, maybe not.

        Subject to that, Norman’s general approach is simple (not simplistic) common sense in the light of data currently available.

        If new data comes to light that common sense can be reapplied appropriately.

      • Obviously I’m being a bit provocative, maybe even humorous. The link says:
        “Grandpa says- I’m glad to see that you have developed an early interest in Epistemology. Remember, I mentioned the 60 year cycle; well, the data shows that the temperature peak in 2003 was close to a peak in both that cycle and the 1000 year cycle. If we are now entering the downslope of the 1000 year cycle then the next peak in the 60 year cycle at about 2063 should be lower than the 2003 peak, and the next 60 year peak after that at about 2123 should be lower again, so, by that time, if the peak is lower, we will be pretty sure that we are on our way to the next little ice age.

        That is a long time to wait, but we will get some useful clues a long time before that. Look again at the red curve in Fig 3 – you can see that from the beginning of 2007 to the end of 2009 solar activity dropped to the lowest it has been for a long time. Remember the 12 year delay between the 1991 solar activity peak and the 2003 temperature trend break. If there is a similar delay in the response to lower solar activity, earth should see a cold spell from 2019 to 2021, when you will be in Middle School.

        It should also be noticeably cooler at the coolest part of the 60 year cycle – halfway through the present 60 year cycle at about 2033.

        We can watch for these things to happen but meanwhile keep in mind that the overall cyclic trends can be disturbed for a time in some years by the El Nino weather patterns in the Pacific and the associated high temperatures that we see in for example 1998 and 2010 (fig 2) and that we might see before the end of this year- 2015.”

  53. “We found magnetic wave components appearing in pairs, originating in two different layers in the Sun’s interior. They both have a frequency of approximately 11 years, although this frequency is slightly different, and they are offset in time. Over the cycle, the waves fluctuate between the northern and southern hemispheres of the Sun. Combining both waves together and comparing to real data for the current solar cycle, we found that our predictions showed an accuracy of 97%,” said Zharkova.

    Zharkova and her colleagues derived their model using a technique called ‘principal component analysis’ of the magnetic field observations from the Wilcox Solar Observatory in California. They examined three solar cycles-worth of magnetic field activity, covering the period from 1976-2008. In addition, they compared their predictions to average sunspot numbers, another strong marker of solar activity. All the predictions and observations were closely matched.

    Looking ahead to the next solar cycles, the model predicts that the pair of waves become increasingly offset during Cycle 25, which peaks in 2022. During Cycle 26, which covers the decade from 2030-2040, the two waves will become exactly out of synch and this will cause a significant reduction in solar activity.

    “In cycle 26, the two waves exactly mirror each other – peaking at the same time but in opposite hemispheres of the Sun. Their interaction will be disruptive, or they will nearly cancel each other. We predict that this will lead to the properties of a ‘Maunder minimum’,” said Zharkova. “Effectively, when the waves are approximately in phase, they can show strong interaction, or resonance, and we have strong solar activity. When they are out of phase, we have solar minimums. When there is full phase separation, we have the conditions last seen during the Maunder minimum, 370 years ago.”
    https://www.ras.org.uk/news-and-press/2680-irregular-heartbeat-of-the-sun-driven-by-double-dynamo

    • Only reliable long term Sunspot Rmax envelope prediction is one that matches the Svalgaard SSNumbers

      Svalgaard SSN Rmax – Vukcevic formula correlation factor R^2 = 0.76 (excluding SC20)

      Taking into account that the sunspot count is a subjective estimate based on interpreting the visual appearance of the solar disk, the SSN so obtained may not be a 100% accurate reflection of the sun’s magnetic activity; to allow for such errors, 15% error bands are displayed.

      • The NAIRAS system uses data from Oulu.
        “The NAIRAS model predicts atmospheric radiation exposure from galactic cosmic rays (GCR) and solar energetic particle (SEP) events. GCR particle propagation from local interstellar space to Earth is modeled using an extension of the Badhwar and O’Neill model, where the solar modulation has been parameterized using high-latitude real-time neutron monitor measurements at Oulu, Lomnicky, and Moscow. During radiation storms, the SEP spectrum is derived using ion flux measurements taken from the NOAA/GOES and NASA/ACE satellites. Transport of the cosmic ray particles – GCR and SEP – through the magnetosphere is estimated using the CISM-Dartmouth particle trajectory geomagnetic cutoff rigidity code, driven by real-time solar wind parameters and interplanetary magnetic field data measured by the NASA/ACE satellite. Cosmic ray transport through the neutral atmosphere is based on analytical solutions of coupled Boltzmann transport equations obtained from NASA Langley Research Center’s HZETRN transport code. Global distributions of atmospheric density are derived from the NCEP Global Forecasting System (GFS) meteorological data.”
        http://sol.spacenvironment.net/nairas/index.html

      • “A cosmic ray destined to be detected by the Inuvik neutron monitor starts out heading for a point over the Pacific Ocean, west of Mexico. About 60,000 km away from Earth, the particle begins to experience effects of the Earth’s magnetic field, which deflects the particle towards Inuvik. The first interaction with an air molecule happens about 20 km above Inuvik.

        It has been proposed that cosmic ray monitors be equally spaced around the poles to achieve the best view into outer space. Inuvik is geographically well located to record cosmic rays and has the support services needed for a monitor.”
        http://neutronm.bartol.udel.edu//listen/main.html

  54. The theory is interesting, although for a WUWT audience a presentation with fewer abbreviations and less technical shorthand would be preferable (e.g., what does “cyclostationary” mean?). However, there is one major failing which is common to many such dissertations. It assumes the Hadcrut temperature reconstruction is accurate and representative, yet there is plenty of evidence to suggest that it is massively falsified, with earlier temperatures depressed and more recent temperatures enhanced to show a greater warming trend. Certainly the satellite and balloon data sets show a different picture. If the Hadcrut reconstruction is not accurate, your theory simply shows good correlation to wrong data, which means it does not accurately model reality.

    As others have pointed out above, there is excellent correlation between the rate of change of atmospheric CO2 and the more reliable satellite ocean temperature data (that the correlation is to the rate of change of CO2 simply means the ocean outgassing of CO2 has a long time constant, which we know it does due to slow ocean overturning – 800 years). It means, as you have in effect pointed out, that man’s use of fossil fuels may not even be the reason for the rise in CO2.
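The rate-of-change comparison described above is easy to sketch. The series below are synthetic, built only to show the mechanics (they are NOT the real MLO or satellite data): the CO2-like series is constructed so its growth rate, not its level, tracks temperature:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly series, purely illustrative: a 'temperature' signal
# and a CO2-like series whose GROWTH RATE follows temperature
# (an outgassing-style response), plus noise.
n = 480                                     # 40 years of months
temp = np.sin(2*np.pi*np.arange(n)/64) + 0.1*rng.standard_normal(n)
growth = 0.1 + 0.05*temp + 0.01*rng.standard_normal(n)
co2 = 300.0 + np.cumsum(growth)             # ppm-like level

# The signature in question: temperature correlates with the first
# difference of CO2 (its rate of change), not with its level.
r_level = np.corrcoef(temp[1:], co2[1:])[0, 1]
r_rate = np.corrcoef(temp[1:], np.diff(co2))[0, 1]
print(f"r(temp, CO2 level) = {r_level:.2f}")
print(f"r(temp, dCO2/dt)   = {r_rate:.2f}")
```

On data built this way the rate correlation is strong while the level correlation is weak, which is the pattern the comment attributes to the observed series.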

    • So, we have a situation where two wrongs are supposed to make a right: both the TSI record and the temperature records are wrong, yet we are supposed to believe that the fit of the Temps to the TSI has any physical meaning. I say it does not.

  55. Could you please provide a real log scale on figure 1a. And where is figure 1b, which is the foundation of all the following hypotheses?

  56. Stephen Wilde February 9, 2016 at 1:58 am

    This post brings us back to this again:

    http://joannenova.com.au/2015/01/is-the-sun-driving-ozone-and-changing-the-climate/

    Where changes in TSI (very small) serve as a proxy for the real culprit (much larger): the change in the mix of wavelengths and particles from the sun, which affects the balance of the ozone creation / destruction process differently at different heights and latitudes, thereby altering the gradient of tropopause height between equator and poles, which drives changes in global cloudiness.

    I looked at your paper. It had a shocking lack of numbers. Your theory is that solar changes temporally related to (but distinct from) the sunspot cycle affect the ozone layer. In particular, when the sun is quiet (few sunspots), you say there will be less ozone at the tropics and more at the poles. Similarly, when the sun is active (many sunspots) you say there will be more ozone at the tropics and less at the poles.

    However, as far as I could see there was no attempt to verify this by observations. This seemed like it would be easy to test. I got the Mauna Loa and South Pole ozone data from here. The most sensitive indicator will be the difference between the two locations, since this would detect the hypothesized polar-tropical ozone swings you mentioned. Here are the results:

    There is no sign of the hypothesized tropical-polar ozone swings. The R^2 is only 0.04 …

    Regards,

    w.
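The kind of test Willis describes can be reproduced for any two series: regress the tropical-minus-polar ozone difference on a solar index and read off R². A generic sketch, with stand-in arrays in place of the actual Mauna Loa / South Pole data:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of the least-squares line y ~ a*x + b.
    For a simple linear fit this equals the squared Pearson correlation."""
    a, b = np.polyfit(x, y, 1)
    resid = y - (a*x + b)
    return 1.0 - np.sum(resid**2) / np.sum((y - y.mean())**2)

# Stand-in series (NOT the real ozone data): a smooth ~11-year 'solar
# index' sampled monthly, and an ozone difference that is pure noise.
rng = np.random.default_rng(1)
months = np.arange(400)
solar = np.sin(2*np.pi*months/132)
ozone_diff = rng.standard_normal(400)
print(f"R^2 = {r_squared(solar, ozone_diff):.3f}")
```

With unrelated series the R² lands near zero, which is how a value like the quoted 0.04 is read as "no sign" of the hypothesized swings.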

  57. Maybe I just need to think about it for a while, but I can’t for the life of me understand what “high frequency components” of T vs. CO2 concentration could possibly mean. Sure, it’s a smoothing technique, but I have absolutely no doubt it has no business being applied to this data.

  58. LT February 9, 2016 at 8:30 am

    That graph says it all: since 1980 the stratosphere is 1 deg K cooler, which implies more solar energy is entering the troposphere and/or the surface today than before those eruptions. Until that energy imbalance is accounted for, there can be no meaningful analysis of climate metrics associated with solar forcings at the troposphere all the way down to the surface.

    Thanks for that, LT. You are correct that the stratosphere is cooler today. However, you’ve only alluded to one of the three ways that the stratosphere gains energy, from the absorption of downwelling solar radiation.

    In addition to being warmed that way, the stratosphere is also warmed from below, by the radiation from the troposphere as well as by radiation absorbed directly from the surface. The approximate sizes of these three sources of warming are:

    Direct solar absorption: 10 W/m2

    Absorption of upwelling surface radiation: 13 W/m2

    Absorption of tropospheric radiation: 268 W/m2

    As you can see, a change in any one of these three will change stratospheric temperatures. Clearly, the stratosphere is absorbing less energy, which is why it is cooler … but which energy source has decreased?

    All the best,

    w.

    • “Absorption of tropospheric radiation: 268 W/m2”

      Really? That may be the amount passing through. But it would have dramatic effects if it was the amount absorbed.

      • Indeed, I’m not sure what Willis meant there.

        So we are left with absorption of upward IR and incoming SW. So we have to look at the form of the data to see whether it tells us anything.

        If we ignore the obvious volcanic origin of the changes and draw a straight line we may attempt to attribute the changes to AGW and upward IR.

        If we stop imposing preconceived ideas by fitting linear models to data that are not linear and just look at the data we note two downward steps obviously attributable to the two eruptions and flat since.

        We may then recall that the troposphere did the opposite: warmed and then flat since.

        There were not two step changes in CO2 , so the most likely explanation to both observed changes is a change in the transparency of the lower stratosphere that is now letting more incoming solar into the troposphere.

        Ozone is a key factor in SW absorption, but the natural processes which removed the volcanic aerosols may well have also scrubbed some of the anthropogenic pollution that had built up in preceding decades.

      • Nick Stokes February 9, 2016 at 10:11 pm Edit

        “Absorption of tropospheric radiation: 268 W/m2”

        Really? That may be the amount passing through. But it would have dramatic effects if it was the amount absorbed.

        Let me see if I can explain. Total absorbed by stratosphere = 10 + 13 + 268 =291 W/m2, half of which is radiated upwards and half downwards.

        291 / 2 = 145.5 W/m2, which has a blackbody temperature of -48°C, the approximate temperature at the tropopause.

        Note that radiation balance is maintained at all three levels (surface, troposphere, lower stratosphere), as the amount entering each level is equal to the amount leaving.

        Regards,

        w.
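Willis's arithmetic above is a straightforward Stefan-Boltzmann check: 291 W/m² absorbed, half radiated each way, and 145.5 W/m² corresponds to a blackbody near the quoted -48°C:

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

absorbed = 10 + 13 + 268   # W/m2, the three sources quoted above
up = absorbed / 2          # half radiated upward, half downward

# Invert F = sigma * T^4 to get the equivalent blackbody temperature.
T_kelvin = (up / SIGMA) ** 0.25
T_celsius = T_kelvin - 273.15
print(f"{up} W/m2  ->  {T_celsius:.1f} C")   # ≈ -48 C
```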

      • Willis,
        “Total absorbed by stratosphere = 10 + 13 + 268 =291 “

        That’s the issue in dispute. How do you know that is the amount absorbed? It seems you’re saying that all the IR passing through the stratosphere is absorbed and reradiated. The conventional view is that it passes through mostly without interacting. Stratosphere optical depth is very small.

      • Thanks, Nick. If you dislike my analysis please submit your own. Mine is the simplest model possible, two atmospheric layers and a surface layer. (You can’t make it all balance with one atmospheric layer, that’s where Trenberth went wrong). And the layers must be physically separate. I posit that those layers are the lower troposphere, and the area at and just above the tropopause.

        The problem is that we have pretty good ideas of the various flows and temperatures, so we’re not free to pick just any numbers. For example, we know that the temperature at the tropopause is ~ -50°C.

        That’s one way it can balance. There may be others. I invite you to investigate and propose your own.

        w.

      • Willis,
        My analysis is simple. The stratosphere is essentially transparent to outgoing IR. Virtually none is absorbed. I can believe 10 W/m2 of solar radiation absorbed.

        A simple test is your observation that if absorbed, it will be re-radiated, half up, half down. So I go to Modtran, and with Tropical, everything default, but altitude 17km, looking up, it tells me that I_out, W / m2 = 9.47024
        That is, just 9.5 W/m2 from above, not 134 (268/2). If I switch to looking down, it is 289.037 W/m2. The IR is radiated from below the tropopause, and only about 3% comes back.

    • Willis, your numbers surprise me greatly. For the stratosphere to absorb energy there must be something capable of absorbing. Nitrogen and oxygen have no absorption lines in the thermal IR region of the spectrum, so they can’t absorb. Water does, but by all accounts the stratosphere is extremely dry, so very low in water vapour. CO2 does absorb at 14 microns, but if the absorption were by CO2, what absorbs also emits (emissivity = absorptivity), so the apparent emission temperature at the CO2 wavelengths should reflect the temperature of the stratosphere, which can be as high as about 270K. But it’s not: the emission temperature at 14-15 microns as seen by satellites is around 220K, which is the temperature of the tropopause. So it cannot be CO2 doing the absorbing. What is left, and at what wavelengths would the absorption be occurring? Any gas present in the stratosphere will also be present in the troposphere, and since the stratosphere is warmer than the tropopause, any such gas would be radiating more energy than it is absorbing.

      It also cannot be by conduction or convection from lower layers, because the stratosphere is warmer than the tropopause, i.e. there is a temperature inversion. I suspect the dominant energy input to the stratosphere is absorption of very short UV wavelengths forming ozone high in the stratosphere, and then absorption of longer wavelength UV by the ozone formed. The stratosphere stays warm even though it absorbs little energy precisely because it has very limited means of radiating any energy – predominantly the 10 micron line from ozone. So if the stratosphere is cooling down, it probably means less solar UV is being absorbed. From what I understand, the sunspot cycle and solar activity affect the very short wave UV component the most, so it would not be surprising if the biggest impact was on the stratosphere.

      • Yes, the biggest impact is on the gradient of tropopause height between equator and poles, which allows latitudinal sliding of the climate zones and jet stream tracks beneath the altered gradient.

        That affects total global cloudiness for an effect on the proportion of solar energy entering the oceans.

      • Spot on, Michael. The confusion between entropy-directed heat transfer and state-driven radiative intensity, introduced by Trenberth’s cartoon, continues to plague the unwary.

    • “A minimum atmospheric temperature, or tropopause, occurs at a pressure of around 0.1 bar in the atmospheres of Earth, Titan, Jupiter, Saturn, Uranus and Neptune, despite great differences in atmospheric composition, gravity, internal heat and sunlight. In all of these bodies, the tropopause separates a stratosphere with a temperature profile that is controlled by the absorption of short-wave solar radiation, from a region below characterized by convection, weather and clouds. However, it is not obvious why the tropopause occurs at the specific pressure near 0.1 bar. Here we use a simple, physically based model to demonstrate that, at atmospheric pressures lower than 0.1 bar, transparency to thermal radiation allows short-wave heating to dominate, creating a stratosphere. At higher pressures, atmospheres become opaque to thermal radiation, causing temperatures to increase with depth and convection to ensue. A common dependence of infrared opacity on pressure, arising from the shared physics of molecular absorption, sets the 0.1 bar tropopause. We reason that a tropopause at a pressure of approximately 0.1 bar is characteristic of many thick atmospheres, including exoplanets and exomoons in our galaxy and beyond. Judicious use of this rule could help constrain the atmospheric structure, and thus the surface environments and habitability, of exoplanets.”


      http://www.nature.com/ngeo/journal/v7/n1/abs/ngeo2020.html

    • The stratosphere is completely transparent to infrared radiation. Only a very strong volcanic eruption may increase temporarily the temperature in the stratosphere due to an increase in density.
      The temperature in the stratosphere increases only by UV energy, as shown in the graphic below.

      • The question is whether the strong ionizing radiation in the ozone zone during the polar night can raise the temperature in the stratosphere.

  59. Jeff Patterson,

    Your lead post stimulated a wonderful scientific dialog. Thank you, for providing it. Solar focused posts are among the best topics at WUWT.

    Happy belated Chinese Lunar New Year.

    John

  60. When pure integration–a demonstrably unstable operation in the general case–is married to a fudge-factor feedback and a wholesale 37-year lag, one gets a geophysically implausible system model, scarcely corresponding to the verbally expressed speculations about ocean storage and release of heat.

    What would provide a far more plausible system model is the RLC circuit, with capacitance and inductance components providing a frequency-dependent phase lag. In the case of a pure RC circuit, one gets an exponentially fading impulse response, which constitutes not an FIR filter resembling the gaussian, but a recursive IIR filter, commonly termed the exponential low-pass filter.

    BTW, the notion that a signal can be “denoised” simply by low-passing without any a priori specification of signal characteristics is a widespread mistake among analytic novices.
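The RC analogy above corresponds to the standard one-pole (exponential) IIR low-pass. A minimal sketch, contrasting its fading impulse response with a finite kernel such as the gaussian FIR used in the post:

```python
import numpy as np

def exp_lowpass(x, alpha):
    """One-pole IIR filter: y[n] = alpha*x[n] + (1-alpha)*y[n-1].
    The discrete analogue of an RC circuit, with an exponentially
    fading (infinite) impulse response rather than a finite kernel."""
    y = np.zeros(len(x))
    prev = 0.0
    for n, xn in enumerate(x):
        prev = alpha*xn + (1 - alpha)*prev
        y[n] = prev
    return y

# Impulse response: a unit spike decays geometrically, h[n] = a*(1-a)^n,
# unlike the symmetric finite kernel of a gaussian FIR smoother.
impulse = np.zeros(20)
impulse[0] = 1.0
h = exp_lowpass(impulse, alpha=0.3)
print(h[:4])   # 0.3, 0.21, 0.147, 0.1029
```

The key structural difference: the FIR output at time n depends only on a fixed window of past inputs, while the recursive IIR output depends on its own previous value, giving the frequency-dependent phase lag the comment attributes to capacitive elements.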

    • Hey, the specification is clear enough: long term rise is AGW the rest is “noise” QED.

      Please pay attention at the back ;)

      • “Long-term rise” is a hand-waving verbal description, not a scientific specification of signal characteristics.

  61. {bold emphasis mine – John Whitman}

    lsvalgaard on February 9, 2016 at 4:01 am

    TSI in solar cycle 24 is indeed anomalous. This is something we are actively investigating at the moment. Here is the evidence for that:

    [was a graph here in Leif’s comment]

    We compare five TSI series (ACRIM3, SORCE, PMOD, RMIB, TCTE), adjusted to match SORCE up through 2008 [necessary because there are small systematic differences between them], with five solar indices (new sunspot number SN, sunspot areas SA, group number GN, Magnesium II UV MGII, F10.7 flux) scaled to the SN scale and matched to their cycle 23 values. As you can see, everything matches in SC23, but the TSIs are too high in SC24. We believe this is correct, and not just problems with the data. The Sun may be telling us that we are entering a new regime.

    Leif,

    Referring to the bold emphasized quotes, can you elaborate possible reasons for the new ‘anomalous’ regime?

    John

    • Wild speculation: solar magnetic fields are generated below the solar ‘surface’ and are shredded by the convection into many separate ‘strands’ of magnetic flux ‘ropes’ which may or may not reach the surface or be visible as small spots [pores]. The pores then assemble into larger spots [this is an observational fact]. If that process should work less efficiently we still get the magnetic field [brighter sun] but not so many visible spots [dimmer sun]. There is some indication that this is the case as the number of spots per group has fallen to half of what it used to be and also that the smallest groups seem to be rarer. If so, we get a brighter sun without the usual solar activity indices reflecting that.

    • Munch, munch, munch. Glug glug.

      Hot popcorn, cold beer, and Leif peppering the conversation. Love it. Especially re: Maunder Minimum.

      So the wild guess is no spots leads to brighter Sun leads to TSI up… but Earth was cold at times during the Maunder Minimum. Chair squirming must be at a crescendo at the moment.

      Munch, munch, munch. Glug glug.

      • Medieval warming and LIA were caused by the Dinosaurs, not the Sun.

        Variables are only so useful if they can be replaced by oranges.

      • You are kidding, right? There are several studies that suggest correlations, but dinosaurs aren’t one of them, and solar-driven mechanisms are one helluva stretch.

  62. Lots of discussion on the sunspots issue.

    I presented three papers at the Symposium on Earth’s Near Space Environment, 18-21 February 1975, held at the National Physical Laboratory, New Delhi. These were published in Indian Journal of Radio Space Physics, Vol 6, March 1977, pp. 44-50, 51-59 & 60-66. In these I presented: the effect of solar flares on lower tropospheric temperature & pressure; power spectral analysis of lower stratospheric dynamic height, temperature, and zonal and meridional components of wind at 100, 50 & 30 mbar; and power spectral analysis of total and net radiation intensities. [The abstracts of these were included in abstract volumes on solar & terrestrial physics compiled by SCOSTEP of USAS in 1976/77.]

    The effects of solar flares on pressure are more pronounced than those on temperature.

    It is inferred that the annual and semi-annual components of temperature are of the same origin, i.e., solar radiation. The discrepancies observed in the different periods of oscillation during the 10-year period are attributed to the fifth harmonic of the sunspot cycle acting in unison with multiple modes of the annual cycle, where higher modes are prominent at lower sunspot activity and lower modes at higher sunspot activity. It is inferred from this analysis, therefore, that the QBO is only a fifth harmonic of the 11-year sunspot cycle, and that these cycles are formed in situ. This in situ inference was made because the base angles of the QBO for t, H, u & v are all in phase at 50 mbar levels, while at 100 & 50 mbar a lag is seen.

    The total solar and net radiation intensities show the sunspot cycle. It is therefore suggested that during the sunspot cycle there is a certain change in the solar radiation emitted by the sun itself, which in turn is reflected in other atmospheric processes also. They show multiples of the sunspot cycle [22 & 44 years]. When we integrate these, the patterns show differences from cycle to cycle. For example, Durban rainfall presents a 66-year cycle; when integrated with the sub-multiple of 22 years, the pattern presents a ‘W’ followed by an ‘M’ shape. Thus, whenever we encounter a cyclic variation with sub-multiples or multiples, the integrated pattern will be quite different. This is clearly seen in Fortaleza rainfall in northeast Brazil.

    Dr. S. Jeevananda Reddy

  63. Those who would like to understand the sun-climate change in the N. Hemisphere since the time of the Maunder Minimum may benefit from reading this

    There is a very strong likelihood that the N. Atlantic tectonics responds to the same driver as the longer-term solar variability. The sun’s response is a bit delayed (about 12 years; this may give some indication of the depth of the solar dynamo). As the graph shows, the amplitude correlation is not perfect, but then nor are the data for tectonics or SGN on which TSI estimates are made. A major point of contention is the negative correlation around 1860 (on the graph outlined in green).
    As far as climate is concerned, it should be noted that there has been a gradual delay of the NH’s temperatures relative to the tectonics since 1900; thus a significant cooling in the N. Hemisphere could be expected by the mid 2020s, lasting to the mid 2040s.
    Possible explanation for this delay could be slowdown in the N. Atlantic subpolar gyre circulation. The North Atlantic’s Subpolar gyre is the engine of the heat transport across the North Atlantic Ocean. This is a region of the intense ocean – atmosphere interaction. Cold winds remove the surface heat at rates of several hundred watts per square meter, resulting in deep water convection. These changes in turn affect the strength and character of the Atlantic thermohaline circulation (THC) and the horizontal flow of the upper ocean, thereby altering the oceanic poleward heat transport and the distribution of sea surface temperature (SST).
    ( see link )
    Point of interest: the long-term solar variability modulation (see graph as discussed further above) could not pick up the anomalous SC20, but the tectonics graph does.

  64. Speculating further about sunspots, as I understand it the overall visible radiation is dimmed by the presence of sunspots even though the sunspot and TSI maxima coincide. Is it possible that during this period of maximum solar turbulence radiation from deeper within the sun can be released? If so, what is its nature and does it make a significant change to the solar spectral distribution?

  65. Schrodinger’s question raises my own. What were coronal holes doing while spots were hard to detect during the Maunder Minimum? Did they diminish in size? Did they appear in places other than the usual bands? Did the resulting solar wind increase or decrease? Did the holes last longer or dissipate quicker? What would satellites have had to deal with then?

    http://www.nasa.gov/content/solar-dynamics-observatory-welcomes-the-new-year

    http://www.swpc.noaa.gov/phenomena/coronal-holes

  66. Our good old ‘friend’ John Cook from ‘Skeptical Science’ asked himself some years ago: What ended the Little Ice Age?
    He then elaborates: “This analysis is a useful reminder that CO2 is not the only driver of climate.
    – To end the Little Ice Age, the sun did most of the early heavy lifting.
    – When the solar contribution flattened out in the mid-20th century, humanity took the baton and we’ve been running with it ever since.”
    My view is in very small letters on the graph you may find further above.
    What do you think? Is John Cook right or wrong in part one, part two, or both?

    • When the solar contribution flattened out in the mid-20th century
      This is a typical example of deflection. The solar contribution on the century time scale has been flat since 1700.

      • ‘natural internal stochastic variations’ = ‘have no idea, but too embarrassed to say so’
        And yet when one shows a natural process, with the data available and the power to enable a reversal of the previous trend, you confidently declare it a ‘crackpot finding’. Or is it that you find unpalatable the idea that there are geodynamic processes coincident with solar activity?
        Nature doesn’t go ‘random’ on such a scale; nature is ruled by cause and consequence. The fact that we do not understand it is our fault, not nature’s.

      • Admitting that there are things we do not understand in detail is the honorable thing to do.

        the idea that there are geodynamic processes coincident with solar activity
        Claiming that such unfounded ideas represent knowledge is the sure mark of crackpottery in which you excel.

      • There are numerous graphs purporting to represent Holocene temperatures; almost all of them make the end of the LIA (1650–1700) either the coldest or second-coldest interval of the whole period stretching back 8–10 kyr.
        Looking at the Maunder Minimum exit, the GSN doesn’t appear to show anything extraordinary; thus it may well be that solar activity was not the primary cause of the end of the LIA.
        Looking at the alternative, the impulse peaking in the 1720s is extraordinary in relation to what happened since, and possibly before. One could attempt to ‘process’ the data to reduce its amplitude, and perhaps get rid of the negative correlation around 1860, if the aim were just to highlight correlation with solar activity.
        The partial correlation with solar activity is coincidental, but could indicate the existence of a process we don’t understand, which would be ‘the honourable thing’ to admit to.
        What is far more important at this stage is to understand why the N. Hemisphere temperature suddenly reversed its trend. I am aware of only two data-supported possibilities:
        – the sun going into overdrive, which you say it didn’t; that is OK by me.
        – tectonic activity in the N. Atlantic going into overdrive, activity that has not been repeated in the subsequent 300 years, which is definitely not OK by you. If you know of another possibility I would like to know.
        It is not OK by you because it happens that, from time to time in those subsequent 300 years, tectonics occasionally shows the same or a similar trend as the GSN. So describing it as ‘crackpottery’ is your choice, and of course you are entitled to do so if that is indeed your true understanding, but I am inclined to think it may not be so.

      • Concede is not the right word to use.
        The issue with 10Be is that Greenland and Antarctica disagree. I quote here a recent paper by Muscheler et al.: “comparison to the revised sunspot records. We note that there is a difference in Greenland and Antarctic 10Be data that can lead to disagreeing conclusions about past solar activity levels. This difference is likely due to weather and climate influences in the data.”
        And it is likely not solar at all.

      • First of all, you omit the all-important conclusion of the paper you took the Figure from:
        “We observe that although recent 10Be flux in NGRIP is low, there is no indication of unusually high recent solar activity in relation to other parts of the investigated period.”

        As this Figure from a comparison of 14C and the new Group Number series shows:

        Solar activity since 1700 has not increased.

        So, won’t you concede that solar activity over the past 300 years has not increased?

      • lsvalgaard on February 10, 2016 at 11:12 am

        “Admitting that there are things we do not understand in detail is the honorable thing to do.”

        That is a Feynman-like statement. Thanks Leif.

        John

      • Leif, you say “The solar contribution on century time scale has been flat since 1700.”
        and “Solar activity since 1700 has not increased.”
        Look at the Berggren 10Be values at 1700 and the end-of-20th-century values.
        Look at your data at 1700 and 1990.
        Your statements quoted above are plainly wrong.
        What are you seeing differently?

      • Look at your data at 1700 and 1990
        Look at the data at 1728 and 2008…
        Are you an idiot, comparing a minimum with a maximum? Don’t you know there is an 11-year cycle?

        The point is [as Berggren says] that the 10Be data is contaminated by climate and that most of the variation is not solar. 14C is another [and better] indicator and it is plain that 14C reaches the same high values in the 17th, 18th, 19th, and 20th centuries, thus no long-term trend:

  67. Pamela – A quick look at the literature shows various relationships. If we take the SSN as the solar activity cycle, then its peaks coincide with those of solar UV, microwave flux, and TSI. The Ap index is slightly displaced.

    The solar wind velocity is completely out of phase as is the neutron count on earth.

    The solar magnetic field reverses every SSN cycle, the peaks and troughs in sync with SSN.

    I believe that there is some link between the solar cycle and our climate. It could be the cumulative effect of several small factors, for example TSI direct warming plus UV initiated chemical reactions plus solar wind and magnetic field influences on particles of different types resulting in changes in albedo, especially cloud cover.

    • I would never advocate a quick look at ANY literature. Journals carry a vast weight of the one-two punch of poorly done research combined with poorly done statistical analysis. Be a discerning picky eater of such material.

  68. The mid-century period of slight cooling from 1945 to 1975 – referred to as the ‘grand hiatus’, also has not been satisfactorily explained.

    Um . . . negative PDO phase is not a satisfactory explanation?

  69. Galactic radiation is still growing; Oulu shows the neutron count at Earth. You have to remember that the strongest interaction with air particles occurs at an altitude of about 20 km, in the ozone layer.

  70. Pamela, I got hold of data for each parameter, from several sources for each, and plotted each against TSI.

  71. Jeff Patterson February 11, 2016 at 3:07 pm

    Thanks Willis. Just to make sure we’re on the same page, I thought your comments re the trend were referencing the G. Kopp, N. Krivova, C.J. Wu study which you and Leif warned me off of. The results posted above used the SSN series Leif pointed me to, after applying his conversion (to TSI) formula. I’m assuming that’s the best we can hope for.

    Thanks for the clarification, Jeff. You are correct that that is as good as we might hope … although I vaguely remember a study showing that TSI continued to vary during the Maunder Minimum …

    Re: 85 years. It’s actually the 405-year TSI record that’s of import. I left-pad the CO2 record to match, assuming a constant 285 ppm prior to 1732. Errors in this assumption have a negligible effect by 1850, when the comparison starts.
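    For concreteness, that left-padding step could be sketched as below (the function name and the example series are hypothetical; only the constant 285 ppm prior to 1732 comes from the description above):

```python
def left_pad_co2(co2, first_year, pad_to_year=1732, baseline=285.0):
    """Prepend a constant pre-industrial baseline so the CO2 series
    starts at pad_to_year instead of first_year.

    co2: yearly CO2 values (ppm) beginning at first_year.
    Returns a list beginning at pad_to_year.
    """
    n_pad = first_year - pad_to_year  # years of constant baseline to prepend
    return [baseline] * n_pad + list(co2)

# Hypothetical example: a record beginning in 1850 gets 118 padded years.
padded = left_pad_co2([285.2, 285.4], first_year=1850)
```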

    But if I understand you correctly, you’re matching it to the 164 year temperature record.

    Re # of parameters: Calling it an eight-parameter model (setting aside the fact that it’s a convolution) ignores that there are four different functions being matched independently (autocorrelation of model vs. denoised temperature, CO2 vs. TSI, T vs. TSI, and the boundary condition).

    I fear I don’t see why that should make a difference. You still have an eight-parameter fitted model.

    I’ll see your Fermi and raise you a Kepler. He took Tycho Brahe’s data on the positions of the planets and derived his laws of planetary motion by trial and error. He finally found the equations that fit the data (he almost had it at one point, but Mars was off by about eight arc-minutes, so he rejected that equation), and when he did, many said “but it doesn’t fit what we think we know”. In fact we still don’t know _why_ two masses attract (or why a mass bends the space-time continuum, if you’d rather). He let the data, the fit, and the predictions it made speak for themselves. Most of the great advances in science have come that way.

    I fear that you still misunderstand Fermi’s objections. Fermi said:

    One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism.

    Kepler’s three laws were:

    The path of the planets about the sun is elliptical in shape, with the center of the sun being located at one focus. (The Law of Ellipses)

    An imaginary line drawn from the center of the sun to the center of the planet will sweep out equal areas in equal intervals of time. (The Law of Equal Areas)

    The ratio of the squares of the periods of any two planets is equal to the ratio of the cubes of their average distances from the sun. (The Law of Harmonies)

    Three different laws of motion … and not a single tunable parameter among them. Not one. And since Kepler had both clear mathematical formalism and no tunable parameters, Fermi would be satisfied.
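    (As an aside, the Law of Harmonies can be checked numerically with no fitting at all; a minimal sketch, using standard textbook orbital values:)

```python
# Kepler's Law of Harmonies: T^2 / a^3 is the same constant for every planet
# (T = orbital period in years, a = mean distance from the sun in AU).
planets = {
    "Earth":   (1.000, 1.000),
    "Mars":    (1.881, 1.524),
    "Jupiter": (11.86, 5.204),
}
ratios = {name: T**2 / a**3 for name, (T, a) in planets.items()}

# In these units every ratio comes out ~1.0 -- a "law" with zero fitted
# parameters, unlike a multi-parameter tuned model.
for name, r in ratios.items():
    assert abs(r - 1.0) < 0.01, (name, r)
```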

    So I fear that your example is a false parallel. You have no clear mathematical formalism and eight tunable parameters. Not the same at all.

    All that said (whew) I’m not claiming the model correct (even in the sense that no model is correct). I’m sharing an interesting correlation that deserves the attention of someone who can figure out why the model fits the data so well.

    Again with the claims of how well the model fits the data. I will say it again. I am not impressed in the slightest by how well your model fits the data. The model fits the data so well because you had free choice of any arbitrary transformation function, and you could fit the results by using eight tunable parameters.

    Again I say, Fermi was not impressed with how well Dyson’s model fit the observations, and I’m not impressed with how well your model fits the observations. Here we have three very smart men (Enrico Fermi, Freeman Dyson, and “Johnny” Von Neumann), all of whom agree with each other in telling us in plain English to AVOID THE PITFALL OF MULTI-PARAMETER TUNED MODELS.

    Are you seriously claiming to be smarter than those three? Or are you exempt somehow? You are facing the same problem the climate modelers face. After tuning their model on the historical record, how well model output fits that historical record is meaningless as a measurement of the strength of the model, and for the same reason—too many tunable parameters.

    Thanks for the ear. I can’t begin to tell you how much admiration I have for your work.

    Thanks, JP. I’m just doing what I see you doing—looking at the data itself and trying to draw your own conclusions.

    My best to you,

    w.

    • Newton had to invent a whole new branch of math to work out the physical explanation for the elliptical orbits curve-fit by Kepler. I’d say Jeff’s model pales in complexity next to that.

  72. rishrac, thanks for your reply. Let me recap the bidding.

    Willis Eschenbach February 11, 2016 at 2:34 pm

    lsvalgaard February 10, 2016 at 8:44 am

    I’ll stand by the original research that solar activity is a major player in climate
    What ‘original research’?

    rishrac February 10, 2016 at 12:56 pm

    There was/is a lot of research into solar activity and climate before AGW. It’s all over the place. It was also talked about extensively in the ham radio world. There are a lot of detailed records. The connection between solar activity and climate was fairly evident. …

    rishrac, when a scientist asks “What original research”, he is asking for links or citations to the research itself. All you’ve done is to repeat in varied forms your claim that there’s original research out there … but where?

    If you could provide a link to whatever it is that you are calling the “original research”, that would move the conversation forwards.

    w.

    You have now replied to my request for a link to your rumored “original research”, as follows:

    rishrac February 12, 2016 at 5:29 am

    Whatever I put up would be redundant. In fact some of the information here exceeds what was produced in the 1970’s.

    I will take that as your acknowledgement that in fact you don’t have any “original research” to link to.

    w.
