Guest essay by Jeff Patterson
Temperature versus CO2
Greenhouse gas theory predicts a linear relationship between the logarithm of CO2 atmospheric concentration and the resultant temperature anomaly. Figure 1 is a scattergram comparing the Hadcrut4 temperature record to historical CO2 concentrations.

UPDATE: Thanks to an alert commenter, this graph has now been updated with post 2013 data to present:

At first glance Figure 1a appears to confirm the theoretical log-linear relationship. However, if Gaussian filtering is applied to the temperature data to remove the unrelated high-frequency variability, a different picture emerges.
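The smoothing step can be sketched in a few lines of Python. This is an illustrative example on synthetic data (the filter width actually used for Figure 1b is not stated, so the sigma here is an assumption), using SciPy's `gaussian_filter1d`:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
years = np.arange(1850, 2016)
# Synthetic anomaly: slow trend plus high-frequency noise (illustrative only)
anomaly = 0.005 * (years - 1850) + 0.15 * rng.standard_normal(years.size)

# Gaussian low-pass: sigma (in samples, i.e. years) sets the cutoff scale
smoothed = gaussian_filter1d(anomaly, sigma=4, mode="reflect")
```

The filter preserves the slow trend while stripping most of the year-to-year variance; the boundary `mode` matters near the endpoints, a point taken up in the comments.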
Figure 1b contradicts the assertion of a direct relationship between CO2 and global temperature. Three regions are apparent where temperatures are flat to falling while CO2 concentrations are rising substantially. Also, a near step-change in temperature occurred while CO2 remained nearly constant at about 310 ppm. The recent global warming hiatus is clearly evident in the flattening of the curve above 380 ppm. These regions of anti-correlation were pointed to by Professor Judith Curry in her recent testimony before the Senate Subcommittee on Space, Science and Competitiveness:[6]
If the warming since 1950 was caused by humans, what caused the warming during the period 1910 –1945? The period 1910-1945 comprises over 40% of the warming since 1900, but is associated with only 10% of the carbon dioxide increase since 1900. Clearly, human emissions of greenhouse gases played little role in causing this early warming. The mid-century period of slight cooling from 1945 to 1975 – referred to as the ‘grand hiatus’, also has not been satisfactorily explained.
A much better correlation exists between atmospheric CO2 concentration and the variation in total solar irradiance (TSI). Figure 2 shows the TSI reconstruction due to Krivova[2] .

When the TSI time series is exponentially smoothed and lagged by 37 years, a near-perfect fit is exhibited (Figure 3).

Note that while, in general, correlation does not imply causation, here there is no ambiguity as to cause and effect. Clearly the atmospheric concentration of CO2 cannot affect the sunspot number from which the TSI record is reconstructed.
This apparent relationship between TSI and CO2 concentration can be represented schematically by the system shown in Figure 4. As used here, a system is a black box that transforms some input driving function into some output we can measure. The mathematical equation that describes the input to output transformation is called the system transfer function. The transfer function of the system in Figure 4 is a low-pass filter whose output is delayed by the lag td1 . The driving input u(t) is the demeaned TSI reconstruction shown in Figure 2b. The output v(t) is the time series shown in Figure 3a (blue curve) which closely approximates the measured CO2 concentration (Figure 3a, yellow curve).

In Figure 4, the block labeled 1/s is the Laplace-domain representation of a pure integration. Along with the dissipation feedback factor a1 it forms what system engineers call a “leaky integrator”. It is mathematically equivalent to the exponential smoothing function often used in time series analysis. The block labeled td1 is the time lag and G is a scaling factor to handle the unit conversion.
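As a concrete sketch, the leaky-integrator-with-lag block can be written as a one-line recursion (Python; `a`, `G` and `lag` play the roles of a1, G and td1, and the recursion is exactly exponential smoothing of the scaled, delayed input):

```python
import numpy as np

def leaky_integrator(u, a, G=1.0, lag=0):
    """Leaky integrator: v[n] = v[n-1] + a*(G*u[n-lag] - v[n-1]).
    Equivalent to exponential smoothing of the scaled, delayed input."""
    v = np.zeros(len(u))
    for n in range(1, len(u)):
        drive = G * u[n - lag] if n >= lag else 0.0
        v[n] = v[n - 1] + a * (drive - v[n - 1])
    return v

# A unit step settles toward G with a time constant of roughly 1/a samples
out = leaky_integrator(np.ones(200), a=0.05)
```

The dissipation factor `a` sets how "leaky" the integration is: small `a` means long memory, large `a` means the output tracks the input quickly.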
In a plausible physical interpretation of the system, the dissipative integrator models the ocean heat content which accumulates variations in TSI; warming when it rises above some equilibrium value and cooling when it falls below. As the ocean warms it becomes less soluble to CO2 resulting in out-gassing of CO2 to the atmosphere.
The fidelity with which this model replicates the observed atmospheric CO2 concentration has significant implications for attributing the source of the rise in CO2 (and by inference the rise in global temperature) observed since 1880. There is no statistically significant signal of an anthropogenic contribution to the residual plotted in Figure 3c. Thus the entirety of the observed post-industrial rise in atmospheric CO2 concentration can be directly attributed to the variation in TSI, the only forcing applied to the system, whose output accounts for 99.5% (r² = .995) of the observational record.
How then, does this naturally occurring CO2 impact global temperature? To explore this we will develop a system model which when combined with the CO2 generating system of Figure 4 can replicate the decadal scale global temperature record with impressive accuracy.
Researchers have long noted the relationship between TSI and global mean temperature.[5] We hypothesize that this too is due to the lagged accumulation of oceanic heat content, the delay being perhaps the transit time of the thermohaline circulation. A system model that implements this hypothesis is shown in Figure 5.

As before, the model parameters are the dissipation factor a2 that determines the energy discharge rate; the input offset constant Ci representing the equilibrium TSI value; scaling constants G1, G2 which convert their inputs to a contributive ΔT; and the time lag td2. The output offset Co represents the unknown initial system state and is set to center the modeled output on the arbitrarily chosen zero point of the Hadcrut4 temperature anomaly. It has no impact on the residual variance, which is assumed zero mean.
The driving function u(t) is again the variation in solar irradiance (Figure 2b). The second input function v(t) is the output of the model of Figure 4, which was shown to closely approximate the logarithmic CO2 concentration. Thus the combined system has a single input u(t) and a single output: the predicted temperature anomaly Ta(t). Once the two systems are combined, the CO2 concentration becomes an internal node of the composite system.
Y(t) represents other internal and external contributors to the global temperature anomaly, i.e. the natural variability of the climate system. The goal is to find the system parameter values that minimize the variance of Y(t) on a decadal time scale.
Natural Variability
Natural variability is a catch-all phrase encompassing variations in the observed temperature record which cannot be explained and therefore cannot be modeled. It includes components on many different time scales. Some are due to the complex internal dynamics of the climate system and random variations and some to the effects of feedbacks and other forcing agents (clouds, aerosols, water vapor etc.) about which there is great uncertainty.
When creating a system model it is important to avoid the temptation to sweep too much under the rug of natural variation. On the other hand, in order to accurately estimate the system parameters affecting the longer-term temperature trends it is helpful to remove as much of the short-term, noise-like components as practicable, especially since these unrelated short-term variations are of the same order of magnitude as the effect we are trying to analyze. The removal of these short-term spurious components is referred to as data denoising. Denoising must be carried out with the time scale of interest in mind in order to ensure that significant contributors are not discarded. Many techniques are available for this purpose, but most assume that the underlying process which produced the observed data exhibits stochastic stationarity, in essence a requirement that the process parameters remain constant over the observation interval. As we show in the next section, the climate system is not even weak-sense stationary but rather cyclostationary.
Autocorrelation
Autocorrelation is a measure of how closely a lagged version of a time series resembles the unlagged data. In a memoryless system, correlation falls abruptly to zero with increasing lag. In systems with memory, the correlation decreases gradually. Figure 6a shows the autocorrelation function (ACF) of the linearly detrended, unfiltered Hadcrut4 global temperature record. Instead of the correlation gradually decreasing, we see that it cycles up and down in a quasi-periodic fashion. A system that exhibits this characteristic is said to be cyclostationary. Despite the nomenclature, a cyclostationary process is not stationary, even in the weak sense.

With linear detrending, significant correlation is exhibited at two lags, 70 years and 140 years. However the position of the correlation peaks is highly dependent on the order of the detrending polynomial.
Power spectral density (spectrum) is the discrete Fourier transform of the ACF and is plotted in Figure 6b. It shows significant periodicity at 71 and 169 years but again the extracted period will vary depending on the order of the detrending polynomial (linear, parabolic, cubic etc.) and also slightly on the data endpoints selected.
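The ACF-then-spectrum pipeline can be sketched as follows (Python, on synthetic data standing in for the detrended record, since the Hadcrut4 series is not bundled here; a roughly 70-year oscillation plus noise):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 165                      # one sample per year, as in the instrumental record
t = np.arange(n)
# Synthetic anomaly: ~70-year oscillation plus noise (stand-in for the record)
x = np.sin(2 * np.pi * t / 70) + 0.3 * rng.standard_normal(n)
x = x - x.mean()

# Biased sample autocorrelation, normalized so acf[0] = 1
acf = np.correlate(x, x, mode="full")[n - 1:] / (x @ x)

# Power spectrum as the DFT of the ACF; the peak sits near 1/70 per year
psd = np.abs(np.fft.rfft(acf))
freqs = np.fft.rfftfreq(acf.size, d=1.0)
peak_period = 1.0 / freqs[np.argmax(psd[1:]) + 1]
```

At 165 samples the frequency resolution is coarse, so the recovered period is localized only to within a few decades — one reason the extracted period is sensitive to detrending choices and endpoints.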
Denoising the Data
From the above it is apparent that we cannot assume a particular trend shape to reliably isolate the “main” decadal-scale climatic features we hope to model. Nor can we assume the period of the oscillatory component(s) remains fixed over the entire record. This makes denoising a challenge. However, a denoising technique [1] has been developed that makes no assumptions regarding the stationarity of the time record; it combines wavelet analysis with principal component analysis to isolate quasi-periodic components. A single parameter (wavelet order) determines the time scale of the retained data. The implementation used here is the wden function in Matlab™ [8]. The denoised data using a level 4 wavelet as described in [1] is plotted as the yellow curve in Figure 7.
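Outside Matlab, the same flavor of wavelet shrinkage can be sketched with a hand-rolled Haar transform (illustrative only: wden offers many wavelets and threshold rules, and the method of [1] adds a multivariate PCA step that is not reproduced here):

```python
import numpy as np

def haar_dwt_denoise(x, level=4):
    """Minimal wavelet shrinkage with the Haar wavelet: forward transform,
    threshold the detail coefficients, inverse transform."""
    n = len(x)
    pad = (-n) % (2 ** level)
    x = np.concatenate([x, x[-1] * np.ones(pad)])  # pad to a multiple of 2^level
    coeffs, approx = [], x
    for _ in range(level):
        even, odd = approx[::2], approx[1::2]
        coeffs.append((even - odd) / np.sqrt(2))   # detail coefficients
        approx = (even + odd) / np.sqrt(2)         # approximation coefficients
    # Hard-threshold details against a robust noise estimate from the finest level
    sigma = np.median(np.abs(coeffs[0])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs = [np.where(np.abs(c) > thr, c, 0.0) for c in coeffs]
    # Inverse Haar transform
    for c in reversed(coeffs):
        up = np.empty(2 * len(approx))
        up[::2] = (approx + c) / np.sqrt(2)
        up[1::2] = (approx - c) / np.sqrt(2)
        approx = up
    return approx[:n]

# Demo: a slow oscillation buried in noise
x_demo = np.sin(np.arange(160) / 10.0) + 0.2 * np.random.default_rng(0).standard_normal(160)
denoised = haar_dwt_denoise(x_demo)
```

Because the transform is orthonormal, thresholding can only remove energy, never add it; the level parameter plays the role of wden's decomposition level in setting the retained time scale.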

The resulting denoised temperature profile is nearly identical to that derived by other means (Singular Spectrum Analysis, Harmonic Decomposition, Principal Component Analysis, Loess Filtering, Windowed Regression etc.)
Figure 8a compares the autocorrelation of the denoised data (red) to that of the raw data (blue). We see that the denoising process has not materially affected the stochastic properties over the time scales of interest. The narrowness of the central lobe of the residual ACF (Figure 8b) shows that we have not removed any temperature component related to the climate system memory.

The denoised data (Figure 7) shows a long-term trend and a quasi-periodic oscillatory component. Taking the first difference of the denoised data (Figure 9) shows how the trend (i.e. the instantaneous slope) has evolved over time.

There are several interesting things of note in Figure 9. The period is relatively stable while the amplitude of the oscillation is growing slightly. The trend peaked at .23 °C/decade circa 1994 and has been decreasing since; it currently stands at .036 °C/decade. Note also that the mean slope is non-zero (.05 °C/decade) and the trend itself trends upward with time. This implies the presence of a system integration, as otherwise the differentiation would remove the trend of the trend.
A time series trend does not necessarily foretell how things will evolve in the future. The trend estimated from Figure 9 in 1892 would have predicted cooling at a rate of .6 degrees per century, while just 35 years later it would have predicted 1.5 degrees per century of warming. Both projections would have been wildly off base. Nor is there justification in assuming the long-term trend to be some regression on the slope. Without knowledge of the underlying system, one has no basis on which to decide the proper form of the regression. Is the long-term trend of the trend linear? Perhaps, but it might just as plausibly be a section of a low-frequency sine wave or a complementary exponential, or perhaps it is just integrated noise giving the illusion of a trend. To sort things out we need to approximate the system which produced the data. For this purpose we will use the model shown in Figure 5 above.
Model Parametrization
As noted, the composite system is comprised of two sub-systems. The first (Figure 4) replicates the atmospheric CO2 whose effect on temperature is assumed linear with scaling factor G1. The parameters of the first system were set to give a best-fit match to the observational CO2 record (see Figure 3).
The remaining parameters were optimized using a three-step process. First the dissipation factor a2 and time delay td2 were optimized to minimize the least-squares error (LSE) of the model output ACF as compared to the ACF of the denoised data (Figure 10, lower left), using a numerical method [7] guaranteed to find the global minimum. In this step the output and target ACFs are both calculated from the demeaned rather than detrended data. This eliminates the dependence on the regression slope and, since the ACF is independent of the scaling and offset, allows the routine to optimize to these parameters independently. In the second step, the scaling factors G1, G2 are found by minimizing the residual LSE using the parameters found in step one. Finally the input offset Ci is found by solving the boundary condition to eliminate the non-physical temperature discontinuity. The best-fit parameters are shown in Table 1. The results (figure 10) correlate well with observational time series (r = .984).
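A minimal sketch of the least-squares parameter search, using SciPy's Nelder-Mead in place of Wolfram's NMinimize (note that SciPy's implementation is a local search; the global-minimum guarantee mentioned above is specific to the routine in [7]). Here the dissipation factor of a synthetic leaky integrator is recovered from noisy observations:

```python
import numpy as np
from scipy.optimize import minimize

def leaky(u, a):
    # First-order leaky integrator (the dissipative block of Figures 4 and 5)
    v = np.zeros(len(u))
    for n in range(1, len(u)):
        v[n] = v[n - 1] + a * (u[n] - v[n - 1])
    return v

rng = np.random.default_rng(2)
u = rng.standard_normal(300)
observed = leaky(u, 0.05) + 0.01 * rng.standard_normal(300)  # synthetic target

# Recover the dissipation factor by minimizing the least-squares error
res = minimize(lambda p: np.sum((leaky(u, p[0]) - observed) ** 2),
               x0=[0.2], method="Nelder-Mead")
a_hat = res.x[0]
```

The same pattern extends to the multi-parameter steps described above: each step fixes some parameters and searches over the rest against its own target (ACF match, then residual LSE).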

Figure 10- Modeled results versus observation
| Parameter | Symbol | Value |
| --- | --- | --- |
| Dissipation factor | a1 | .006 |
| Dissipation factor | a2 | .051 |
| Scaling parameter | G1 | .0176 |
| Scaling parameter | G2 | .0549 |
| CO2 lag (years) | td1 | 37 |
| TSI lag (years) | td2 | 84 |
| Input offset (W/m²) | Ci | -.045 |
| Output offset (K) | Co | .545 |

Table 1 - Best-fit model parameters
The error residual (upper right) remains within the specified data uncertainty (±.1 °C) over virtually all of the 165-year observation interval. The model output replicates most of the oscillatory component that heretofore has been attributed to the so-called Atlantic Multidecadal Oscillation (AMO). As shown in the detailed plots of Figure 11, the model output aligns closely in time with all of the major breakpoints in the slope of the observational data, and replicates the decadal-scale trends of the record (the exception being a 10-year period beginning in 1965), including the recent hiatus and the so-called ‘grand hiatus’ of 1945-1975.

Figure 12 plots the scaled second difference of the denoised data against the model residual. The high degree of correlation suggests an internal feedback sensitive to the second derivative of temperature. That such an internal dynamic can be derived from the modeled output provides further evidence of the model’s validity. Further investigation of an enhanced model that includes this dynamic will be undertaken.

Climate Sensitivity to CO2
The transient climate sensitivity to CO2 atmospheric concentration can be obtained from the model by running the simulation with G2 set to zero, giving the contribution to the temperature anomaly from CO2 alone (Figure 13a).

A linear regression on the modeled temperature anomaly (with G2 = 0) versus the logarithmic CO2 concentration (Figure 13b) shows a best fit slope of 1.85 yielding an estimated transient climate sensitivity to doubled CO2 of 1.28 ⁰C. Note however that assuming the model is relevant, the issue of climate sensitivity is moot unless and until an anthropogenic contribution to the CO2 concentration becomes detectable.
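The conversion from the regression slope to a doubling sensitivity is just a factor of ln 2, since the slope is per unit natural log of CO2:

```python
import numpy as np

slope = 1.85                  # best-fit dT per unit ln(CO2), from Figure 13b
tcr_2x = slope * np.log(2.0)  # temperature change per doubling of CO2
# tcr_2x is about 1.28 C, the transient sensitivity quoted in the text
```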
Discussion
These results are in line with the general GHG theory which postulates CO2 as a significant contributor to the post-industrial warming but are in direct contradiction to the notion that human emissions have thus far contributed significantly to the observed concentration. In addition, the derived TCR implies a mechanism that reduces the climate sensitivity to CO2 to a value below the theoretical non-feedback forcing, i.e. the feedback appears to be negative. Other inferences are that the observed cyclostationarity is inherent in the TSI variation and not a climate system dynamic (because a single-pole response cannot produce an oscillatory component) and that at least over the short instrumental time period, the climate system as a whole can be modeled as a linear, time-invariant system, albeit with significant time lag.
In a broader context, these results may contain clues to the underlying climate dynamics that those with expertise in these systems should find valuable if they are willing to set aside preconceived notions as to the underlying cause. This model, like all models, is nothing more than an executable hypothesis and as Professor Feynman points out, all scientific hypotheses start with a guess. The execution of a hypothesis, either by solving the equations in closed form or by running a computer simulation is never to be confused with an experiment. Rather a simulation provides the predicted ramifications of the hypothesis which falsify the hypothesis if the predictions do not match empirical observations.
An estimate of the future TSI is required in order for this model to predict how global temperature will evolve. There are some models of this in development by others, and I hope to provide a detailed projection in a future article. In the meantime, due to the inherent system lag, we can get a rough idea over the short term. TSI peaked in the early 80s, so we should expect CO2 concentrations to peak some 37 years later, i.e. a few years from now. Near the start of the next decade, CO2 forcing will dominate, and thus we would expect temperatures to flatten and begin to fall as this forcing decreases. Between now and then we should expect a modest increase. This no doubt will be heralded as proof that AGW is back and that drastic measures are required to stave off the looming catastrophe.
Comment on Model Parametrization
It is important to understand the difference between curve fitting and model parametrization. The output of a model is the convolution of its input and the model’s impulse response, which means that the output at any given point in time depends on all prior inputs, each of which is shaped the same way by the model parameter under consideration. This is illustrated in Figure 14. The input u(t) has been decomposed into individual pulses and the system response to each pulse plotted individually. Each input pulse causes a step response that decays at a rate determined by the dissipation rate, set to .05 on the left and .005 on the right. The output at any point is the sum of each of these curves, shown in the lower panels. The gain factor G simply scales the result and does not affect the correlation with the target function. Thus, unlike polynomial regression, it is not possible to fit an arbitrary output curve given a specified forcing function u(t). In the models of Figures 4 and 5 it is only the dissipation factor (and, to a small extent in the early output, the input constant) that determines the functional “shape” of the output. The scaling, offset and delay do not affect correlation and so are not degrees of freedom in the classical sense.
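This superposition property is easy to verify numerically: build the output as the convolution of the input with a first-order impulse response, then check that rescaling by G leaves the correlation with an arbitrary target unchanged:

```python
import numpy as np

def impulse_response(a, n):
    # Impulse response of the first-order leaky integrator: a*(1-a)^k
    return a * (1 - a) ** np.arange(n)

rng = np.random.default_rng(3)
u = rng.standard_normal(128)
h = impulse_response(0.05, 128)

# Output as the convolution of input and impulse response: every input pulse
# launches its own decaying response, and the output is their running sum
y = np.convolve(u, h)[:128]

# The gain G only scales the output; it cannot change correlation with a target
target = rng.standard_normal(128)
r1 = np.corrcoef(y, target)[0, 1]
r2 = np.corrcoef(3.7 * y, target)[0, 1]
```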

References:
1) Aminghafari, M.; Cheze, N.; Poggi, J-M. (2006), “Multivariate de-noising using wavelets and principal component analysis,” Computational Statistics & Data Analysis, 50, pp. 2381–2398.
2) N.A. Krivova, L.E.A. Vieira, S.K. Solanki (2010). Journal of Geophysical Research: Space Physics, Volume 115, Issue A12, CiteID A12112. DOI:10.1029/2010JA015431
3) Ball, W. T.; Unruh, Y. C.; Krivova, N. A.; Solanki, S.; Wenzler, T.; Mortlock, D. J.; Jaffe, A. H. (2012) Astronomy & Astrophysics, 541, id.A27. DOI:10.1051/0004-6361/201118702
4) K. L. Yeo, N. A. Krivova, S. K. Solanki, and K. H. Glassmeier (2014) Astronomy & Astrophysics, 570, A85, DOI: 10.1051/0004-6361/201423628
5) For a summary of many of the correlations between TSI and climate that have been investigated see The Solar Evidence (http://appinsys.com/globalwarming/gw_part6_solarevidence.htm)
6) STATEMENT TO THE SUBCOMMITTEE ON SPACE, SCIENCE AND COMPETITIVENESS OF THE UNITED STATES SENATE; Hearing on “Data or Dogma? Promoting Open Inquiry in the Debate Over the Magnitude of Human Impact on Climate Change”; Judith A. Curry, Georgia Institute of Technology
7) See Numerical Optimization from Wolfram. In particular, the NMinimize function using the “NelderMead” method.
8) See wden from MathWorks Matlab™ documentation.
Data:
Hadcrut4 global temperature series:
Available at https://climexp.knmi.nl/data/ihadcrut4_ns_avg_00_1850:2015.dat
Krivova TSI reconstruction:
Available at http://lasp.colorado.edu/home/sorce/files/2011/09/TSI_TIM_Reconstruction.txt
CO2 data
Available at http://climexp.knmi.nl/data/ico2_log.dat
My main criticism of this analysis:
1. In a 150 year record you have no hope of discerning 100 year cycles. You can barely discern 60-80 year cycles. Nyquist says you need at least two periods; in practice, with overlapping 60-80 year cycles, you need about 5 cycles. Yes you show cycles there, but I’m not sure if they are real or an artifact.
2. Edge effects when filtering. Please, just throw away the edge data so that you or others aren’t tempted to interpret invalid data. You are predicting data in the future and the past that you don’t have. It doesn’t matter whether you use extend, mirror, zero pad or whatever. I have found that the LEAST variance in error (given Monte Carlo simulation of pink noise signals) is the reflection method. The extend method looks tempting (has a low mean error) but there can be huge outliers when there’s a large slew at the ends of the signals being analyzed. Best would be to not filter at all. The Mark 1 eyeball is pretty good at picking out the signal from the noise, you don’t need to help it all that much.
3. Combining (1) and (2) above in the frequency domain: trying to find low frequency signals below 2-5 periods per the length of the data set is basically trying to guess beyond the lower edge of the measurable frequencies. You’re just extrapolating. It’s just as bad a Nyquist violation as trying to guess the high frequency signals beyond samplerate/2.
4. As noted above, if you integrate any signal that doesn’t cross zero you get an increasing trend. The temperature happens to be going up. False correlation is likely the real result here.
5. You made an assertion that two signals were correlated, but you didn’t do a Monte Carlo analysis to see if your correlation rises above a noise floor. See this paper here for an example of how to do that. Executive summary is that you must test against noise whose spectrum matches that of your original signal. In this paper they manage to discern ENSO in the SST, but everything else is noise. Note how the noise floor goes up with decreasing frequencies…
http://paos.colorado.edu/research/wavelets/bams_79_01_0061.pdf
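Point 2 is easy to demonstrate. Below, a Gaussian filter is run with reflection padding versus zero padding on a synthetic signal that ends with a nonzero value and slope, and the error over the last few samples is compared:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 200)
truth = t ** 2                 # signal with a nonzero value and slope at the end
x = truth + 0.05 * rng.standard_normal(t.size)

err = {}
for mode in ("reflect", "constant"):   # "constant" pads with zeros here
    sm = gaussian_filter1d(x, sigma=5, mode=mode)
    err[mode] = np.abs(sm[-5:] - truth[-5:]).mean()  # boundary error
```

Zero padding drags the smoothed endpoint toward zero, while reflection stays close — consistent with the Monte Carlo finding described in point 2. Either way, the endpoints are partly invented data.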
Peter
I don’t use FFT but a specially designed filter I used for some years (looking at analogue and frequency modulation signals)
CET is strongly influenced by the ‘next door’ N. Atlantic temperatures, and yet the CET has no 60 year component. Here I looked at AMO (1880-2014) and the CET (1700-2014) data.
http://www.vukcevic.talktalk.net/AMO-CET.gif
My interpretation of this is that the AMO is most likely subject to the same two components as the CET (53 and 67 years) but it could be that the ocean averages them out. I am not aware of anyone showing AMO except with one periodicity in the range around 60 years. AFAIK the source of it has not been identified as yet.
Thanks for the critique.
1) There is no cycle discerning here nor claim of periodicity. There are no FFT’s involved so no worries about Nyquist.
2) The edge effect of a 4 point Gaussian filter is small (and IMHO the very best that can be achieved) and was only used in figure 1. Figure 8 says all there is to say about any deleterious effects of the denoising algorithm. It is stochastically transparent except for the BW reduction (the impulsive central peaks are gone)
3) N/A. See above
4) But the signal does cross zero, over and over as it is demeaned (see figure 2b)
5) “Executive summary is that you must test against noise whose spectrum matches that of your original signal”.
If you have a stochastic process that can replicate the TSI spectrum I’m all ears. 🙂
Cheers,
JP
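For what it's worth, the usual stochastic null model in this situation is AR(1) “red” noise, whose power rises toward low frequencies. It will not reproduce the 11-year solar cycle, so it is only a first cut at a spectrum-matched surrogate, but the Monte Carlo recipe looks like this:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 165

def ar1(n, phi, rng):
    """Red noise with lag-1 autocorrelation phi (a common spectral null model)."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.standard_normal()
    return x

x = ar1(n, 0.9, rng)   # "observed" series under test
y = ar1(n, 0.9, rng)   # candidate driver

r_obs = np.corrcoef(x, y)[0, 1]

# Null distribution: correlations between independent red-noise surrogates
null = np.array([np.corrcoef(ar1(n, 0.9, rng), ar1(n, 0.9, rng))[0, 1]
                 for _ in range(500)])
threshold = np.quantile(np.abs(null), 0.95)
significant = abs(r_obs) > threshold
```

Independent red-noise pairs of this length routinely show |r| of 0.3-0.4, which is exactly the “noise floor rises at low frequencies” point: a high correlation between two slow series is weaker evidence than it looks.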
Folks, none of the details or the math about smoothing, fitting, etc matters if the input data [TSI] is wrong. Since there is good evidence that the TSI used by Jeff is wrong [even if pushed by the IPCC – as it is] the whole issue is moot, except perhaps to show the opposite conclusion that the influence of TSI is minor and hard to even detect.
This is what the TSI curve should look like (the brown curve):
http://www.leif.org/research/Kopp-et-al-New-TSI.png
Same basic shape as before but smaller variations.
How does mere scaling down of the same old solar variability remove correlations with observed climate variations ?
All the troughs still coincide with cool spells.
Anyway, TSI is a mere proxy for other aspects of solar variation which are sufficient to affect the climate system.
All the troughs still coincide with cool spells.
Solar activity now is on par with what it was 100 years ago, and 200 years ago, and 300 years ago so we are in a ‘trough’, yet current temps are ‘the highest evah’…
You aren’t allowing for the delay caused by oceanic thermal inertia and current surface temperatures are skewed by poor recording quality plus thus far unjustified ‘adjustments’ that cool the past and warm the present.
The satellite records do appear to show that the late 90s was the thermal peak arising from the run of several active cycles during the late 20th century.
Give it time.
But you are allowing for those unknown delays?
Anyhow, your chart does NOT show current solar activity on a par with 100 and 200 years ago, far from it.
WUWT?
I’ll try to post the results of running with Leif’s TSI model, assuming I can just embed an html image tag. Here goes…
Assuming that worked..
We get a pretty decent match to the raw data albeit the correlation is slightly lower. The residual exhibits a strong periodicity meaning a lower AMO signal is present in Leif’s reconstruction.
The biggest parametric change aside from the expected scaling change was to move the CO2 lag value from 37 years to 3 years which seems a more realistic figure. There is more TSI ripple in the modeled output but they actually time align pretty well with the raw (not denoised) data (lower right).
Leif’s TSI model
It is not really MY model, but the result of modelling TSI using the new sunspot series by Kopp et al.
So, what you’re saying is that even if you use a TSI that shows no trend since 1700, it does not make any significant difference. Does this not simply say that the climate is not driven by TSI?
The crucial test would be to use TSI since 1700 in reverse, as I suggested. If you do that, there should not be any valid TSI-signal in your result. If there still is, then the variation of TSI doesn’t matter.
Oh I agree that’s a huge problem too. But even if he corrects the TSI he’s still going to have the other problems.
The TSI problem is a matter of wrong data. The problems I and others pointed out are methodology problems. You have to fix both to get a proper analysis.
Actually, it’s not fixable. We don’t have a long enough history of temperature data to discern the underlying cycles accurately. Humans have a real hard time with “we don’t know, have to wait”. But that, IMHO, is exactly where we are at. I note this applies to correlations with C02, TSI, whatever you care to look at whose cycles are longer than about 30 years.
Peter
TSI is a mere proxy for other solar processes and so does not need to be other than minor in itself.
EUV, UV, F10.7, and Solar Wind have varied just like TSI. What other indicators do you have in mind?
I have in mind all processes that impinge upon the ozone creation / destruction process.
I don’t think anyone has a complete grip on that at present but there is a critical diagnostic indicator in that from 2004 to 2007 ozone increased above 45km (in the mesosphere) at a time of quiet sun whereas conventional climatology proposes decreased ozone at all levels when the sun is quiet.
The mesosphere supplies the flow of descending air into the polar vortices so that is an observation of critical significance as recognised by Joanna Haigh:
http://www.nature.com/nature/journal/v467/n7316/full/nature09426.html
“our findings raise the possibility that the effects of solar variability on temperature throughout the atmosphere may be contrary to current expectations”
but not contrary to my hypothesis 🙂
I have the only hypothesis that accommodates her observations.
I know, I know: whatever data comes to light, whatever flaws are found, whatever happens, NOTHING will EVER falsify your hypothesis as pure hand waving cannot be assailed in any way.
I’ve previously given you a long list of events that could invalidate my hypothesis.
You insist on ignoring that.
None of them have happened yet.
Since they are not quantitative, they are not of interest.
They don’t need to be quantitative. A change in the direction of trend, if sustained, is enough to invalidate my hypothesis.
I would advise that, instead of conjecturing a first order model, you actually deconvolve the data to find a more precise form of the transfer function. I would recommend using Wiener deconvolution with the FFT:
https://en.wikipedia.org/wiki/Wiener_deconvolution
I like to transform to the time domain to get an estimated impulse response. The impulse response naturally grows more uncertain as the number of data points available to estimate the correlation become fewer. You can often identify a transition region where the impulse response becomes less coherent, and the estimate becomes dominated by noise.
A tapered window applied to the data then eliminates the noise-dominated portion, and inverse transforming back to the frequency domain then produces an estimate of the transfer function smoothed by the transform of the window response. A rational transfer function can then be fitted, and the coefficients used to create a filter network representing it.
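A minimal numerical sketch of that procedure: Wiener deconvolution in the frequency domain of a noisy first-order system, followed by an inverse FFT to recover the impulse-response estimate (the SNR constant is an assumed regularizer, not something the comment specifies):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1024
u = rng.standard_normal(n)                  # input drive
h_true = 0.05 * (1 - 0.05) ** np.arange(n)  # first-order impulse response
y = np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(h_true)))  # circular convolution
y += 0.01 * rng.standard_normal(n)          # measurement noise

U, Y = np.fft.fft(u), np.fft.fft(y)
snr = 1e2                                   # assumed signal-to-noise ratio
# Wiener deconvolution: H = conj(U)*Y / (|U|^2 + 1/snr)
H = np.conj(U) * Y / (np.abs(U) ** 2 + 1.0 / snr)
h_est = np.real(np.fft.ifft(H))
```

The regularizing term keeps bins where the input has little energy from blowing up the estimate; with real data the noise grows as the available lag products shrink, which is where the tapered window comes in.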
Wiener Deconvolution is an interesting approach. I’ll have to play with this.
The problem is going to be that to apply a Fourier transform, you are assuming the entire length of the record repeats ad infinitum. It doesn’t. We don’t have that much data. Nobody has this much data; it’s an inherent limit of signal analysis math.
To solve this issue you have to window the data to ensure there are no edge effects (e.g. if the signal ends on 0.0 and starts on 1.0, you have a bogus step function spewing energy all over your spectrum!).
So be sure to window the data properly. You will find, if you use a proper window, that those signals whose periods are longer than sample_length/2 go away. Possibly some shorter periods, depending on the phase of the signal in the sample window. Which is probably a good thing, according to Mr. Nyquist. I have found by experiment that you can’t reliably discern signals longer than sample_length/5.
Edge effects in time domain AND frequency domain must be taken into account. Your best bet to throw away those endpoints at the appropriate step in the analysis. This will be unsatisfactory as due to the limited duration of the temperature record, that’s the data you are interested in. Don’t succumb to that temptation. You don’t have enough data.
The satellite record will have enough data in AD 2148 (160 years from 1979) to see all the ocean cycles for two periods. Let’s hope some of us live that long. My guess is we’ll die from succumbing to cold or heat as energy will become too expensive to heat or cool our homes properly.
Peter
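The windowing point can be sketched numerically: a tone that does not complete an integer number of cycles leaks energy across the whole spectrum under the implicit rectangular window, while a Hann window suppresses the far-out leakage by orders of magnitude:

```python
import numpy as np

n = 256
t = np.arange(n)
# A tone that does not complete an integer number of cycles in the record
x = np.sin(2 * np.pi * 10.37 * t / n)

raw = np.abs(np.fft.rfft(x))                 # rectangular (no) window
win = np.abs(np.fft.rfft(x * np.hanning(n))) # Hann window

# Leakage far from the tone: the windowed spectrum decays much faster
leak_raw = raw[60:].max()
leak_win = win[60:].max()
```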
“The problem is going to be that to apply a Fourier transform, you are assuming the entire length of the record repeats ad infinitum.”
You can look at it that way, if you consider the FFT to produce the coefficients of a Fourier Series. But, there is another way of looking at it as the samples of the Fourier Transform, smoothed by convolution with the sinc response of the finite data window. That sinc response convolution is the manifestation of the “edge effects” of which you speak.
Since the Wiener deconvolution is operating on estimates of the cross spectral density and the power spectral density, the convolution is with a sinc^2 function, and edge effects are reduced. But bias in the estimate is increased. There is always a tradeoff between bias and variance in statistical Fourier analysis.
But, if your data set is much longer than the lowest frequency you are trying to resolve, satisfactory results are often obtained. You choose 1/5th as being the limit of resolution. I think that is not unreasonable, but it very much depends on the data, and its statistical properties, and 1/5th may be overly conservative in some cases.
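For anyone wanting to experiment, here is a minimal sketch of frequency-domain Wiener deconvolution on synthetic data. The blur kernel, noise level, and noise-to-signal ratio are all assumed for illustration; in practice the NSR must be estimated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

# True signal: a smooth bump. Blur: a 9-tap moving average (circular convolution).
x = np.exp(-0.5 * ((np.arange(n) - 100) / 8.0) ** 2)
H = np.fft.rfft(np.ones(9) / 9.0, n)
y = np.fft.irfft(np.fft.rfft(x) * H, n) + 0.01 * rng.standard_normal(n)

# Wiener deconvolution: G = conj(H) / (|H|^2 + NSR), where NSR is the assumed
# noise-to-signal power ratio. Larger NSR = more bias, less noise amplification.
nsr = 1e-2
G = np.conj(H) / (np.abs(H) ** 2 + nsr)
x_hat = np.fft.irfft(np.fft.rfft(y) * G, n)

err_blurred = np.linalg.norm(y - x)       # error of the blurred observation
err_wiener = np.linalg.norm(x_hat - x)    # smaller: deblurring recovers most of x
```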
Doesn’t work for cyclostationary signals. And you need to know a priori the complex spectrum of the noise.
I see little evidence that these data indicate cyclostationarity. Your autocorrelation estimate is just that – an estimate. And, it naturally becomes less accurate the longer the lag period, because you have fewer and fewer effectively independent data available to compare to one another. You really cannot rely on the fact that the peaks do not seem to be decreasing. This is likely just a statistical artifact.
It might elucidate my point to share a small analysis of SSN I did several years ago. Here is a plot of the PSD I estimated. The peaks occurring at the equivalent of 10, 10.8, 11.8, and 131 years indicate that the SSN data are rectified measurements of a quasi-periodic process with central periods equivalent to 20 and 23.6 years.
A two-mode model for the system is here. It assumes two lightly damped processes with natural frequencies 2pi/20 and 2pi/23.6 driven by wideband random noise – in actual fact, all that is required is that the input driver be effectively uniform in spectral density near the peaks, but this makes it very easy to model.
With this model, I was able to produce data sets which look quite similar to the actual SSN in a qualitative manner, as here, and here.
The next obvious step would have been to implement a Kalman Filter, train it on the historical data, and use that to propagate the predicted activity forward. This would produce not only an estimate of future activity, but error bars for it from the covariance propagation. But, this is not my day job, so I never took that step.
The point is, what you are seeing is probably not cyclostationary, but just a standard 2nd order response with light damping. That’s pretty much what one should expect from a physical model of the climate system, whereby reservoirs of energy are alternatingly charged and depleted, in accordance with boundary conditions set by the configuration of the oceans and continents.
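A sketch of the two-mode idea described above: the frequency response of two lightly damped AR(2) resonators with the stated 20- and 23.6-year periods shows peaks at those periods. The pole radius 0.98 is an assumed damping for illustration, not a fitted value.

```python
import numpy as np

def resonator_response(period, r, freqs, dt=1.0):
    """|H(f)| of a 2nd-order AR resonator with poles at r*exp(+/- i*2*pi*dt/period)."""
    theta = 2 * np.pi * dt / period
    z = np.exp(-2j * np.pi * freqs * dt)       # z^-1 evaluated on the unit circle
    return np.abs(1.0 / (1 - 2 * r * np.cos(theta) * z + r**2 * z**2))

freqs = np.linspace(0.01, 0.1, 2000)           # cycles per year
gain = (resonator_response(20.0, 0.98, freqs)
        + resonator_response(23.6, 0.98, freqs))

peak_period = 1.0 / freqs[np.argmax(gain)]     # dominant resonance, in years
```

Driving such a filter pair with white noise and rectifying the output is what produces the SSN-like synthetic records described in the comment.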
but it very much depends on the data, and its statistical properties, and 1/5th may be overly conservative in some cases.
Oh I agree, and that 1/5 was based on sinc-function smearing of edge effects, not sinc^2 as you point out for the cross spectral density. Maybe with this method we can get closer to the theoretical limit of 1/2. I’ll have to do some numerical testing of this when I get some free time.
With this model, I was able to produce data sets which look quite similar to the actual SSN in a qualitative manner, as here, and here.
I think you might have something that can generate a Monte Carlo simulation allowing hypothesis testing against the null hypothesis of “it’s just noise”. This answers Mr Patterson’s reply above about “let me know when you’ve something random that produces sunspots”…
If TSI has a measurable effect on surface temps, you would expect to see small bumps in the temperature record corresponding to the SSN. Which, I think, you can:
http://photographyoutside.com/wp-content/uploads/2016/02/currentssn.jpg
Those ‘bumps’ are probably 9.1y lunar bumps, or at best a mix of the two. Until there is some serious assessment of the longer lunar periodicities, any solar effect will be compromised and confused, and will inconveniently disappear or go through phase inversions and be written off.
count the post warm bumps and estimate the period.
Try plotting the following; it produces a 58-year modulation envelope:
p1=9.1;p2=10.8;
cos(2*pi*x/p1)+cos(2*pi*x/p2)
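The ~58-year envelope follows from the sum-to-product identity cos A + cos B = 2 cos((A−B)/2) cos((A+B)/2): the slow factor’s magnitude repeats every 1/(1/p1 − 1/p2) years. A runnable version of the snippet above:

```python
import numpy as np

p1, p2 = 9.1, 10.8                          # periods in years, from the comment above
x = np.arange(0.0, 400.0, 0.05)
y = np.cos(2 * np.pi * x / p1) + np.cos(2 * np.pi * x / p2)

# The slow envelope factor is cos(pi*x*(1/p1 - 1/p2)); its magnitude
# repeats with the beat period below.
beat_period = 1.0 / (1.0 / p1 - 1.0 / p2)   # ~58 years
```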
I played around with that very relationship here once upon a time. Though HS was very gracious calling it “A new theory”, I really just saw it as an interesting possibility. I have not developed it any farther, but it’s nice to see someone thinking along the same lines.
Indeed. And, there are those who have shown that integrating the SSN produces a curve very similar to the temperature record.
The difficult part to explain is why it should be an integral response, and what long term mechanisms would exist to dissipate energy in the long run to keep that integral from diverging to infinity.
IMO the time integral makes perfect sense, given the heat capacity of the oceans. The longer the sun shines with greater intensity, especially in the higher energy bands, the more the oceans will warm and the longer the effect will last into intervals of declining solar output, to say nothing of magnetic effects.
Yes but SSN is zero based. TSI goes up and down around some evolving mean, which earth quite happily dissipates back to space or we wouldn’t be here. Why would it therefore not be a simple integral response of sorts? Probably because the oceans are huge and Willis is correct. Emergent phenomena serve to modify that and the actual response is governed. Good luck picking that apart in a manner that doesn’t amount to mere curve fitting, water is wonderful stuff and there’s lots of it!
Furthermore, does TSI vary abruptly under some conditions we’ve not measured yet? e.g. shortly after the sun starts to exhibit pronounced hemispherical disparity and enters a new regime, which if I’m not mistaken is what Leif is alluding to above. Take a look at the recent N-S SSN record and related conjecture about the Maunder period.
“TSI goes up and down around some evolving mean, which earth quite happily dissipates back to space or we wouldn’t be here.”
True in the long run, but the system has huge inertia (and probably hysteresis as well, given the chaotic dynamics), so in the short term the mean can wander. Besides, what is described here merely delays the dissipation back to space (although some gets lost along the way).
Actually, I can account for all of the bumps with a 15-year cycle and a 40-year cycle. It looks like the solar energy is absorbed into the oceans and persists for two 15-year cycles before it is no longer seen in the “bumps”. The effect is that a solar cycle impacts temperature while it is happening, and then 15, 30, and 40 years later. The 40-year cycle seems to have the biggest effect, probably because it coincides with the apparent 30-year cycle. Here is a picture of the solar cycle shifted by 40 years. I wonder if that could cause the 37-year delay found by the OP.
http://photographyoutside.com/wp-content/uploads/2016/02/40yrfuturessn.jpg
Moreover, if Hansen and followers hadn’t pushed the mid-30s to mid-40s temperatures down, raised the 1998 one up to make it a new record, and then continued with this ‘buoyancy’ of recent temperatures, the green line might be following the SSNs back down again – this is where the decline in the tree-ring proxy was famously hidden by Mike’s Nature trick – maybe those trees are smarter than the dendrochronologists.
This paper stimulated a good discussion, and I want to thank the discussants: L. Svalgaard, W. Eschenbach, especially; and thank Jeff Patterson for informative and responsive answers. As they say of the editorial process, I think that the reviewers made suggestions that improve the paper.
“In addition, the derived TCR implies a mechanism that reduces the climate sensitivity to CO2 to a value below the theoretical non-feedback forcing, i.e. the feedback appears to be negative”
This is a bit like saying that “the existence of a sunflower suggests that the sun had been rising and setting in a specific location for a very long time”
Feedback systems are the great thorn in the side of modelers. So many don’t grasp the concept that we live in a wonderful thermometer. A full appreciation of this requires an interest and studies in both the natural and physical sciences. There is no better window into the future than a sedimentary outcrop – when one learns how to read it
What would be Earth’s climate be should there be no flora and fauna?
Feedbacks (in many forms) in relation to elevating CO2 were always involved. The only question remaining is: what are they – and their impact? Get to work, there’s lots of it 🙂
Thanks to all contributors to this fascinating topic
[Thermometer? or “thermostat” ? .mod]
Shucks – yes 🙁
This post makes the common mistake of analyzing too short a period. I am really amazed at the ability of the science community at large, including even most of the skeptics who post here, to ignore the obvious millennial cycle which peaked about 2003. Far from being a “wicked” problem, if you use simple common sense, forecasting the timing and likely amplitude of the longer-wave trends is reasonably simple. See http://climatesense-norpag.blogspot.com/2015/08/the-epistemology-of-climate-forecasting.html
and
http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html
For a putative 1000-year cycle to state that it peaked in 2003 is silly. And ‘common sense’ is not so common. Especially not in this context. And ‘simple common sense’ is likely to be ‘simplistic common sense’.
But the CET spectral composition does have a response at 100+ years, at least in the winter.
http://www.vukcevic.talktalk.net/TSI-CET.gif
In the winter, when the insolation is low, the Arctic geomagnetic response to CMEs comes to mind. CET summers have a near-zero-inclination trend line, while the winters are at 0.4C/century, i.e. all the warming in the CET is a result of the rise in winter temperatures (spring and autumn are almost the arithmetic average of summer and winter).
Maybe I can assist Norman.
There appears to have been a peak around 2003 but it may not be the final peak of the 1000 to 1500 year cycle.
It does seem a bit early to have reached the peak of that cycle so maybe the sun will pick up again in cycle 25, maybe not.
Subject to that, Norman’s general approach is simple (not simplistic) common sense in the light of data currently available.
If new data comes to light that common sense can be reapplied appropriately.
Obviously I’m being a bit provocative, maybe even humorous. The link says:
“Grandpa says – I’m glad to see that you have developed an early interest in Epistemology. Remember, I mentioned the 60 year cycle; well, the data shows that the temperature peak in 2003 was close to a peak in both that cycle and the 1000 year cycle. If we are now entering the downslope of the 1000 year cycle, then the next peak in the 60 year cycle at about 2063 should be lower than the 2003 peak, and the next 60 year peak after that, at about 2123, should be lower again. So, by that time, if the peak is lower, we will be pretty sure that we are on our way to the next little ice age.
That is a long time to wait, but we will get some useful clues a long time before that. Look again at the red curve in Fig 3 – you can see that from the beginning of 2007 to the end of 2009 solar activity dropped to the lowest it has been for a long time. Remember the 12 year delay between the 1991 solar activity peak and the 2003 temperature trend break. If there is a similar delay in the response to lower solar activity, earth should see a cold spell from 2019 to 2021, when you will be in Middle School.
It should also be noticeably cooler at the coolest part of the 60 year cycle – halfway through the present 60 year cycle, at about 2033.
We can watch for these things to happen, but meanwhile keep in mind that the overall cyclic trends can be disturbed for a time in some years by the El Nino weather patterns in the Pacific and the associated high temperatures that we saw in, for example, 1998 and 2010 (Fig 2), and that we might see before the end of this year – 2015.”
Given the peak at 2003
It is not a given that there is a peak in 2003. This is numerology. And the Steinhilber analysis has been thoroughly debunked: http://arxiv.org/pdf/1307.5988v2.pdf already.
Leif I had to repost a comment
Check the 1024 year periodicity at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3341045/figure/F4/
The previous millennial peak was at about 985 see Fig 2 at
http://climatesense-norpag.blogspot.com/2015/08/the-epistemology-of-climate-forecasting.html
With an estimated 2003 peak (see Fig 4), that makes a period of 1018 years, which is amazingly close.
Perhaps you would repost your reply – thanks.
“We found magnetic wave components appearing in pairs, originating in two different layers in the Sun’s interior. They both have a frequency of approximately 11 years, although this frequency is slightly different, and they are offset in time. Over the cycle, the waves fluctuate between the northern and southern hemispheres of the Sun. Combining both waves together and comparing to real data for the current solar cycle, we found that our predictions showed an accuracy of 97%,” said Zharkova.
Zharkova and her colleagues derived their model using a technique called ‘principal component analysis’ of the magnetic field observations from the Wilcox Solar Observatory in California. They examined three solar cycles-worth of magnetic field activity, covering the period from 1976-2008. In addition, they compared their predictions to average sunspot numbers, another strong marker of solar activity. All the predictions and observations were closely matched.
Looking ahead to the next solar cycles, the model predicts that the pair of waves become increasingly offset during Cycle 25, which peaks in 2022. During Cycle 26, which covers the decade from 2030-2040, the two waves will become exactly out of synch and this will cause a significant reduction in solar activity.
“In cycle 26, the two waves exactly mirror each other – peaking at the same time but in opposite hemispheres of the Sun. Their interaction will be disruptive, or they will nearly cancel each other. We predict that this will lead to the properties of a ‘Maunder minimum’,” said Zharkova. “Effectively, when the waves are approximately in phase, they can show strong interaction, or resonance, and we have strong solar activity. When they are out of phase, we have solar minimums. When there is full phase separation, we have the conditions last seen during the Maunder minimum, 370 years ago.”
https://www.ras.org.uk/news-and-press/2680-irregular-heartbeat-of-the-sun-driven-by-double-dynamo
Is thoroughly debunked here:
http://arxiv.org/pdf/1512.05516.pdf
“We show that the Zh15 model fails to reproduce the well-established features of the solar activity evolution during the last millennium. This means that the predictive part for the future is not reliable either”
What are the forecasts?
http://i64.tinypic.com/2lc9xzt.jpg
Only reliable long term Sunspot Rmax envelope prediction is one that matches the Svalgaard SSNumbers
http://www.vukcevic.talktalk.net/SSN-O&N.gif
Svalgaard SSN Rmax – Vukcevic formula correlation factor R^2 = 0.76 (excluding SC20)
Taking into account that the sunspot count is a subjective estimate based on interpreting the visual appearance of the solar disk, the SSN so obtained may not be a 100% accurate reflection of the sun’s magnetic activity; to allow for such errors, 15% error bands are displayed.
All indications are that you will be wrong on the next cycle. It will not be smaller than the current one.
That could be the case, but not certainty.
Given that 1/4 of the actual Svalgaard SSN are outside your prediction band of +/-15%, perhaps your band should be a bit wider?
Here you can see better.
http://i68.tinypic.com/2ah8ah1.jpg
You can see that Oulu is somewhat anomalous. Compare it with Hermanus
http://www.nwu.ac.za/sites/www.nwu.ac.za/files/files/p-nm/SRU%20Neutron%20Monitors%20Monthly%20Graphs.pdf
I asked what are the predictions and please reply.
NAIRAS system uses data at Oulu.
“The NAIRAS model predicts atmospheric radiation exposure from galactic cosmic rays (GCR) and solar energetic particle (SEP) events. GCR particle propagation from local interstellar space to Earth is modeled using an extension of the Badhwar and O’Neill model, where the solar modulation has been parameterized using high-latitude real-time neutron monitor measurements at Oulu, Lomnicky, and Moscow. During radiation storms, the SEP spectrum is derived using ion flux measurements taken from the NOAA/GOES and NASA/ACE satellites. Transport of the cosmic ray particles – GCR and SEP – through the magnetosphere is estimated using the CISM-Dartmouth particle trajectory geomagnetic cutoff rigidity code, driven by real-time solar wind parameters and interplanetary magnetic field data measured by the NASA/ACE satellite. Cosmic ray transport through the neutral atmosphere is based on analytical solutions of coupled Boltzmann transport equations obtained from NASA Langley Research Center’s HZETRN transport code. Global distributions of atmospheric density are derived from the NCEP Global Forecasting System (GFS) meteorological data.”
http://sol.spacenvironment.net/nairas/index.html
That is like saying that smoking is healthy because many people do.
Prediction of what?
“A cosmic ray destined to be detected by the Inuvik neutron monitor starts out heading for a point over the Pacific Ocean, west of Mexico. About 60,000 km away from Earth, the particle begins to experience effects of the Earth’s magnetic field, which deflects the particle towards Inuvik. The first interaction with an air molecule happens about 20 km above Inuvik.
It has been proposed that cosmic ray monitors be equally spaced around the poles to achieve the best view into outer space. Inuvik is geographically well located to record cosmic rays and has the support services needed for a monitor.”
http://neutronm.bartol.udel.edu//listen/main.html
The theory is interesting, although for a WUWT audience a presentation with fewer abbreviations and less technical shorthand would be preferable (e.g., what does “cyclostationary” mean?). However, there is one major failing common to many such dissertations: it assumes the Hadcrut temperature reconstruction is accurate and representative, yet there is plenty of evidence to suggest that it is massively falsified, with earlier temperatures depressed and more recent temperatures enhanced to show a greater warming trend. Certainly the satellite and balloon data sets show a different picture. If the Hadcrut reconstruction is not accurate, your theory simply shows good correlation to wrong data, which means it does not accurately model reality.
As others have pointed out above, there is excellent correlation between the rate of change of atmospheric CO2 and the more reliable satellite ocean temperature data (that the correlation is to the rate of change of CO2 simply means the ocean outgassing of CO2 has a long time constant, which we know it does due to slow ocean overturning – 800 years). It means, as you have in effect pointed out, that man’s use of fossil fuels may not even be the reason for the rise in CO2.
So, we have a situation where two wrongs are supposed to make a right: both the TSI record and the temperature records are wrong, yet we are supposed to believe that the fit of the Temps to the TSI has any physical meaning. I say it does not.
Could you please provide a real log scale on Figure 1a? And where is Figure 1b, which is the foundation of all the following hypothesis?
Sorry, my mistake. Yet I can’t see how 1b contradicts anything.
Stephen Wilde February 9, 2016 at 1:58 am
I looked at your paper. It had a shocking lack of numbers. Your theory is that solar changes temporally related to (but distinct from) the sunspot cycle affect the ozone layer. In particular, when the sun is quiet (few sunspots), you say there will be less ozone at the tropics and more at the poles. Similarly, when the sun is active (many sunspots) you say there will be more ozone at the tropics and less at the poles.

However, as far as I could see there was no attempt to verify this by observations. This seemed like it would be easy to test. I got the Mauna Loa and South Pole ozone data from here. The most sensitive indicator will be the difference between the two locations, since this would detect the hypothesized polar-tropical ozone swings you mentioned. Here are the results:
There is no sign of the hypothesized tropical-polar ozone swings. The R^2 is only 0.04 …
Regards,
w.
There’s that old saw… Correlation does not equal causation (and no correlation whatsoever sure as heck doesn’t either.)
You additionally need to consider the reverse sign ozone effect above 45km observed over the period 2004 to 2007.
Also, look at your graphic. Ozone dropped during the three active cycles between 1970 and 2000 and has begun to recover since 2000.
It is a multicycle phenomenon.
Correction, the ozone DIFFERENCE dropped during the three active cycles.
Changes in UV radiation.
http://www.iup.uni-bremen.de/gome/solar/mgii_composite_2.png
http://www.iup.uni-bremen.de/gome/gomemgii.html
So if UV has a measurable effect, the world ought to cool off to a statistically significant extent.
Unfortunately for that idea, the Earth seems to be warming [or at least holding steady] rather than following UV on its way down. Bottom line: There is no good evidence that UV, TSI, or any other solar index has any measurable effect above [say] a tenth of a degree, which can be safely ignored.
Yet look at satellite observations, less balloons. GASTA crashing precipitously.
The Magnesium Index follows the sunspot number quite nicely, showing less UV lately…
Not to mention record snow cover, SH and total sea ice, lake ice, cold records, etc. The so-called “surface record” is a work of science fiction.
Dunno what accounts for the divergence between balloons and satellites, but assume coverage and altitude.
lsvalgaard
February 10, 2016 at 11:19 am
I saw that. The whole cycle seems lower, to include TSI and its UV component.
ren,
Nice chart
John
awaiting Ferdinand Engelbeen….
Maybe I just need to think about it for a while, but I can’t for the life of me understand what “high frequency components” of T vs. CO2 concentration could possibly mean. Sure, it’s a smoothing technique, but I have absolutely no doubt it has no business being applied to this data.
Maybe this will help. https://en.wikipedia.org/wiki/Cross-spectrum
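For the curious, a small scipy sketch of what the cross-spectrum buys you: magnitude-squared coherence picks out the frequency band that two noisy series share. The 11-year line and the noise levels here are synthetic illustrations, not climate data.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
years = np.arange(2048.0)                  # one sample per year
shared = np.sin(2 * np.pi * years / 11.0)  # a common 11-year line

x = shared + 0.5 * rng.standard_normal(years.size)
y = 0.7 * shared + 0.5 * rng.standard_normal(years.size)

# Magnitude-squared coherence: correlation as a function of frequency,
# estimated by Welch-style segment averaging.
f, Cxy = coherence(x, y, fs=1.0, nperseg=256)
c_at_line = Cxy[np.argmin(np.abs(f - 1 / 11.0))]  # close to 1 at the shared period
```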
LT February 9, 2016 at 8:30 am
Thanks for that, LT. You are correct that the stratosphere is cooler today. However, you’ve only alluded to one of the three ways that the stratosphere gains energy, from the absorption of downwelling solar radiation.
In addition to being warmed that way, the stratosphere is also warmed from below, by the radiation from the troposphere as well as by radiation absorbed directly from the surface. The approximate sizes of these three sources of warming are:
Direct solar absorption: 10 W/m2
Absorption of upwelling surface radiation: 13 W/m2
Absorption of tropospheric radiation: 268 W/m2
As you can see, a change in any one of these three will change stratospheric temperatures. Clearly, the stratosphere is absorbing less energy, which is why it is cooler … but which energy source has decreased?
All the best,
w.
“Absorption of tropospheric radiation: 268 W/m2”
Really? That may be the amount passing through. But it would have dramatic effects if it was the amount absorbed.
Indeed, I’m not sure what Willis meant there.
So we are left with absorption of upward IR and incoming SW. So we have to look at the form of the data to see whether it tells us anything.
If we ignore the obvious volcanic origin of the changes and draw a straight line we may attempt to attribute the changes to AGW and upward IR.
If we stop imposing preconceived ideas by fitting linear models to data that are not linear and just look at the data we note two downward steps obviously attributable to the two eruptions and flat since.
We may then recall that the troposphere did the opposite: warmed and then flat since.
There were not two step changes in CO2, so the most likely explanation for both observed changes is a change in the transparency of the lower stratosphere, which is now letting more incoming solar into the troposphere.
Ozone is a key factor in SW absorption, but the natural processes which removed the volcanic aerosols may well have also scrubbed some of the anthropogenic pollution that had built up in preceding decades.
Nick Stokes February 9, 2016 at 10:11 pm Edit
Let me see if I can explain. Total absorbed by stratosphere = 10 + 13 + 268 =291 W/m2, half of which is radiated upwards and half downwards.

291 / 2 = 145.5 W/m2, which has a blackbody temperature of -48°C, the approximate temperature at the tropopause.
Note that radiation balance is maintained at all three levels (surface, troposphere, lower stratoshere), as the amount entering each level is equal to the amount leaving.
Regards,
w.
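Willis’s -48°C figure can be checked in a couple of lines by inverting the Stefan-Boltzmann law:

```python
# Check the arithmetic: half of the absorbed 291 W/m2 is radiated each way;
# invert F = sigma * T^4 for the downward half.
sigma = 5.670374419e-8            # Stefan-Boltzmann constant, W m^-2 K^-4
F = (10 + 13 + 268) / 2           # = 145.5 W/m2
T_kelvin = (F / sigma) ** 0.25    # ~225 K
T_celsius = T_kelvin - 273.15     # ~ -48 C, the quoted tropopause temperature
```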
Willis,
“Total absorbed by stratosphere = 10 + 13 + 268 =291 “
That’s the issue in dispute. How do you know that is the amount absorbed? It seems you’re saying that all the IR passing through the stratosphere is absorbed and reradiated. The conventional view is that it passes through mostly without interacting. Stratosphere optical depth is very small.
Thanks, Nick. If you dislike my analysis please submit your own. Mine is the simplest model possible, two atmospheric layers and a surface layer. (You can’t make it all balance with one atmospheric layer, that’s where Trenberth went wrong). And the layers must be physically separate. I posit that those layers are the lower troposphere, and the area at and just above the tropopause.
The problem is that we have pretty good ideas of the various flows and temperatures, so we’re not free to pick just any numbers. For example, we know that the temperature at the tropopause is ~ -50°C.
That’s one way it can balance. There may be others. I invite you to investigate and propose your own.
w.
Willis,
My analysis is simple. The stratosphere is essentially transparent to outgoing IR. Virtually none is absorbed. I can believe 10 W/m2 of solar radiation absorbed.
A simple test is your observation that if absorbed, it will be re-radiated, half up, half down. So I go to Modtran, and with Tropical, everything default, but altitude 17km, looking up, it tells me that I_out, W / m2 = 9.47024
That is, just 9.5 W/m2 from above, not 134 (268/2). If I switch to looking down, it is 289.037 W/m2. The IR is radiated from below the tropopause, and only about 3% comes back.
Willis, your numbers surprise me greatly. For the stratosphere to absorb energy there must be something capable of absorbing it. Nitrogen and oxygen have no absorption lines in the thermal-IR region of the spectrum, so they can’t absorb. Water does, but by all accounts the stratosphere is extremely dry, so very low in water vapour. CO2 does absorb at 14 microns, but what absorbs also emits (emissivity = absorptivity), so if the absorption were by CO2, the apparent emission temperature at the CO2 wavelengths should reflect the temperature of the stratosphere, which can be as high as about 270 K. But it’s not: the emission temperature at 14-15 microns as seen by satellites is around 220 K, which is the temperature of the tropopause. So it cannot be CO2 doing the absorbing. What is left, and at what wavelengths would the absorption be occurring? Any gas present in the stratosphere will also be present in the troposphere, and since the stratosphere is warmer than the tropopause, any such gas would be radiating more energy than it absorbs.
It also cannot be by conduction or convection from lower layers, because the stratosphere is warmer than the tropopause, i.e. there is a temperature inversion. I suspect the dominant energy input to the stratosphere is absorption of very short UV wavelengths forming ozone high in the stratosphere, followed by absorption of longer-wavelength UV by the ozone thus formed. The stratosphere stays warm even though it absorbs little energy precisely because it has very limited means of radiating any energy away – predominantly the 10-micron line from ozone. So if the stratosphere is cooling down, it probably means less solar UV is being absorbed. From what I understand, the sunspot cycle and solar activity affect the very-short-wave UV component the most, so it would not be surprising if the biggest impact were on the stratosphere.
Are the changes in solar activity not the cause of these winter waves in the stratosphere?
http://www.cpc.ncep.noaa.gov/products/stratosphere/strat-trop/gif_files/time_pres_WAVE1_MEAN_JFM_NH_2016.png
http://www.cpc.ncep.noaa.gov/products/stratosphere/strat-trop/gif_files/time_pres_HGT_ANOM_JFM_NH_2016.png
Do they not cause a change in temperature in the stratosphere?
http://www.cpc.ncep.noaa.gov/products/stratosphere/strat-trop/gif_files/time_pres_TEMP_MEAN_JFM_NH_2016.png
Yes, the biggest impact is on the gradient of tropopause height between equator and poles, which allows latitudinal sliding of the climate zones and jet stream tracks beneath the altered gradient.
That affects total global cloudiness for an effect on the proportion of solar energy entering the oceans.
Spot on, Michael. The confusion between entropy-directed heat transfer and state-driven radiative intensity, introduced by Trenberth’s cartoon, continues to plague the unwary.
The stratosphere stopped cooling around 2000 and may now be warming somewhat.
The Venus atmosphere contains no O2 or O3 so less heating from above due to UV absorption.
“A minimum atmospheric temperature, or tropopause, occurs at a pressure of around 0.1 bar in the atmospheres of Earth, Titan, Jupiter, Saturn, Uranus and Neptune, despite great differences in atmospheric composition, gravity, internal heat and sunlight. In all of these bodies, the tropopause separates a stratosphere with a temperature profile that is controlled by the absorption of short-wave solar radiation, from a region below characterized by convection, weather and clouds. However, it is not obvious why the tropopause occurs at the specific pressure near 0.1 bar. Here we use a simple, physically based model to demonstrate that, at atmospheric pressures lower than 0.1 bar, transparency to thermal radiation allows short-wave heating to dominate, creating a stratosphere. At higher pressures, atmospheres become opaque to thermal radiation, causing temperatures to increase with depth and convection to ensue. A common dependence of infrared opacity on pressure, arising from the shared physics of molecular absorption, sets the 0.1 bar tropopause. We reason that a tropopause at a pressure of approximately 0.1 bar is characteristic of many thick atmospheres, including exoplanets and exomoons in our galaxy and beyond. Judicious use of this rule could help constrain the atmospheric structure, and thus the surface environments and habitability, of exoplanets.”
http://www.nature.com/ngeo/journal/v7/n1/carousel/ngeo2020-f1.jpg
http://www.nature.com/ngeo/journal/v7/n1/carousel/ngeo2020-f2.jpg
http://www.nature.com/ngeo/journal/v7/n1/abs/ngeo2020.html
Hmm, why is there no (or a very small) inversion in Venus’s “stratosphere”?
The stratosphere is completely transparent to infrared radiation. Only a very strong volcanic eruption may temporarily increase the temperature in the stratosphere, due to an increase in density.
The temperature in the stratosphere increases only by UV energy, as shown in the graphic below.
http://www.cpc.ncep.noaa.gov/products/stratosphere/strat-trop/gif_files/time_pres_TEMP_MEAN_ALL_NH_2015.png
The question is whether the strong ionizing radiation in the ozone zone during the polar night can raise the temperature in the stratosphere.
http://www.cpc.ncep.noaa.gov/products/stratosphere/strat-trop/gif_files/time_pres_TEMP_ANOM_JFM_NH_2016.png
http://sol.spacenvironment.net/raps_ops/current_files/rtimg/dose.15km.png
Two-dimensional shell games again. Diffusion doesn’t work like that, especially in mixed gases.
Jeff Patterson,
Your lead post stimulated a wonderful scientific dialog. Thank you, for providing it. Solar focused posts are among the best topics at WUWT.
Happy belated Chinese Lunar New Year.
John
When pure integration–a demonstrably unstable operation in the general case–is married to a fudge-factor feedback and a wholesale 37-year lag, one gets a geophysically implausible system model, scarcely corresponding to the verbally expressed speculations about ocean storage and release of heat.
What would provide a far more plausible system model is the RLC circuit, with capacitance and inductance components providing a frequency-dependent phase lag. In the case of a pure RC circuit, one gets an exponentially fading impulse response, which constitutes not an FIR filter resembling the Gaussian, but a recursive IIR filter, commonly termed the exponential low-pass filter.
BTW, the notion that a signal can be “denoised” simply by low-passing without any a priori specification of signal characteristics is a widespread mistake among analytic novices.
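The FIR/IIR distinction drawn above can be made concrete with a short sketch. This is an illustrative comparison only (the smoothing constants and the synthetic noisy-step series are invented for the demo, not taken from the post), contrasting the recursive exponential low-pass of the RC-circuit kind with a symmetric Gaussian FIR smoother:

```python
import numpy as np

def exponential_lowpass(x, alpha):
    """Recursive (IIR) exponential low-pass: y[n] = alpha*x[n] + (1-alpha)*y[n-1].
    This is the discrete analogue of a simple RC circuit's fading impulse response."""
    y = np.empty(len(x), dtype=float)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = alpha * x[n] + (1.0 - alpha) * y[n - 1]
    return y

def gaussian_fir(x, sigma):
    """Finite (FIR) Gaussian smoother: symmetric kernel truncated at +/- 4 sigma."""
    k = np.arange(-4 * sigma, 4 * sigma + 1)
    kernel = np.exp(-0.5 * (k / sigma) ** 2)
    kernel /= kernel.sum()          # normalize so the filter preserves the mean
    return np.convolve(x, kernel, mode="same")

# A noisy step illustrates the difference: the one-sided IIR filter lags the
# step (memory of past values only), while the symmetric FIR filter does not.
rng = np.random.default_rng(0)
x = np.concatenate([np.zeros(100), np.ones(100)]) + 0.1 * rng.standard_normal(200)
iir = exponential_lowpass(x, alpha=0.1)
fir = gaussian_fir(x, sigma=10)
```

The recursive filter's output at the step is still dominated by the pre-step past, whereas the Gaussian kernel is symmetric (zero phase), which is one reason the two filters can paint different pictures of the same record.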
Hey, the specification is clear enough: the long-term rise is AGW, the rest is “noise”. QED.
Please pay attention at the back 😉
“Long-term rise” is a hand-waving verbal description, not a scientific specification of signal characteristics.
Leif,
Referring to the bold emphasized quotes, can you elaborate possible reasons for the new ‘anomalous’ regime?
John
Wild speculation: solar magnetic fields are generated below the solar ‘surface’ and are shredded by the convection into many separate ‘strands’ of magnetic flux ‘ropes’ which may or may not reach the surface or be visible as small spots [pores]. The pores then assemble into larger spots [this is an observational fact]. If that process should work less efficiently we still get the magnetic field [brighter sun] but not so many visible spots [dimmer sun]. There is some indication that this is the case as the number of spots per group has fallen to half of what it used to be and also that the smallest groups seem to be rarer. If so, we get a brighter sun without the usual solar activity indices reflecting that.
A similar speculation can be found here:
http://hmi.stanford.edu/hminuggets/?p=1380
possibly helping to understand why we didn’t see so many spots during the Maunder Minimum…
Leif,
Thanks, all interesting stuff.
John
Thanks for that. Interesting.
it is good at times to see/consider some wild speculation, provided always that one bears in mind that that is all that it is, ie., wild speculation.
Same excuse for the Oort, Wolf, Spörer and Dalton Minima? The Dalton, like the Maunder, was observed by telescopes, with improved models for the Dalton.
How to account for the isotope evidence?
The isotope evidence shows that the solar magnetic cycle continued throughout those Grand Minima.
Yes, and it confirms that those Grand Minima existed and aren’t just observational artifacts, plus that colder climate is associated with them.
There is certain amount of myth here. The associations are weak:
http://www.leif.org/research/Global-Temps-10Be.png
and the isotope deposition also depends on the climate itself, so there is a circular argument here.
Regrettably we lack UV reconstructions for most of the LIA (you’ve estimated its last 110 years or so) and all of the MWP. I think we can agree that TSI on its own requires some important amplification effects plausibly to be the main driver of climate change.
From what we know about the physics of what generates UV [the magnetic network] UV and TSI should vary together as the variation of TSI has the same cause, so if TSI is not the primary driver, neither is UV. The simplest would be to grudgingly admit that TSI [and associated phenomena: UV, F10.7, magnetic field, etc] don’t have much influence. This is unfortunate, as funding for my work would pick up immensely if the Sun were the culprit.
Your funding should benefit from supporting the view that the sun isn’t the “culprit”.
Unfortunately not, as the funders are hooked on Climate Change being the most important disaster to befall humanity…so studying something that may not have much influence on the Climate is low on the totem pole.
Too bad then. Yet another area of genuine science going without while worse than worthless garbage gets funded.
What else is new?
Can’t you just say the magic words “climate change” in your application, and win $100,000, a la Groucho?
We do, we do, e.g. Slide 2 of http://www.leif.org/research/SSN%20Validation-Reconstruction%20(Cliver).pdf
I’m a lucky dog because most of my research is funded by private foundations, the biotech and pharma industries, so I don’t have to incant the magic chant. I guess I could apply to the US government for a grant to study the effects of climate change forecast by GIGO models on microbial genomes, but the work would be a waste, so why do it, when I could do something useful?
Then where are the Big Bucks from Uncle Sugar?
Maybe your science is too hard-sciencey for the current powers that be at NASA, NOAA, etc. Softer and fuzzier are the orders of the day. Maybe you could work in some kittens.
http://wattsupwiththat.com/2016/02/09/reduce-co2-or-more-homeless-kittens/
Exactly. You’ve been scooped on kittens, but there are still puppies.
Or the effect of gamma rays on man in the moon marigolds. For the sun, it would have to be UV light rather than gamma rays, I’m afraid however.
Or the magnetic rays from Jupiter that Vuk the Entertainer likes to invoke from time to time…
If you’re aiming for a grant, I’d go with gamma rays and/or puppies.
Munch, munch, munch. Glug glug.
Hot popcorn, cold beer, and Leif peppering the conversation. Love it. Especially re: Maunder Minimum.
So the wild guess is no spots leads to brighter Sun leads to TSI up…but Earth was cold at times during the Maunder Minimum. Chair squirming must be at a crescendo at the moment.
Munch, munch, munch. Glug glug.
Medieval warming and LIA were caused by the Dinosaurs, not the Sun.
Variables are only so useful if they can be replaced by oranges.
You are kidding, right? There are several studies that suggest correlations, but dinosaurs aren’t one of them, and solar-driven mechanisms are one helluva stretch.
Lots of discussion on sunspots issue.
I presented three papers at the Symposium on Earth’s Near Space Environment, 18-21 February 1975, held at the National Physical Laboratory, New Delhi. These were published in Indian Journal of Radio Space Physics, Vol 6, March 1977, pp. 44-50, 51-59 & 60-66. In these I presented: the effect of solar flares on lower tropospheric temperature and pressure; power spectral analysis of lower stratospheric dynamic height, temperature, and zonal and meridional components of wind at 100, 50 & 30 mbar; and power spectral analysis of total and net radiation intensities. [The abstracts of these were included in abstract volumes on solar & terrestrial physics compiled by SCOSTEP of USAS in 1976/77.]
The effect of solar flares on pressure is more pronounced than that on temperature.
It is inferred that the annual and semi-annual components of temperature are of the same origin, i.e., solar radiation. The discrepancies observed in the different periods of oscillation during the 10-year period are attributed to the fifth harmonic of the sunspot cycle acting in unison with the multiple modes of the annual cycle, where higher modes are prominent at lower sunspot activity and lower modes at higher sunspot activity. It is therefore inferred from this analysis that the QBO is only a fifth harmonic of the 11-year sunspot cycle and that these cycles are formed in situ. The in situ inference was made because the base angles of the QBO for t, H, u & v are all in phase at the 50 mbar level, while at 100 & 50 mbar a lag is seen.
The total solar and net radiation intensities show the sunspot cycle. Therefore, it is suggested that during the sunspot cycle there is a certain change in the solar radiation emitted by the sun itself, which in turn is reflected in other atmospheric processes also. They show the multiples of the sunspot cycle [22 & 44 years]. When we integrate these, the pattern shows differences from cycle to cycle. For example, Durban rainfall presents a 66-year cycle; when integrated with the sub-multiple of 22 years, the pattern presents a ‘W’ followed by an ‘M’ shape. Thus, whenever we encounter a cyclic variation with sub-multiples or multiples, the integrated pattern will be quite different. This is clearly seen in Fortaleza rainfall in northeast Brazil.
Dr. S. Jeevananda Reddy
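The arithmetic behind the “fifth harmonic” suggestion is easy to check: 11 years / 5 = 2.2 years, close to the observed QBO period of roughly 28 months. A small sketch with purely synthetic data (the series length and amplitudes are invented for illustration, not taken from the papers above) shows how such a harmonic would show up in a power spectrum:

```python
import numpy as np

# 110 years of monthly samples: an 11-year cycle plus a weaker 2.2-year
# component (the hypothetical fifth harmonic, roughly the QBO period).
months = np.arange(12 * 110)
t = months / 12.0  # time in years
signal = np.sin(2 * np.pi * t / 11.0) + 0.5 * np.sin(2 * np.pi * t / 2.2)

# Power spectrum via the real FFT; frequencies in cycles per year.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / 12)

# The two largest spectral peaks recover both periods.
peak_idx = np.argsort(spectrum)[-2:]
periods = sorted(1.0 / freqs[peak_idx])
print(periods)  # approximately [2.2, 11.0]
```

In real records the test is harder, since (as noted in the reply below) a lone fifth harmonic without the intervening harmonics would be unusual unless it excites a natural resonance.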
Interesting stuff Dr Reddy. Do you have a link to any of those papers? I’d like to have a look.
A 5th harmonic seems a little unlikely in the absence of other harmonics, though it could be hitting a natural resonance of the climate system.
Mike — sorry I don’t have any link, they are journal publications. They provided me one copy of the journal.
Dr. S. Jeevananda Reddy
Yes, that is consistent with my hypothesis:
http://joannenova.com.au/2015/01/is-the-sun-driving-ozone-and-changing-the-climate/
Those who would like to understand sun-climate change in the N. Hemisphere since the time of the Maunder Minimum may benefit from reading this:
http://www.vukcevic.talktalk.net/Tec-TSI.gif
There is a very strong likelihood that the N. Atlantic tectonics responds to the same driver as the longer-term solar variability. The sun’s response is a bit delayed (about 12 years; this may give some indication of the depth of the solar dynamo). As the graph shows, the amplitude correlation is not perfect, but then nor are the data for tectonics or the GSN on which the TSI estimates are made. The major point of contention is the negative correlation around 1860 (outlined in green on the graph).
As far as climate is concerned, it should be noted that there has been a gradual delay of the NH’s temperatures relative to the tectonics since 1900; thus a significant cooling in the N. Hemisphere could be expected by the mid-2020s, lasting to the mid-2040s.
A possible explanation for this delay could be a slowdown in the N. Atlantic subpolar gyre circulation. The North Atlantic’s subpolar gyre is the engine of the heat transport across the North Atlantic Ocean. This is a region of intense ocean-atmosphere interaction. Cold winds remove the surface heat at rates of several hundred watts per square meter, resulting in deep water convection. These changes in turn affect the strength and character of the Atlantic thermohaline circulation (THC) and the horizontal flow of the upper ocean, thereby altering the oceanic poleward heat transport and the distribution of sea surface temperature (SST).
(see link)
Point of interest: the long-term solar variability modulation (see the graph discussed further above) could not pick up the anomalous SC20, but the tectonics graph does.
see also: Decelerating Atlantic meridional overturning circulation
http://iopscience.iop.org/article/10.1088/1748-9326/10/9/094007
What happens to the Gulf Stream?
http://oi65.tinypic.com/2cdjwbc.jpg
See also a discussion of vuk’s proposal being thoroughly examined and declared DOA.
http://wattsupwiththat.com/2016/01/29/study-suggests-a-sea-level-climate-feedback-loop-in-the-mid-ocean-ridge-system-regulates-ice-ages/
Ms Gray, thank you for your kind attention.
Many things in science were not only declared, as you put it, ‘DOA’, but found to be stillborn by selected contemporaneous ‘experts’ on the settled science, only to be revived at some stage by evolving natural events.
I am sure readers of this blog are grateful for your expert guidance in these matters, despite being sufficiently capable and intelligent to make up their own minds.
Have a good day.
Many things in science were not only declared, as you put it, ‘DOA’, but found to be stillborn by selected contemporaneous ‘experts’ on the settled science, only to be revived at some stage by evolving natural events.
For every such case, there are thousands of crackpot ‘findings’ like yours that stay DOA.
Some crackpots are certifiable, in which case the wise man backs off very gently; am I to look forward to a lesser degree of criticism?
No, crackpots should be criticized whenever they try to diminish an otherwise good blog like WUWT.
Currently, very low density of solar plasma.
http://services.swpc.noaa.gov/images/animations/enlil/latest.jpg
Speculating further about sunspots, as I understand it the overall visible radiation is dimmed by the presence of sunspots even though the sunspot and TSI maxima coincide. Is it possible that during this period of maximum solar turbulence radiation from deeper within the sun can be released? If so, what is its nature and does it make a significant change to the solar spectral distribution?
Schrodinger’s question raises my own. What were coronal holes doing while spots were hard to detect during the Maunder? Did they diminish in size? Appear in places other than in certain bands? Did the resulting wind increase or decrease? Did they last longer or go away quicker? What would satellites have had to deal with then?
http://www.nasa.gov/content/solar-dynamics-observatory-welcomes-the-new-year
http://www.swpc.noaa.gov/phenomena/coronal-holes
Our good old ‘friend’ John Cook of ‘Skeptical Science’ asked himself some years ago: What ended the Little Ice Age?
He then elaborates: “This analysis is a useful reminder that CO2 is not the only driver of climate.
– To end the Little Ice Age, the sun did most of the early heavy lifting.
– When the solar contribution flattened out in the mid-20th century, humanity took the baton and we’ve been running with it ever since.”
My view is in very small letters on the graph you may find further above.
What do you think? Is John Cook right or wrong in part one, part two, or both?
When the solar contribution flattened out in the mid-20th century
This is a typical example of deflection. The solar contribution on a century time scale has been flat since 1700.
That is understood as ‘it is not the sun’, so what do you think ended the Little Ice Age?
Any complicated non-linear system has natural internal stochastic variations, why should the climate be any different?
‘natural internal stochastic variations’ = ‘have no idea, but too embarrassed to say so’
And yet when one shows a natural process, with the data available and the power to enable the reversal in the previous trend, you confidently declare it a ‘crackpot finding’. Or is it that you find unpalatable the idea that there are geodynamic processes coincident with solar activity?
Nature doesn’t go ‘random’ on such a scale; nature is ruled by cause and consequence. The fact that we do not understand it is our fault, not nature’s.
Admitting that there are things we do not understand in detail is the honorable thing to do.
the idea that there are geodynamic processes coincident with solar activity
Claiming that such unfounded ideas represent knowledge is the sure mark of crackpottery in which you excel.
There are numerous graphs purporting to represent Holocene temperatures; almost all of them make the end of the LIA (1650-1700) either the coldest or second-coldest point in the whole period stretching back 8-10 kyr.
Looking at the Maunder Minimum exit, the GSN doesn’t appear to show anything extraordinary, so it may be that solar activity is not the primary cause of the ending of the LIA.
Looking at the alternative, the impulse peaking in the 1720s is extraordinary in relation to what happened since, and possibly before. One could attempt to ‘process’ the data to reduce its amplitude, and perhaps get rid of the negative correlation around 1860, if the aim were just to highlight correlation with solar activity.
The partial correlation with solar activity is coincidental, but could indicate the existence of a process we don’t understand – ‘the honourable thing’ to admit to.
What is far more important at this stage is to understand why the N. Hemisphere temperature suddenly reversed its trend. I am aware of only two data-supported possibilities:
– the sun going into overdrive, which you say it didn’t; that is OK by me.
– tectonic activity in the N. Atlantic going into overdrive, one that has not been repeated in the subsequent 300 years, which definitely is not OK by you. If you know of another, I would like to know.
It is not OK by you since it happens that, from time to time in those subsequent 300 years, tectonics occasionally shows the same or a similar trend as the GSN. Describing it as ‘crackpottery’ is your choice, and of course you are entitled to do so if that is indeed your true understanding, but I am inclined to think that it may not be so.
Leif, would you not concede that the general decline in the 10Be flux from 1700 to about 1990 represents increasing solar activity?
http://4.bp.blogspot.com/-cmUdPuT0jhc/U9ACp-RIuSI/AAAAAAAAAT8/kBTHWwpf6Bg/s1600/Berggren2009.png
Concede is not the right word to use.
The issue with 10Be is that Greenland and Antarctica disagree. I quote here a recent paper by Muscheler et al.: “comparison to the revised sunspot records. We note that there is a difference in Greenland and Antarctic 10Be data that can lead to disagreeing conclusions about past solar activity levels. This difference is likely due to weather and climate influences in the data.”
And is likely not solar at all.
First of all you omit the all-important conclusion of the paper you took the Figure from:
“We observe that although recent 10Be flux in NGRIP is low, there is no indication of unusually high recent solar activity in relation to other parts of the investigated period.”
As this Figure from a comparison of 14C and the new Group Number series shows:
http://www.leif.org/research/Comparison-GSN-14C-Modulation.png
Solar activity since 1700 has not increased.
So, won’t you concede that solar activity the past 300 years has not increased?
CET is a good representative of the N. European temperature trends
http://www.vukcevic.talktalk.net/10Be-CET.gif
The 10Be series is probably not correct, i.e. does not show a pure solar variation, but is heavily contaminated by the climate record.
That is a Feynman-like statement. Thanks Leif.
John
Leif you say “The solar contribution on century time scale has been flat since 1700.”
and “Solar activity since 1700 has not increased.”
Look at the Berggren 10Be values at 1700 and the late-20th-century values.
Look at your data at 1700 and 1990.
Your statements quoted above are plainly wrong.
What are you seeing differently?
Look at your data at 1700 and 1990
Look at the data at 1728 and 2008…
Are you an idiot, comparing a minimum with a maximum? Don’t you know there is an 11-year cycle?
The point is [as Berggren says] that the 10Be data is contaminated by climate and that most of the variation is not solar. 14C is another [and better] indicator and it is plain that 14C reaches the same high values in the 17th, 18th, 19th, and 20th centuries, thus no long-term trend:
http://www.leif.org/research/Comparison-GSN-14C-Modulation.png
Leif, you were the one who picked the start time for this discussion; you said “Solar activity since 1700 has not increased”.
Leif, 1700 is the best for scientific understanding; that is why I thought you had made a good choice. Obviously the period from the spotless Maunder Minimum to the late-20th-century peak is the most significant for estimating the solar influence as seen in the 10Be data.
Maybe you could convert the Dye 3 10Be flux into an equivalent Oulu count from 1700 to 1990 and come up with a gross ballpark handy-dandy conversion to see the general trend.
The early part of Dye-3 has uncertain time determination, and there is now general agreement that the 10Be record is strongly contaminated by climate effects. The 14C record seems to be better.
Then plot that against CET over the same period and we are all set, no doubt.
Again, you play dumb. As should be clear [from graphs and verbiage], the intended meaning was:
In every century activity reaches the same high level and the same low level, so on a century scale solar activity has not changed. For the climate, that is the scale that is important.
Not so, Leif; the most important scale for climate change is millennial, with peaks at 985 and 2003.
The amplitude of this cycle is about 1.8 degrees. It is asymmetrical, with a downleg from 985 to about 1700 and an upleg from there to 2003.
http://2.bp.blogspot.com/-4nY2wr6L-WY/U81v9OzFkfI/AAAAAAAAATM/NA6lV86_Mx4/s1600/fig5.jpg
I am talking about solar activity and you are showing a dubious temperature graph. How about being on the same page?
Your temps and my solar activity since 1700:
http://www.leif.org/research/Temp-and-GN.png
I see no correspondence. Except that the AGW folks might have a point…
Leif, try plotting my temperature curve from 1700 on against the 10Be solar modulation parameter (green curve in panel C) from Lockwood 2014:
http://api.onlinelibrary.wiley.com/asset/v1/doi/10.1002%2F2014JA019972/asset/image_n%2Fjgra51126-fig-0005.png?l=SkaBT8QEx2pJG1HJUUYQRpMpMHTD3MalzSh%2BJg4NZDC9ytqdtPsivKtTRsbIUSvhTZPXBz10627jO6SXB3gCvg%3D%3D
Why would I do that? The 10Be is heavily contaminated by climate effects.
Here are his modulation parameters since 1700. Note that they reach the same levels in the 18th and 20th centuries. You can plot your temperatures if you need to convince yourself.
http://www.leif.org/research/Modulation-Parameter-Lockwood.png
Here is an even better plot [from 14C which is less influenced by climate]:
http://www.leif.org/research/Comparison-GSN-14C-Modulation.png
Modulation the last 300 years:
http://www.leif.org/research/Modulation-Last-300-Years.png
Enough said…
Pamela – A quick look at the literature shows various relationships. If we take SSN as the solar activity cycle, then the peaks coincide with solar UV, microwave flux and TSI. Ap Index is slightly displaced.
The solar wind velocity is completely out of phase as is the neutron count on earth.
The solar magnetic field reverses every SSN cycle, the peaks and troughs in sync with SSN.
I believe that there is some link between the solar cycle and our climate. It could be the cumulative effect of several small factors, for example TSI direct warming plus UV initiated chemical reactions plus solar wind and magnetic field influences on particles of different types resulting in changes in albedo, especially cloud cover.
IMO, not just could be, but is.
I would never advocate a quick look at ANY literature. Journals carry a vast weight of the one-two punch of poorly done research combined with poorly done statistical analysis. Be a discerning picky eater of such material.