Readers may recall Pat Frank's excellent essay on uncertainty in the temperature record. He emailed me about this new essay he posted on the Air Vent, with a suggestion that I cover it at WUWT; I regret it got lost in my firehose of daily email. Here it is now. – Anthony
Future Perfect
By Pat Frank
In my recent “New Science of Climate Change” post here on Jeff’s tAV, the cosine fits to differences among the various GISS surface air temperature anomaly data sets were intriguing. So, I decided to see what, if anything, cosines might tell us about the surface air temperature anomaly trends themselves. It turned out they have a lot to reveal.
As a qualifier, regular tAV readers know that I’ve published on the amazing neglect of the systematic instrumental error present in the surface air temperature record. It seems certain that surface air temperatures are so contaminated with systematic error – at least (+/-)0.5 C – that the global air temperature anomaly trends have no climatological meaning. I’ve done further work on this issue and, although the analysis is incomplete, so far it looks like the systematic instrumental error may be worse than we thought. But that’s for another time.
Systematic error is funny business. In surface air temperatures it’s not necessarily a constant offset but a variable error. That means it not only biases the mean of a data set, but is also likely to have an asymmetric distribution in the data. Systematic error of that sort in a temperature series may enhance a time-wise trend or diminish it, or switch back-and-forth in some unpredictable way between these two effects. Since the systematic error arises from the effects of weather on the temperature sensors, it will vary continuously with the weather. The mean error bias will be different for every data set, and so will the distribution envelope of the systematic error.
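As a quick illustration of that point, consider a toy series with a slowly drifting, weather-like systematic error added to it. This is a minimal sketch, not part of the analysis that follows; all magnitudes and the drift shape are arbitrary assumptions, chosen only to show how a non-constant error can shift an apparent trend.

```python
# Toy illustration: a non-constant systematic error can inflate or shrink an
# apparent temperature trend. All magnitudes here are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1880, 2011)
true_series = 0.005 * (years - 1880) + rng.normal(0.0, 0.1, years.size)   # 0.05 C/decade plus noise

# A hypothetical systematic error that drifts slowly with "weather" conditions:
drifting_error = 0.3 * np.sin(2 * np.pi * (years - 1880) / 200.0) + rng.normal(0.0, 0.05, years.size)
contaminated = true_series + drifting_error

true_slope = np.polyfit(years, true_series, 1)[0] * 10     # C/decade
biased_slope = np.polyfit(years, contaminated, 1)[0] * 10  # C/decade
print(f"trend without the drifting error: {true_slope:.3f} C/decade")
print(f"trend with the drifting error:    {biased_slope:.3f} C/decade")
```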
For right now, though, I’d like to put all that aside and proceed with an analysis that accepts the air temperature context as found within the IPCC ballpark. That is, for the purposes of this analysis I’m assuming that the global average surface air temperature anomaly trends are real and meaningful.
I have the GISS and the CRU annual surface air temperature anomaly data sets out to 2010. In order to make the analyses comparable, I used the GISS start time of 1880. Figure 1 shows what happened when I fit these data with a combined cosine function plus a linear trend. Both data sets were well-fit.
The unfit residuals are shown below the main plots. A linear fit to the residuals tracked exactly along the zero line, to 1 part in ~10^5. This shows that both sets of anomaly data are very well represented by a cosine-like oscillation plus a rising linear trend. The linear parts of the fitted trends were: GISS, 0.057 C/decade and CRU, 0.058 C/decade.
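For anyone who wants to reproduce the general approach, here is a minimal sketch of a cosine-plus-linear fit. The anomaly series below is a synthetic stand-in rather than the actual GISS or CRU file, and the starting guesses are rough; only the form of the model matters.

```python
# Minimal sketch of a cosine-plus-linear fit of the kind described above.
# The anomaly series is synthetic; replace it with a real GISS/CRU record.
import numpy as np
from scipy.optimize import curve_fit

def cos_plus_line(t, amp, period, phase, slope, offset):
    """Cosine oscillation superimposed on a linear trend (t in years since 1880)."""
    return amp * np.cos(2 * np.pi * (t - phase) / period) + slope * t + offset

years = np.arange(1880, 2011, dtype=float)
t = years - 1880.0

rng = np.random.default_rng(3)
anom = 0.1 * np.cos(2 * np.pi * (t - 60.0) / 60.0) + 0.0058 * t - 0.3 + rng.normal(0, 0.08, t.size)

p0 = [0.1, 60.0, 50.0, 0.005, -0.3]  # rough guesses: amplitude (C), period (yr), phase (yr), slope (C/yr), offset (C)
params, _ = curve_fit(cos_plus_line, t, anom, p0=p0)
amp, period, phase, slope, offset = params
print(f"period ~ {period:.0f} yr, amplitude ~ {abs(amp):.2f} C, linear trend ~ {slope * 10:.3f} C/decade")

# The unfit residual should carry essentially no net trend, as in Figure 1:
residual = anom - cos_plus_line(t, *params)
print(f"residual trend: {np.polyfit(t, residual, 1)[0] * 10:.5f} C/decade")
```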
Figure 1. Upper: Trends for the annual surface air temperature anomalies, showing the OLS fits with a combined cosine function plus a linear trend. Lower: The (data minus fit) residual. The colored lines along the zero axis are linear fits to the respective residual. These show the unfit residuals have no net trend. Part a, GISS data; part b, CRU data.

Removing the oscillations from the global anomaly trends should leave only the linear parts of the trends. What does that look like? Figure 2 shows this: the linear trends remaining in the GISS and CRU anomaly data sets after the cosine is subtracted away. The pure subtracted cosines are displayed below each plot.
Each of the plots showing the linearized trends also includes two straight lines. One of them is the line from the cosine plus linear fits of Figure 1. The other straight line is a linear least squares fit to the linearized trends. The linear fits had slopes of: GISS, 0.058 C/decade and CRU, 0.058 C/decade, which may as well be identical to the line slopes from the fits in Figure 1.
Figure 1 and Figure 2 show that to a high degree of certainty, and apart from year-to-year temperature variability, the entire trend in global air temperatures since 1880 can be explained by a linear trend plus an oscillation.
Figure 3 shows that the GISS cosine and the CRU cosine are very similar – probably identical given the quality of the data. They show a period of about 60 years, and an intensity of about (+/-)0.1 C. These oscillations are clearly responsible for the visually arresting slope changes in the anomaly trends after 1915 and after 1975.
Figure 2. Upper: The linear part of the annual surface average air temperature anomaly trends, obtained by subtracting the fitted cosines from the entire trends. The two straight lines in each plot are: OLS fits to the linear trends, and the linear parts of the fits shown in Figure 1. The two lines overlay. Lower: The subtracted cosine functions.

The surface air temperature data sets consist of land surface temperatures plus the SSTs. It seems reasonable that the oscillation represented by the cosine stems from a net heating-cooling cycle of the world ocean.
The major oceanic cycles include the PDO, the AMO, and the Indian Ocean oscillation. Joe D’Aleo has a nice summary of these here (pdf download).
The combined PDO+AMO is a rough oscillation and has a period of about 55 years, with a 20th century maximum near 1937 and a minimum near 1972 (D’Aleo Figure 11). The combined ocean cycle appears to be close to another maximum near 2002 (although the PDO has turned south). The period and phase of the PDO+AMO correspond very well with the fitted GISS and CRU cosines, and so it appears we’ve found a net world ocean thermal signature in the air temperature anomaly data sets.
In the “New Science” post we saw a weak oscillation appear in the GISS surface anomaly difference data after 1999, when the SSTs were added in. Prior and up to 1999, the GISS surface anomaly data included only the land surface temperatures.
So, I checked the GISS 1999 land surface anomaly data set to see whether it, too, could be represented by a cosine-like oscillation plus a linear trend. And so it could. The oscillation had a period of 63 years and an intensity of (+/-)0.1 C. The linear trend was 0.047 C/decade; pretty much the same oscillation, but a warming trend about 0.01 C/decade slower. So, it appears that the net world ocean thermal oscillation is teleconnected into the global land surface air temperatures.
But that’s not the analysis that interested me. Figure 2 appears to show that the entire 130 years between 1880 and 2010 has had a steady warming trend of about 0.058 C/decade. This seems to explain the almost rock-steady 20th century rise in sea level, doesn’t it?
The argument has always been that the climate of the first 40-50 years of the 20th century was unaffected by human-produced GHGs. After 1960 or so, certainly after 1975, the GHG effect kicked in, and the thermal trend of the global air temperatures began to show a human influence. So the story goes.
Isn’t that claim refuted if the late 20th century warmed at the same rate as the early 20th century? That seems to be the message of Figure 2.
But the analysis can be carried further. The early and late air temperature anomaly trends can be assessed separately, and then compared. That’s what was done for Figure 4, again using the GISS and CRU data sets. In each data set, I fit the anomalies separately over 1880-1940, and over 1960-2010. In the “New Science of Climate Change” post, I showed that these linear fits can be badly biased by the choice of starting points. The anomaly profile at 1960 is similar to the profile at 1880, and so these two starting points seem to impart no obvious bias. Visually, the slope of the anomaly temperatures after 1960 seems pretty steady, especially in the GISS data set.
Figure 4 shows the results of these separate fits, yielding the linear warming trend for the early and late parts of the last 130 years.
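A sketch of that split-period comparison is below. The "linearized" series is a synthetic stand-in, so the printed slopes are not the Table 1 values; the point is the procedure.

```python
# Sketch of the split-period comparison: OLS slopes over 1880-1940 and 1960-2010.
# 'linearized' is a placeholder for the de-oscillated anomaly series of Figure 2.
import numpy as np

def decadal_slope(years, series, start, end):
    """OLS slope in C/decade over the inclusive year range [start, end]."""
    mask = (years >= start) & (years <= end)
    return np.polyfit(years[mask], series[mask], 1)[0] * 10.0

years = np.arange(1880, 2011, dtype=float)
rng = np.random.default_rng(1)
linearized = 0.0058 * (years - 1880) + rng.normal(0, 0.05, years.size)

early = decadal_slope(years, linearized, 1880, 1940)
late = decadal_slope(years, linearized, 1960, 2010)
print(f"1880-1940: {early:.3f} C/decade")
print(f"1960-2010: {late:.3f} C/decade")
print(f"late minus early: {late - early:.3f} C/decade")
```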
Figure 4: The Figure 2 linearized trends from the GISS and CRU surface air temperature anomalies showing separate OLS linear fits to the 1880-1940 and 1960-2010 sections.

The fit results of the early and later temperature anomaly trends are in Table 1.
Table 1: Decadal Warming Rates for the Early and Late Periods.
| Data Set | C/d (1880-1940) | C/d (1960-2010) | (late minus early) |
|----------|-----------------|-----------------|--------------------|
| GISS | 0.056 | 0.087 | 0.031 |
| CRU | 0.044 | 0.073 | 0.029 |
“C/d” is the slope of the fitted lines in Celsius per decade.
So there we have it. Both data sets show the later period warmed more quickly than the earlier period. Although the GISS and CRU rates differ by about 12%, the changes in rate (the last column) are essentially identical.
If we accept the IPCC/AGW paradigm and grant the climatological purity of the early 20th century, then the natural recovery rate from the LIA averages about 0.05 C/decade. To proceed, we have to assume that the natural rate of 0.05 C/decade was fated to remain unchanged for the entire 130 years, through to 2010.
Assuming that, then the increased slope of 0.03 C/decade after 1960 is due to the malign influences from the unnatural and impure human-produced GHGs.
Granting all that, we now have a handle on the most climatologically elusive quantity of all: the climate sensitivity to GHGs.
I still have all the atmospheric forcings for CO2, methane, and nitrous oxide that I calculated for my Skeptic paper (http://www.skeptic.com/reading_room/a-climate-of-belief/). Together, these constitute the great bulk of the new GHG forcing since 1880. Total chlorofluorocarbons add another 10% or so, but that’s not a large impact, so they were ignored.
All we need do now is plot the progressive trend in recent GHG forcing against the balefully apparent human-caused 0.03 C/decade trend, all between the years 1960-2010, and the slope gives us the climate sensitivity in C/(W-m^-2). That plot is in Figure 5.
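Here is a sketch of that regression. The forcing history below is a schematic placeholder, not the Skeptic-paper series, so the slope it prints is only of the right order. Note that regressing a linear temperature trend against a slightly accelerating forcing naturally produces the kind of negative curvature discussed below.

```python
# Sketch of the sensitivity estimate: regress the "excess" warming (0.03 C/decade)
# against GHG forcing over 1960-2010. The forcing series is a schematic placeholder.
import numpy as np

years = np.arange(1960, 2011, dtype=float)
excess_warming = 0.003 * (years - 1960)                          # 0.03 C/decade, expressed per year

# Placeholder forcing growth in W/m^2: roughly linear with mild acceleration.
forcing = 0.030 * (years - 1960) + 0.0002 * (years - 1960) ** 2

sensitivity = np.polyfit(forcing, excess_warming, 1)[0]
print(f"apparent sensitivity ~ {sensitivity:.3f} C per W/m^2")
```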
Figure 5. Blue line: the 1960-2010 excess warming, 0.03 C/decade, plotted against the net GHG forcing trend due to increasing CO2, CH4, and N2O. Red line: the OLS linear fit to the forcing-temperature curve (r^2=0.991). Inset: the same lines extended through to the year 2100.

There’s a surprise: the trend line shows a curved dependence. More on that later. The red line in Figure 5 is a linear fit to the blue line. It yielded a slope of 0.090 C/W-m^-2.
So there it is: every Watt per meter squared of additional GHG forcing, during the last 50 years, has increased the global average surface air temperature by 0.09 C.
Spread the word: the Earth climate sensitivity is 0.090 C/W-m^-2.
The IPCC says that the increased forcing due to doubled CO2, the bug-bear of climate alarm, is about 3.8 W/m^2. The consequent increase in global average air temperature is mid-ranged at 3 Celsius. So, the IPCC officially says that Earth’s climate sensitivity is 0.79 C/W-m^-2. That’s 8.8x larger than what Earth says it is.
Our empirical sensitivity says doubled CO2 alone will cause an average air temperature rise of 0.34 C above any natural increase. This value is 4.4x to 13x smaller than the range projected by the IPCC.
The total increased forcing due to doubled CO2, plus projected increases in atmospheric methane and nitrous oxide, is 5 W/m^2. The linear model says this will lead to a projected average air temperature rise of 0.45 C. This is about the rise in temperature we’ve experienced since 1980. Is that scary, or what?
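As a quick check, the quoted temperature rises follow directly from the fitted 0.090 C/(W-m^-2) slope:

```python
# Back-of-envelope check of the quoted warming figures, using the Figure 5 slope.
sensitivity = 0.090                                                      # C per W/m^2
print(f"doubled CO2 alone (3.8 W/m^2):     {3.8 * sensitivity:.2f} C")   # ~0.34 C
print(f"doubled CO2 + CH4 + N2O (5 W/m^2): {5.0 * sensitivity:.2f} C")   # 0.45 C
```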
But back to the negative curvature of the sensitivity plot. The change in air temperature is supposed to be linear with forcing. But here we see that for 50 years average air temperature has been negatively curved with forcing. Something is happening. In proper AGW climatology fashion, I could suppose that the data are wrong because models are always right.
But in my own scientific practice (and the practice of everyone else I know), data are the measure of theory and not vice versa. Kevin, Michael, and Gavin may criticize me for that because climatology is different and unique and Ravetzian, but I’ll go with the primary standard of science anyway.
So, what does negative curvature mean? If it’s real, that is. It means that the sensitivity of climate to GHG forcing has been decreasing all the while the GHG forcing itself has been increasing.
If I didn’t know better, I’d say the data are telling us that something in the climate system is adjusting to the GHG forcing. It’s imposing a progressively negative feedback.
It couldn’t be the negative feedback of Roy Spencer’s clouds, could it?
The climate, in other words, is showing stability in the face of a perturbation. As the perturbation is increasing, the negative compensation by the climate is increasing as well.
Let’s suppose the last 50 years are an indication of how the climate system will respond to the next 100 years of a continued increase in GHG forcing.
The inset of Figure 5 shows how the climate might respond to a steadily increased GHG forcing right up to the year 2100. That’s up through a quadrupling of atmospheric CO2.
The red line indicates the projected increase in temperature if the 0.03 C/decade linear fit model was true. Alternatively, the blue line shows how global average air temperature might respond, if the empirical negative feedback response is true.
If the climate continues to respond as it has already done, by 2100 the increase in temperature will be fully 50% less than it would be if the linear response model was true. And the linear response model produces a much smaller temperature increase than the IPCC climate model, umm, model.
Semi-empirical linear model: 0.84 C warmer by 2100.
Fully empirical negative feedback model: 0.42 C warmer by 2100.
And that’s with 10 W/m^2 of additional GHG forcing and an atmospheric CO2 level of 1274 ppmv. By way of comparison, the IPCC A2 model assumed a year 2100 atmosphere with 1250 ppmv of CO2 and a global average air temperature increase of 3.6 C.
So let’s add that: Official IPCC A2 model: 3.6 C warmer by 2100.
The semi-empirical linear model alone, empirically grounded in 50 years of actual data, says the temperature will have increased only 0.23 of the IPCC’s A2 model prediction of 3.6 C.
And if we go with the empirical negative feedback inference provided by Earth, the year 2100 temperature increase will be 0.12 of the IPCC projection.
So, there’s a nice lesson for the IPCC and the AGW modelers, about GCM projections: they are contradicted by the data of Earth itself. Interestingly enough, Earth contradicted the same crew, big time, at the hands of Demetris Koutsoyiannis, too.
So, is all of this physically real? Let’s put it this way: it’s all empirically grounded in real temperature numbers. That, at least, makes this analysis far more physically real than any paleo-temperature reconstruction that attaches a temperature label to tree ring metrics or to principal components.
Clearly, though, since unknown amounts of systematic error are attached to global temperatures, we don’t know if any of this is physically real.
But we can say this to anyone who assigns physical reality to the global average surface air temperature record, or who insists that the anomaly record is climatologically meaningful: The surface air temperatures themselves say that Earth’s climate has a very low sensitivity to GHG forcing.
The major assumption used for this analysis, that the climate of the early part of the 20th century was free of human influence, is common throughout the AGW literature. The second assumption, that the natural underlying warming trend continued through the second half of the last 130 years, is also reasonable given the typical views expressed about a constant natural variability. The rest of the analysis automatically follows.
In the context of the IPCC’s very own ballpark, Earth itself is telling us there’s nothing to worry about in doubled, or even quadrupled, atmospheric CO2.

Leif, as I noted, my analysis was justified by a prior physical observable. Your numerical dismissal is ill-founded.
Pat Frank says:
June 9, 2011 at 9:36 pm
my analysis was justified by a prior physical observable. Your numerical dismissal is ill-founded.
Without a plausible reason or theoretical expectation, any correlation that appears between physical observables is numerology.
Leif Svalgaard says:
June 9, 2011 at 5:51 pm
“The hole you are in…”
I am in no hole.
“…email the (x,y) point values to me…”
I have no intention of revealing personal information over so trivial a matter. I really don’t give a rodent’s derriere if you believe me or not. Assume I’m lying if you like. My thesis is still compelling.
Leif Svalgaard says:
June 9, 2011 at 6:55 pm
“Wrong attitude.”
What is my thesis, Leif? Do you have any idea? Go back and read and reread until you understand it. Play around with the simple simulation model I gave to help you understand it.
Pat Frank says:
June 9, 2011 at 9:36 pm
Your analysis is justified by the glaring fact that it is legitimate, due to the ubiquity of sinusoidal inputs and modal responses to noise in every distributed parameter system in the universe, as I have painstakingly documented in the foregoing. Leif is quite simply wrong, but he has a burr in his saddle, and you are not going to satisfy him no matter what you do.
Pat Frank says:
June 9, 2011 at 9:03 pm
Leif. “I can fit a very nice sine wave to the Dow Jones index since 1998. It would be numerology in the same sense as yours is.”
The funny thing about this is, that is exactly what the quants on Wall Street do every day. And, they make obscene amounts of money doing it.
They’ve had recent setbacks, mainly because they do more than merely observe; they interact with the system based on their observations. This creates feedback. It became significant feedback in the recent decade, and it was not designed specifically to be stabilizing feedback. But no investment house has liquidated its financial analysis department in response. And won’t.
The government also does a lot of this kind of thing. How do you think they come up with “seasonally adjusted” economic statistics?
Leif, when a multi-decadal oscillation appears in the difference between two temperature anomaly data sets, one of which is land+SSTs and the other of which is land-only, and when the world oceans are known to exhibit multi-decadal thermal oscillations, one has a direct physical inference. Your numerical dismissal is still ill-founded.
However, to test this inference further, I made a difference between the cosine-alone portions of the cosine+linear fits to the GISS 1999 (land-only) and GISS 2007 (land+SST) data sets. The difference oscillation of the two fitted cosines alone, tracks very well through the oscillation representing the difference of the two full anomaly data sets. The appearance of this difference correspondence indicates these independently fitted cosines capture an oscillation in the original full data sets.
If anything, the oscillation expressing the difference between the full cosine+linear fits for 1999 (land-only) and 2007 (land+SST) tracks even better through the difference oscillation of the anomaly data sets themselves.
Both fit differences, like the original anomaly difference oscillation and its cosine fit, have periods of 60 years.
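That the difference of the two fitted cosines should itself oscillate with the same ~60-year period is expected on trigonometric grounds: the difference of two cosines with a common period is a sinusoid of that same period. A quick numerical check, using arbitrary stand-in amplitudes and phases, is below.

```python
# Numerical check: the difference of two cosines sharing a 60-year period is
# itself a 60-year sinusoid. Amplitudes and phases are arbitrary stand-ins.
import numpy as np

t = np.linspace(0, 240, 4800)                         # years, fine sampling
period = 60.0
c1 = 0.10 * np.cos(2 * np.pi * (t - 10.0) / period)   # stand-in for the land+SST fitted cosine
c2 = 0.08 * np.cos(2 * np.pi * (t - 25.0) / period)   # stand-in for the land-only fitted cosine
diff = c1 - c2

# Zero crossings of the difference are spaced ~half a period (~30 yr) apart:
crossings = t[np.where(np.diff(np.sign(diff)) != 0)[0]]
print(np.round(np.diff(crossings), 1))
```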
Bart says:
June 10, 2011 at 12:09 am
Assume I’m lying if you like.
If there were a statistically significant 22-year signal in the 2000 year temperature reconstruction that would be an important result. You categorically claim [several times] that there is, based on your superior understanding of distributed parameter systems in the universe. All I ask is that you produce the evidence for that.
Pat Frank says:
June 10, 2011 at 12:30 am
Both fit differences, like the original anomaly difference oscillation and its cosine fit, have periods of 60 years.
Curve fitting without understanding of the physics is and has always been numerology. Just as Balmer’s formula until it was understood, or the Bode-Titius ‘law’ [ http://en.wikipedia.org/wiki/Titius%E2%80%93Bode_law ].
Leif, curve fitting data following a valid physical inference, in light of known physical phenomena, and in the context of incomplete physical theory, is not numerology and has never been.
Pat Frank says:
June 10, 2011 at 9:43 am
curve fitting data following a valid physical inference, in light of known physical phenomena, and in the context of incomplete physical theory, is not numerology and has never been.
Of course it is numerology. Not to say that numerology cannot be useful, like the example of Balmer’s formula shows. Why are you so upset about numerology? There was a time when the purported relationship between sunspots and geomagnetic disturbances was numerology. A century later we discovered the physical process that takes the relationship out of numerology and into physics. On the other hand, the Bode-Titius law is still numerology.
Leif – I think maybe you are laboring under a misapprehension that I claimed it was present and at equal strength in the 2nd half of the data. But, as I stated here: “The apparent energy (given the quality of the data) appears to vary, but this is in no way incompatible with the behavior which might be expected of random modal excitation.”
I then gave you a simple simulation model to give insight into how these processes vary in time. Try setting zeta = 0.01, and observe how the energy of oscillation surges and fades. This is fully compatible with random modal excitation. It only depends on how fast the energy of oscillation dissipates in the absence of reinforcing excitation.
Some modes, which have ready access to sympathetic energy sinks, drain quickly, and some do not. Those which do not, we tend to see as steadier quasi-periodic oscillations, and these have longer term predictive power. Oscillations at modal frequencies are not necessarily persistent, but they are recurring, due to random forcing input which will occasionally reinforce, and occasionally either fail to reinforce or actually weaken, the oscillations.
And, of course, there is the question of the quality of the data itself, which may have picked up a particular oscillation at some times, and missed it at others, or may have introduced apparent oscillations all its own. I judge that the ~21-23 year and ~60 year oscillations are real because similar periods of oscillation are picked up in the direct 20th century measurements as well.
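The “simple simulation model” referred to above is not reproduced in the thread. A minimal stand-in, assuming the kind of lightly damped second-order mode driven by white noise described there, with zeta = 0.01, shows how the energy of oscillation surges and fades under random forcing; the 60-year natural period and the step size are arbitrary choices.

```python
# Generic lightly damped mode driven by white noise (zeta = 0.01), stepped with a
# simple semi-implicit Euler integrator. An illustration, not the model from the thread.
import numpy as np

rng = np.random.default_rng(42)
zeta = 0.01                  # damping ratio
period = 60.0                # natural period of the mode, in years
wn = 2 * np.pi / period      # natural frequency (rad/yr)
dt = 0.1                     # time step, years
n = int(2000 / dt)           # simulate 2000 "years"

x, v = 0.0, 0.0
trace = np.empty(n)
for i in range(n):
    forcing = rng.normal(0.0, 1.0)
    a = -2.0 * zeta * wn * v - wn**2 * x + forcing   # x'' + 2*zeta*wn*x' + wn^2*x = f(t)
    v += a * dt
    x += v * dt
    trace[i] = x

# RMS amplitude over a sliding two-period window: the modal energy waxes and wanes.
window = int(2 * period / dt)
rms = np.sqrt(np.convolve(trace**2, np.ones(window) / window, mode="valid"))
print(f"oscillation RMS ranges from {rms.min():.1f} to {rms.max():.1f} (arbitrary units)")
```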
Bart says:
June 10, 2011 at 12:11 pm
And, of course, there is the question of the quality of the data itself, which may have picked up a particular oscillation at some times, and missed it at others, or may have introduced apparent oscillations all its own.
You claimed a significant signal was present in both halves as visible in your PSDs. You have evaded showing the evidence for that. That settles the issue for me.
I judge that the ~21-23 year and ~60 year oscillations are real because similar periods of oscillation are picked up in the direct 20th century measurements as well.
That is an invalid analysis that tries to find power at 5, 10, 20, 40, etc years. And there is power at any period, so clearly Spector would pick up such periods. Show your PSD for the modern data. BTW, we expect there to be a 0.1 degree solar cycle effect in any case.
Leif PCA outside of physical theory, e.g., is numerology. Deriving a physically valid inference and fitting observational data using physical reasoning, in the context of known physical phenomena, is not numerology.
Why do you insist on disparaging semi-empirical work? Data always lead theory. Analyzing such data using physical reasoning is not numerology. It’s standard practice in science when the theory is incomplete.
For any who would like to get a handle on what I have been talking about, here is a good video to see the modal analysis of a tuning fork. You can find other discussions if you google “modal analysis”. Most hits tend to be in regard to structures. But, you can google “fluid modal analysis” and find some specific to fluids. And, you can look up rheology and modal analysis. That the vibration modes of the Earth’s physical composition should interact with and, to a significant extent, determine its climate should be self-evident (e.g., I would expect the vibration modes of the oceans to appear prominently in climate variables).
Leif Svalgaard says:
June 10, 2011 at 12:29 pm
“You claimed a significant signal was present in both halves as visible in your PSDs.”
I claimed an observable signal was present in both halves. You have latched onto this triviality like a pit bull, and have blinded yourself to all else. I’m sure others viewing our discussion have formed their own opinions of the validity of my arguments for better or for worse, and there is nothing more I can say now which will change their opinions, so I give up. You are a lost cause.
“That is an invalid analysis that tries to find power at 5, 10, 20, 40, etc years.”
Sigh… do you ever, you know, read what people write before forming your opinions?
“…and then I allowed it to optimize the periods as well.“
Pat Frank says:
June 10, 2011 at 12:31 pm
Why do you insist on disparaging semi-empirical work?
Who says that numerology is disparagement? Numerology is OK as long as you KNOW it is numerology. The problem comes when you begin to believe that your numerology is understanding.
Bart says:
June 10, 2011 at 12:41 pm
That the vibration modes of the Earth’s physical composition should interact with and, to a significant extent, determine its climate should be self-evident
You have misunderstood the whole issue which was that the data [Loehle] from the outset has a very coarse sampling [and is not a running average of actual yearly data]. And still no demonstration of the 22-yr cycle in the PSDs for the two halves. Since 2000 years is almost a hundred 22-yr cycles, one could safely divide the span into three periods. I guess that you are no longer claiming that PSDs that you have already made show a significant 22-yr cycle. If so, that is fine with me, because I don’t see it either.
Leif Svalgaard says:
June 10, 2011 at 1:21 pm
“You have misunderstood the whole issue which was that the data [Loehle] from the outset has a very coarse sampling [and is not a running average of actual yearly data].”
Yet, that is precisely what you yourself claimed, in so many words:
What you were seeing was the sinc function response of a 29 or 30 year averaging filter, modulated by the content of the signal.
“Since 2000 years is almost a hundred 22-yr cycles, one could safely divide the span into three periods.”
I never said it was a steady state oscillation. I have gone to great lengths to explain why it would not generally be expected to be. All of this has apparently gone sailing right over your head.
“I guess that you are no longer claiming that PSDs that you have already made show a significant 22-yr cycle.”
I never did. I said that it was “there”, i.e., that it was observable. And, as it is attenuated by a factor of 1/5 due to the averaging taking place, that would indicate that it is, in fact, much more significant in reality.
“If so, that is fine with me, because I don’t see it either.”
There’s a lot you don’t see, because you are unfamiliar with spectral estimation methods, and your analysis is very crude.
Leif Svalgaard says:
June 10, 2011 at 1:21 pm
“Who says that numerology is disparagement?”
From dictionary.com: numerology — n
the study of numbers, such as the figures in a birth date, and of their supposed influence on human affairs
At the very least, you are guilty of gross hyperbole.
Bart says:
June 10, 2011 at 1:40 pm
What you were seeing was the sinc function response of a 29 or 30 year averaging filter, modulated by the content of the signal.
That is not the case. The data were not sampled every year and then averaged. The raw resolution is only one data point per 30 years [or in some cases 100 years], so no filtering occurred.
I never did. I said that it was “there”, i.e., that it was observable.
There is power at any and all frequencies, the thing is if it is significant.
And, as it is attenuated by a factor of 1/5 due to the averaging taking place
There is no averaging of higher sampling rate data. I might have expressed that clumsily, but what I have said over and over and over again is that the scarce, widely scattered data from many datasets were lumped into 30-year intervals.
But you have still not shown the PSDs, so you have no real support for your claims.
Bart says:
June 10, 2011 at 1:46 pm
From dictionary.com: numerology — n
the study of numbers, such as the figures in a birth date, and of their supposed influence on human affairs
That particular example very many people are firm believers in. Some even think that the positions of the planets influence the climate and the sun. A better example of [useful] numerology is Balmer’s formula.
Balmer’s formula wasn’t numerology. It was phenomenological: made to represent a physical observable. Numerology has no particular connection to the physical. Phenomenological equations, by contrast, are used in physics all the time, either when theory is inadequate or when it is too complex to solve exactly.
Phenomenological approaches to data are classically the bridge that allows observables to be systematically examined when theory fails. When the phenomenological context is physical, the approach is entirely scientific.
Your use of “numerology” has been distinctly disparaging, Leif.
Leif Svalgaard says:
June 10, 2011 at 3:30 pm
“That is not the case. The data were not sampled every year and then averaged. The raw resolution is only one data point per 30 years [or in some cases 100 years], so no filtering occurred.”
Not only are you contradicting your earlier post, you are contradicting the source:
“
”
You even discerned the signature of the pattern of zeros of the transfer function yourself. Maybe, I should just sit back and let you argue it out with yourself?
The funny thing is, the cycle you should have been trying to cast aspersions upon is the ~60 year one, since that is what Pat used in his fit, and it apparently appears in both the 20th century direct measurement data, and in the proxy reconstruction of the last 2000 years. Instead, you threw away your credibility by trying to play gotcha’ games over something you did not understand.
Numerology, my fanny.
Pat Frank says:
June 10, 2011 at 4:16 pm
Numerology has no particular connection to the physical.
Any curve fitting to physical parameters is numerology when there is no theory or plausible expectation that the fit should occur.
Your use of “numerology” has been distinctly disparaging
I said that numerology was OK if you KNOW it is numerology. If you deny it is numerology, then it becomes dubious.
Bart says:
June 10, 2011 at 7:11 pm
Not only are you contradicting your earlier post, you are contradicting the source:
This is what the source says:
“The present note treats the 18 series on a more uniform basis than in the original study. Data in each series have different degrees of temporal coverage. For example, the pollen-based reconstruction of Viau et al. (2006) has data at 100-year intervals, which is now assumed to represent 100 year intervals (rather than points, as in Loehle, 2007). Other sites had data at irregular intervals. This data is now interpolated to put all data on the same annual basis.”
No contradiction.
Bart says:
June 10, 2011 at 7:19 pm
The funny thing is, the cycle you should have been trying to cast aspersions upon is the ~60 year one, since that is what Pat used in his fit, and it apparently appears in both the 20th century direct measurement data, and in the proxy reconstruction of the last 2000 years.
I showed that there is no such 60 year cycle in the 2000-yr series. Now you claim there is, so you have to provide PSDs to back up that claim as well, in addition to the 22-yr cycle you also claim. We are still waiting for you to comply. If you cannot or will not, then you have no credible claims. It looks more and more like this is the case, as you are evading bringing forward any evidence.
Leif Svalgaard says:
June 10, 2011 at 10:01 pm
“No contradiction.”
Say what? What are the sample rates of all the proxies? How syncopated are the samples for the sparse ones? Ice core measurements can have yearly samples. The resolution decreases with depth, but that is merely an effect of spatial filtering, which effectively is temporal filtering, since the layers accumulate in time. So, there is additional filtering beyond the 30 year sliding average, and the 23 year process is even more than 5X more significant in reality.
“I showed that there is no such 60 year cycle in the 2000-yr series.”
You showed nothing of the kind. Your analysis is crap. The 60 year spike has 10X more energy than the attenuated 23 year spike. If the proprietors of this web site wish to and can post it on this thread somehow, they can shoot me an e-mail and I will send them my PSD plot.
Bart says:
June 11, 2011 at 12:52 am
Say what? What are the sample rates of all the proxies? How syncopated are the samples for the sparse ones? Ice core measurements can have yearly samples.
Loehle does not give the original data. But his 2000-yr series is the average of 18 data series, each re-sampled to 1-yr resolution and those he does give. I have here plotted all of them. The heavy black curve is his average temperature reconstruction: http://www.leif.org/research/Loehle-Mean.png
Here are the individual data series [three to each plot]. It should be clear that the vast majority of the data is too coarse to preserve any cycles of less than 30 years: http://www.leif.org/research/Loehle-1-18.png
The 60 year spike has 10X more energy than the attenuated 23 year spike. If the proprietors of this web site wish to and can post it on this thread somehow, they can shoot me an e-mail and I will send them my PSD plot.
None of the ‘spikes’ are significant with the exception of the big 2000-yr wave. Cut the data in two halves, make a PSD for each half and you’ll see. The input data is simply not good enough to show a 22-year cycle even if present.
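The two-halves PSD test suggested here could be run along the following lines. The series below is a synthetic stand-in with an injected 60-year component, not the Loehle reconstruction, and scipy’s Welch estimator is just one reasonable choice of spectral estimator.

```python
# Sketch of the two-halves PSD comparison: estimate a power spectrum for each half
# of an annually sampled series. The series here is a synthetic placeholder.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(7)
t = np.arange(2000)                                                          # 2000 "years", 1-yr sampling
series = 0.05 * np.cos(2 * np.pi * t / 60.0) + rng.normal(0.0, 0.1, t.size)

for label, half in (("first half", series[:1000]), ("second half", series[1000:])):
    freqs, psd = welch(half, fs=1.0, nperseg=256)
    peak = freqs[np.argmax(psd[1:]) + 1]                                     # skip the zero-frequency bin
    print(f"{label}: strongest non-zero peak near a period of {1.0 / peak:.0f} years")
```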
“It should be clear that the vast majority of the data is too coarse to preserve any cycles of less than 30 years…”
Anything that is in there will appear in the analysis. So, it does not matter what the “vast majority” does.
“None of the ‘spikes’ are significant…”
You are woefully, painfully wrong. In the raw data, the 88 year process has an RMS of 0.044 degC. The 62 year process 0.041 degC. The 23 year process 0.013 degC. Adjusting them for the attenuation of the 29 year filter, they should be about 0.05, 0.06, and 0.07 degC RMS, respectively.