Readers may recall Pat Frank’s excellent essay on uncertainty in the temperature record. He emailed me about this new essay he posted on the Air Vent, with the suggestion that I cover it at WUWT; I regret it got lost in my firehose of daily email. Here it is now. – Anthony
Future Perfect
By Pat Frank
In my recent “New Science of Climate Change” post here on Jeff’s tAV, the cosine fits to differences among the various GISS surface air temperature anomaly data sets were intriguing. So, I decided to see what, if anything, cosines might tell us about the surface air temperature anomaly trends themselves. It turned out they have a lot to reveal.
As a qualifier, regular tAV readers know that I’ve published on the amazing neglect of the systematic instrumental error present in the surface air temperature record. It seems certain that surface air temperatures are so contaminated with systematic error – at least (+/-)0.5 C – that the global air temperature anomaly trends have no climatological meaning. I’ve done further work on this issue and, although the analysis is incomplete, so far it looks like the systematic instrumental error may be worse than we thought. But that’s for another time.
Systematic error is funny business. In surface air temperatures it’s not necessarily a constant offset but is a variable error. That means it not only biases the mean of a data set, but it is likely to have an asymmetric distribution in the data. Systematic error of that sort in a temperature series may enhance a time-wise trend or diminish it, or switch back-and-forth in some unpredictable way between these two effects. Since the systematic error arises from the effects of weather on the temperature sensors, the systematic error will vary continuously with the weather. The mean error bias will be different for every data set, and so will the distribution envelope of the systematic error.
For right now, though, I’d like to put all that aside and proceed with an analysis that accepts the air temperature context as found within the IPCC ballpark. That is, for the purposes of this analysis I’m assuming that the global average surface air temperature anomaly trends are real and meaningful.
I have the GISS and the CRU annual surface air temperature anomaly data sets out to 2010. In order to make the analyses comparable, I used the GISS start time of 1880. Figure 1 shows what happened when I fit these data with a combined cosine function plus a linear trend. Both data sets were well-fit.
The unfit residuals are shown below the main plots. A linear fit to the residuals tracked exactly along the zero line, to 1 part in ~10^5. This shows that both sets of anomaly data are very well represented by a cosine-like oscillation plus a rising linear trend. The linear parts of the fitted trends were: GISS, 0.057 C/decade and CRU, 0.058 C/decade.
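For readers who want to try this at home, the fit described above can be sketched in a few lines of Python. This is a minimal illustration on synthetic data built to resemble the fitted behavior (a ~60-year, (+/-)0.1 C cosine riding on a 0.058 C/decade trend), not the actual GISS or CRU series:

```python
import numpy as np
from scipy.optimize import curve_fit

def cos_plus_line(t, A, P, t0, m, b):
    """Cosine oscillation plus a linear trend in t (years)."""
    return A * np.cos(2 * np.pi * (t - t0) / P) + m * t + b

rng = np.random.default_rng(0)
years = np.arange(1880, 2011, dtype=float)
truth = cos_plus_line(years, 0.1, 60.0, 1940.0, 0.0058, -11.2)
anom = truth + rng.normal(0.0, 0.05, years.size)   # year-to-year scatter

# Starting guesses matter: the period is a nonlinear parameter.
p0 = [0.1, 60.0, 1940.0, 0.005, -10.0]
popt, _ = curve_fit(cos_plus_line, years, anom, p0=p0)
A, P, t0, m, b = popt
print(f"period ~{P:.0f} yr, amplitude ~{abs(A):.2f} C, trend ~{m * 10:.3f} C/decade")

# Subtracting the fitted cosine "linearizes" the series, as in Figure 2:
linearized = anom - A * np.cos(2 * np.pi * (years - t0) / P)
slope = np.polyfit(years, linearized, 1)[0]
print(f"slope of linearized series ~{slope * 10:.3f} C/decade")
```

With a reasonable starting guess for the period, the fit recovers the planted oscillation and trend, and a straight-line fit to the cosine-removed series returns essentially the same slope as the combined fit.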
Figure 1. Upper: Trends for the annual surface air temperature anomalies, showing the OLS fits with a combined cosine function plus a linear trend. Lower: The (data minus fit) residual. The colored lines along the zero axis are linear fits to the respective residual. These show the unfit residuals have no net trend. Part a, GISS data; part b, CRU data.

Removing the oscillations from the global anomaly trends should leave only the linear parts of the trends. What does that look like? Figure 2 shows this: the linear trends remaining in the GISS and CRU anomaly data sets after the cosine is subtracted away. The pure subtracted cosines are displayed below each plot.
Each of the plots showing the linearized trends also includes two straight lines. One of them is the line from the cosine plus linear fits of Figure 1. The other straight line is a linear least squares fit to the linearized trends. The linear fits had slopes of: GISS, 0.058 C/decade and CRU, 0.058 C/decade, which may as well be identical to the line slopes from the fits in Figure 1.
Figure 1 and Figure 2 show that to a high degree of certainty, and apart from year-to-year temperature variability, the entire trend in global air temperatures since 1880 can be explained by a linear trend plus an oscillation.
Figure 3 shows that the GISS cosine and the CRU cosine are very similar – probably identical given the quality of the data. They show a period of about 60 years, and an intensity of about (+/-)0.1 C. These oscillations are clearly responsible for the visually arresting slope changes in the anomaly trends after 1915 and after 1975.
Figure 2. Upper: The linear part of the annual surface average air temperature anomaly trends, obtained by subtracting the fitted cosines from the entire trends. The two straight lines in each plot are: OLS fits to the linear trends and, the linear parts of the fits shown in Figure 1. The two lines overlay. Lower: The subtracted cosine functions.

The surface air temperature data sets consist of land surface temperatures plus the SSTs. It seems reasonable that the oscillation represented by the cosine stems from a net heating-cooling cycle of the world ocean.
The major oceanic cycles include the PDO, the AMO, and the Indian Ocean oscillation. Joe D’Aleo has a nice summary of these here (pdf download).
The combined PDO+AMO is a rough oscillation and has a period of about 55 years, with a 20th century maximum near 1937 and a minimum near 1972 (D’Aleo Figure 11). The combined ocean cycle appears to be close to another maximum near 2002 (although the PDO has turned south). The period and phase of the PDO+AMO correspond very well with the fitted GISS and CRU cosines, and so it appears we’ve found a net world ocean thermal signature in the air temperature anomaly data sets.
In the “New Science” post we saw a weak oscillation appear in the GISS surface anomaly difference data after 1999, when the SSTs were added in. Prior and up to 1999, the GISS surface anomaly data included only the land surface temperatures.
So, I checked the GISS 1999 land surface anomaly data set to see whether it, too, could be represented by a cosine-like oscillation plus a linear trend. And so it could. The oscillation had a period of 63 years and an intensity of (+/-)0.1 C. The linear trend was 0.047 C/decade; pretty much the same oscillation, but a warming trend slower by 0.01 C/decade. So, it appears that the net world ocean thermal oscillation is teleconnected into the global land surface air temperatures.
But that’s not the analysis that interested me. Figure 2 appears to show that the entire 130 years between 1880 and 2010 has had a steady warming trend of about 0.058 C/decade. This seems to explain the almost rock-steady 20th century rise in sea level, doesn’t it.
The argument has always been that the climate of the first 40-50 years of the 20th century was unaffected by human-produced GHGs. After 1960 or so, certainly after 1975, the GHG effect kicked in, and the thermal trend of the global air temperatures began to show a human influence. So the story goes.
Isn’t that claim refuted if the late 20th century warmed at the same rate as the early 20th century? That seems to be the message of Figure 2.
But the analysis can be carried further. The early and late air temperature anomaly trends can be assessed separately, and then compared. That’s what was done for Figure 4, again using the GISS and CRU data sets. In each data set, I fit the anomalies separately over 1880-1940, and over 1960-2010. In the “New Science of Climate Change” post, I showed that these linear fits can be badly biased by the choice of starting points. The anomaly profile at 1960 is similar to the profile at 1880, and so these two starting points seem to impart no obvious bias. Visually, the slope of the anomaly temperatures after 1960 seems pretty steady, especially in the GISS data set.
Figure 4 shows the results of these separate fits, yielding the linear warming trend for the early and late parts of the last 130 years.
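The mechanics of this two-window comparison can be sketched as below. The data here are again a synthetic stand-in (the fitted line plus cosine), so the numbers only illustrate how an oscillation left in the data can shift a window’s OLS slope; they are not the Table 1 values:

```python
import numpy as np

years = np.arange(1880, 2011, dtype=float)
# Synthetic stand-in: the fitted line (0.058 C/decade) plus the ~60-yr cosine.
anom = 0.0058 * (years - 1880) + 0.1 * np.cos(2 * np.pi * (years - 1940) / 60.0)

def decadal_slope(y0, y1):
    """OLS slope over the window [y0, y1], in C per decade."""
    mask = (years >= y0) & (years <= y1)
    return np.polyfit(years[mask], anom[mask], 1)[0] * 10

early = decadal_slope(1880, 1940)
late = decadal_slope(1960, 2010)
print(f"early {early:.3f} C/decade, late {late:.3f} C/decade, diff {late - early:.3f}")
```

In this toy series the 1880-1940 window spans one full cosine cycle, so its slope recovers the underlying 0.058 C/decade almost exactly, while the 1960-2010 window catches a rising phase of the oscillation and its slope is inflated. That is exactly the starting-point bias discussed above, and why the Figure 4 fits use the linearized (cosine-removed) trends.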
Figure 4: The Figure 2 linearized trends from the GISS and CRU surface air temperature anomalies showing separate OLS linear fits to the 1880-1940 and 1960-2010 sections.

The fit results of the early and later temperature anomaly trends are in Table 1.
Table 1: Decadal Warming Rates for the Early and Late Periods.
| Data Set | C/d (1880-1940) | C/d (1960-2010) | (late minus early) |
|----------|-----------------|-----------------|--------------------|
| GISS     | 0.056           | 0.087           | 0.031              |
| CRU      | 0.044           | 0.073           | 0.029              |
“C/d” is the slope of the fitted lines in Celsius per decade.
So there we have it. Both data sets show the later period warmed more quickly than the earlier period. Although the GISS and CRU rates differ by about 12%, the changes in rate (data column 3) are nearly identical.
If we accept the IPCC/AGW paradigm and grant the climatological purity of the early 20th century, then the natural recovery rate from the LIA averages about 0.05 C/decade. To proceed, we have to assume that the natural rate of 0.05 C/decade was fated to remain unchanged for the entire 130 years, through to 2010.
Assuming that, then the increased slope of 0.03 C/decade after 1960 is due to the malign influences from the unnatural and impure human-produced GHGs.
Granting all that, we now have a handle on the most climatologically elusive quantity of all: the climate sensitivity to GHGs.
I still have all the atmospheric forcings for CO2, methane, and nitrous oxide that I calculated for my Skeptic paper (http://www.skeptic.com/reading_room/a-climate-of-belief/). Together, these constitute the great bulk of new GHG forcing since 1880. Total chlorofluorocarbons add another 10% or so, but that’s not a large impact so they were ignored.
All we need do now is plot the progressive trend in recent GHG forcing against the balefully apparent human-caused 0.03 C/decade trend, all between the years 1960-2010, and the slope gives us the climate sensitivity in C/(W-m^-2). That plot is in Figure 5.
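As a rough sketch of that slope calculation: the code below uses CO2 alone with the standard logarithmic forcing approximation F = 5.35·ln(C/C0) and an assumed exponential CO2 rise, rather than the full CO2+CH4+N2O forcing series used for Figure 5, so the slope it returns is only illustrative and differs from the 0.090 figure:

```python
import numpy as np

# Hypothetical stand-in for the Figure 5 regression: CO2-only forcing from the
# standard approximation F = 5.35 * ln(C/C0), with an assumed exponential CO2
# rise from ~317 ppmv in 1960 to ~397 ppmv in 2010.
years = np.arange(1960, 2011, dtype=float)
co2 = 317.0 * np.exp(0.0045 * (years - 1960))      # ppmv
forcing = 5.35 * np.log(co2 / co2[0])              # W/m^2 relative to 1960
excess_T = 0.003 * (years - 1960)                  # the 0.03 C/decade excess warming

sensitivity = np.polyfit(forcing, excess_T, 1)[0]  # C per W/m^2
print(f"slope ~{sensitivity:.3f} C per W/m^2")
```

One side note: with an exactly exponential CO2 rise, the logarithm makes the forcing linear in time, so a linear temperature trend plots as a straight line against it. Curvature like that seen in Figure 5 appears when the total forcing grows faster (or slower) than that.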
Figure 5. Blue line: the 1960-2010 excess warming, 0.03 C/decade, plotted against the net GHG forcing trend due to increasing CO2, CH4, and N2O. Red line: the OLS linear fit to the forcing-temperature curve (r^2=0.991). Inset: the same lines extended through to the year 2100.

There’s a surprise: the trend line shows a curved dependence. More on that later. The red line in Figure 5 is a linear fit to the blue line. It yielded a slope of 0.090 C/W-m^-2.
So there it is: every Watt per meter squared of additional GHG forcing, during the last 50 years, has increased the global average surface air temperature by 0.09 C.
Spread the word: the Earth climate sensitivity is 0.090 C/W-m^-2.
The IPCC says that the increased forcing due to doubled CO2, the bug-bear of climate alarm, is about 3.8 W/m^2. The consequent increase in global average air temperature is mid-ranged at 3 Celsius. So, the IPCC officially says that Earth’s climate sensitivity is 0.79 C/W-m^-2. That’s 8.8x larger than what Earth says it is.
Our empirical sensitivity says doubled CO2 alone will cause an average air temperature rise of 0.34 C above any natural increase. This value is 4.4x to 13x smaller than the range projected by the IPCC.
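The arithmetic behind these comparisons is easy to check:

```python
# Checking the arithmetic of the sensitivity comparison.
ipcc_forcing_2x = 3.8                      # W/m^2 for doubled CO2
ipcc_mid_warming = 3.0                     # C, IPCC mid-range
ipcc_sensitivity = ipcc_mid_warming / ipcc_forcing_2x
print(round(ipcc_sensitivity, 2))          # ~0.79 C per W/m^2

empirical = 0.090                          # C per W/m^2, the Figure 5 slope
print(round(ipcc_sensitivity / empirical, 1))   # ~8.8x larger

dT_2x = empirical * ipcc_forcing_2x        # warming for doubled CO2 alone
print(round(dT_2x, 2))                     # ~0.34 C
# Against the IPCC range of 1.5-4.5 C:
print(round(1.5 / dT_2x, 1), "to", round(4.5 / dT_2x, 1), "x smaller")
```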
The total increased forcing due to doubled CO2, plus projected increases in atmospheric methane and nitrous oxide, is 5 W/m^2. The linear model says this will lead to a projected average air temperature rise of 0.45 C. This is about the rise in temperature we’ve experienced since 1980. Is that scary, or what?
But back to the negative curvature of the sensitivity plot. The change in air temperature is supposed to be linear with forcing. But here we see that for 50 years average air temperature has been negatively curved with forcing. Something is happening. In proper AGW climatology fashion, I could suppose that the data are wrong because models are always right.
But in my own scientific practice (and the practice of everyone else I know), data are the measure of theory and not vice versa. Kevin, Michael, and Gavin may criticize me for that because climatology is different and unique and Ravetzian, but I’ll go with the primary standard of science anyway.
So, what does negative curvature mean? If it’s real, that is. It means that the sensitivity of climate to GHG forcing has been decreasing all the while the GHG forcing itself has been increasing.
If I didn’t know better, I’d say the data are telling us that something in the climate system is adjusting to the GHG forcing. It’s imposing a progressively negative feedback.
It couldn’t be the negative feedback of Roy Spencer’s clouds, could it?
The climate, in other words, is showing stability in the face of a perturbation. As the perturbation is increasing, the negative compensation by the climate is increasing as well.
Let’s suppose the last 50 years are an indication of how the climate system will respond to the next 100 years of a continued increase in GHG forcing.
The inset of Figure 5 shows how the climate might respond to a steadily increased GHG forcing right up to the year 2100. That’s up through a quadrupling of atmospheric CO2.
The red line indicates the projected increase in temperature if the 0.03 C/decade linear fit model was true. Alternatively, the blue line shows how global average air temperature might respond, if the empirical negative feedback response is true.
If the climate continues to respond as it has already done, by 2100 the increase in temperature will be fully 50% less than it would be if the linear response model was true. And the linear response model produces a much smaller temperature increase than the IPCC climate model, umm, model.
Semi-empirical linear model: 0.84 C warmer by 2100.
Fully empirical negative feedback model: 0.42 C warmer by 2100.
And that’s with 10 W/m^2 of additional GHG forcing and an atmospheric CO2 level of 1274 ppmv. By way of comparison, the IPCC A2 model assumed a year 2100 atmosphere with 1250 ppmv of CO2 and a global average air temperature increase of 3.6 C.
So let’s add that: Official IPCC A2 model: 3.6 C warmer by 2100.
The semi-empirical linear model alone, empirically grounded in 50 years of actual data, says the temperature will have increased only 0.23 of the IPCC’s A2 model prediction of 3.6 C.
And if we go with the empirical negative feedback inference provided by Earth, the year 2100 temperature increase will be 0.12 of the IPCC projection.
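The quoted fractions follow directly from the stated 2100 numbers:

```python
# The quoted fractions of the IPCC A2 projection, from the stated 2100 numbers.
linear_2100 = 0.84      # C, semi-empirical linear model
feedback_2100 = 0.42    # C, empirical negative-feedback model
ipcc_a2_2100 = 3.6      # C, IPCC A2

print(round(linear_2100 / ipcc_a2_2100, 2))    # fraction of A2: ~0.23
print(round(feedback_2100 / ipcc_a2_2100, 2))  # fraction of A2: ~0.12
print(round(linear_2100 / feedback_2100, 1))   # feedback model is half the linear one
```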
So, there’s a nice lesson for the IPCC and the AGW modelers, about GCM projections: they are contradicted by the data of Earth itself. Interestingly enough, Earth contradicted the same crew, big time, at the hands of Demetris Koutsoyiannis, too.
So, is all of this physically real? Let’s put it this way: it’s all empirically grounded in real temperature numbers. That, at least, makes this analysis far more physically real than any paleo-temperature reconstruction that attaches a temperature label to tree ring metrics or to principal components.
Clearly, though, since unknown amounts of systematic error are attached to global temperatures, we don’t know if any of this is physically real.
But we can say this to anyone who assigns physical reality to the global average surface air temperature record, or who insists that the anomaly record is climatologically meaningful: The surface air temperatures themselves say that Earth’s climate has a very low sensitivity to GHG forcing.
The major assumption used for this analysis, that the climate of the early part of the 20th century was free of human influence, is common throughout the AGW literature. The second assumption, that the natural underlying warming trend continued through the second half of the last 130 years, is also reasonable given the typical views expressed about a constant natural variability. The rest of the analysis automatically follows.
In the context of the IPCC’s very own ballpark, Earth itself is telling us there’s nothing to worry about in doubled, or even quadrupled, atmospheric CO2.

Pat Frank says:
June 12, 2011 at 6:10 pm
I’d call the Rydberg formula an example of physical phenomenology. It was made in the absence of an over-riding explanatory theory, but derived to describe observations while hewing as much as possible to known physics.
It was made contrary to the explanatory theory of the day [Maxwell] and was not ‘hewing’ as much as possible to known physics. It was completely contrary to known physics. And was numerology in its day.
Bart says:
June 12, 2011 at 1:50 pm
‘capricious’ You seem to think it means something other than what it does.
In what meaning did you employ it?
Your infatuation with cyclomania shall stand for your own account, disconnected from reality. BTW, the ever faster expanding Universe will always have a heat sink.
There will always be peaks, the question is if they are significant in view of the data.
Pat Frank says:
June 12, 2011 at 6:10 pm
I’d call the Rydberg formula an example of physical phenomenology. It was made in the absence of an over-riding explanatory theory, but derived to describe observations while hewing as much as possible to known physics.
From http://www.chemteam.info/Electrons/Balmer-Formula.html :
At the time, Balmer was nearly 60 years old and taught mathematics and calligraphy at a high school for girls as well as giving classes at the University of Basle. […] Balmer was devoted to numerology and was interested in things like how many sheep were in a flock or the number of steps of a Pyramid. He had reconstructed the design of the Temple given in Chapters 40-43 of the Book of Ezekiel in the Bible. How then, you may ask, did he come to select the hydrogen spectrum as a problem to solve?
One day, as it happened, Balmer complained to a friend he had “run out of things to do.” The friend replied: “Well, you are interested in numbers, why don’t you see what you can make of this set of numbers that come from the spectrum of hydrogen?” […] Many of the experimentally measured values were very, very close to Balmer’s values, within 0.1 Å or less. There was at least one line, however, that was about 4 Å off. Balmer expressed doubt about the experimentally measured value, NOT his formula! ”
From http://www.owlnet.rice.edu/~dodds/Files231/atomspec.pdf :
“Although the formula was very successful, it was only numerology until the development of quantum mechanics led to a spectacularly successful explanation of all atomic spectra and many similar puzzles”
From http://www.theophoretos.hostmatrix.org/quantummechanics.htm
“A Swiss school mathematics teacher, Johann Jakob Balmer, tried to find a formula involving whole numbers which would predict exactly the frequencies of the four prominently visible spectra lines of hydrogen; if he could, then he would have discovered the eidos underlying the hydrogen spectra lines. And he did find the formula in 1885 [..] The formula was a feat of numerology, not of physics. ”
And so on.
Leif Svalgaard says:
June 12, 2011 at 8:19 pm
“In what meaning did you employ it?”
Coupled with “arbitrary”, as in the legal phrase.
“There will always be peaks, the question is if they are significant in view of the data.”
Exactly. Such behavior is the rule rather than the exception. So, when you see two full cycles of an evidently periodic process, as we do in the 20th century global temperature record, it is fully reasonable to expect that this may be the expression of a major mode of the system which has been recently, or is still being, excited.
On the subject of “numerology”, everything we know about the natural world can be traced back to empirical measurements. Here is another example of successful empiricism: What do we call the transformation of Special Relativity? Why is it not called the “Einstein Transformation”?
Bart says:
June 13, 2011 at 8:53 am
Leif Svalgaard says:
Coupled with “arbitrary”, as in the legal phrase.
There is nothing arbitrary in dividing the data into two consecutive subsets.
“There will always be peaks, the question is if they are significant in view of the data.”
Exactly. Such behavior is the rule rather than the exception.
No, most peaks are not significant, and especially not with this particular data. There are statistical methods for estimating the significance of the peaks. Try to use them.
On the subject of “numerology”, everything we know about the natural world can be traced back to empirical measurements.
Numerology is using these empirical numbers without physical justification. This was clearly shown in the several links I gave about Balmer’s formula. Here is another one: the height of the Cheops pyramid is very close to a nano-Astronomical unit. Clearly, the Egyptians must have known the accurate distance to the Sun.
Leif, you’ve now defined “numerology” as “using these empirical numbers without physical justification,” which shows your use of “numerology” is merely a pejorative as you have applied it to my analysis.
From the very first, the cosine fit was physically justified by the difference anomaly oscillation traced to sea surface temperatures, and their multidecadal oscillations.
Your criticism is physically groundless, Leif, and your unflagging continuance in it has become indistinguishable from an insistent personal contrarianism. Sorry to say.
Pat Frank says:
June 13, 2011 at 11:30 am
From the very first, the cosine fit was physically justified by the difference anomaly oscillation traced to sea surface temperatures, and their multidecadal oscillations.
What is not justified is the assumption that whatever relationship you find is valid outside of the domain you used to find it. The assumption that it is, is the numerology, because you have no theory to suggest that there is a specific mechanism at work, with an estimate of the period to expect.
Leif Svalgaard says:
June 13, 2011 at 9:22 am
“There is nothing arbitrary in dividing the data into two consecutive subsets.”
There is. PSD estimates are biased by the finite length of the data window. When you shorten the data window for no particular reason, you degrade the estimate, especially at low frequencies. Even stationary, ergodic processes do not necessarily behave consistently within an arbitrarily small data window.
“No, most peaks are not significant…”
Yes, they are. Laughably, absurdly so. You just don’t have the tools to see it.
“…and especially not with this particular data.”
Which particular data? The proxy historical data, or the 20th century data? The peaks are most definitely significant in the former. The only valid argument against them is that they are of dubious provenance. However, having just completed a PSD analysis of the latter, I can tell you there are significant peaks at frequencies corresponding to periods of roughly 64, 22, 9.6, and 1.0 years.
That peaks of 62-64 years and 22-23 years appear in both the historical data and the 20th century measurements gives me good reason to believe they are due to the same modal excitation, and are recurring processes.
“…the height of the Cheops pyramid is very close to a nano-Astronomical unit.”
If lengths of a nanoAU appeared ubiquitously in every natural and man-made formation we ever saw, I’d not only say your analogy were valid, I’d say you were onto something.
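The earlier point about window length and low-frequency resolution can be illustrated with a quick periodogram experiment on synthetic data (a hypothetical sketch; the record lengths 2000 and 130 merely stand in for a long proxy series and a short instrumental one):

```python
import numpy as np
from scipy.signal import periodogram

# Synthetic series: a 60-yr cosine buried in unit-variance white noise.
rng = np.random.default_rng(1)
t = np.arange(2000)
x = np.cos(2 * np.pi * t / 60.0) + rng.normal(0.0, 1.0, t.size)

results = {}
for n in (2000, 130):   # long "proxy-like" record vs short "instrumental-like" one
    f, p = periodogram(x[:n])
    peak_period = 1.0 / f[1:][np.argmax(p[1:])]   # skip the zero frequency
    results[n] = peak_period
    print(f"{n} points: frequency resolution ~{1 / n:.4f} cyc/yr, "
          f"strongest peak at ~{peak_period:.0f} yr")
```

With 2000 points the peak lands close to 60 yr. With 130 points the frequency grid near the peak is spaced roughly 20 yr apart in period, so the same oscillation can only be located to within tens of years: the short window degrades the low-frequency estimate even though the signal itself is unchanged.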
Leif Svalgaard says:
June 13, 2011 at 11:38 am
“What is not justified is the assumption that whatever relationship you find is valid outside of the domain you used to find it.”
To an extent, I agree with you on this. This is a random process. It is correlated such that you can say what is likely to happen in the future, but you can never say with absolute assurance that it shall happen. There may, additionally, be other processes which assert themselves in the years ahead.
However, at this time, the hypothesis of a strong and recurring 60 year quasi-periodic process having been responsible for the apparent upsurge in global temperatures in the latter half of the 20th century cannot be lightly dismissed. It is at least as well founded as the idea that, because CO2 is going up, and temperatures went up, CO2 is responsible for the rise. More so, because the CO2 hypothesis does not explain the recent pause.
Pat Frank says:
June 12, 2011 at 5:55 pm
Very good. I’m surprised he allowed your mauling of his arguments to appear.
Bart says:
June 13, 2011 at 12:05 pm
Even stationary, ergodic processes do not necessarily behave consistently within an arbitrarily small data window.
These were not ‘arbitrarily’ small. Each window has about a thousand points in it, and would show anything significant. Your PSD on the 20th century data shows that 150 points are enough.
Yes, they are. Laughably, absurdly so. You just don’t have the tools to see it.
You have not shown anything. Just make the same absurd claims over and over.
Which particular data? The proxy historical data, or the 20th century data?
Both
“…the height of the Cheops pyramid is very close to a nano-Astronomical unit.”
If lengths of a nanoAU appeared ubiquitously in every natural and man-made formation we ever saw, I’d not only say your analogy were valid, I’d say you were onto something.
Most people would not be so dumb as to suggest that two cycles of 60 years means that one is on to something.
Bart says:
June 13, 2011 at 12:13 pm
It is at least as well founded as the idea that, because CO2 is going up, and temperatures went up, CO2 is responsible for the rise. More so, because the CO2 hypothesis does not explain the recent pause.
This is a fallacious argument of the same kind as “a stone cannot fly, you cannot fly, ergo you are a stone”.
Leif Svalgaard says:
June 13, 2011 at 1:44 pm
“Your PSD on the 20th century data shows that 150 points are enough.”
Apples and oranges. The 20th century data is much better behaved, and does not require as much smoothing.
“You have not shown anything. Just make the same absurd claims over and over.”
Ditto for you. You have not shown anything meaningful. I have asked WUWT if I could send them a jpeg of the plot. They have not responded, so I guess that means they cannot. But, you would be amazed at what a professional in these matters can do.
“Most people would not be so dumb as to suggest that two cycles of 60 years means that one is on to something.”
Most people are dumb.
‘This is a fallacious argument of the same kind as “a stone cannot fly, you cannot fly, ergo your are a stone”.’
You lost me there, chief. Take a few breaths, and try again.
The 20th century data are much better behaved, and do not require as much smoothing.
“You lost me there, chief. Take a few breaths, and try again.”
Maybe you are saying that the argument “because CO2 is going up, and temperatures went up, CO2 is responsible for the rise” is fallacious. Indeed, it is. That was the point.
Bart says:
June 13, 2011 at 1:56 pm
The 20th century data is much better behaved, and does not require as much smoothing.
You are suggesting that a thousand points are not enough…
But, you would be amazed at what a professional in these matters can do.
I’m amazed how wrong a professional can be…
Most people are dumb.
most people do not display it as vividly.
You lost me there, chief. Take a few breaths, and try again.
For the slow ones: just because one argument is wrong does not mean that another one is right.
Bart says:
June 13, 2011 at 1:56 pm
I have asked WUWT if I could send them a jpeg of the plot.
http://photobucket.com/ is your friend. You can place the figures there.
Leif Svalgaard says:
June 13, 2011 at 2:11 pm
“You are suggesting that a thousand points are not enough…”
I am doing more than suggesting it. I am telling you. It is not enough to get nearly as good resolution of the low frequency region as twice the number of data points is. Why does that surprise you, when you have been arguing so strenuously that the data are lousy?
“For the slow ones: just because one argument is wrong, does not mean the another one is right.”
For the slowest of the slow, that was not the argument. The argument was: “However, at this time, the hypothesis of a strong and recurring 60 year quasi-periodic process having been responsible for the apparent upsurge in global temperatures in the latter half of the 20th century cannot be lightly dismissed.” This argument has been amply justified in the foregoing thread.
If all you’ve got left are personal attacks, when you have been so thoroughly discredited on this subject, we really have reached the end of the conversation. Here is a primer on PSD estimation which you may find illuminating.
Leif Svalgaard says:
June 13, 2011 at 2:20 pm
“You can place the figures there.”
Why didn’t you say so? Here you go.
Bart says:
June 13, 2011 at 2:36 pm
It is not enough to get nearly as good resolution of the low frequency region as twice the number of data points is. Why does that surprise you, when you have been arguing so strenuously that the data are lousy?
Do it anyway. After all it is almost 50 periods of the 23-yr peak.
The argument was:
your argument was involving CO2.
If all you’ve got left are personal attacks, when you have been so thoroughly discredited on this subject
I am always willing to learn, but all you can do is to use words as ‘crap’, ‘lousy’, ‘woefully wrong’, ‘discredited’, etc. So who is the attacker?
Bart, you use the example of CO2 not explaining the pause, and argue that since your thesis does explain it, yours is the stronger argument. But please consider that there are other proposed explanations for the pause in the trend. Do you dismiss those in preference to your thesis?
Leif, you wrote, “What is not justified is the assumption that whatever relationship you find is valid outside of the domain you ysed to find it. The assumption that it is, is the numerology…”
I made no such assumption. My entire analysis concerned the 130 year anomaly trend and the oscillation that is apparently within it. The assumption of extension is your misinference.
You’ve made the same mistake as Tamino, but have applied it differently. Tamino thinks that unless one can extrapolate a data oscillation into the future, the oscillation is not present in the data at all. This is prima facie nonsense, and Tamino also reveals that he has apparently never heard of beat frequencies.
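The beat-frequency point is easy to demonstrate: two sinusoids with nearby periods sum to what looks, over a short record, like a single ~60-year oscillation with a slowly varying amplitude. The periods 55 and 66 yr below are hypothetical, chosen only for illustration:

```python
import numpy as np

# Two sinusoids with nearby periods (55 and 66 yr, hypothetical values).
t = np.arange(0.0, 500.0)
x = np.cos(2 * np.pi * t / 55.0) + np.cos(2 * np.pi * t / 66.0)

# cos(a) + cos(b) = 2*cos((a+b)/2)*cos((a-b)/2): a fast "carrier" near 60 yr
# modulated by a much slower envelope.
carrier_period = 2.0 / (1.0 / 55.0 + 1.0 / 66.0)    # harmonic mean of the periods
envelope_period = 2.0 / (1.0 / 55.0 - 1.0 / 66.0)
print(f"carrier ~{carrier_period:.0f} yr, envelope ~{envelope_period:.0f} yr")
```

Over a record spanning only two apparent cycles, such a sum is indistinguishable from a single 60-yr oscillation, which is the sense in which an oscillation genuinely present in the data need not extrapolate cleanly into the future.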
Leif Svalgaard says:
June 13, 2011 at 3:10 pm
“Do it anyway.”
I have told you what I get. I have explained why this is a meaningless test. I do not see any point in pursuing it any further.
“your argument was involving CO2.”
No, that is the AGW argument. I was comparing and contrasting this argument with that one.
“So who is the attacker?”
We both have been guilty. I have been very frustrated with your willingness to go out on such a weak limb to justify your intransigence. Prior to this thread, I would not have expected it of you. But, for my part, I apologize for the heated language.
Pamela Gray says:
June 13, 2011 at 3:46 pm
“Do you dismiss those in preference to your thesis?”
I dismiss them because they appear ad hoc and epicyclic, an attempt to shoehorn a predetermined verdict into an existing set of rebellious data.
I know that systems governed by partial differential equations always respond to random inputs based on their eigenmodes. I know that a set of partial differential equations “describe how the velocity, pressure, temperature, and density of a moving fluid are related” (i.e., the oceans and the atmosphere). I know that I see, in particular, 20-ish and 60-ish year peaks in measured and proxy global temperature PSDs.
I think this should be a serious contender for explaining the temperature record of the 20th century.
Thanks for your support, Bart. It’s not a happy experience posting there.
Pat Frank says:
June 13, 2011 at 4:26 pm
You’ve made the same mistake as Tamino, but have applied it differently. Tamino thinks that unless one can extrapolate a data oscillation into the future, the oscillation is not present in the data at all. This is prima facie nonsense, and Tamino also reveals that he has apparently never heard of beat frequencies.
Of course the wave is in the data. On the other hand, if one cannot extend the wave in the future, then it has little interest. Extending it is the numerology. Now, if you tell me that your wave has no predictive power, then, of course, you are off the hook as far as numerology is concerned, but then your wave is not really of interest anymore.
Bart says:
June 13, 2011 at 4:52 pm
I have told you what I get. I have explained why this is a meaningless test. I do not see any point in pursuing it any further.
You are missing a teaching moment. And your explanation is no good.
No, that is the AGW argument. I was comparing and contrasting this argument with that one.
I don’t see why dragging AGW into it has any meaning. You were saying that because you believe AGW is wrong, you must be right. This is fallacious.
Leif Svalgaard says:
June 13, 2011 at 5:16 pm
“You are missing a teaching moment.”
You appear to be suffering from a delusion as to who is the teacher, and who is the pupil here. I seriously doubt you could reiterate what my explanation was with any coherence. If this is how you see things, we are most decidedly done.
Pat Frank says:
June 13, 2011 at 5:11 pm
Chin up, Buckaroo. You said it yourself: these guys don’t even know what beats are. To those who flung their feces, you never had a chance of reaching them anyway. But, others more thoughtful and less vocal were undoubtedly impressed. Small moves, Pat. Small moves.
Bart says:
June 13, 2011 at 6:12 pm
“You are missing a teaching moment.”
You appear to be suffering from a delusion as to who is the teacher, and who is the pupil here.
I think you have this backwards, but if you don’t want to, so be it.
Your 88-year cycle is larger than the 62 and 23-year cycles. Where is it in the modern data?
On your plot, the 88-year cycle has a power of 0.31 deg^2, which would mean a clear signal of 1 degree, which is nowhere to be seen in the original signal.