This DSP engineer is often tasked with extracting spurious signals from noisy data. He submits this interesting result of applying these techniques to the HadCRUT temperature anomaly data. Digital Signal Processing analysis suggests cooling ahead in the immediate future, with no significant probability of a positive anomaly exceeding 0.5°C between 2023 and 2113. See Figures 13 and 14. Code and data are made available for replication. – Anthony
Guest essay by Jeffery S. Patterson, DSP Design Architect, Agilent Technologies
Harmonic Decomposition of the Modern Temperature Anomaly Record
Abstract: The observed temperature anomaly since 1900 can be well modeled with a simple harmonic decomposition of the temperature record based on a fundamental period of 170.7 years. The goodness-of-fit of the resulting model significantly exceeds the expected fit to a stochastic AR sequence matching the general characteristic of the modern temperature record.
Data
I’ve used the monthly Hadcrut3 temperature anomaly data available from http://woodfortrees.org/data/hadcrut3vgl/every as plotted in Figure 1.
Figure 1 – Hadcrut3 Temperature Record 1850-Present
To remove seasonal variations while avoiding spectral smearing and aliasing effects, the data was box-car averaged over a 12-month period and decimated by 12 to obtain the average annual temperature plotted in Figure 2.
Figure 2 – Monthly data decimated to yearly average
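For replication, a minimal Mma sketch of this step is shown below; the assumptions that the woodfortrees file parses into {decimal year, anomaly} rows and that the fit further below uses the 113 annual values from 1900 onward are mine.
raw = Import["http://woodfortrees.org/data/hadcrut3vgl/every", "Table"];
monthly = Cases[raw, {_?NumberQ, v_?NumberQ} :> v];  (* keep the numeric rows only *)
yearlyAll = Mean /@ Partition[monthly, 12];          (* 12-month box-car average, decimated by 12 *)
yearly = Take[yearlyAll, -113];                      (* the 1900-2012 subset used in the fit below *)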
A Power Spectral Density (PSD) plot of the decimated data reveals harmonically related spectral peaks.
Figure 3 – PSD of annual temperature anomaly in dB
To eliminate the possibility that these are FFT (Fast Fourier Transform) artifacts while avoiding the spectral leakage associated with data windowing, we use a technique called record periodization. The data is regressed about a line connecting the record endpoints, and the last point of the resulting residual is dropped. This process eliminates the endpoint discontinuity while preserving the position of the spectral peaks (although it attenuates the amplitudes at higher frequencies and modifies the phase of the spectral components). The PSD of the residual is plotted in Figure 4.
Figure 4 – PSD of the periodized record
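A sketch of the periodization step (the helper name periodize is illustrative): subtract the straight line joining the first and last samples, then drop the last point of the residual. The periodogram of the result (e.g. via PeriodogramArray) gives the PSD of Figure 4.
periodize[x_List] := Module[{n = Length[x], line},
  line = x[[1]] + (x[[n]] - x[[1]]) (Range[n] - 1)/(n - 1);  (* line through the endpoints *)
  Most[x - line]];                                           (* residual, last point dropped *)
periodized = periodize[yearly];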
Since the spectral peaking is still present we conclude these are not record-length artifacts. The peaks are harmonically related, with odd harmonics dominating until the eighth. Since spectral resolution increases with frequency, we use the eighth harmonic of the periodized PSD to estimate the fundamental. The following Mathematica (Mma) code finds the 5th peak (8th harmonic) and estimates the fundamental.
wpkY1=Abs[ArgMax[{psdY,w>.25},w]]/8
0.036811
The units are radian frequency across the Nyquist band, mapped to ±π (the plots are zoomed to 0 < w < 1 to show the area of interest). To convert to years, invert wpkY1 and multiply by 2π, which yields a fundamental period of 170.7 years.
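Concretely:
2 Pi/wpkY1
170.688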
From inspection of the PSD we form the harmonic model (note all of the radian frequencies are harmonically related to the fundamental):
(*Define the 5th order harmonic model used in curve fit*)
model = AY1*Sin[wpkY1*t + phiY1] + AY2*Sin[2*wpkY1*t + phiY2] +
        AY3*Sin[3*wpkY1*t + phiY3] + AY4*Sin[4*wpkY1*t + phiY4] +
        AY5*Sin[5*wpkY1*t + phiY5];
vars = {AY1, phiY1, AY2, phiY2, AY3, phiY3, AY4, phiY4, AY5, phiY5}
and fit the model to the original (unperiodized) data to find the unknown amplitudes, AYx, and phases, phiYx.
fitParms1=FindFit[yearly,model,vars,t]
fit1=Table[model/.fitParms1,{t,0,112}];
residualY1 = yearly - fit1;

{AY1→-0.328464, phiY1→1.44861, AY2→-0.194251, phiY2→3.03246, AY3→0.132514,
 phiY3→2.26587, AY4→0.0624932, phiY4→-3.42662, AY5→-0.0116186, phiY5→-1.36245,
 AY8→0.0563983, phiY8→1.97142, wpkY1→0.036811}
The fit is shown in Figure 5 and the residual error in Figure 6.
Figure 5 – Harmonic model fit to annual data
Figure 6 – Residual Error
Figure 7 – PSD of the residual error
The residual is nearly white, as evidenced by Figure 7, justifying use of the Hodrick-Prescott filter on the decimated data. This filter is designed to separate cyclical, non-stationary components from data. Figure 8 shows an excellent fit with a smoothing factor of 15.
Figure 8 – Model vs. HP Filtered data (smoothing factor=3)
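The filtering step itself is not listed above; since the decimated data is just a list, one minimal way to apply a Hodrick-Prescott trend extraction in Mma is the penalized least-squares form below (the helper name hpTrend and the direct linear solve are my choices).
hpTrend[y_List, lambda_] := Module[{n = Length[y], d},
  (* second-difference matrix D, so the trend minimizes |y - tau|^2 + lambda |D.tau|^2 *)
  d = SparseArray[{Band[{1, 1}] -> 1, Band[{1, 2}] -> -2, Band[{1, 3}] -> 1}, {n - 2, n}];
  LinearSolve[IdentityMatrix[n] + lambda Transpose[d].d, y]];
hpSmoothed = hpTrend[yearly, 15];  (* smoothing factor as quoted in the text *)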
Stochastic Analysis
The objection that this is simple curve fitting can be rightly raised. After all, harmonic decomposition is a highly constrained form of Fourier analysis, which is itself a curve-fitting exercise that yields the harmonic coefficients (where the fundamental is the sample rate) which recreate the sequence exactly in the sample domain. That does not mean, however, that any periodicity found by Fourier analysis (or, by implication, harmonic decomposition) is not present in the record. Nor, as will be shown below, is it true that harmonic decomposition on an arbitrary sequence would be expected to yield the goodness-of-fit achieved here.
The 113-sample record examined above is not long enough to attribute statistical significance to the fundamental 170.7-year period, although others have found significance in the 57-year (here 56.9-year) third harmonic. We can, however, estimate the probability that the results are a statistical fluke.
To do so, we use the data record to estimate an AR process.
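(* yearlyTD is assumed to hold the annual anomaly as TemporalData, e.g. TemporalData[yearly, Automatic] *)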
procY=ARProcess[{a1,a2,a3,a4,a5},v];
procParamsY = FindProcessParameters[yearlyTD["States"],procY]
estProcY= procY /. procParamsY
WeakStationarity[estProcY]
{a1→0.713,a2→0.0647,a3→0.0629,a4→0.181,a5→0.0845,v→0.0124391}
As can be seen in Figure 9 below, the process estimate yields a reasonable match to the observed power spectral density and covariance function.
Figure 9 – PSD of estimated AR process (red) vs. data
Figure 9b – Correlation function (model in blue)
Figure 10 – 500-trial spaghetti plot
Figure 10b – Three paths chosen at random
As shown in Figure 10b, the AR process produces sequences which match the general character of the temperature record. Next we perform a fifth-order harmonic decomposition on all 500 paths, taking the variance of the residual as a goodness-of-fit metric. Of the 500 trials, harmonic decomposition failed to converge 74 times, meaning that no periodicity could be found which reduced the variance of the residual (this alone disproves the hypothesis that any arbitrary AR sequence can be decomposed). To these failed trials we assigned the variance of the original sequence. The scattergram of results is plotted in Figure 11, along with a dashed line representing the variance of the model residual found above.
Figure 11 – Variance of the residual from fifth-order harmonic decomposition (5HC); the 5HC residual for the climate record shown in red
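The trial loop itself is not listed; a sketch that reuses model, vars and estProcY from above is given below (the explicit time indices and the handling of non-convergent fits are my assumptions).
trialVariance[] := Module[{path, data, fp, resid},
  path = RandomFunction[estProcY, {0, 112}]["Values"];  (* one 113-sample AR path *)
  data = Transpose[{Range[0, 112], path}];
  fp = Quiet@FindFit[data, model, vars, t];             (* fifth-order harmonic fit *)
  resid = path - Table[model /. fp, {t, 0, 112}];
  Min[Variance[resid], Variance[path]]];                (* failed fits fall back to the path variance *)
trialVars = Table[trialVariance[], {500}];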
We see that the fifth-order fit to the actual climate record produces an unusually good result. Of the 500 trials, 99.4% resulted in residual variance exceeding that achieved on the actual temperature data. Only 1.8% of the trials came within 10% and 5.2% within 20%. We can estimate the probability of achieving this result by chance by examining the cumulative distribution of the results plotted in Figure 12.
Figure 12 – CDF (Cumulative Distribution Function) of trial variances
The CDF estimates the probability of achieving these results by chance at ~8.1%.
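With the trial variances from the sketch above, the estimate is a one-liner:
CDF[EmpiricalDistribution[trialVars], Variance[residualY1]] // N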
Forecast
Even if we accept the premise of statistical significance, without knowledge of the underlying mechanism producing the periodicity, forecasting becomes a suspect endeavor. If, for example, the harmonics are being generated by a stable non-linear climatic response to some celestial cycle, we would expect the model to have skill in forecasting future climate trends. On the other hand, if the periodicities are internally generated by the climate itself (e.g. feedback involving transport delays), we would expect both the fundamental frequency and, importantly, the phase of the harmonics to evolve with time, making accurate forecasts impossible.
Nevertheless, having come thus far, who could resist a peek into the future?
We assume the periodicity is externally forced and the climate response remains constant. We are interested in modeling the remaining variance, so we fit a stochastic model to the residual. Empirically, we found that, again, a fifth-order AR (autoregressive) process matches the residual well.
tDataY=TemporalData[residualY1-Mean[residualY1],Automatic];
yearTD=TemporalData[residualY1,{ DateRange[{1900},{2012},"Year"]}]
procY=ARProcess[{a1,a2,a3,a4,a5},v];
procParamsY = FindProcessParameters[yearTD["States"],procY]
estProcY= procY /. procParamsY
WeakStationarity[estProcY]
A 100-path, 100-year run combining the paths of the AR model with the harmonic model derived above is shown in Figure 13.
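The projection run is not listed; one way to assemble it from the pieces above (the date range and plot options are illustrative) is:
harmonicFuture = Table[model /. fitParms1, {t, 113, 212}];            (* harmonic model, 2013-2112 *)
arPaths = Table[RandomFunction[estProcY, {1, 100}]["Values"], {100}]; (* 100 residual AR paths *)
projections = (harmonicFuture + #) & /@ arPaths;
ListLinePlot[projections, DataRange -> {2013, 2112}, PlotStyle -> Opacity[0.2]]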
Figure 13 – Projected global mean temperature anomaly (centered 1950-1965 mean)
Figure 14 – Survivability at 10 (Purple), 25 (Orange), 50 (Red), 75 (Blue) and 100 (Green) years
The survivability plots predict no significant probability of a positive anomaly exceeding 0.5°C between 2023 and 2113.
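As a rough cross-check (a simplification, not necessarily the statistic plotted in Figure 14), the fraction of projected paths above +0.5°C at a given horizon can be read off the same ensemble:
exceedProb[h_] := N[Count[projections[[All, h]], a_ /; a > 0.5]/Length[projections]];
exceedProb /@ {10, 25, 50, 75, 100}  (* horizons in years past 2012 *)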
Discussion
With a roughly one-in-twelve chance that the model obtained above is the manifestation of a statistical fluke, these results are not definitive. They do however show that a reasonable hypothesis for the observed record can be established independent of any significant contribution from greenhouse gases or other anthropogenic effects.
@ur momisugly Matthew R Marler: I supplied two links to two graphs of the results of SSA. Evidently there are back and forward arrows to other pictures I’ve linked to on this site before, which I didn’t know you’d see. The other “slides” had nothing to do with the current discussion.
I’ll have to delete old slides each time I post new ones. Stupid site.
For anyone looking for a simple assessment.
Average global temperature history since 1975 is like a hill. We went up the hill from 1975 to 2001 where the average global temperature trend reached a plateau (per the average of the five government agencies that publicly report average global temperature anomalies). The average global temperature trend since 2001 has been flat to slightly declining but is on the plateau at the top of the hill. Claiming that the hill is highest at its top is not very profound. The temperature trend has started to decline but the decline will be slow; about 0.1 K per decade for the planet, approximately twice that fast for land areas.
A licensed mechanical engineer (retired) who has been researching this issue (unfunded) for 6 years, and in the process discovered what actually caused global warming and why it ended, has four papers on the web that you may find of interest. They provide some eye-opening insight on the cause of change to average global temperature and why it has stopped warming. The papers are straight-forward calculations (not just theory) using readily available data up to May, 2013. (data through July made no significant difference)
The first one is ‘Global warming made simple’ at http://lowaltitudeclouds.blogspot.com It shows, with simple thermal radiation calculations, how a tiny change in the amount of low-altitude clouds could account for half of the average global temperature change in the 20th century, and what could have caused that tiny cloud change. (The other half of the temperature change is from net average natural ocean oscillation which is dominated by the PDO)
The second paper is ‘Natural Climate change has been hiding in plain sight’ at http://climatechange90.blogspot.com/2013/05/natural-climate-change-has-been.html . This paper presents a simple equation that, using a single external forcing, calculates average global temperatures since they have been accurately measured world wide (about 1895) with an accuracy of 90%, irrespective of whether the influence of CO2 is included or not. The equation uses a proxy which is the time-integral of sunspot numbers (the external forcing). A graph is included which shows the calculated trajectory overlaid on measurements.
Change to the level of atmospheric CO2 has had no significant effect on average global temperature.
The time-integral of sunspot numbers since 1610 which is shown at http://hockeyschtick.blogspot.com/2010/01/blog-post_23.html corroborates the significance of this factor.
A third paper, ‘The End of Global Warming’ at http://endofgw.blogspot.com/ expands recent (since 1996) measurements and includes a graph showing the growing separation between the rising CO2 and not-rising average global temperature.
The fourth paper http://consensusmistakes.blogspot.com/ exposes some of the mistakes that have been made by the ‘Consensus’ and the IPCC.
Dang DSPs have taken all the eye candy glamor out of electronics. Instead of little silver plated coils, silver mica capacitors, crystal filters, cavity resonators, lots of little things to tune and a bunch of gold plated goodies to tie everything together in an artful manner, all you see now is a black epoxy stamp size wafer with a bunch of wires coming out of all four sides, processing signals in the digital domain in ways that are nearly impossible in analog.
richardscourtney:
Regarding your post at September 12, 2013 at 7:08 am
Thanks for your response. To be fair, I was asking a question though, not making a point. I don’t have any particular need to grasp any straws. I notice though that you don’t seem to have quite addressed the point, despite seeming very confident about it. So I wonder if we could revisit this?
My thinking is this:
Let’s assume everyone agrees that CO2 is a greenhouse gas, and will have a slight warming effect, all other things being equal.
Let’s also assume that everyone agrees that all other things are NOT equal, and that natural variability plays a significant role in global climate relative to the impact of CO2.
One upshot of this is that it is much harder to detect the impact of CO2 against natural variability than climate extremists would have us imagine. The impact of CO2 relative to natural variability is much smaller than claimed.
But just how reassuring is that? No-one can say that it is not warmer now than it would otherwise have been, can they? How would we possibly begin to establish that empirically?
Again, I’m asking a question here, not trying to make a point.
Also, if it is accepted that CO2 is a greenhouse gas, and will have a slight warming effect, all other things being equal, then any theory which fails to take account of this – by only including natural variability for example – is also wrong. Isn’t that true?
Grateful for any comments on this…
Pete Brown:
I am replying to your post addressed to me at September 13, 2013 at 12:43 am
http://wattsupwiththat.com/2013/09/11/digital-signal-processing-analysis-of-global-temperature-data-suggests-global-cooling-ahead/#comment-1415817
If I understand you correctly, then what you are really asking about is climate sensitivity. If so, then the magnitude of climate sensitivity is of importance and – in the context of your interest – the precise details of WHY climate sensitivity has that magnitude are secondary.
If I have misunderstood you then please say because I am replying to what I think you are saying and not avoiding anything.
I provide two answers. Firstly, if you use the WUWT Search function then you will find much on climate sensitivity. Secondly, my view on the matter is as follows.
I am convinced that increased atmospheric CO2 concentration will result in some rise in global temperature, but I am also convinced any such temperature rise would be too small for it to be discernible and, therefore, it would only have an abstract existence. I explain this as follows.
Before presenting my argument, I point out I remain to be convinced that human emissions are or are not the cause – in part or in whole – of the observed recent CO2 rise. However, the cause of a rise in atmospheric CO2 concentration is not relevant to the effect on global temperature of that rise.
My view is simple and can be summarised as follows. The feedbacks in the climate system are negative and, therefore, any effect of increased CO2 will be too small to discern. This concurs with the empirically determined values of low climate sensitivity obtained by Idso, by Lindzen&Choi, etc..
In other words, the man-made global warming from man’s emissions of greenhouse gases (GHG) would be much smaller than natural fluctuations in global temperature so it would be physically impossible to detect the man-made global warming.
Of course, human activities have some effect on global temperature for several reasons. For example, cities are warmer than the land around them, so cities cause some warming. But the temperature rise from cities is too small to be detected when averaged over the entire surface of the planet, although this global warming from cities can be estimated by measuring the warming of all cities and their areas.
Similarly, the global warming from man’s GHG emissions would be too small to be detected. Indeed, because climate sensitivity is less than 1.0°C for a doubling of CO2 equivalent, it is physically impossible for the man-made global warming to be large enough to be detected. If something exists but is too small to be detected then it only has an abstract existence; it does not have a discernible existence that has effects (observation of the effects would be its detection).
I hold this view because I am an empiricist so I accept whatever is indicated by data obtained from observation of the real world.
Empirical – n.b. not model-derived – determinations indicate climate sensitivity is less than 1.0°C for a doubling of atmospheric CO2 equivalent. This is indicated by the studies of
Idso from surface measurements
http://www.warwickhughes.com/papers/Idso_CR_1998.pdf
and Lindzen & Choi from ERBE satellite data
http://www.drroyspencer.com/Lindzen-and-Choi-GRL-2009.pdf
and Gregory from balloon radiosonde data
http://www.friendsofscience.org/assets/documents/OLR&NGF_June2011.pdf
Climate sensitivity is less than 1.0°C for a doubling of atmospheric CO2 concentration and, therefore, any effect on global temperature of increase to atmospheric CO2 concentration only has an abstract existence; it does not have a discernible existence that has observable effects.
In the context of this thread, the implication of small climate sensitivity is that atmospheric CO2 concentration has very small effect and, therefore, may not be capable of discrimination when analysing temperature data.
Please get back to me if I have failed to address what you wanted or I have not been clear.
Richard
Pete Brown:
re your comment to me at September 13, 2013 at 12:43 am.
I have provided a reply but (for some reason) it is stuck in moderation and should appear in an hour or so. Whatever the reason for this delay, it is not that I am avoiding provision of an answer to you. Please be patient.
Richard
Richard
Thanks, I am grateful.
Nice piece of work. What it shows of course is that the data supports very little correlation with rising CO2 levels at all… so CO2 sensitivity is very low indeed, and almost all the warming is down to ‘something else’.
Pete Brown:
It has now appeared. Hopefully this will appear in your inbox.
Richard
richardscourtney:
Richard
Thanks. I am familiar with the concept of climate sensitivity, but very interesting to hear your views. Thanks for your time in responding.
Pete
Ulric Lyons says:
September 12, 2013 at 3:36 am
And I greatly doubt that you have the product. What a team!
w.
Wayne says:
September 12, 2013 at 7:20 am
Hey, I’m quoting him, not expressing a second opinion.
w.
Willis Eschenbach says:
September 13, 2013 at 7:27 am
You are missing out twice Willis.
Scientifically. Monetarily.
I have seen Ulric’s work from its inception.
I predict much egg. On many, many faces.
Jeff Patterson says:
September 11, 2013 at 6:11 am
I’m not understanding this one at all. Suppose we have two points, one at say (0,0) and one at (1, 0.3).
My first objection is that I can only draw one straight line between the two points … but I can draw an infinite number of sine waves between those two points.
Next, you say “if we know the points are separated in time by no more than half the shortest period” … I don’t understand how on earth we could possibly know that. For example … what is the shortest period sine wave in the HadCRUT data? And how does that relate to a 170 year cycle?
Thanks in advance for your answers,
w.
@ur momisugly Matthew R Marler: I’ve removed all but the two graphs of the results of the SSA. As I mentioned, the “Trend” is the first two components, and the “Season” is the second two components, which I chose based on a scree plot. The Season for both NH and SH are very similar, and appear to be some kind of damped signal with a roughly 50-year frequency — perhaps a ringing of some sort — which may simply be an artifact of GISS’ processing.
@ur momisugly Willis: Yes, yes, you were quoting Jeff, but in a way that is clearly a personal judgement. Maybe it’s just me, but you come across as harsh rather than disagreeing. Which might be justified if he were vociferous in his claims, which he wasn’t.
Since a sine wave has two variables, amplitude and phase, all you need to determine it is two points giving two equations which you solve simultaneously. You can in principle generate a sine wave with a period of one year from two points days apart.
Hmmm… There are a load of other cycles seen in longer term weather (including the Bond Event cycles). Several of them look to have a lunar periodicity / connection. I lean somewhat toward the simple mechanism of lunar / tidal ocean mixing modulation (that I’ve linked to here many times in a peer reviewed paper / link). So I wonder if there is a lunar period “close” to this?
Draconic Month is when the Moon crosses from above to below the ecliptic (or the other way). That period is 18.6 years.
https://en.wikipedia.org/wiki/Month#Draconic_month
“Sometimes written ‘draconitic’ month, and also called the nodical month. The orbit of the moon lies in a plane that is tilted with respect to the plane of the ecliptic: it has an inclination of about five degrees. The line of intersection of these planes defines two points on the celestial sphere: the ascending node, when the moon’s path crosses the ecliptic as the moon moves into the northern hemisphere, and descending node when the moon’s path crosses the ecliptic as the moon moves into the southern hemisphere. The draconic or nodical month is the average interval between two successive transits of the moon through its ascending node. Because of the sun’s gravitational pull on the moon, the moon’s orbit gradually rotates westward on its axis, which means the nodes gradually rotate around the earth. As a result, the time it takes the moon to return to the same node is shorter than a sidereal month. It lasts 27.212220 days (27 d 5 h 5 min 35.8 s). The plane of the moon’s orbit precesses over a full circle in about 18.6 years.”
3 x that is 55.8 years (very close to the 56 year period often called a “60 year cycle” seen in weather cycles – when the Saros cycle returns to over the same 1/3 of the globe …) Now what is 3 x 55.8? 167.4 years. Or very close to that 170.7 year period found.
I think what we are finding is the lunar tidal effect on ocean depth and mixing as the moon changes where it is relative to the continents. Above and below the ecliptic. And in line with the sun when particular continents are underfoot. Essentially an interaction of tides with the continents as the moon makes different strength tides with different land lined up.
Folks often talk about natural ocean oscillations, but could not there be a tidal metronome?…
Willis Eschenbach: My first objection is that I can only draw one straight line between the two points … but I can draw an infinite number of sine waves between those two points.
Next, you say “if we know the points are separated in time by no more than half the shortest period” … I don’t understand how on earth we could possibly know that.
Right on both counts.
pochas: Since a sine wave has two variables, amplitude and phase,
A sine wave has three variables: amplitude, phase and period. The claim that you only need 2 points to estimate a sine wave depends on the assumption that one of those is known.
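To make that concrete (a quick Mma sketch; the symbols are mine): with the radian frequency w known, write A Sin[w t + phi] as a Sin[w t] + b Cos[w t], and the two samples give two linear equations in a and b.
Solve[{a Sin[w t1] + b Cos[w t1] == y1,
       a Sin[w t2] + b Cos[w t2] == y2}, {a, b}]
(* then A = Sqrt[a^2 + b^2] and phi = ArcTan[a, b]; if w is also unknown, two points no longer suffice *)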
To go back to the modeling of Jeffery S. Patterson, he started by focusing on sine curves. After some smoothing, he used his 112 years of data to estimate the amplitude, period and phase of the fundamental, then nonlinear least squares to estimate the amplitude and phase of each harmonic. His decision to focus on sine curves is totally unprincipled, unless there is strong external evidence that the system has to be periodic: he could have used orthogonal polynomials of high order, wavelets, b-splines, etc. He wrote appropriate caveats: if the system is chaotic instead of periodic, there is no reason to expect his result to have any predictive value. And Willis Eschenbach appropriately critiqued Patterson’s null hypothesis test by pointing out that the “correct” null distribution is not known (that is, the “true” background variation is not known). But Patterson explained his choice of a null distribution, which in this field is about all one can be expected to do, unless we require an infinite set of null distributions.
In my mind, the value of such work is in what comes after: perhaps someone will be stimulated to find, and will find, a physical basis for the fundamental period, and maybe analysis will show that some features of the system produce the harmonics from the fundamental. In statistics, some people refer to work such as Patterson’s as “hypothesis generation”. At least with Vaughan Pratt’s model he had a physical basis for the main monotonic trend, but his estimates of the natural variation were suspect. Nicola Scaffetta is seeking physical drivers in the geometry and gravitation of the solar system. And so on.
This is just curve fitting. No matter how tricky you present it and how many tricks you use, the result is the same: just curve fitting. Curve fitting CANNOT be used to make predictions. The misconception here is that you assumed that there are some cycles composing the evolution of temperature contained in the data you are using. But that’s just an assumption and is based on nothing.
You’re just playing with the data and the math, but there’s no physics on your work. Whenever you look for cycles in a data set, you need to be ready to explain the periods you get from it. But for this case you even forced the results and, therefore, the periods of the cycles you obtain are a result of the temporal length of the data set and the methodology you use.
Sorry, very impressive tricky math but the result is just nonsense.
Ask the tide predictors if they can predict tides the way you’re trying to predict temperatures. Of course not.
E. M. Smith: The plane of the moon’s orbit precesses over a full circle in about 18.6 years.”
3 x that is 55.8 years (very close to the 56 year period often called a “60 year cycle” seen in weather cycles – when the Saros cycle returns to over the same 1/3 of the globe …) Now what is 3 x 55.8? 167.4 years. Or very close to that 170.7 year period found.
So a harmonic model with sines of periods 18.6, 55.8, and 167.4 years would produce a result not unlike what Patterson found (that is, appx the same sum of squared residuals), and might be predictive of the future. It’s a mere conjecture, but what isn’t at this point?
Willis Eschenbach says:
“And I greatly doubt that you have the product. What a team!”
I agree, there is no currency in it with your attitude, it would be like flogging a dead horse. The last type of person I need to team up with is someone who has looked and failed to find any connections and who also thinks that their opinion on the subject is superior. I am more interested in showing you a set of highly interesting and meaningful connections that can change your mind on the matter. If you refuse to look, it’s not the end of the World for me.
Willis Eschenbach says:
September 13, 2013 at 7:48 am
“My first objection is that I can only draw one straight line between the two points … but I can draw an infinite number of sine waves between those two points.”
Nyquist would be very sorry to hear that. Fortunately for us (especially for me – otherwise I’d be digging ditches) he is correct. The bandwidth limitation constrains the maximum slope and amplitude, squeezing your infinite solution space down to just one. Actually, sampling at exactly the Nyquist rate splits the spectral energy between the Nyquist bin and d.c. in a manner that depends on the phase of the signal relative to your sample clock and is ill-advised.
“Next, you say “if we know the points are separated in time by no more than half the shortest period” … I don’t understand how on earth we could possibly know that.”
Because I low-pass filtered the data prior to decimation.
Thanks in advance for your answers,
You’re welcome.
“Huh? You’ve admitted you used an AR model for your Monte Carlo test … but since you are not a statistician, you failed to realize that the choice of the model for the Monte Carlo test is a make-or-break decision for the validity of the test. You can’t just grab any data with similar bandwidth and variance as you have done and claim you’ve established your claims, that’s a joke.”
If by Monte Carlo test you refer to the section entitled “Stochastic Analysis” and not to the section on the forecast (where I also use an AR process), please show me where “[I am] ASSUMING that AR data is what we are actually looking at” or where “I attempt to prove that if it is AR data we are seeing”. That section makes no such assumption. The assumption is the one I stated earlier. To wit:
“Asserting that there is no significance to the goodness of fit achieved is equivalent to asserting that HD on any sequence of similar BW and variance would yield similar results.” You call this a joke but fail to reveal the punch line. Come on, give it up. I could use a good laugh even at my own expense.
Sorry, the post above should have referenced Willis Eschenbach post of September 12, 2013 at 12:18 am
Juan says:
September 13, 2013 at 9:39 am
“This is just curve fitting. No matter how tricky you present it and how many tricks you use, the result is the same: just curve fitting. Curve fitting CANNOT be used to make predictions. The misconception here is that you assumed that there are some cycles composing the evolution of temperature contained in the data you are using. But that’s just an assumption and is based on nothing.”
Curve fitting can be used to make predictions given a set of assumptions about the underlying system. The prediction is only as good as those assumptions, which were explicitly stated. The analysis in no way addressed the validity of those assumptions, and so the projection should be taken with a grain of salt unless and until those assumptions (stated simply, that the climate can be modeled as a non-linear response to a periodic external forcing function) are validated. What curve fitting can provide is a clue as to where to look for validation. In other words, as I have stated elsewhere, any model (curve fit, GCM or WAG) is properly used only in forming a hypothesis, a hypothesis which must be validated empirically.