Digital Signal Processing analysis of global temperature data time series suggests global cooling ahead

This DSP engineer is often tasked with extracting spurious signals from noisy data. He submits this interesting result of applying those techniques to the HadCRUT temperature anomaly data. The analysis suggests cooling ahead in the immediate future, with no significant probability of a positive anomaly exceeding 0.5 °C between 2023 and 2113. See Figures 13 and 14. Code and data are made available for replication. – Anthony

Guest essay by Jeffery S. Patterson, DSP Design Architect, Agilent Technologies

Harmonic Decomposition of the Modern Temperature Anomaly Record

Abstract: The observed temperature anomaly since 1900 can be well modeled by a simple harmonic decomposition of the temperature record based on a fundamental period of 170.7 years. The goodness-of-fit of the resulting model significantly exceeds the fit expected for a stochastic AR sequence matching the general characteristics of the modern temperature record.

Data

I’ve used the monthly HadCRUT3 temperature anomaly data available from http://woodfortrees.org/data/hadcrut3vgl/every, plotted in Figure 1.


Figure 1 – HadCRUT3 Temperature Record, 1850-Present

To remove seasonal variations while avoiding spectral smearing and aliasing effects, the data were boxcar-averaged over a 12-month window and decimated by a factor of 12 to obtain the average annual temperature plotted in Figure 2.
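
For readers who want to replicate this preprocessing step, a minimal Mma sketch (the variable name monthly and the details are my assumptions; the post does not show its own code for this step):

(* Hedged sketch: 12-month boxcar average followed by decimation by 12,
   assuming the monthly anomalies are already in a flat list 'monthly'. *)
smoothed = MovingAverage[monthly, 12];  (* 12-month boxcar average *)
yearly = smoothed[[1 ;; -1 ;; 12]];     (* decimate: keep every 12th sample *)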


Figure 2 – Monthly data decimated to yearly average

A Power Spectral Density (PSD) plot of the decimated data reveals harmonically related spectral peaks.


Figure 3 – PSD of annual temperature anomaly in dB

To eliminate the possibility that these are FFT (Fast Fourier Transform) artifacts while avoiding the spectral leakage associated with data windowing, we use a technique called record periodization. The data are regressed about a line connecting the record endpoints, and the last point of the resulting residual is dropped. This process eliminates the endpoint discontinuity while preserving the position of the spectral peaks (although it attenuates the amplitudes at higher frequencies and modifies the phase of the spectral components). The PSD of the residual is plotted in Figure 4.
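
A hedged sketch of the periodization step as I read it (variable names are mine, not the author's):

(* Subtract the line joining the record endpoints, then drop the last point of the residual. *)
n = Length[yearly];
endLine = Table[yearly[[1]] + (yearly[[n]] - yearly[[1]]) (k - 1)/(n - 1), {k, n}];
periodized = Drop[yearly - endLine, -1];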


Figure 4 – PSD of the periodized record

Since the spectral peaking is still present, we conclude these are not record-length artifacts. The peaks are harmonically related, with odd harmonics dominating until the eighth. Since estimating the fundamental from a higher harmonic divides the frequency-estimation error by the harmonic number, we use the eighth harmonic of the periodized PSD to estimate the fundamental. The following Mathematica (Mma) code finds the 5th peak (the 8th harmonic) and estimates the fundamental.

wpkY1=Abs[ArgMax[{psdY,w>.25},w]]/8

0.036811

The units are radian frequency across the Nyquist band, mapped to ±π (the plots are zoomed to 0 < w < 1 to show the area of interest). To convert to years, invert wpkY1 and multiply by 2π, which yields a fundamental period of 170.7 years.
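
As a quick sanity check of that conversion in Mma:

2 Pi/wpkY1   (* = 2π/0.036811 ≈ 170.7, i.e. a fundamental period of about 170.7 years *)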

From inspection of the PSD we form the harmonic model (note all of the radian frequencies are harmonically related to the fundamental):

(* Define the 5th order harmonic model used in the curve fit *)
model = AY1*Sin[wpkY1 t + phiY1] + AY2*Sin[2*wpkY1*t + phiY2] +
        AY3*Sin[3*wpkY1*t + phiY3] + AY4*Sin[4*wpkY1*t + phiY4] +
        AY5*Sin[5*wpkY1*t + phiY5];
vars = {AY1, phiY1, AY2, phiY2, AY3, phiY3, AY4, phiY4, AY5, phiY5}

and fit the model to the original (unperiodized) data to find the unknown amplitudes, AYx, and phases, phiYx.

fitParms1 = FindFit[yearly, model, vars, t]
fit1 = Table[model /. fitParms1, {t, 0, 112}];
residualY1 = yearly - fit1;

{AY1→-0.328464, phiY1→1.44861, AY2→-0.194251, phiY2→3.03246, AY3→0.132514,
 phiY3→2.26587, AY4→0.0624932, phiY4→-3.42662, AY5→-0.0116186, phiY5→-1.36245,
 AY8→0.0563983, phiY8→1.97142, wpkY1→0.036811}

The fit is shown in Figure 5 and the residual error in Figure 6.


Figure 5 – Harmonic model fit to annual data


Figure 6 – Residual Error
Figure 7 – PSD of the residual error

The residual is nearly white, as evidenced by Figure 7, justifying the use of the Hodrick-Prescott filter on the decimated data. This filter is designed to separate cyclical, non-stationary components from data. Figure 8 shows an excellent fit with a smoothing factor of 15.
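
As far as I know Mma has no built-in Hodrick-Prescott filter, so for readers who want to reproduce Figure 8, here is a hedged sketch of the standard penalized-least-squares implementation (my code, not the author's), where lambda is the smoothing factor:

(* Hodrick-Prescott filter: minimize the squared residual plus lambda times the
   squared second differences of the trend, i.e. solve (I + lambda D'D) trend = y. *)
hpFilter[y_List, lambda_] := Module[{n = Length[y], d},
  d = SparseArray[{Band[{1, 1}] -> 1, Band[{1, 2}] -> -2, Band[{1, 3}] -> 1}, {n - 2, n}];
  LinearSolve[IdentityMatrix[n] + lambda Transpose[d].d, y]];

trend = hpFilter[yearly, 15];   (* smoothing factor as given in the text *)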


Figure 8 – Model vs. HP Filtered data (smoothing factor=3)

Stochastic Analysis

The objection that this is simple curve fitting can rightly be raised. After all, harmonic decomposition is a highly constrained form of Fourier analysis, which is itself a curve-fitting exercise yielding the harmonic coefficients (where the fundamental period is the record length) that recreate the sequence exactly in the sample domain. That does not mean, however, that any periodicity found by Fourier analysis (or, by implication, harmonic decomposition) is not present in the record. Nor, as will be shown below, is it true that harmonic decomposition on an arbitrary sequence would be expected to yield the goodness-of-fit achieved here.

The 113-sample record examined above is not long enough to attribute statistical significance to the 170.7-year fundamental period, although others have found significance in the 57-year (here 56.9-year) third harmonic. We can, however, estimate the probability that the results are a statistical fluke.

To do so, we use the data record to estimate an AR process.

procY = ARProcess[{a1, a2, a3, a4, a5}, v];
procParamsY = FindProcessParameters[yearlyTD["States"], procY]
estProcY = procY /. procParamsY
WeakStationarity[estProcY]

{a1→0.713, a2→0.0647, a3→0.0629, a4→0.181, a5→0.0845, v→0.0124391}

As can be seen in Figure 9 below, the process estimate yields a reasonable match to the observed power spectral density and covariance function.
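
A hedged sketch of how such a comparison can be made in Mma (the lag range and plotting choices are mine; the names follow the code above):

(* Overlay the estimated AR(5) correlation function on the sample correlation of the data. *)
lags = 15;
modelCorr = Table[CorrelationFunction[estProcY, h], {h, 0, lags}];
dataCorr = CorrelationFunction[yearlyTD["States"], {0, lags}];
ListLinePlot[{modelCorr, dataCorr}, PlotMarkers -> Automatic,
  PlotLegends -> {"estimated AR(5)", "data"}]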


Figure 9 – PSD of estimated AR process (red) vs. data
Figure 9b – Correlation function (model in blue)


Figure 10 – 500 trial spaghetti plot
Figure 10b – Three paths chosen at random

As shown in Figure 10b, the AR process produces sequences whose general character matches the temperature record. Next we perform a fifth-order harmonic decomposition on all 500 paths, taking the variance of the residual as a goodness-of-fit metric. Of the 500 trials, the harmonic decomposition failed to converge 74 times, meaning that no periodicity could be found which reduced the variance of the residual (this alone disproves the hypothesis that any arbitrary AR sequence can be decomposed). To these failed trials we assigned the variance of the original sequence. The scattergram of results is plotted in Figure 11 along with a dashed line representing the variance of the model residual found above.
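
A hedged sketch of this Monte Carlo loop (variable names and starting values are mine; the author's convergence handling may differ, and I treat the fundamental wpk as a free fit parameter, as the convergence failures suggest):

(* Generate 500 paths from the estimated AR process, harmonically decompose each,
   and record the residual variance; a non-improving fit keeps the original variance. *)
mcModel = A1 Sin[wpk t + P1] + A2 Sin[2 wpk t + P2] + A3 Sin[3 wpk t + P3] +
   A4 Sin[4 wpk t + P4] + A5 Sin[5 wpk t + P5];
mcVars = {{A1, 0.1}, {P1, 0}, {A2, 0.1}, {P2, 0}, {A3, 0.1}, {P3, 0},
   {A4, 0.1}, {P4, 0}, {A5, 0.1}, {P5, 0}, {wpk, 0.037}};
trials = RandomFunction[estProcY, {0, 112}, 500]["Paths"][[All, All, 2]];
harmVariance[path_] := Module[{fp, res},
   fp = Quiet@FindFit[path, mcModel, mcVars, t];
   res = path - Table[mcModel /. fp, {t, Length[path]}];
   Min[Variance[res], Variance[path]]];
trialVars = harmVariance /@ trials;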


Figure 11 – Variance of the residual for fifth-order HC (harmonic coefficient) fits; the residual variance of the 5HC fit to the climate record is shown in red

We see that the fifth-order fit to the actual climate record produces an unusually good result. Of the 500 trials, 99.4% resulted in residual variance exceeding that achieved on the actual temperature data. Only 1.8% of the trials came within 10% and 5.2% within 20%. We can estimate the probability of achieving this result by chance by examining the cumulative distribution of the results plotted in Figure 12.


Figure 12 – CDF (Cumulative Distribution Function) of trial variances

The CDF estimates the probability of achieving these results by chance at ~8.1%.
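
One simple way to read such a probability off the trial results (a sketch using the names from the Monte Carlo sketch above; the post's ~8.1% presumably comes from the author's own CDF construction, so an empirical or kernel-smoothed estimate like this need not match it exactly):

(* Probability that a random AR trial does at least as well as the real record. *)
observedVar = Variance[residualY1];
CDF[EmpiricalDistribution[trialVars], observedVar]
CDF[SmoothKernelDistribution[trialVars], observedVar]   (* smoothed alternative *)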

Forecast

Even if we accept the premise of statistical significance, without knowledge of the underlying mechanism producing the periodicity, forecasting becomes a suspect endeavor. If, for example, the harmonics are being generated by a stable non-linear climatic response to some celestial cycle, we would expect the model to have skill in forecasting future climate trends. On the other hand, if the periodicities are internally generated by the climate itself (e.g. feedback involving transport delays), we would expect both the fundamental frequency and, importantly, the phase of the harmonics to evolve with time, making accurate forecasts impossible.

Nevertheless, having come thus far, who could resist a peek into the future?

We assume the periodicity is externally forced and that the climate response remains constant. We are interested in modeling the remaining variance, so we fit a stochastic model to the residual. Empirically, we found that, again, a 5th-order AR (autoregressive) process matches the residual well.

tDataY = TemporalData[residualY1 - Mean[residualY1], Automatic];
yearTD = TemporalData[residualY1, {DateRange[{1900}, {2012}, "Year"]}]
procY = ARProcess[{a1, a2, a3, a4, a5}, v];
procParamsY = FindProcessParameters[yearTD["States"], procY]
estProcY = procY /. procParamsY
WeakStationarity[estProcY]


A 100-path, 100-year run combining the paths of the AR model with the harmonic model derived above is shown in Figure 13.
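
A hedged sketch of how such a run can be assembled from the pieces above (my names; I take t = 0 as 1900, so the forecast window 2013-2112 corresponds to t = 113-212):

(* Combine simulated AR residual paths with the deterministic harmonic model. *)
harmFuture = Table[model /. fitParms1, {t, 113, 212}];                 (* 2013-2112 *)
noisePaths = RandomFunction[estProcY, {0, 99}, 100]["Paths"][[All, All, 2]];
forecasts = (harmFuture + #) & /@ noisePaths;                          (* 100 forecast paths *)
ListLinePlot[forecasts, DataRange -> {2013, 2112}]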


Figure 13 – Projected global mean temperature anomaly (centered 1950-1965 mean)


Figure 14 – Survivability at 10 (Purple), 25 (Orange), 50 (Red), 75 (Blue) and 100 (Green) years

The survivability plots predict no significant probability of a positive anomaly exceeding 0.5 °C between 2023 and 2113.
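
On my reading, the survivability curves give the across-path probability that the anomaly at a given horizon exceeds a threshold; a hedged sketch of that computation from the forecast paths sketched above (names mine):

(* Empirical exceedance probability at horizon h years, threshold x. *)
survivalAt[h_Integer, x_] := Module[{vals = forecasts[[All, h]]},
   1 - CDF[EmpiricalDistribution[vals], x]];
survivalAt[10, 0.5]   (* e.g. P(anomaly > 0.5 °C ten years out) *)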

Discussion

With roughly a one-in-twelve chance that the model obtained above is the manifestation of a statistical fluke, these results are not definitive. They do, however, show that a reasonable hypothesis for the observed record can be established independent of any significant contribution from greenhouse gases or other anthropogenic effects.

313 Comments
Sensorman
September 12, 2013 2:40 am

Jeffrey – when I look at the FFT result, one thing strikes me – there is obviously periodicity, but the picture looks reminiscent of a response containing reverberation or echoes. Unfortunately, I don’t have easy access to cepstral processing, but I would be fascinated to see whether there were peaks in the cepstrum. There may be “impulse responses” present that are triggered by discrete “events”, and there may be many overlaid so that they convolve in the spectrum. Just my feeling, as there is no real trace of actual peaks in the spectrum. There are maxima, but relatively broad. It’s the minima that interest me…

Sensorman
September 12, 2013 2:41 am

And apologies for mis-spelling your name, Jeffery!

Geoff Sherrington
September 12, 2013 3:21 am

Jonathan Abbott says: September 11, 2013 at 6:07 am “Straw man? Valid?”
Jeffery has given adequate qualifications and caveats for the treatment of data this way.
He notes that it could be mere curve fitting, then adds an extra test to show, to put it another way, that there is a high probability that this approach, used by others, would give a similar result.
One could agree with you that curve fitting has some weaknesses, because in the future case it cannot cope well with large, random events perhaps like the 1998 anomaly in temperature. But that is the case with all projections. For a short-term projection, this analysis beats most weaknesses of a linear least squares as is often used.
Personally, I would not push the analysis as far as he did, but that is a preference. Like Nick Stokes, I have a problem with projecting partial cycles; but Jeffery has incorporated this point in his caveats.
My worry would be that the temperature record has been adjusted in arbitrary and capricious ways that have the ability to distort the properties of the calculated curve. For instance, the ‘adjusted’ warming of the early years of the record is bound to have an effect.

September 12, 2013 3:36 am

Willis Eschenbach says:
“Anyone on this planet who could make weather predictions with the accuracy you claim, and at the distance out that you claim, could make millions. Given that you haven’t done so, I fear that I greatly doubt the claims that you so confidently put forward.”
Help me market it and I’ll cut you in then. I know I have the product without any doubt.

LdB
September 12, 2013 5:54 am

e. smith says:
September 12, 2013 at 1:49 am

So the mathematics may be quite elegant; but the input “data” is just garbage.

You are right, but you cheated and gave him the answer; I wanted him to work it out.
I will give him credit: he is at least thinking outside the usual box that climate science seems to be stuck in.
JP, there is a range of techniques for non-linear time-invariant systems which are far more suitable for doing the sort of analysis you are trying 🙂

LdB
September 12, 2013 5:55 am

Sigh … mods can you please fix the end of blockquote above .. sorry
[Fixed … -w.]

kencoffman
September 12, 2013 6:09 am

I talked to a thermo engineer from Intel yesterday and he surprised me when he told me that 40-50% of the cooling of a small phone in his pocket was via radiation (hence a surface optimized for emissivity). That’s a lot, generally we don’t see more than about 10-15% of the cooling effect via radiation. Of course, outside the package, we don’t get much benefit from conduction and the smaller the enclosure…well, of course, the less area there is for heat spreading. When it’s your job to cool something (or, conversely, to heat something) and radiation becomes an important part of toolbox, that’s truly a game-changer. That’s why guys like me get the big bucks, I guess.
It’s unfair to expect thoughtful colleagues like Mr. Courtney to consider the balance between CO2 backradiation heating and CO2 cooling via convection…or to realize that if a projected temperature anomaly includes negative numbers, that indicates the theoretical span includes cooling correlated with increasing CO2 concentration. It’s all good, carry on, my friends.

Pete Brown
September 12, 2013 6:51 am

This implies that we should have experienced a period of significant cooling over the last decade. Whereas in fact temperatures have been flat. Is this where the missing heat went?

Wayne
September 12, 2013 6:58 am

@Rabbit @Matthew R Marler: Done:

Wayne
September 12, 2013 7:04 am

@Rabbit @Matthew R Marler: Oops, it cuts out HTML. I did an SSA for GISS NH and SH separately. (There’s an R package for this: Rssa.) The Trend graph is based on the first two eigenvalues, and the Season1 graph is based on the second two eigenvalues.
http://i1090.photobucket.com/albums/i371/WayneAnom/GISSssaNH_zpsf67f65f6.png
http://i1090.photobucket.com/albums/i371/WayneAnom/GISSssaSH_zpsb04724ae.png
Interestingly, both hemispheres’ Season1 appear to be damped oscillations. Perhaps artifacts of GISS processing. (I’ve had discussions about GISS in other forums, and especially the SH is whacky, since they smooth over 1200km. When Antarctica came online in the 1950s you can detect a large change in volatility.)

September 12, 2013 7:08 am

Pete Brown:
Your post at September 12, 2013 at 6:51 am asks

This implies that we should have experienced a period of significant cooling over the last decade. Whereas in fact temperatures have been flat. Is this where the missing heat went?

Nice try, but no coconut.
The hypothesis of discernible warming is wrong and Trenberth’s “missing heat” is one of the many reasons we know it is wrong. Clutching at straws about why it is missing is merely desperation by warmunists trying to stay afloat when they are attached to the sinking AGW-scare.
There is no sign of Trenberth’s missing heat in the depths, or of how it could have got there. I think that if the “missing heat” ever existed then it probably went the other way, and by now it has passed Alpha Centauri. But so what? The important point is that it is missing and not why it cannot be seen.
Richard

Wayne
September 12, 2013 7:20 am

@Willis: “Indeed, you are correct—you’re not a statistician.”
Neither are you.
Neither am I, for that matter, though you and I are both avid students of statistics. Jeff’s method has issues, but I’d point out that he’s more knowledgeable and has been more rigorous than the average WUWT poster. Considering your confusion on basic time series analysis a while back (“linear trend”), I’d suggest that you should be more gracious in your criticisms. Jeff has certainly been reasonable in his tone, and deserves the same from you.
As a high school physics teacher once told me, it gets frustrating teaching the same course year after year and yet every single year the students still come into the class not yet understanding it. 😉 Curve-fitting is a problem, but don’t take out your frustration for past curvologists on Jeff.

Matthew R Marler
September 12, 2013 9:30 am

Jeff Patterson: I seem to have struck some nerve with my hobby horse,
Not so. Willis Eschenbach is a vigorous critic of most modeling that gets presented here. Like me, he mocked the pretended precision of the estimated period of “170.7”. I don’t think data halving is as useful a technique as waiting for the next 20 years of out-of-sample data. I think he missed your qualification that if the system is chaotic your model was useless for prediction. I think your defense of your choice of null distribution for null hypothesis testing is adequate for now.
I think it will be a miracle if your model proves to be reasonably accurate over the next 20 years, and I think on the whole it is less credible than Vaughan Pratt’s model (which also will, in my guess, require a “miracle” to be accurate over the next 20 years), but I put it in what I call “the pile of active models”. Because you provided the code, anyone who is interested can test.

September 12, 2013 9:35 am

@Geoff Sherrington 3:21 am. +1 A balanced evaluation.
Goodman 12:23 am +5 Succinct. A riposte to remember for AR5.

Most climate data is fundamentally inadequate to determine the nature of longer term variability. Whether century scale change is periodic, linear AGW , random walk, or some other stochastic process cannot be determined from the available data.
The big lie is that the IPCC is now, apparently, 95% certain that it can.
The last 17 years is the proof that they can’t.

@Willis Eschenbach 12:18am
Me, I don’t trust any cycle I find in climate data that’s more than a third of the length of my dataset, and I would strongly encourage you to do the same.
Amen! That’s why the BEST scalpel is folly. It turns a low pass signal into a band pass signal. At worst, it preserves drift as signal and discards vital recalibration events.

Matthew R Marler
September 12, 2013 9:39 am

Wayne: @Rabbit @Matthew R Marler:
Cute. What was the reference to “Science”? Did that article describe the method?

September 12, 2013 9:44 am

@Willis Eschenbach 12:18am
Extracting a 170+ year cycle from a 110 year dataset is equally hopeless.
You cannot do it with an FFT. You can and should recognize the possibility there is a dominant low frequency in the data that the tool cannot adequately detect.
Take an Excel function describing a partial sine wave with a linear ramp.
=AmpS*SIN(Phase)+LinSlope*(Year-2000)+Y2KIntercept
With Year=(1900:2012)
AmpS = 0.4 (deg C)
Phase =(Year-PhaseOrig)/CycLen*2*PI()
PhaseOrig = 1965
CycLen = 175 (years)
LinSlope=0.001 (deg C/yr) (= 0.1 deg C/ century)
Y2KIntercept = 0.3 (deg C)
If you plot it up (thick blue line on the Excel chart picture: http://i43.tinypic.com/16j3ndj.jpg) and do a least-squares linear fit, you get:
Y = 0.0098*(Year-2000) + 0.3 (almost 1.0 deg C / century)
And an R^2 of 0.9328. (thin blue line)
In truth, there is only a 0.1 deg C/century linear trend superimposed upon a clean -0.4 to +0.4 deg sinusoid with a wavelength of 175 years, which the measurements caught between phases -2.33 and 1.68 radians (-133 and 96 deg) (thick blue curve).
If you remove the baseline between the 1900 and 2012 end points (maroon dashed line), as per Jeff, you will get a thick maroon residual on which you would do FFT and harmonic analysis. By visual inspection, I think most people can see the problem. The removal of the baseline has the following effects on the residual curve:
1. It shortened the cycle length to maybe 120 years instead of 175,
2. greatly reduced the amplitude of the Sinusoid to about 0.2 instead of 0.4,
3. introduced high frequency ringing caused by the mismatch of slopes at the end points,
4. and finally, baked in a 0.71 deg/century linear trend created by the choice of baseline, seven times higher than what is in the original [known], under-measured signal.
Such pitfalls make me want to give the pit wide berth.

September 12, 2013 10:39 am

Clarification: my 9:44am is in support of Willis’s 12:18am
Extracting a 170+ year cycle from a 110 year dataset is equally hopeless
I just wanted to point out that just because a tool (like the FFT) cannot see a longer wave length, or cannot uniquely determine a long wave length (harmonic decomp), doesn’t mean we should blind ourselves to the possibility of long wave length cycles in the system response of the climate. Indeed, at geologic scales with multiple ice ages and interglacial periods, there likely are some. But our temperature records are too short, too noisy, and too adjusted to have any hope of determining any century-scale or longer wave lengths in the earth’s system’s response.

george e. smith
September 12, 2013 11:03 am

“””””””…….LdB says:
September 12, 2013 at 5:54 am
e. smith says:
September 12, 2013 at 1:49 am
So the mathematics may be quite elegant; but the input “data” is just garbage.
You are right but you cheated and gave him the answer I wanted him to work it out………””””””””
So “I” cheated ??
No, and I also did not assume, that Jeff was in any need of being given any answer. My post on this thread, in no way conveys any criticism of Jeff or his methods. It is always refreshing to listen to someone tell what he does in his cubicle.
Having been an Agilent (read REAL HP) employee myself, in a past life, I fully understand, that those blokes have to know that their stuff works.
But if you send a bunch of researchers out to a bunch of places, and ask them to count and tabulate the number of animals per square meter, or maybe Hectare; animal being restricted to only those bigger than an ant, and you record that data for 150 years; then Jeff could apply his methods to your data set, and come up with some output that you could feed to the WWF (no not the Hulk Hogan crowd); and they could all go gaga over it, and ask the UN for funding to rectify the problems discovered.
But don’t blame Jeff’s analytical tools; he didn’t know a priori (perhaps), that your data is garbage.
Of course the number of animals per hectare; regardless; or irregardless, as the case may be, of the size, and species of such critters; contains about as much information as the Temperatures, taken at quite random, unrelated times, at quite randomly placed, and also unrelated sparse locations around the globe.
Yes you can do a FFT on the numbers, you could even expand them as an orthogonal set of Tchebychev polynomials, or Bessel functions. Well there’s nothing wrong with the maths; make it as simple as calculating the average, or maybe the median. The result is valid, but still garbage, because the data was.
Now Mother Gaia does it right; even proper, or properly; because she has in effect a Thermometer in each and every molecule, so she always knows what the Temperature is; and it always is what it is supposed to be. But she is not going to communicate the answer to us.

Steven Groeneveld
September 12, 2013 11:32 am

This is an interesting analysis and of course there are a lot of reasonable, rational caveats that are mostly valid and necessary for rigorous analysis. Still, that should not blind us to the possibility of being able to make some sense of imperfect data.
Jeff Patterson says:
In theory, you need just two samples per period. Just as two points define a line uniquely, there is only one way to draw a sine wave between two points as long as you know the points are separated in time by no more than 1/2 the shortest period (or, equivalently, sampled at at least twice the highest frequency) of any sine wave present in the data. If the condition is not met, the high-frequency components masquerade as low-frequency components in a process referred to as aliasing. That is why we remove the high frequency components by filtering before decimating (re-sampling) the data.
Willis Eschenbach says:
“The second is that as far as anyone has ever determined, the climate is chaotic”
@Willis Eschenbach Even Chaos can have an underlying harmonic structure (for example, fractals, Strange attractors, etc.)
Willis Eschenbach says:
Me, I don’t trust any cycle I find in climate data that’s more than a third of the length of my dataset, and I would strongly encourage you to do the same.
That is correct and valid when you can get the data. For dynamic analysis of flight test data for flutter prediction we use a sample rate of at least 5 times the highest vibration frequency expected, so this is in line with standard practice.
“And, as also pointed out, proper signal filtering before sampling is necessary to avoid problems like aliasing. Also, I have very little confidence in the manner in which global temperatures are averaged. (Since the earth loses heat only by radiation, an average of the 4th power of (absolute scale) temperatures makes more sense, but that makes a very small difference compared to the distribution of source stations, or time of day of recording, or using only min and max, etc.) So the whole data set is suspect from the start, but there can still be a harmonic signal in it.
However it has been useful to me in the past to have data that was not correctly processed. I was involved in the investigation of an aircraft accident that lost pitch control due to flutter and, as luck would have it, there was a primitive recording device on board the aircraft. Signals from the AHRS (Attitude and Heading Reference System) were recorded without pre-filtering onto a laptop computer at a relatively low sample rate (about 16 Hz I think). When analysing the data at the point of the failure there was a 5.5 Hz signal that indicated the flutter that caused the failure. By calculating the frequencies that could alias to 5.5 Hz, the possibilities could have been 11 Hz, 22 Hz or some higher frequency. By putting the pilots on a seat on a shaker (that we use for ground vibration test excitation) at the possible and plausible frequencies, the pilots could identify the shaking they felt as the 11 Hz. This was also in line with one of the resonant frequencies of the tailplane identified in a ground vibration test of the aircraft. Having an external confirmation is necessary to identify which frequency is aliased (although, as also pointed out, confirmation bias is always a possibility).”
Looking for high frequencies in an aliased signal is at the opposite end of the spectrum from looking for low frequencies in too short a record, and the two may not be exactly similar, but I do recognise the fact that there could be a signal, and I have spent some time myself curve fitting cycles to the satellite temperature record, so this analysis intrigues me. I am also intrigued that the 170-point-whatever cycle is close to the 179-year solar barycentre cycle. Even though a recent WUWT post discounted any planetary cycle influence on the solar cycle, it may be, as Willis pointed out, that there is not enough data length to cover enough cycles sufficiently to recognise the signals properly, or that there is more noise than signal. Anyway I am biased in appreciation of Jeff Patterson’s analysis since it is an approach that is familiar to me.

Wayne
September 12, 2013 11:38 am

@Matthew R Marler says: “Cute. What was the reference to ‘Science’? Did that article describe the method?”
? I didn’t make any reference to “Science”, and I’m not sure what’s “cute”. @Rabbit asked about doing an SSA and I found a library to do so and did so. I haven’t managed to pull out frequencies yet — its internal data structure is obtuse and requires a couple of extra layers of unwrapping — but I thought the graphs would be interesting to compare to Jeff’s straight FFT approach.

george e. smith
September 12, 2013 11:53 am

Speaking about giving the answers; there’s a “Science” web site out there, where consensus reigns supreme.
Anyone can ask a question. “Are the ends of a wormhole polarized the same or opposite ?”
Anyone can answer the question. No ! the two ends of a wormhole have the same polarity; but it can be up or down.
Well Wikipedia says they are always opposite !
If you give an answer different from Wikipedia, you are in trouble.
Then anybody can vote for or against your answer; to determine its popularity. No restrictions on the credentials or lack thereof, of the critics. And so you get a score.
Now all the kids taking 4-H club Quantum Chromo-dynamics classes, like to post their homework problems there so they don’t have to look it up in Wikipedia.
You are not allowed to answer someone’s homework problem, which some referee designates as an “hw” problem. Well, you can tell the “OP”, which is code for the chap who asked the question, to go to the Stanford Science library and look it up in Born and Wolf; but you can’t tell him/her that the answer is 12.
So someone, the “OP”, put up a network of resistors and asked for the equivalent resistance between two nodes. So the correct answer is: go to Wikipedia and look up Kirchhoff’s Laws.
But this network has a problem. The values for the separate resistors, are not industry standard (RMA) values. So it had numbers, like 40, 120, 100 Ohms, instead of 39, 120 and 100, which are good numbers.
It was quite obvious from the numbers in the problem that the examiner, fully intended the student (New Zealand white rabbit farmer), to figure it out in his head, not putting pen to paper, till it came time to write down 32 Ohms.
I did that and gave him the answer; 32 Ohms. That drew the condemnation of some twirp younger than my children, who gave some elaborate algebra, that rabbit farmers wouldn’t understand; and the consensus fan base voted me a -2 popularity score.
I once set an exam question not unlike that one, for a pre-med first year Physics class (of 200 student doctors to be).
Part (a) Write down Kirchhoff’s laws.
Part (b) Use Kirchhoff’s laws to derive the balance condition for a Wheatstone Bridge.
Part (c) Determine the current flowing in the resistor R7 in this network.
Now “this network” consisted of seven resistors connected between a total of four nodes. In two instances, there were simply two resistors in parallel between a pair of nodes. A Voltage of 10 Volts, was applied between nodes (1) and (3) and resistor R7 of 1,000 Ohms, was between nodes (2) and (4).
So if you figured out the parallel pairs (in your head), reducing the network to five resistors, and imagined the four nodes to be in a diamond configuration instead of the rectangular boxy shape I drew them in, you would immediately see that four of the resistors comprised a Wheatstone bridge (please do part (b) of the problem before doing part (c)), and a simple mental note would show that with those values the Wheatstone bridge was indeed balanced (please do part (a) of the problem before doing part (c)).
So the correct answer was zero current.
Out of the 200 students who sat the exam, not a single one figured out that the network was indeed a Wheatstone bridge, let alone that it was balanced.
About 50 students laboriously worked through the Kirchhoff’s laws equations; about 45 got the arithmetic correct to give the correct answer; the other five worked the equations properly, and goofed on the arithmetic.
150 of them got parts (a) and (b) correct. Nobody received the full 20 points for the question, because none of them seemed to understand that the theory part might be related to the simple problem.
Anyhow, this consensus science site is a gas, just reading some of the goofy questions.
But you can get your favorite string theory questions answered there.
One bright student asked if anybody had ever studied whether the universe was moving; and if so, in which direction.
So these folks are the future of science, unless some rationality eventually prevails in our education systems.
Nine (9) of my class of 200 students of excellent doctor/dentist/veterinarian candidates, got accepted by the medical school, which at the time, was at another campus, in another (smaller) town.
I elected to transfer over to Industry, from academia.

September 12, 2013 12:18 pm

Correction to: 10:39 am
… just because a tool cannot see … a longer wave length or cannot uniquely determine a long wave length (harmonic decomp), doesn’t mean we should blind ourselves to the possibility of long wave length cycles in the system response of the climate. …
In other words, always be aware of what you cannot (or did not) detect.
[Fixed. -w.]

1sky1
September 12, 2013 2:26 pm

The oft-repeated idea that a noisy 113-yr temperature record doesn’t contain sufficient information to determine the parameters of a 170-yr periodic cycle is somewhat misguided. Simple heterodyning with the available record is entirely adequate for that purpose–provided that the fundamental period is known a priori on the basis of physics. That is the proven basis of Munk’s “Super-resolution of the Tides.” Without such a priori knowledge, however, even a 170-yr record would not be adequate, since the resulting raw periodogram would vary as chi-squared with only two degrees of freedom. In the face of spectral frequency smearing by the DFT, there would be scant grounds for any reliable identification of periodic components.
The real culprit in various simplistic models of yearly average temperature series is the wholly unjustified presumption that there ARE strictly periodic components (spectral lines) to be found in the empirical record. That is the common blunder made not only here, but with Pratt’s multi-decadal “sawtooth” or Scafetta & Loehle’s two-sinusoid oscillation. In fact, what we have are stochastic oscillations with various bandwidths, which give the appearance of “trends” and “periodicities” in records too short for scientific purposes.

Matthew R Marler
September 12, 2013 3:58 pm

Wayne: ? I didn’t make any reference to “Science”, and I’m not sure what’s “cute”.
One of the links that you supplied had a series of ppt-type pages displaying graphs and such, and one of those pages was a header from a page of “Science”. The other link was a single page.

September 12, 2013 6:45 pm

I did not read the code in its entirety, so I wonder: does it consider the squares of temperature anomalies from a horizontal line or a best-fit whatever, or does it consider unsquared anomalies?
It appears to me that Fourier works with unsquared data. Using squared anomalies from a best-fit linear trend or whatever would give more weight to the 1878 and 1998 El Ninos than Fourier does, and less weight to the broader peak centered around 2004-2005. The ~1910 dip and the 1950s dip, if they are squared, could cause a shorter period determination for what appears to me as the 3rd harmonic.
I have more faith in Fourier for spectral analysis.
My eyeballs tell me that the strongest periodic component in HadCRUT3, of periods near or less than 200 years, has a period around 60-70 years.
I am expecting the warming from the early 1970s to ~2005 to be roughly repeated, and to only slightly greater extent, after about 30 years of 5-year-smoothed global temperature roughly stagnating or very slightly cooling. (Or trend of 1-year or shorter period temperature going maybe 34-35 years without warming, if the start time is sometime in 1997 when a century-class El Nino was kicking up or about to kick up.)
