Readers may recall Pat Frank's excellent essay on uncertainty in the temperature record. He emailed me about this new essay he posted on the Air Vent, suggesting I cover it at WUWT; I regret that it got lost in my firehose of daily email. Here it is now. – Anthony
Future Perfect
By Pat Frank
In my recent “New Science of Climate Change” post here on Jeff’s tAV, the cosine fits to differences among the various GISS surface air temperature anomaly data sets were intriguing. So, I decided to see what, if anything, cosines might tell us about the surface air temperature anomaly trends themselves. It turned out they have a lot to reveal.
As a qualifier, regular tAV readers know that I’ve published on the amazing neglect of the systematic instrumental error present in the surface air temperature record. It seems certain that surface air temperatures are so contaminated with systematic error – at least (+/-)0.5 C – that the global air temperature anomaly trends have no climatological meaning. I’ve done further work on this issue and, although the analysis is incomplete, so far it looks like the systematic instrumental error may be worse than we thought. But that’s for another time.
Systematic error is funny business. In surface air temperatures it’s not necessarily a constant offset but a variable error. That means it not only biases the mean of a data set, but it is also likely to have an asymmetric distribution in the data. Systematic error of that sort in a temperature series may enhance a time-wise trend or diminish it, or switch back-and-forth in some unpredictable way between these two effects. Since the systematic error arises from the effects of weather on the temperature sensors, the systematic error will vary continuously with the weather. The mean error bias will be different for every data set, and so will the distribution envelope of the systematic error.
For right now, though, I’d like to put all that aside and proceed with an analysis that accepts the air temperature context as found within the IPCC ballpark. That is, for the purposes of this analysis I’m assuming that the global average surface air temperature anomaly trends are real and meaningful.
I have the GISS and the CRU annual surface air temperature anomaly data sets out to 2010. In order to make the analyses comparable, I used the GISS start time of 1880. Figure 1 shows what happened when I fit these data with a combined cosine function plus a linear trend. Both data sets were well-fit.
The unfit residuals are shown below the main plots. A linear fit to the residuals tracked exactly along the zero line, to 1 part in ~10^5. This shows that both sets of anomaly data are very well represented by a cosine-like oscillation plus a rising linear trend. The linear parts of the fitted trends were: GISS, 0.057 C/decade and CRU, 0.058 C/decade.
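For readers who want to see the mechanics of such a fit, here is a minimal Python sketch (not the code actually used for these figures). The data arrays are placeholders; substitute the real GISS or CRU annual anomalies to reproduce the numbers.

```python
# A minimal sketch of a cosine-plus-linear fit to an annual anomaly series,
# using scipy.optimize.curve_fit. "years" and "anom" are placeholders standing
# in for a real GISS or CRU series; the synthetic data below only illustrate shape.
import numpy as np
from scipy.optimize import curve_fit

def cos_plus_line(t, a, b, amp, period, phase):
    """Linear trend plus a single cosine oscillation."""
    return a + b * t + amp * np.cos(2 * np.pi * t / period + phase)

years = np.arange(1880, 2011)
anom = (0.0058 * (years - 1880) - 0.3
        + 0.1 * np.cos(2 * np.pi * (years - 1940) / 60.0)
        + np.random.normal(0.0, 0.1, years.size))        # placeholder anomalies

# Reasonable starting guesses (~0.006 C/yr trend, ~60 yr period) help convergence.
p0 = [-0.3, 0.006, 0.1, 60.0, 0.0]
popt, pcov = curve_fit(cos_plus_line, years - 1880, anom, p0=p0)
a, b, amp, period, phase = popt

print(f"linear part: {10 * b:.3f} C/decade")
print(f"oscillation: amplitude {abs(amp):.2f} C, period {period:.0f} yr")

residuals = anom - cos_plus_line(years - 1880, *popt)     # the "unfit residuals"
```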

Removing the oscillations from the global anomaly trends should leave only the linear parts of the trends. What does that look like? Figure 2 shows this: the linear trends remaining in the GISS and CRU anomaly data sets after the cosine is subtracted away. The pure subtracted cosines are displayed below each plot.
Each of the plots showing the linearized trends also includes two straight lines. One of them is the line from the cosine plus linear fits of Figure 1. The other straight line is a linear least squares fit to the linearized trends. The linear fits had slopes of: GISS, 0.058 C/decade and CRU, 0.058 C/decade, which may as well be identical to the line slopes from the fits in Figure 1.
Figure 1 and Figure 2 show that to a high degree of certainty, and apart from year-to-year temperature variability, the entire trend in global air temperatures since 1880 can be explained by a linear trend plus an oscillation.
Figure 3 shows that the GISS cosine and the CRU cosine are very similar – probably identical given the quality of the data. They show a period of about 60 years, and an intensity of about (+/-)0.1 C. These oscillations are clearly responsible for the visually arresting slope changes in the anomaly trends after 1915 and after 1975.

The surface air temperature data sets consist of land surface temperatures plus the SSTs. It seems reasonable that the oscillation represented by the cosine stems from a net heating-cooling cycle of the world ocean.
The major oceanic cycles include the PDO, the AMO, and the Indian Ocean oscillation. Joe D’Aleo has a nice summary of these here (pdf download).
The combined PDO+AMO is a rough oscillation and has a period of about 55 years, with a 20th century maximum near 1937 and a minimum near 1972 (D’Aleo Figure 11). The combined ocean cycle appears to be close to another maximum near 2002 (although the PDO has turned south). The period and phase of the PDO+AMO correspond very well with the fitted GISS and CRU cosines, and so it appears we’ve found a net world ocean thermal signature in the air temperature anomaly data sets.
In the “New Science” post we saw a weak oscillation appear in the GISS surface anomaly difference data after 1999, when the SSTs were added in. Up to and including 1999, the GISS surface anomaly data included only the land surface temperatures.
So, I checked the GISS 1999 land surface anomaly data set to see whether it, too, could be represented by a cosine-like oscillation plus a linear trend. And so it could. The oscillation had a period of 63 years and an intensity of (+/-)0.1 C. The linear trend was 0.047 C/decade; pretty much the same oscillation, but a warming trend slower by 0.01 C/decade. So, it appears that the net world ocean thermal oscillation is teleconnected into the global land surface air temperatures.
But that’s not the analysis that interested me. Figure 2 appears to show that the entire 130 years between 1880 and 2010 has had a steady warming trend of about 0.058 C/decade. This seems to explain the almost rock-steady 20th century rise in sea level, doesn’t it?
The argument has always been that the climate of the first 40-50 years of the 20th century was unaffected by human-produced GHGs. After 1960 or so, certainly after 1975, the GHG effect kicked in, and the thermal trend of the global air temperatures began to show a human influence. So the story goes.
Isn’t that claim refuted if the late 20th century warmed at the same rate as the early 20th century? That seems to be the message of Figure 2.
But the analysis can be carried further. The early and late air temperature anomaly trends can be assessed separately, and then compared. That’s what was done for Figure 4, again using the GISS and CRU data sets. In each data set, I fit the anomalies separately over 1880-1940, and over 1960-2010. In the “New Science of Climate Change” post, I showed that these linear fits can be badly biased by the choice of starting points. The anomaly profile at 1960 is similar to the profile at 1880, and so these two starting points seem to impart no obvious bias. Visually, the slope of the anomaly temperatures after 1960 seems pretty steady, especially in the GISS data set.
Figure 4 shows the results of these separate fits, yielding the linear warming trend for the early and late parts of the last 130 years.
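The split-period comparison can be sketched the same way; the window endpoints below are the ones used in the post, and "years"/"anom" are the stand-in arrays from the earlier fitting sketch, not the real series.

```python
# Ordinary least-squares slopes over the two windows used in Figure 4 and Table 1,
# expressed in C/decade. "years" and "anom" are the placeholder arrays from the
# earlier cosine-plus-linear sketch; swap in the real GISS or CRU series to
# reproduce the actual numbers.
import numpy as np

def decadal_slope(years, anom, start, end):
    """Least-squares slope of anom vs. years over [start, end], in C per decade."""
    mask = (years >= start) & (years <= end)
    slope = np.polyfit(years[mask], anom[mask], 1)[0]
    return 10.0 * slope

early = decadal_slope(years, anom, 1880, 1940)
late = decadal_slope(years, anom, 1960, 2010)
print(f"early {early:.3f}, late {late:.3f}, difference {late - early:.3f} (C/decade)")
```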

The fit results of the early and later temperature anomaly trends are in Table 1.
Table 1: Decadal Warming Rates for the Early and Late Periods.
Data Set | C/d (1880-1940) | C/d (1960-2010) | (late minus early)
GISS | 0.056 | 0.087 | 0.031
CRU | 0.044 | 0.073 | 0.029
“C/d” is the slope of the fitted lines in Celsius per decade.
So there we have it. Both data sets show the later period warmed more quickly than the earlier period. Although the GISS and CRU rates differ by about 12%, the changes in rate (third data column) are essentially identical.
If we accept the IPCC/AGW paradigm and grant the climatological purity of the early 20th century, then the natural recovery rate from the LIA averages about 0.05 C/decade. To proceed, we have to assume that the natural rate of 0.05 C/decade was fated to remain unchanged for the entire 130 years, through to 2010.
Assuming that, then the increased slope of 0.03 C/decade after 1960 is due to the malign influences from the unnatural and impure human-produced GHGs.
Granting all that, we now have a handle on the most climatologically elusive quantity of all: the climate sensitivity to GHGs.
I still have all the atmospheric forcings for CO2, methane, and nitrous oxide that I calculated for my Skeptic paper (http://www.skeptic.com/reading_room/a-climate-of-belief/). Together, these constitute the great bulk of new GHG forcing since 1880. Total chlorofluorocarbons add another 10% or so, but that’s not a large impact, so they were ignored.
All we need do now is plot the progressive trend in recent GHG forcing against the balefully apparent human-caused 0.03 C/decade trend, all between the years 1960-2010, and the slope gives us the climate sensitivity in C/(W-m^-2). That plot is in Figure 5.
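The mechanics of that step can be sketched as below. This is only an illustration: it uses the standard simplified CO2-only forcing expression, 5.35·ln(C/C0), as a stand-in for the full CO2+CH4+N2O series actually used, and a clean 0.03 C/decade excess trend as the temperature component, so the slope it prints will not match the figure exactly.

```python
# An illustrative sketch of regressing the "excess" warming against GHG forcing.
# Assumptions (not from the post): CO2-only forcing via dF = 5.35*ln(C/C0)
# (the common simplified expression), rough Mauna Loa endpoint concentrations,
# and an idealized 0.03 C/decade excess trend. The real analysis used the author's
# own CO2 + CH4 + N2O forcing series, so the slope here will differ from 0.090.
import numpy as np

years = np.arange(1960, 2011)
co2 = np.linspace(317.0, 390.0, years.size)      # ppmv, approximate 1960/2010 values
dF = 5.35 * np.log(co2 / co2[0])                 # forcing change since 1960, W/m^2
dT = 0.003 * (years - 1960)                      # 0.03 C/decade excess warming, in C

sensitivity = np.polyfit(dF, dT, 1)[0]           # slope of dT against dF
print(f"apparent sensitivity: {sensitivity:.3f} C per W/m^2")
```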

There’s a surprise: the trend line shows a curved dependence. More on that later. The red line in Figure 5 is a linear fit to the blue line. It yielded a slope of 0.090 C/W-m^-2.
So there it is: every Watt per meter squared of additional GHG forcing, during the last 50 years, has increased the global average surface air temperature by 0.09 C.
Spread the word: the Earth climate sensitivity is 0.090 C/W-m^-2.
The IPCC says that the increased forcing due to doubled CO2, the bug-bear of climate alarm, is about 3.8 W/m^2. The consequent increase in global average air temperature is mid-ranged at 3 Celsius. So, the IPCC officially says that Earth’s climate sensitivity is 0.79 C/W-m^-2. That’s 8.8x larger than what Earth says it is.
Our empirical sensitivity says doubled CO2 alone will cause an average air temperature rise of 0.34 C above any natural increase. This value is 4.4x to 13x smaller than the range projected by the IPCC.
The total increased forcing due to doubled CO2, plus projected increases in atmospheric methane and nitrous oxide, is 5 W/m^2. The linear model says this will lead to a projected average air temperature rise of 0.45 C. This is about the rise in temperature we’ve experienced since 1980. Is that scary, or what?
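For reference, the arithmetic behind the last few paragraphs, using only the numbers quoted above:

```latex
% Arithmetic behind the comparisons above, using only the numbers quoted in the post.
\begin{align*}
\lambda_{\mathrm{IPCC}} &= \frac{3\,\mathrm{C}}{3.8\,\mathrm{W\,m^{-2}}} \approx 0.79\ \mathrm{C/(W\,m^{-2})},
  & \tfrac{0.79}{0.090} &\approx 8.8\times,\\
\Delta T_{2\times\mathrm{CO_2}} &= 0.090 \times 3.8 \approx 0.34\ \mathrm{C},
  & \Delta T_{\mathrm{CO_2 + CH_4 + N_2O}} &= 0.090 \times 5 \approx 0.45\ \mathrm{C}.
\end{align*}
```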
But back to the negative curvature of the sensitivity plot. The change in air temperature is supposed to be linear with forcing. But here we see that for 50 years average air temperature has been negatively curved with forcing. Something is happening. In proper AGW climatology fashion, I could suppose that the data are wrong because models are always right.
But in my own scientific practice (and the practice of everyone else I know), data are the measure of theory and not vice versa. Kevin, Michael, and Gavin may criticize me for that because climatology is different and unique and Ravetzian, but I’ll go with the primary standard of science anyway.
So, what does negative curvature mean? If it’s real, that is. It means that the sensitivity of climate to GHG forcing has been decreasing all the while the GHG forcing itself has been increasing.
If I didn’t know better, I’d say the data are telling us that something in the climate system is adjusting to the GHG forcing. It’s imposing a progressively negative feedback.
It couldn’t be the negative feedback of Roy Spencer’s clouds, could it?
The climate, in other words, is showing stability in the face of a perturbation. As the perturbation is increasing, the negative compensation by the climate is increasing as well.
Let’s suppose the last 50 years are an indication of how the climate system will respond to the next 100 years of a continued increase in GHG forcing.
The inset of Figure 5 shows how the climate might respond to a steadily increased GHG forcing right up to the year 2100. That’s up through a quadrupling of atmospheric CO2.
The red line indicates the projected increase in temperature if the 0.03 C/decade linear fit model was true. Alternatively, the blue line shows how global average air temperature might respond, if the empirical negative feedback response is true.
If the climate continues to respond as it has already done, by 2100 the increase in temperature will be fully 50% less than it would be if the linear response model was true. And the linear response model produces a much smaller temperature increase than the IPCC climate model, umm, model.
Semi-empirical linear model: 0.84 C warmer by 2100.
Fully empirical negative feedback model: 0.42 C warmer by 2100.
And that’s with 10 W/m^2 of additional GHG forcing and an atmospheric CO2 level of 1274 ppmv. By way of comparison, the IPCC A2 model assumed a year 2100 atmosphere with 1250 ppmv of CO2 and a global average air temperature increase of 3.6 C.
So let’s add that: Official IPCC A2 model: 3.6 C warmer by 2100.
The semi-empirical linear model alone, empirically grounded in 50 years of actual data, says the temperature will have increased only 0.23 of the IPCC’s A2 model prediction of 3.6 C.
And if we go with the empirical negative feedback inference provided by Earth, the year 2100 temperature increase will be 0.12 of the IPCC projection.
So, there’s a nice lesson for the IPCC and the AGW modelers, about GCM projections: they are contradicted by the data of Earth itself. Interestingly enough, Earth contradicted the same crew, big time, at the hands of Demetris Koutsoyiannis, too.
So, is all of this physically real? Let’s put it this way: it’s all empirically grounded in real temperature numbers. That, at least, makes this analysis far more physically real than any paleo-temperature reconstruction that attaches a temperature label to tree ring metrics or to principal components.
Clearly, though, since unknown amounts of systematic error are attached to global temperatures, we don’t know if any of this is physically real.
But we can say this to anyone who assigns physical reality to the global average surface air temperature record, or who insists that the anomaly record is climatologically meaningful: The surface air temperatures themselves say that Earth’s climate has a very low sensitivity to GHG forcing.
The major assumption used for this analysis, that the climate of the early part of the 20th century was free of human influence, is common throughout the AGW literature. The second assumption, that the natural underlying warming trend continued through the second half of the last 130 years, is also reasonable given the typical views expressed about a constant natural variability. The rest of the analysis automatically follows.
In the context of the IPCC’s very own ballpark, Earth itself is telling us there’s nothing to worry about in doubled, or even quadrupled, atmospheric CO2.
This is the sort of REAL analysis I love to see.
propa science !!!
well done, mate !!
As discussed on the thread of tAV cross post, this post would have been better without the reference to the meaningless PDO+AMO dataset. That discussion starts with this comment…
http://noconsensus.wordpress.com/2011/05/24/future-perfect/#comment-50454
…which reads:
Pat Frank: The AMO and PDO data cannot be combined as you’ve presented. The PDO and AMO are not similar datasets and cannot be added or averaged. The AMO is created by detrending North Atlantic SST anomalies. On the other hand, the PDO is the product of a principal component analysis of detrended North Pacific SST anomalies, north of 20N. Basically, the PDO represents the pattern of the North Pacific SST anomalies that are similar to those created by El Niño and La Niña events. If one were to detrend the SST anomalies of the North Pacific, north of 20N, and compare it to the PDO, the two curves (smoothed with a 121-month filter) appear to be inversely related:
http://i52.tinypic.com/fvi92b.jpg
Detrended North Pacific SST anomalies for the area north of 20N run in and out of synch with the AMO:
http://i56.tinypic.com/t9zhua.jpg
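A minimal sketch of the detrending step described in the comment above, on synthetic stand-in data (not real SST anomalies):

```python
# A minimal sketch of the construction Bob Tisdale describes: an AMO-style index is
# (roughly) the North Atlantic SST anomaly with its linear trend removed. The series
# below is synthetic and stands in for real monthly SST anomalies.
import numpy as np

months = np.arange(1900, 2011, 1.0 / 12)
natl_sst_anom = (0.005 * (months - 1900)
                 + 0.15 * np.sin(2 * np.pi * months / 65.0)
                 + np.random.normal(0.0, 0.1, months.size))   # placeholder series

trend = np.polyval(np.polyfit(months, natl_sst_anom, 1), months)
amo_like = natl_sst_anom - trend     # detrended basin-average anomaly, AMO-style

# The PDO, by contrast, is the leading principal component of the spatial *pattern*
# of detrended North Pacific (north of 20N) SST anomalies, not a basin average,
# which is the commenter's point about why the two indices aren't directly addable.
```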
Andy G55 says:
June 2, 2011 at 2:26 am
Hear, hear!
And not a fudge factor in sight.
Very well done.
Well done.
According to some physicists, Global Average Temperature is a meaningless concept, so it is not a valid proxy for climate change. I tend to agree.
If more heat is fed into a system then there is more heat loss, at least that is what the study of thermodynamics tells us. As the surface is heated, the air above gains heat through conduction and is forced to rise by convection removing that heat to higher atmospheric levels forming clouds if enough water vapour is present. This will reduce solar radiation to the surface.
So Dr. Roy Spencer could be correct.
I agree. This is real science. But I have no idea what it means.
Donald Brown would prefer that you believe the IPCC theory rather than your empirical scientific observations. Ethics, ya know. Even if the IPCC is almost an order of magnitude higher in its prediction than what calculations based on observations reveal.
But 0.03C/decade?? Isn’t that in the noise?
Very well done.
Outstanding insight into how to tease something out of the temperature record. This is a real breakthrough precisely because it depends on a few known assumptions.
A couple of questions follow from this work:
1. Is it possible that the first half/second half difference is related to the urban heat island and other site-trend effects that Anthony has been documenting? In different terms, is it possible that GHG emissions and UHI are both coincident symptoms of a third variable, industrial growth and urbanization? This might explain why some of the other signals of GHG warming are missing.
2. The lower atmosphere must have a large heat storage capacity due to the amount of energy involved in vaporizing water at the ocean surface. When 1 kg of water is evaporated at the surface of the ocean, it absorbs 1000 times the energy that it takes to raise 1 kg of water/water vapor by a degree. Raising the surface temperature of the oceans results in more evaporation which prevents or delays temperature increases. Same thing happens in reverse during cooling. This effect is independent of the cloud effects that Dr Spencer has been developing, although the higher humidity will enhance the opportunity for cloud formation.
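A quick check of the ratio in point 2, using standard textbook values (an editorial check, not the commenter's numbers): the factor is about 540 relative to liquid water and about 1200 relative to water vapour, so "1000 times" is the right order of magnitude.

```latex
% Rough check of the ratio in point 2, using standard values (editorial check, not from the comment):
% latent heat of vaporization L_v ~ 2260 kJ/kg; c_p(liquid water) ~ 4.18 kJ/(kg K);
% c_p(water vapour) ~ 1.9 kJ/(kg K).
\[
\frac{L_v}{c_{p,\mathrm{liquid}}} = \frac{2260}{4.18} \approx 540,
\qquad
\frac{L_v}{c_{p,\mathrm{vapour}}} = \frac{2260}{1.9} \approx 1190
\]
```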
This analysis tells me that if the Earth’s warming out of the Little Ice Age stops and reverses, the climate sensitivity to manmade greenhouse gases won’t save us, given that the assumptions upon which the cosine fit relies are true.
The key metric, then, is the sea levels.
Downscope.
Excellent! Nice job, Mr Frank.
really indepth research done…bravo
please visit http://schizoidlawi.wordpress.com
would appreciate comments 🙂
Pat Frank:
This is an interesting and informative analysis despite the caveats that you rightly provide. Thank you.
Your analysis provides several important findings. I note that one of these is an indication of climate sensitivity of 0.090 C/W-m^-2 (which corresponds to a temperature increase of 0.37 Celsius for a doubling of CO2).
This result is similar to the climate sensitivity that Idso determined from his 8 ‘natural experiments’. He reported:
“Best estimate 0.10 C/W/m2. This corresponds to a temperature increase of 0.37 Celsius for a doubling of CO2.”
His findings are summarised at, and his paper reporting the ‘natural experiments’ is linked from,
http://members.shaw.ca/sch25/FOS/Idso_CO2_induced_Global_Warming.htm
Richard
Oops!
Of course I intended to type
(which corresponds to a temperature increase of 0.34 Celsius for a doubling of CO2).
Sorry.
Richard
“Isn’t that claim refuted if the late 20th century warmed at the same rate as the early 20th century? That seems to be the message of Figure 2.”
/////////////////////////////////////////////////
I seem to recall that Phil Jones admitted that there was no statistical difference in the rate of warming in the first half of the 20th century and that in the second part (i.e., no difference in rate of warming between, say, 1910-1940 and 1970-1995). This admission has always been a killer, since CO2 could not be responsible for the warming in the first part of the 20th century, but miraculously and quite inexplicably whatever caused that warming stopped and CO2 suddenly kicked in post-1970s.
This is a big hole in the AGW theory until such time as they can explain/identify what caused the warming in the first half of the 20th century and explain why that warming influence came to a halt and is not operative today.
The fact that the rate of warming in the first half of the 20th century is the same as in the second half, coupled with the stalling/flattening off in warming post-1998, is a big hole in the CAGW theory. Clearly present-day warming is not unprecedented and there is no evidence of runaway disaster.
Interesting post, thanks.
Sound, common sense, well thought through, & logically applied, so it won’t be published in the MSM then!
Warmistas still haven’t, and to my (relatively limited) knowledge cannot, answer the underlying question: when the atmosphere contained 20 times the amount of CO2 half a billion years ago that it does today, and there was no known climate catastrophe, why would one happen now, with a mere fraction of that amount of CO2 in the atmosphere? What mechanism has changed? Yes, there were extinction-level events, but none as yet linked to (falling levels of) CO2!
Excellent piece. I wondered if the recent ‘increased’ warming rate incorporated UHI, as JohnL mentions:
JohnL says: June 2, 2011 at 3:37 am
[…]
A couple of questions follow from this work:
1. Is it possible that the first half/second half difference is related to the urban heat island and other site-trend effects that Anthony has been documenting?
If there is any UHI effect, then the CO2 sensitivity is even less than indicated.
The major assumption used for this analysis, that the climate of the early part of the 20th century was free of human influence, is common throughout the AGW literature.
Er, no it is not. See figure 2.23 in IPCC AR4. Long lived Greenhouse Gas (LLGHG) forcing contributes about 0.35 W/m2 pre-1950. The rest of the warming (about 50%) was likely due to the increase in TSI. See IPCC ‘Understanding and Attributing Climate Change ‘
“It is very unlikely that climate changes of at least the seven centuries prior to 1950 were due to variability generated within the climate system alone. A significant fraction of the reconstructed Northern Hemisphere inter-decadal temperature variability over those centuries is very likely attributable to volcanic eruptions and changes in solar irradiance, and it is likely that anthropogenic forcing contributed to the early 20th-century warming evident in these records.” http://www.ipcc.ch/publications_and_data/ar4/wg1/en/spmsspm-understanding-and.html
Also, you cannot calculate sensitivity by simply dividing instantaneous delta-t by forcing as this ignores the considerable thermal inertia of the Earth system, especially the Oceans. You need to estimate the energy imbalance (e.g. from OHC numbers), but this gives you a value of around 3C for 2xCO2 ……
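The adjustment described here is usually written in the standard energy-balance form below (a textbook expression, not something taken from the post):

```latex
% Standard energy-balance form of the point above (a textbook expression, not from the post):
% the equilibrium sensitivity divides the warming by the forcing *minus* the heat-uptake
% rate Q (mostly into the ocean), so dividing dT by dF alone understates the equilibrium response.
\[
S_{\mathrm{eq}} \;\approx\; \frac{\Delta T}{\Delta F - \Delta Q}\, F_{2\times\mathrm{CO_2}},
\qquad \Delta Q = \text{planetary heat uptake rate, in } \mathrm{W\,m^{-2}}
\]
```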
“the entire trend in global air temperatures since 1880 can be explained by a linear trend plus an oscillation.”
Your linear trend plus oscillation explains absolutely nothing. You have described the curve, not explained it. Explanations of the global air temperature record require physics, not numerology.
If by 2100 the global temperature has only risen by 0.42c, the faithful will still be claiming a second coming of warmth is due any moment.
Shaun D says:
June 2, 2011 at 2:52 am
I agree. This is real science. But I have no idea what it means.
It means, dare I say it, It’s not as bad as we thought!
Always good to see empirical studies on climate sensitivity – but they are fraught with difficulty. When IPCC first did their estimates, 1 watt/square metre of radiative forcing from CO2 and other GHGs (net of sulphurous cooling) looked to have produced 0.8 degrees C of global warming. Later, Keith Shine at Reading, dropped that to about 0.4 in a well argued paper (he was also a member of IPCC)….but all then assuming ALL the warming seen was AGW. In my own review in ‘Chill’ I argued from the data that about 80% appeared natural and that the sulphurous cooling was not global – the global chill (from dimming) was cloud, as was most of the warming due to a 4% global reduction in cloud cover from 1983-2001 (International Satellite Cloud Climatology Project data) – which AGWs assume was a positive feedback to CO2 warming. John Christy is on record with a 75% natural estimate (to the BBC).
So – Frank’s work is in line with these data – a 75% reduction on Shine’s 0.4 gives 0.1 and a figure for doubling of less than half a degree Celsius.
However….all of these figures assume there is no time lag for surface air temperatures due to ocean storage and release…..personally I don’t think there is as much as often implied but it needs to be considered. We are left with a question: what is the mechanism for the long-term recovery from the LIA? It is a very steady trend underneath the oscillations.
Sorry, this is the kind of “reductio ad absurdum” which gave the alarmists a bad name. You have a total of two periods, you provide absolutely no analysis of the significance of the supposed 55-year cycle and don’t, e.g., show the correlation that could be obtained at other frequencies.
The size of this “signal” is 0.1C when the size of the rest of the variation is about the same, which gives you a signal-to-noise level which is abysmally small.
As I’ve said before, with a global temperature signal dominated by long-term noise, it is highly likely that you will mistake natural variation for some kind of cycle or trend. Therefore you should be very robust in your analysis, and I would say the minimum is a statement of significance: either a statement of the statistical significance of the difference in amplitude of this frequency compared to the general level of similar frequencies, or a statement of signal to noise.
The link in the post to Demetris Koutsoyiannis is returning a 404.
Another example of good and solid science. Great to read this, which gives the lie to the inbuilt bias of the IPCC, whose mission is to scare us all into submission to deindustrialisation and hugely more expensive energy with their particular brand of nonscience.
“It seems certain that surface air temperatures are so contaminated with systematic error – at least (+/-)0.5 C — that the global air temperature anomaly trends have no climatological meaning.”
This is exactly what I have been saying to myself during these years of man-made global warming indoctrination: How can one say that the planet has warmed 0.6 C during the last 100 years (or thereabouts) by looking at thermometer readings taken 100 years ago and all between then and now? How accurate were these? And how accurate are the present readings? How can one be so sure and accurate, especially after the great weather station cull?
Two more points:
1. Are thermometers read and recorded continuously, or at intervals, or daily and nightly (max and min)?
2. Wouldn’t the integration of temperature over time into actual total planetary energy be a better measure of the thermodynamic state of the planet, rather than just averaged temperature statistics? I believe that we should measure total energy, not averaged temperature statistics. Most probably that was what Kevin Trenberth had in mind when he expected the oceans to warm up, and therefore to dip the thermometers in the oceans, only to declare later on that “it is a travesty that we cannot find the heat” (in the oceans).
Ronaldo says:
And not a fudge factor in sight
Yep, first thing I look at: how many fudge factors.
Now I like fudge, but it really doesn’t have any place in a proper analysis, and the AGW hypothesis is absolutely riddled with them.
Very persuasive, and I’m not on your side of the argument. So based on your graphs, we’ll be getting 25 – 30 years more cooling. Let’s wait and see.
The observation and conclusion seem to be correct. However, with the current extent of urbanization and deforestation, many of the natural carbon sinks like rain forests are gone.
This is different from the past! The vegetation loves CO2 and will grow faster using the extra carbon food.
We should be planting trees in a large scale again!
The relationship between theoretical and real CO2 forcing of global temperature is suggested by this palaeo data (thanks to Bill Illis):
http://img801.imageshack.us/img801/289/logwarmingpaleoclimate.png
Just a thought,
If you average the orbital periods of the 9 planets you get 60 years.
Phil Clarke says:
June 2, 2011 at 4:19 am
The rest of the warming (about 50%) was likely due to the increase in TSI.
There is no such corresponding increase in TSI. There is a solar cycle in TSI. Where is that in the temperature data?
All I got to say is …… PUBLISH IT … in a peer reviewed journal before the next IPCC toilet paper roll comes out, such that this analysis can be included!
At that point, insist that this work is included for review .. and if not … let the media campaign begin to further prove the IPCC has no interest in science.
The analysis is interesting. I suspect the AGW crowd would argue that the analysis is incomplete because it does not account for the effects of aerosols and global dimming. If an increase in aerosols has muted the warming effect of greenhouse gases, then the climate sensitivity that you calculated would be underestimated for the later part of the last century.
I am aware that estimating the dimming effect of aerosols is fraught with difficulty. However, I think the AGW crowd will dismiss your analysis because it is not included in your simple model. I would be curious how you would respond to this criticism.
Kepler came up with empirical laws for the motions of the planets, but it wasn’t until Newton came along with a theory for planetary orbits that Kepler’s laws became truly “science”. There are any number of other empirical fits that ended up in the dustbin of history because they didn’t hold up over longer periods.
I have several concerns:
1) the data is a small signal with a lot of noise.
2) It is admitted that even much of the signal that is there could be an artifact of instrument problems.
3) The fit does not work well before the period over which the fit was done. CRU data exists for a few decades before 1880. According to the model, the data should be dropping steeply before 1880, when in fact it is nearly steady.
4) The years 1940-1960 are mysteriously left out of the analysis for the slopes. If those data points had not been thrown out, then the trend for the first half of the century would be noticeably less and the trend for the 2nd half noticeably greater. This would create major changes in the two major conclusions of the analysis (“Semi-empirical linear model: 0.84 C warmer by 2100; Fully empirical negative feedback model: 0.42 C warmer by 2100.”)
5) There are no a priori calculations to support the conclusions. I have done a fair number of empirical fits to various data. I have found a fair number of trends that seem to be real but which later turn out to be just coincidence. Without theoretical backing, I have little confidence in in-depth analysis of highly noisy signals.
How much of the trends extracted from the GISS data are actually GISS adjustments?
Not quite sure how to ask this, but I’ll give it a shot.
What is the noisiest (i.e. highest time-increment resolution) data set available?
I’d like to see an analysis of average daily global high temps, low temps, and average.
My thinking is that GHG effects (sans feedbacks) should be most visible in the lows (a log fit); the average trend should show the GHG effect with feedbacks and ENSO, and the highs should show feedbacks and ENSO only.
“So there we have it. Both data sets show the later period warmed more quickly than the earlier period.”
Nonrandom patterns in the residuals panel of Figure 1 make it crystal clear that, while the simple cosine decomposition might be a useful introductory level starting point for general public discussion of multidecadal fluctuations, superior methods are needed to further the discussion.
Thanks for the link to:
http://tamino.wordpress.com/2011/03/02/8000-years-of-amo/
(Tamino illustrating an impressive capacity for [at least occasional] balance …but certainly not being 100% perceptive about stat model assumptions.)
Request of those conducting armchair attacks on Pat Frank, who has volunteered time to the community:
Please present your alternative analyses. The community can then compare your approaches with Pat’s, side-by-side, on a level playing field.
Fantastic post Mr Frank, very plausible and difficult to refute. However, eyeballing the GISS data after de-trending I would say that the increased rate of warming after 1950 was due to sudden unexpected cooling in the years 1945 to 1950, after which the global climate played catch-up with itself until the rate of change flattened out, just as you have observed. If this part of the analysis is correct then the data would suggest that there is no real warming trend at all attributable to GHGs (but there may be a tendency towards global cooling perhaps due to setting off large numbers of explosive devices in a relatively short time period). The CRU data does not show this discontinuity but this is likely due to their tendency to try and massage out such sudden changes in their data handling algorithms (which in itself shows the danger of producing data handling algorithms aimed at finding the very data you were hoping to find – they have removed all the discontinuities that contributed to a trend to leave themselves with a trend without the discontinuities).
@Phil Clarke says:
June 2, 2011 at 4:19 am
“It is very unlikely that climate changes of at least the seven centuries prior to 1950 were due to variability generated within the climate system alone. A significant fraction of the reconstructed Northern Hemisphere inter-decadal temperature variability over those centuries is very likely attributable to volcanic eruptions and changes in solar irradiance, and it is likely that anthropogenic forcing contributed to the early 20th-century warming evident in these records.” http://www.ipcc.ch/publications_and_data/ar4/wg1/en/spmsspm-understanding-and.html
==============================================
lmao…..
Phil, I agree. Pat’s numbers depend upon the assumption that the early warming was because of natural forcings or part of a natural cycle. I expected some detractors to bring that up. But, if you’re countering with some other assumptions based on nothing but pure speculation pulled from some posterior, you might as well have not stated anything. I’m sure Mr. Frank is enjoying a giggle or two at this.
lol, seven centuries. Let me guess……. from the treeometers. Stop!!! I’m at work and my co-workers are looking at me funny for my seemingly spontaneous outburst of hysterical laughter!!!! lmao!
TOWARDS A UNIFIED VIEW OF SO-CALLED “60 YEAR CYCLES”?…
Wyatt, M.G.; Kravtsov, S.; & Tsonis, A.A. (2011). Atlantic Multidecadal Oscillation and Northern Hemisphere’s climate variability. Climate Dynamics. doi: 10.1007/s00382-011-1071-8.
Since (to my knowledge) there’s not yet a free version, see the conference poster and the guest post at Dr. R.A. Pielke Senior’s blog for the general idea:
a) Wyatt, M.G.; Kravtsov, S.; & Tsonis, A.A. (2011a). Poster: Atlantic Multidecadal Oscillation and Northern Hemisphere’s climate variability.
https://pantherfile.uwm.edu/kravtsov/www/downloads/WKT_poster.pdf
b) Wyatt, M.G.; Kravtsov, S.; & Tsonis, A.A. (2011b). Blog: Atlantic Multidecadal Oscillation and Northern Hemisphere’s climate variability.
http://pielkeclimatesci.wordpress.com/?s=Atlantic+Multidecadal+Oscillation+and+Northern+Hemisphere+Wyatt
Paul Vaughan says:
June 2, 2011 at 7:03 am
TOWARDS A UNIFIED VIEW OF SO-CALLED “60 YEAR CYCLES”?…
Without physics, this is as much numerology as Frank’s. Dressing it up with ‘spatio-temporal’ mumbo-jumbo does not advance anything.
Sorry, the word daily should not be in there.
I’m thinking the trend in high and low points. Eg., start at the right of the data plot, move to the left and connect to the next lower temp for lows.
For highs, start at the left and move right, selecting the next higher point.
For average, the 15 year moving average (15 years ~ the amount of data needed to observe a trend).
Scottish Sceptic sez “The size of this “signal” is 0.1C when the size of the rest of the variation is about the same, which gives you a signal-to-noise level which is abysmally small.”
I think that is an unstated point we should infer if we are thinking correctly about the UHI effect, instrument drift, GISS fiddling and so on. While the result is admirably demonstrative of a linear trend on a cosine (non-random) walk, your point is well taken: there is no statistically significant change in the ‘rate of change’ in the measured temperature over a century of records, let alone that it might be attributed to an elevation in CO2 caused by anthropogenic emissions.
I think other contributors above have raised the essence of my comment: given the measurements we have taken in the way they have been, processed in the manner they endured, there is absolutely no detectable AGW signal. It may be real, and it may be there, but it is not detectable because it is lost down in the mud, possibly inseparable from it with the tools we have. The figure of 0.03 deg C is probably +/- a much larger value as a result, were one to consider the instrument errors of about 0.5 deg C. 0.03 +/- 0.30??
Mr Frank, how about running your de-trended residuals through a discrete FFT like this one:-
http://www.random-science-tools.com/maths/FFT.htm
– to see if you can detect some higher frequency oscillations too. Looks like there is a strong signal with a period of about 3 years, superimposed on a lower frequency signal
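For anyone who wants to try that suggestion without the web tool, a short numpy sketch of the same check (the residual series below is a random stand-in; substitute the actual fit residuals):

```python
# A sketch of the suggested FFT check, using numpy's real FFT on annual, evenly
# spaced residuals. The "residuals" array below is a random stand-in; substitute
# the actual residuals from the cosine-plus-linear fit.
import numpy as np

residuals = np.random.normal(0.0, 0.1, 131)      # stand-in for 1880-2010 residuals
resid = residuals - residuals.mean()

power = np.abs(np.fft.rfft(resid)) ** 2
freqs = np.fft.rfftfreq(resid.size, d=1.0)       # cycles per year for annual data

# Skip the zero-frequency bin and list the strongest few periods.
top = np.argsort(power[1:])[::-1][:5] + 1
for k in top:
    print(f"period ~{1.0 / freqs[k]:.1f} yr, relative power {power[k]:.3g}")
```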
@Leif Svalgaard (June 2, 2011 at 7:21 am)
Rather than attacking ad infinitum, please present your ‘physical’ explanation.
The phases of the cosines in fig 2 do not match the phases in fig 3. In Fig 2, the CRU peak is before 1940 while in fig 3 it is after. Which figure is correct?
James/Phil, it’s nowhere near 3 sigma, but there is a correlation of volcanic activity and 200yr solar cycles.
Leif:
White et al found that solar cycles cause a fluctuation in sea surface temperatures
http://www.agu.org/pubs/crossref/1997/96JC03549.shtml
and Camp & Tung 2007 found a variance of about 0.2K in global temps attributable to the solar cycle:-
“By projecting surface temperature data (1959-2004) onto the spatial structure obtained objectively from the composite mean difference between solar max and solar min years, we obtain a global warming signal of almost 0.2 °K attributable to the 11-year solar cycle. The statistical significance of such a globally coherent solar response at the surface is established for the first time.”
http://www.amath.washington.edu/research/articles/Tung/journals/composite%20mean2.pdf
And the rise in TSI 1900-1950 and subsequent flatline is shown in various places, notably Lean et al 2008
“How natural and anthropogenic influences alter global and regional surface temperatures: 1889 to 2006”
http://pubs.giss.nasa.gov/docs/2008/2008_Lean_Rind.pdf
Paul Vaughan says:
June 2, 2011 at 7:47 am
Rather than attacking ad infinitum, please present your ‘physical’ explanation.
Pointing out that something is numerology can hardly be an ‘attack’. It has been clear for quite a while that there is a noisy 60-yr signal. Only more and better observational data can improve on this. I have no idea what the ‘physical’ explanation is [I would guess some natural period of the oceans], and don’t need one in order to see the numerology in your ‘unified view’.
The difference in slope between the periods 1880-1940 (60 years) and 1960-2010 (50 years) doesn’t reflect AGW warming; it’s simply an artifact of the start and end times w.r.t. the 60-year sinusoidal period. 1880-1940 started and ended at the same phase of the oscillating component, both dates just before the peak – therefore the early-period linear fit is valid. However, for the late period, 1960 was approaching the sinusoidal trough while 2010 was just after the peak – the late-period linear fit is therefore completely non-valid. Conclusions regarding any difference in the linear slope between early and late time periods are therefore completely faulty.
It would be much better to adjust the 2nd period to start at 1940 and end at 2000, a valid 60-year period starting and ending at the same phase angle. Of course, the cosine/linear fit that leaves no obvious low-frequency trend in the residuals already indicates that there won’t be any significant difference.
A do-over is needed here.
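One way to check this phase argument numerically: compute the least-squares slope of a pure 60-year cosine (amplitude 0.1 C, no underlying trend at all) over each window and see how much spurious slope it contributes. A short sketch, assuming a peak near 1940 as in the fits described in the post:

```python
# How much spurious slope does a pure 60-year cosine (amplitude 0.1 C, no underlying
# trend) contribute over each window? The peak year, amplitude and period are taken
# from the fits described in the post; everything else is just illustration.
import numpy as np

def window_slope(start, end, period=60.0, amp=0.1, peak_year=1940.0):
    yrs = np.arange(start, end + 1)
    osc = amp * np.cos(2 * np.pi * (yrs - peak_year) / period)
    return 10.0 * np.polyfit(yrs, osc, 1)[0]      # C/decade

for a, b in [(1880, 1940), (1960, 2010), (1940, 2000)]:
    print(f"{a}-{b}: spurious slope {window_slope(a, b):+.3f} C/decade")
```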
Phil Clarke says:
June 2, 2011 at 8:07 am
and Camp & Tung 2007 found a variance of about 0.2K in global temps attributable to the solar cycle
which is the amplitude of Frank’s 60-yr cycle, but does not show up in the residuals he plots. There should be an expected 0.07 degree solar cycle effect.
And the rise in TSI 1900-1950 and subsequent flatline is shown in various places
There is no evidence for a rise in the background TSI. The flatlining starts when we actually begin measuring TSI.
From eyeballing the residuals in fig 1, it looks like the next lower Fourier harmonic, with a period of ~120 years, would be of similar significance. It looks at least as significant as the discontinuous linear fit in fig 4 and would probably reduce the trend difference even further.
Fred Haynie has presented a very similar view of the climate cycles on his site. Worth a look. Search on his name.
I’ve just written an essay on how global warming happened too.
http://tallbloke.wordpress.com/2011/06/02/what-caused-global-warming-in-the-late-c20th/
Phil Clarke:
You say, “you cannot calculate sensitivity by simply dividing instantaneous delta-t by forcing as this ignores the considerable thermal inertia of the Earth system, especially the Oceans.”
In linear systems analysis, the response of a system with “inertia” to a ramp input does reach its own matching ramping slope after about one time constant of the system. Since Pat starts this part of his analysis in 1960, and the “ramp” of CO2 forcing started at least 100 years earlier, he is not ignoring ocean thermal inertia (more properly “capacitance”). And his 50-year period is hardly “instantaneous”.
Now, under this analysis, if we were able to suddenly freeze the forcing at the current level, warming would continue for a while (how long is hotly debated). But the “slope analysis” Pat does is fundamentally valid (it’s done in engineering systems analysis all the time).
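A toy illustration of that ramp argument (a one-box lag model with made-up numbers, not a claim about the real ocean): after a few time constants the lagged response settles to the same slope as the equilibrium response, just offset below it.

```python
# Toy one-box illustration of the ramp argument above. The time constant, sensitivity
# and forcing ramp rate are made-up illustrative numbers, not estimates for the real
# climate system.
import numpy as np

tau = 10.0                    # assumed time constant, years
lam = 0.09                    # assumed sensitivity, C per (W/m^2)
dt = 0.1
t = np.arange(0.0, 100.0, dt)
forcing = 0.03 * t            # ramp: 0.03 W/m^2 per year

T = np.zeros_like(t)
for i in range(1, t.size):
    # dT/dt = (lam * F - T) / tau  (simple first-order lag toward equilibrium)
    T[i] = T[i - 1] + dt * (lam * forcing[i - 1] - T[i - 1]) / tau

equil_slope = lam * 0.03                           # slope of equilibrium response, C/yr
late = t > 5 * tau
late_slope = np.polyfit(t[late], T[late], 1)[0]    # simulated slope well past spin-up
print(f"equilibrium slope {equil_slope:.4f} C/yr, simulated late slope {late_slope:.4f} C/yr")
```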
It all boils down to “we are still recovering from the little ice age and will continue to recover from the little ice age through to 2100”
I don’t buy it.
I am pretty sure that the Little Ice Age was at least in part caused by volcanic activity, which has since subsided quite a bit, as well as several solar minima (the Maunder and others)
It is the forcings that matter, and the recovery from the little ice age doesn’t count as a forcing.
Probably a dumb question from a geezer whose last trig class was in Pomona High School in 1957: Is there a reason the periodicity is expressed as a cosine, rather than a sine?
Leif and Phil Clarke (and others) criticize, probably correctly, for reasons physicists of various sorts understand. The question for all of us, though, is whether the criticisms lead to an incorrect conclusion; the other question for the warmists is whether, were the alleged mistakes corrected, there would be a significant difference in how the conclusion looked. Skeptics need the answer; the warmists need the appearance as well (shades of the Hockey Stick, in reverse).
Frank’s discussion is very straightforward and easy to follow regardless of errors of commission or omission. Could Leif’s and Phil’s comments be added in to the analysis? Could Leif/Phil not revise Frank’s post and show us how the revision affects the answer?
The negative comments seem like shooting ourselves (the skeptics) in the foot. Frank’s post appears to show the same results as other, more lettered posters have done. He has certainly done what many of us technically educated but not climatologically vetted citizen-scientists (love that term) are doing. Looking at the data and applying basic principles and reasoning to find how close the IPCC supermen come to our technical common sense. Not close, of course.
In a previous WUWT post (I think it was) about how the IPCC’s various computations and adjustments can be considered a “black box”, it was said you can simulate the relationships inside black boxes by looking at how the data-in compares to the data-out. The relationship, as demonstrated, is remarkably simple. As such, one doesn’t need, perhaps, all the detailed analyses that Leif and Phil suggest. Most cancel out or contribute only within the error band.
I’ve done my own such analyses as Frank’s, and find the IPCC black box does not reflect simple considerations. If Leif and Phil are correct that we have to discount Frank’s work because he hasn’t got all the pieces right, and that that terminally weakens his conclusions, then all of us outside the authorities – the IPCC, Gavin and Jim – might as well roll over. But I think Frank’s approach has great value. He is not saying that the IPCC is 10% out. He is saying they are 80% or more out. So let Frank be 50% wrong, and Frank is still demonstrating that the IPCC is 40% out. That is still a deathblow as far as the non-natural theories go. No CAGW attributable to CO2. And are Leif-and-Phil’s criticisms even at the 50% error level?
The two nicely demonstrated patterns here are that a 60-year cosine function lies atop a linear function. The IPCC model does not have a cosine function. They don’t have a linear function either. And while there may be a CO2 function prior to 1965, the models are relevant to the post-1965 period, and especially the post-1980 period. Both sides can forget the prior period when looking to the near future, and act as if CO2 increases began the day Al Gore found his mission in life. So such criticism (even though true) is irrelevant.
The IPCC theme is that the past is not the predictor of the future, at least prior to 1965. The future is the product of the present. The skeptic theme is that the past is, indeed, the predictor of the future, though with some minor modification by the present. The pro- and con-CAGW arguments are rooted in this disconnect. “That was then, and this is now” is the fundamental break the warmists have from the skeptics. If the past is, indeed, a good predictor of the future, then Frank’s (mine and others’) simpler view is valid. The previous broad patterns continue into the future through the magic of our own “black box”. No, we don’t have the mechanisms, but to refute the IPCC we don’t need one, as our proof is in the observations, not in the beauty of the “projections”. The details that Leif-and-Phil’s are looking for are handled and hidden inside the box.
This is a great post in large part because it demonstrates how adherence to basic principles while using the IPCC data lead to a significantly different climate energy scenario in hard numbers. The scenario is therefore falsifiable, something that, at least in the 10-year term, the IPCC “projections” are not.
By Frank’s work, by 2022 (my estimate) the global temperature will have dropped by about 0.3C, over which time the IPCC says the temperature should have risen by another 0.22 – 0.30C. The two will then be apart by 0.5 to 0.6C, something terminal for the IPCC model. By 2015 the difference will be still within the error bands, but will be looking like 0.1C – in the wrong direction. We’re getting into the time-frame when Gavin and Jim look to retirement and accolades while they can.
There needs to be an empirical consideration to criticisms such as we have here. We need to know if errors are terminal, moderate or cosmetic. This is the stuff that the generalist can understand, maybe even the MSM journalist. So let’s build on it.
This seems to be an ideal analysis, for the good folks at Lucia’s blackboard, to sink their teeth into.
I certainly hope it gets published more widely than a few blogs… Is anyone considering a paper, so that it can be safely referred/cited to? GK
Even before you do the cosine analysis, the gradient from 1910 to 1943 is just as steep as the gradient from 1970 to 2000. On that basis there is really no evidence that mankind is doing something unusual to the Earth’s temperatures – you would have to demonstrate that the cause of post-war warming was different to the cause of pre-war warming. Otherwise the graphs are really showing just a 130-yr upward trend unaffected by modern technology.
bob says:
June 2, 2011 at 9:09 am
“It all boils down to “we are still recovering from the little ice age and will continue to recover from the little ice age through to 2100″
I don’t buy it………..
It is the forcings that matter, and the recovery from the little ice age doesn’t count as a forcing.”
=========================================
Bob, all that is fine, except, we will never, (I repeat for emphasis) never, come to an understanding of all of the forcings and the specific weights of each forcing that goes into our climatology. It’s a pipe-dream and a fool’s errand to go chasing such. It would be much easier to state we don’t know and move on.
>> Phil Clarke says:
June 2, 2011 at 4:19 am
it is likely that anthropogenic forcing contributed to the early 20th-century warming evident in these records.”
Also, you cannot calculate sensitivity by simply dividing instantaneous delta-t by forcing as this ignores the considerable thermal inertia of the Earth system, especially the Oceans. … <<
These two statements combine to say that early 20th century warming was caused by anthropogenic forcing from the 19th century. Do you really buy that?
To those who say, only the Western part of North America is experiencing unusually cold conditions, I present:
http://www.ansa.it/web/notizie/regioni/valledaosta/2011/06/01/visualizza_new.html_842381390.html
Phil Clarke says: June 2, 2011 at 4:19 am
Post Quote: “The major assumption used for this analysis, that the climate of the early part of the 20th century was free of human influence, is common throughout the AGW literature.”
Phil Clarke response: “Er, no it is not. See figure 2.23 in IPCC AR4. Long lived Greenhouse Gas (LLGHG) forcing contributes about 0.35 W/m2 pre-1950…”
Thank you Phil. Advice for general readers. Please check any post that describes a “central AGW tenet” or “major assumption” or “fundamental prediction” etc against the relevant IPCC chapter. If the claim is not well founded it is often a sign that a strawman cometh. In this post the claim above is well off the mark.
We are making the mistake of looking at real-world data. As we keep getting told, the real-world data is wrong unless it matches the models, as it is only the models that are right.
Please remember that a snowball earth for the next 10 thousand years is not inconsistent with CAGW, that would just be weather not climate but a clear sunny day in summer is climate not weather /sarc
@”Probably a dumb question from a geezer whose last trig class was in Pomona High School in 1957: Is there a reason the periodicity is expressed as a cosine, rather than a sine?”
Just tradition, based on FFT and maybe tidal analysis. A cosine curve is the same as a sine curve with a 90 degree phase shift, or A*cosine(w*t+p)=A*sine(w*t+p+pi/2).
Maybe it also makes it easier to interpret the fitted phase term p in that it is the offset of the peak rather than of the zero crossing.
In either case, fitting a single trig curve fits three parameters (or fudge factors): amplitude, frequency, and phase.
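A related practical note (a sketch, not from the comment above): for a fixed trial period, the amplitude/phase form can be fit as a linear combination a·cos(ωt) + b·sin(ωt), which avoids the nonlinear phase parameter, and converted back afterwards.

```python
# For a fixed trial period, A*cos(w*t + p) can be fit as a*cos(w*t) + b*sin(w*t),
# which is linear in a and b, then converted back to amplitude and phase.
# "anom" is a synthetic placeholder for a real annual anomaly series.
import numpy as np

years = np.arange(1880, 2011)
anom = 0.0058 * (years - 1880) - 0.3 + 0.1 * np.cos(2 * np.pi * (years - 1940) / 60.0)

period = 60.0                                  # fixed trial period, years
w = 2 * np.pi / period
t = years - 1880.0
X = np.column_stack([np.ones_like(t), t, np.cos(w * t), np.sin(w * t)])
coef, *_ = np.linalg.lstsq(X, anom, rcond=None)
offset, slope, a, b = coef

amplitude = np.hypot(a, b)                     # since a*cos(wt) + b*sin(wt) = A*cos(wt + p)
phase = np.arctan2(-b, a)                      # with A = sqrt(a^2 + b^2), p = atan2(-b, a)
print(f"trend {10 * slope:.3f} C/decade, amplitude {amplitude:.2f} C, phase {phase:.2f} rad")
```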
@Leif Svalgaard (June 2, 2011 at 8:16 am)
You owe Marcia Wyatt an apology.
Yesterday I noted the extremely cold conditions aloft, as an upper low came ashore in Western North America. I commented that with -27 deg C at 500mB, I was worried about twisters:
http://www.sfgate.com/cgi-bin/article.cgi?f=/n/a/2011/06/01/state/n190225D61.DTL
I wonder what will happen when that package of air (now over the four corners region) hits the Eastern US?
By my own quick analysis, the best fit to the HADCRUT 3 data from 1880-2010 is
T = -11.7464 + 0.00597 * (year) + 0.1331 * cos(0.1062 * (year) + 1.1330)
That gives 0.0597 C/decade and a period of 59.3 years. For the years, 1880 to 2010, there is an average deviation of 0.09 C between the fit and the data (ie the average of |Fit – Actual| is 0.09), with about half above and half below.
The big problem is that before 1880, the fit is LOUSY! Every single point from 1850-1883 is above the fit, by an average amount of 0.31 C. (And remember, that is using 5 free parameters to fit the data – slope, intercept, amplitude, period and phase angle.)
And even at the end, I get 9/10 points in the last decade as being above the fit. 2011 will be interesting to see. It started with a huge drop in global temperatures from the 2010 levels, but is jumping back up a bit. If the drop was a fluke, then the spike down might have been an anomaly, but if the spike is a trend, then we might be on a downward trend as Pat Frank’s analysis would predict.
Just in case anyone is still interested:
Demetris Koutsoyiannis
To support this analysis I reiterate my earlier comment to WUWT, wherein I calculate a sensitivity of 0.121 °C/(W/m²). This calculation ignored negative feedbacks from clouds, etc., whereas the present analysis would include these factors, so I would consider the two to be in agreement.
http://wattsupwiththat.com/2010/10/25/sensitivity-training-determining-the-correct-climate-sensitivity/#comment-516753
Paul Vaughan says:
June 2, 2011 at 10:43 am
You owe Marcia Wyatt an apology.
For what?
Spread the word: the Earth climate sensitivity is 0.090 C/W-m^-2.
So how do you explain the LGM or the LIA and the MWP for that matter?
Tim Folkerts says:
June 2, 2011 at 11:29 am
The big problem is that before 1880, the fit is LOUSY!
This is the usual problem with numerology: once you go outside of the interval on which the original fit is based, the correlation breaks down.
Dave, does the convention make derivative analysis easier?
There is not much point delving into AMO, PDO and solar relationships unless science can relate what is causing what.
AMO and PDO ‘drivers’, as I’ve identified from available data, are not perfectly synchronised either among themselves or with solar activity.
http://www.vukcevic.talktalk.net/DAP.htm
However, one can’t escape the impression that since the 1900s (a time of reasonable data reliability) there is a loose relationship to the solar output; not perfect, but there is some commonality.
Since none of the data I used are TSI related, then one can say that the solar science is partially correct to say ‘it is not TSI’.
On the other hand solar scientists do not have monopoly on the Sun-Earth link knowledge.
Fitting certain “data” = numerology. LOLOL!
Mr. Frank
Sin/Cos correlation is usually described as numerology (my personal experience).
However, in this case there is no need to use a cos function; just superimpose the true North Magnetic Pole (until 1996/7 located in the Hudson Bay area) magnetic flux and you will get just as good a match.
See graph on the index page of my website:
http://www.vukcevic.talktalk.net/
@Leif Svalgaard June 2, 2011 at 11:37 am
You’re out of your league facing off against Kravtsov & Tsonis.
Paul Vaughan says:
June 2, 2011 at 1:32 pm
You’re out of your league facing off against Kravtsov & Tsonis
Numerology is numerology regardless who commits it. That is not to say that numerology cannot at times be useful.
http://www.squaw.com/uber-cam
Too bad they closed. There’s more snow now than there was a month ago. They close leaving all the snow to go to waste due to insurance reasons plus, the flatlanders watching their boob tubes probably think there is no snow and everything is going tropical up there. After all, da man on da news sed dat dem tornados is doo to gwobo warmin’, can’t be no snow up dhere.
Just like models.
James Sexton says:
“Bob, all that is fine, except, we will never, (I repeat for emphasis) never, come to an understanding of all of the forcings and specific weights to each forcing that goes into our climatology. Its a pipe-dream and a fools errand to go chasing such. It would be much easier to state we don’t know and move on.”
I am sorry, but that attitude belongs in the Medieval Warm Period.
If we don’t try and understand what causes the climate to change, then we will never be able to answer the question of whether or not burning fossil fuels will be detrimental to the human race.
I think a big problem currently before us is determining what the current forcings are. I don’t think we have a good measure of some of the current important ones, such as the amount of aerosols currently being emitted.
And without that, predictions on which way the climate will go in the future are fraught with peril.
BTW, I’m involved in a debate (between the name-calling and pseudo-science put-downs) on the ‘Say Yes Australia’ FB page, and someone put in a reference to
http://www.columbia.edu/~jeh1/mailings/2011/20110118_MilankovicPaper.pdf
Anyone care to comment and contrast? I’d like to further the meaningful debate. Thanks
It is incredibly easy to see sinusoids and linear trends in any set of data. Absent a reason or an explanation, one cannot just subtract something like that out of the temp series. And to the extent that there is a natural warming trend “before human influence” (1900-1950), you can’t just assert that the natural trend would have continued indefinitely the same way over the next 50 years, and then just take the difference to be the man-made contribution. That is a moronic and naive assumption, absent any further information or theory.
Real scientists who calculate climate sensitivity account for countless factors such as solar irradiance, volcanic activity, and orbital variations when they “subtract out” natural effects. They also account for many different human effects that push the climate in opposing directions (after all, the human impact is not monolithic).
For your readers who find this article to be an exciting example of “real science!”, I suggest that they make a genuine effort to learn about the science of climate sensitivity rather than latching on to the first “scientificy” article on a political blog that reinforces their prior beliefs. Real science requires genuine skepticism and commitment to rigor, not sloppy contrarianism.
Thanks for your impressive post, Dr. Frank.
I feel, however, the whole picture may be drastically changed if you use the UAH-MSU satellite temperature data, which already has a 32-year history, instead of the error-prone surface station data.
@Leif Svalgaard says:
June 2, 2011 at 1:49 pm: “…Numerology is numerology regardless who commits it. That is not to say that numerology cannot at times be useful….” I think that applies here. Pat Frank caught a fine trout, and after gutting it there was nothing left. Using the warmists’ bullshit data, he showed there’s nothing in that data.
@Ammonite says:
June 2, 2011 at 10:12 am: “…Advice for general readers. Please check any post that describes a “central AGW tenet” or “major assumption” or “fundamental prediction” etc. against the relevant IPCC chapter….” I’m sure that they state, somewhere in all their bloviation, that it is “likely” that pigs can fly. Likely, loosely defined, isn’t a scientific term, but then, the IPCC isn’t a scientific body.
@Doug Proctor says:
June 2, 2011 at 9:14 am: Good appeal to stop being the smartest guy in the classroom and actually pitch in and see if one can make a useful contribution to the discussion (in the case that the discussion isn’t widely viewed as idiotic).
@tallbloke (June 2, 2011 at 8:54 am )
Genuinely looking for some clarification on your essay…
You say the tail doesn’t wag the dog, but then you go on to emphasize the importance of the sunshade in the sky (clouds). Despite drawing attention to decadal patterns in specific humidity, you appear to restrict your conceptualization to anomalies, ignoring annual & semi-annual heat-pump cycles and the related intertwining of circulation geometry, oceans, & sunshade (which Stephen Wilde has so strongly emphasized in recent months). Reading your essay helped me understand vukcevic’s perspective (ocean-centric), but other perspectives also reveal a climate-dog chasing its tail (i.e. a looping spatiotemporal causation chain that moots the debate), so a productive line of inquiry is to step back far enough from the neverending loop to see what drives changes in the rate & amplitude of the “tail chasing” […at scales supported by observation].
So my question is (& I sure hope it’s obvious by now that this is where I was heading)…
Do you disagree with Sidorenkov (2003 & 2005) on ice?
My understanding (A G Foster comment in a somewhat-recent WUWT thread) is that NASA’s R. Gross is now pursuing exactly this (…which, as I hope you know, matches -LOD, AMO, PMO, etc. in multidecadal phase).
@Aaron: The derivative of cos(x) is -sin(x), whereas the derivative of sin(x) is cos(x); so if a negative sign is a significant complication, the cosine convention actually makes a derivative analysis a little harder. On the other hand, since the integral of cos(x) is sin(x), the same convention simplifies an integral analysis.
Still, in the Figure 2 CRU curve the period looks more like 70 years (pre-1940 to post-2000) than the 60-year period called out in Figure 3. Something is amiss.
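A quick symbolic check of that sign bookkeeping, using sympy (assumed available); it only restates the textbook identities mentioned in the comment above:

```python
import sympy as sp

x = sp.symbols('x')
# -sin(x) and cos(x): the cosine convention picks up a sign under differentiation
print(sp.diff(sp.cos(x), x), sp.diff(sp.sin(x), x))
# sin(x) and -cos(x): but the cosine integrates without a sign flip
print(sp.integrate(sp.cos(x), x), sp.integrate(sp.sin(x), x))
```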
Thank-you very much, Anthony, for picking up my essay at Jeff’s tAV. It was a happy surprise to find it here today.
Thanks also to everyone for your very thoughtful comments. I’m a little overwhelmed with all your responses, and with 88 of them so far to go through. I’m a little stuck for time just now, but hope to post replies this weekend.
I’d like to acknowledge Bob Tisdale’s comment, though. As he mentioned, we’ve discussed the PDO+AMO periodicity described in my analysis. Those interested are encouraged to read the exchanges at tAV, at the link above.
It’s clear though, that to get to a place that benefits us all, Bob, you’ll have to work out your differences directly with Joe D’Aleo and Marcia Wyatt, et al., and make the conclusion public.
Later . . . 🙂
Good luck getting this analysis published. I doubt even E & E would touch it with a bargepole. It’s so full of holes it makes the Titanic look watertight.
@Matt
“It is incredibly easy to see sinusoids and linear trends in any set of data.”
Is it? If a data set shows a straight line then it clearly can’t be fit to a sinusoid. The point of this analysis is that the peaks and troughs in the data set are of a similar amplitude to the underlying straight-line trend, so it is reasonable to see what happens mathematically if you fit the data set to a sine.
“Absent a reason or an explanation, one cannot just subtract something like that out of the temp series.”
Fair enough, and a valid criticism of Pat Frank’s presentation here, since he really concentrates on a simple mathematical analysis. But there is actually a perfectly good theory underlying Pat Frank’s analysis. There are two competing theories for observed variations in climate. One, held by the IPCC, is that AGW caused by a significant uptick in CO2 output after 1950 is causing a significant increase in the rate of warming after 1950. The other is held by the sceptic camp: since it denies that AGW has a serious impact on climate, it is safe to assume the sceptic camp believes the climate is fairly “stable”.
Now we have to be careful here, since “stable” can mean many things in this case. It can mean “flatlining”, but it can also mean continuous oscillation and limit cycling (limit cycling is the dramatic and rapid change from one condition to another; the ice age/interglacial climate oscillation is an example of a limit cycle). Oscillation tends to occur in systems where there is negative feedback and energy storage. In any system there can be multiple sources of feedback and energy storage, and hence multiple sources of oscillation, all at different frequencies and amplitudes, superimposed on each other. Pat Frank identifies one possible source of energy storage as the ocean, since water has a very high specific heat capacity and can therefore store enormous amounts of energy while showing only a small rise in temperature.
Given this scenario it is perfectly reasonable to look for sinusoidal oscillation within a climate data set and propose ocean heat storage as a possible cause of that oscillation. It is not pure “numerology”: it is fitting a function to a data set based on a hypothesis and looking to see how good the fit is. The fit certainly looks as good as fitting a pure linear trend to the data, and the reasoning behind it is certainly no worse than fitting a pure linear trend to the post-1950 data, deriving a gradient from that trend, and proclaiming it to be the climate sensitivity.
“And to the extent that there is a natural warming trend “before human influence” (1900-1950), you can’t just assert that the natural trend would have continued indefinitely the same way over the next 50 years, and then just take the difference to be the man-made contribution.”
No, you can’t. But doing the reverse and trying to ignore a trend that existed before 1950 is even worse. The fact is that the data in the range 1910 to 1943 has the same gradient as the data in the range 1970 to 2000 – how can we say that the trend 1970 – 2000 is purely due to AGW? We can’t – the data doesn’t allow us to do that. The dataset is completely inconclusive. CRU and GISS have been wasting their time. We cannot say that the gradient after 1950 is in any way exceptional and therefore related, even in part, to AGW. You could perfectly well derive from the dataset that AGW has no impact – in fact, since that is the default position, that would normally be the approach science would take, but proponents of AGW are claiming a special case here because they say the risks are very high (they neglect the risks of rolling back the great technological advances made in the West that are currently responsible for the survival of about 1 billion people).
“That is a moronic and naive assumption, absent any further information or theory.”
Pat Frank’s sine analysis is actually somewhat less moronic than fitting a straight line to the 1950 to 2000 data, deriving a gradient and then proclaiming not only that the rise is due to AGW but also that it is likely to be accelerating. The dataset shows no such thing. Even a simple eyeballing of the data shows that there is not a pure linear trend, so subtracting a sine from the data to see where that leaves you is perfectly reasonable if you want to understand the real limits of the post 1950 gradient. Pat Frank is correct in that, at the very least, the gradient after 1950 is hardly any worse than the gradient before 1950, when AGW was minimal (since the CO2 in the atmosphere before 1950 is proposed to have been stable) – and this is before we even get into the relatively small contribution to the acceleration that might be related to a sine oscillation in the climate with a period of 60 years. Furthermore, the most recent data from 2000 to 2010 shows deceleration, not acceleration, so it hardly supports the theory that AGW is becoming the dominant contributor to temperature trends in the new century.
“Real scientists who calculate climate sensitivity account for countless factors such as solar irradiance, volcanic activity, and orbital variations when they “subtract out” natural effects.”
Shame you missed out cloud cover and wind direction. As an example, looking at the data for Lerwick in July 2002 we can see it was three Celsius higher than July 2001. What the hell happened there? An enormous cow fart? I doubt it. I doubt it had anything at all to do with AGW, and yet that one month was 3 Celsius higher than a year previous. Smooth that month out over a whole year and it would still contribute about a 0.25 Celsius increase in temperature for the whole year! In fact the difference over the whole year was much bigger than that – because all but one month in 2002 was warmer than 2001 for Lerwick, and in each case by at least 1.2 Celsius. Why? Well, not because of CO2. Those thermometer readings were measuring a temperature anomaly, year to year, that had nothing to do with CO2. I’m guessing cloud cover. 2002 was sunny and 2001 was cloudy, would be my guess (and that fits my memory of 2002 as well). But maybe wind direction made a difference too. So when we look year to year at any location we can be 100% certain that differences in temperature have little to do with CO2 but are entirely due to cloud cover and wind direction. And yet, when we average out all these thermometric measurements of cloud cover and wind direction, we assume that what we are left with is the contribution due to CO2? That’s like measuring the speed of vehicles on a motorway/freeway over a 50-year period and coming to the conclusion that bicycles must be getting faster.
“For your readers who find this article to be an exciting example of “real science!”” – Well I don’t. There ain’t much science involved. The maths is OK however.
“I suggest that they make a genuine effort to learn about the science of climate sensitivity rather than latching on to the first “scientificy” article on a political blog that reinforces their prior beliefs.”
My prior belief was that AGW was real. My genuine effort to learn about the science of climate led me to ice core lies which led me to question what the “scientists” were saying. Since then I have seen a whole lot of other lies of which quite deliberate misinterpretation of thermometer data is one. I have come to the conclusion that climatology attracts a poor calibre of graduate – no big surprise there I guess since the bright sparks are in microbiology and nuclear physics.
The conclusion is this: Pat Frank’s analysis is no more and no less invalid than the IPCC analysis. No surprise there. Thermometers in Stevenson screens at ground level can be used to measure cloud-cover anomalies but not atmospheric temperature anomalies. Human development likes clouds, because clouds = rain = drinking water + irrigation. So that’s where the thermometers tend to be – in cloudy places. What you have above is two graphs showing how cloud cover has decreased slightly over the last 100 yrs. Worrying in itself perhaps, but it has no connection with AGW.
@Vince Whirlwind (June 2, 2011 at 10:24 pm)
Rather than blasting unsupported cheap shots from the safe cover of the periphery, please step right out into the open, volunteering to the community your alternative to Pat Frank’s approach.
Here’s a critique of this post: http://tamino.wordpress.com/2011/06/02/frankly-not/
REPLY: Heh, he’s got what he thinks is a clever label, “mathurbation”; that kills any rebuttal integrity right there. The faux Tamino, as self-appointed time-series policeman, would complain about a straight line with two data points if it appeared here, so it’s just the usual MO for him. I’ll leave it up to Pat Frank to respond if he wishes; my advice would be to provide an updated post here rather than there, because as we all know and has been demonstrated repeatedly, Grant Foster can’t tolerate any dissenting analysis/comments there.
– Anthony
Ryan: “that’s where the thermometers tend to be – in cloudy places.”
What? You lost me there.
Ryan @ “Is it? If a data set shows a straight line then it clearly can’t be fit to a sinusoid.”
It most certainly can fit, if you choose a sinusoidal period on the order of 4+ times the length of the data. You fit a sinusoid by choosing a frequency w, calculating sin(w*t) and cos(w*t), and then doing the same old ordinary least squares process to fit the model Y(t) = b0 + b1*cos(w*t) + b2*sin(w*t). The amplitude of the sinusoid is then sqrt(b1^2 + b2^2) and the phase of the equivalent cosine is atan2(b2, b1). Some harmonic analysis codes even use the infinitely slow frequency of zero to model a constant intercept term, so a sinusoid can even model a constant.
Including the intercept term, the sinusoid/cosine model does have 1+3=4 parameters compared to a linear model’s 1+1=2, but clearly it can fit the data at least as well as a straight line.
Frank’s methodology might be good math, but whether or not it is good stats would depend on a residuals and validation analysis, which in the above seems to be limited to visual analysis with repeated assertions of “clearly.”
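A minimal numpy sketch of the fixed-period least-squares fit described above; the 60-year trial period, the synthetic series, and the function name fit_cosine_plus_trend are illustrative assumptions of mine, not the GISS/CRU data or anyone’s actual code:

```python
import numpy as np

def fit_cosine_plus_trend(t, y, period):
    """OLS fit of y = b0 + b1*t + b2*cos(w*t) + b3*sin(w*t) at a fixed trial period."""
    w = 2.0 * np.pi / period
    X = np.column_stack([np.ones_like(t), t, np.cos(w * t), np.sin(w * t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    b0, b1, b2, b3 = coef
    amp = np.hypot(b2, b3)            # amplitude of the equivalent single cosine
    phase = np.arctan2(b3, b2)        # its phase (note: atan2 takes two arguments)
    resid = y - X @ coef              # residuals, for the validation step mentioned above
    return b1, amp, phase, resid

# Synthetic stand-in for an anomaly series: 0.06 C/decade trend + 60-yr cosine + noise
rng = np.random.default_rng(0)
t = np.arange(1880.0, 2011.0)
y = 0.006 * (t - 1880) + 0.1 * np.cos(2 * np.pi * (t - 1940) / 60) + 0.05 * rng.standard_normal(t.size)
slope, amp, phase, resid = fit_cosine_plus_trend(t, y, 60.0)
print(f"trend = {10 * slope:.3f} C/decade, amplitude = {amp:.3f} C")
```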
“The anomaly profile at 1960 is similar to the profile at 1880, and so these two starting points seem to impart no obvious bias.”
Yes, but since you’re looking at a cosine function to normalize your data, you really should pick similar points and durations along your curve. The 1880 point is near the top of the curve, and has a duration of 60 years (to 1940).
However, your 1960 start is below midway up, and since your duration (to 2010) is 50 years (less than the 60 year period of your cosine function), the start and end points bias the results. In this case, the end point necessarily will be artificially higher on the curve, resulting in a greater slope.
The upshot is that, while you showed a minor sensitivity for CO2, the unbiased 1960-2010 slope actually should show an even lower sensitivity.
Otherwise, nice job, especially in using assumptions which give conservative results.
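A small numerical illustration of that start-point/end-point effect. The phase here is purely hypothetical (a unit-amplitude, zero-trend 60-year cosine assumed to peak in 1880), so the numbers only show the generic bias, not Pat Frank’s actual fit:

```python
import numpy as np

def decadal_slope(t0, t1, period=60.0, peak_year=1880.0):
    """Least-squares slope (per decade) of a trendless unit cosine fit over [t0, t1]."""
    t = np.arange(t0, t1 + 1, dtype=float)
    y = np.cos(2 * np.pi * (t - peak_year) / period)
    return 10.0 * np.polyfit(t, y, 1)[0]

# A window spanning one full cycle contributes essentially no linear slope...
print(decadal_slope(1880, 1940))
# ...but a 50-year window (shorter than the period) picks up a spurious slope
# whose sign and size depend entirely on where it starts and ends.
print(decadal_slope(1960, 2010))
```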
Look on the visible satellite on the side bar of this Blog:
DISCUSSION…AS OF 9:30 AM PDT FRIDAY…MID AND HIGH CLOUDS ARE STREAMING OVER THE DISTRICT IN ADVANCE OF THE APPROACHING LATE- SPRING STORM. THE UPPER LOW CENTER…CURRENTLY LOCATED NEAR 40N/130W…IS DROPPING SOUTHWARD OVER THE COASTAL WATERS…AND IS DUE TO REMAIN OFF THE COAST UNTIL LATE SUNDAY WHEN IT IS PROGGED TO FINALLY SWING EASTWARD OVER CENTRAL CALIFORNIA. PLENTY OF MOISTURE HAS BEEN ENTRAINED INTO THIS SYSTEM…AND THE WAY THE SYSTEM WILL INTERACT WITH THE COAST IN TERMS OF OROGRAPHIC ENHANCEMENTS…THIS LOOKS LIKE A POTENTIALLY RECORD BREAKING EVENT FOR OUR AREA FOR EARLY JUNE.
LATEST AMSU PRECIPITABLE WATER ESTIMATES GIVE WELL OVER AN INCH OF RAIN WRAPPED UP IN THIS SYSTEM. MODELS CONTINUE PREVIOUS TRENDS OF BRINGING LIGHT RAIN TO THE NORTH BAY TODAY…AND SPREADING SOUTH THROUGH THE GREATER SF BAY BY EVENING…THEN REACHING THE MONTEREY BAY AREA BEFORE MIDNIGHT. HEAVIEST RAIN IS EXPECTED OVERNIGHT TONIGHT INTO SATURDAY MORNING. BUT AS THE UPPER LOW IS FORECAST TO REMAIN WOBBLING OFF THE COAST THROUGH SUNDAY…SHOWER CHANCES WILL PERSIST THROUGH THE WEEKEND.
CONFERRING WITH THE CALIFORNIA/NEVADA RIVER FORECAST CENTER ON QPFS…2-5 INCHES STORM TOTAL ARE POSSIBLE ACROSS THE WETTEST AREAS INCLUDING NORTH BAY HILLS…SANTA CRUZ MOUNTAINS…AND THE SANTA LUCIAS. INLAND LOWER AREAS COULD GET UPWARDS OF 1-2 INCHES TOTAL. ALTHOUGH THE BASINS CAN HANDLE THIS AMOUNT OF RAINFALL SPREAD OUT OVER TWO DAYS…THESE ARE STILL BIG NUMBERS GIVEN WHERE WE ARE IN THE CALENDAR. THUS…SOME RECORD RAINFALL AMOUNTS ARE HIGHLY LIKELY FOR JUNE.
GIVEN THE PROXIMITY OF THE COLD UPPER LOW…THUNDERSTORMS ARE ALSO A POSSIBILITY…AND WILL ADD A SLIGHT CHANCE TO THE AFTERNOON FORECAST PACKAGE.
SHOWERS TO END LATE SUNDAY AS THE UPPER LOW FINALLY EJECTS TO THE EAST. THE REST OF THE FORECAST PERIOD IS EXPECTED TO CONTINUE COOL AS A LONG-WAVE UPPER TROUGH REMAINS OVER THE WEST COAST. NOT RULING OUT FUTURE SHOWER CHANCES AS WELL…GIVEN THE PRESENCE OF THIS TROUGH.
=================================
Thank goodness this system has a very cold core. Otherwise, we would face a rather cataclysmic situation given the massive snow pack in the high country.
Now for a quick primer regarding the Pacific / Hawaiian High. This feature, one of the famous semi permanent Semi Tropical / Horse Latitudes Highs, is normally well up into the mid latitudes by this time of year. But not this year. It is stuck in the tropics.
Consider this. What is described here, given the relative extents and masses of the Pacific and Atlantic Oceans, is essentially a low frequency input signal being applied to the global climate circuit. Draw your own conclusions.
@ Ryan
I appreciate the thoughtful response.
“But doing the reverse and trying to ignore a trend that existed before 1950 is even worse. The fact is that the data in the range 1910 to 1943 has the same gradient as the data in the range 1970 to 2000 – how can we say that the trend 1970 – 2000 is purely due to AGW?”
Nobody is ignoring the trend before 1950 and no one is saying that the trend from 1970 to 2000 is purely AGW. Read the 4th assessment IPCC report.
The problem is: the climate system is driven by the interplay of multiple natural and multiple human forcings. In order to separate human and natural forcings, you need to meticulously account for these effects. You cannot just take the difference between a slope before and after some arbitrary year. That is nonsense.
“My prior belief was that AGW was real. My genuine effort to learn about the science of climate led me to ice core lies which led me to question what the “scientists” were saying. ”
I have the opposite story. I grew up an ardent “skeptic”. In grad school, I met some real climate scientists. At their encouragement, I started reading the literature and I was shocked to discover that the work is very thorough. I was also surprised at how open the community was about its uncertainties, contrary to how I was raised. I am not a climate scientist and do not purport to be an expert. However, as an experimental particle physicist I hope to be able to claim that I can see the difference between mature, rigorous scholarship and sloppy hand-waving. This article is sloppy hand-waving.
“Pat Frank’s sine analysis is actually somewhat less moronic than fitting a straight line to the 1950 to 2000 data, deriving a gradient and then proclaiming not only that the rise is due to AGW but also that it is likely to be accelerating. The dataset shows no such thing. Even a simple eyeballing of the data shows that there is not a pure linear trend, so subtracting a sine from the data to see where that leaves you is perfectly reasonable if you want to understand the real limits of the post 1950 gradient.”
Again, read the attribution (finger-print) analyses. No one is following the procedure you have described. You are evoking a straw man for what climate science is saying about temperature trends and human impact. First and foremost, aerosols have a cooling effect that obscured the full impact of greenhouse gases for much of the 60s and 70s (pre-Clean Air Act). Second, most of the known natural climate forcing mechanisms have plateaued and even reversed over the last 50 years of the 20th century. Given this change in natural forcings, it is certainly wrong to subtract the trend of the first 50 years from the trend of the second 50 years. This also suggests that the observed warming over much of the last 50 years is building on what otherwise would probably have been a cooling, absent human impact. One needs to understand the magnitude and direction of this natural trend before one can begin to separate out the human effect.
“Shame you missed out cloud cover and wind direction. As an example, looking at the data for Lerwick in July 2002 we can see it was three Celsius higher than July 2001…”
I only listed some of the factors. But these are accounted for in the climate literature. Water vapor is admittedly one of the most poorly understood of the feedbacks, but there is tremendous work being directed towards this question. I don’t know anything about your Lerwick story, but it sounds like an anecdote (the favorite tool of contrarians). Very large month-to-month temperature fluctuations often occur at particular localities. This is meaningless for the global average temperature anomaly.
“Since then I have seen a whole lot of other lies of which quite deliberate misinterpretation of thermometer data is one. ”
What deliberate misinterpretation of temperature data?
“I have come to the conclusion that climatology attracts a poor calibre of graduate – no big surprise there I guess since the bright sparks are in microbiology and nuclear physics.”
Why do you come to this conclusion? I think the work of the climate science community is of a very high caliber. There is a cottage industry built around maligning the climate science establishment. This is the part of the whole skeptic thing that really turns me off. These personal attacks and accusations go far beyond academic discussions about the science. You really seem sincere and I strongly urge you to visit a local University and talk to actual publishing climate scientists. They will appreciate your tough questions, as long as they are coming from sincere curiosity and not with a rhetorical and cynical tone. You will be surprised at the experience.
“The conclusion is this: Pat Frank’s analysis is no more and no less invalid than the IPCC analysis. ”
Read the IPCC AR4 report from Working Group 1. Not the summaries, but the actual report. It is a really good summary of the state of climate science, despite all of the attempts to paint it as a global liberal conspiracy. Even giving Frank the benefit of the doubt, this article is – at best – some preliminary speculation. But I am afraid that it isn’t even interesting speculation. The talk of sinusoids and slopes is repeated again and again in the contrarian rumor-mill, as if no one has thought about this stuff before. I’m sorry, but it is embarrassing that this guy would be so arrogant as to proclaim that people should “spread the word” of this calculation. It is such a rudimentary and flawed line of reasoning that it is utterly meaningless and not in the same universe as the established attribution analyses.
Squaw Valley will reopen for 4th of July weekend.
Pat says: “…for the purposes of this analysis I’m assuming that the global average surface air temperature anomaly trends are real and meaningful.” You are totally wrong to assume that they are real and meaningful. They are not, as comparison with satellite temperature measurements will tell you. Obviously you have not read my book “What Warming?” or you would know that they fabricate temperature curves. One example is the period in the eighties and nineties that they show as a steady temperature rise, called the “late twentieth century warming.” I have proved that this so-called warming does not even exist. What does exist in the eighties and nineties is a temperature oscillation caused by the alternation of El Nino and La Nina phases of ENSO, up and down by half a degree for twenty years, but no rise until 1998. That is ten years after Hansen invoked global warming in front of the Senate in 1988. His testimony gave a kick-start to the present global warming craze, which turns out to have been founded on a non-existent warming. There is more – get my book from Amazon and read it. I can see why warmists want to ignore it, but there is no reason for someone who wants to learn the truth about global warming not to know what is in it.
Well done indeed – some very impressive-sounding words like “oscillation”, “residuals” and “sensitivity”, nice curvy lines that fit the data properly, and, best of all, a conclusion that confirms my beliefs. I have no scientific training, let alone an understanding of climatology, but I know this must be real, empirical science (not like that IPCC rubbish). It gives the answer I want.
Anthony’s response to Joel Shore’s comment above commits the same blunder he accuses Tamino of — dismissal by rubbishing the integrity of the commenter. As I think I’ve said before here, “play the ball, not the man”
That said, I really would love to see a response from Frank!
SteveSadlov (June 3, 2011 at 10:14 am) wrote:
“Now for a quick primer regarding the Pacific / Hawaiian High. This feature, one of the famous semi permanent Semi Tropical / Horse Latitudes Highs, is normally well up into the mid latitudes by this time of year. But not this year. It is stuck in the tropics.
Consider this. What is described here, given the relative extents and masses of the Pacific and Atlantic Oceans, is essentially a low frequency input signal being applied to the global climate circuit. Draw your own conclusions.”
–
Requires too much thought for those who think in anomalies and can’t be bothered with changes of state of water. Looks like it will be decades before people clue in. Good to see evidence that there’s at least one person thinking — much appreciated.
“… as we all know and has been demonstrated repeatedly, Grant Foster can’t tolerate any dissenting analysis/comments there.”
That has not been my experience. He allows plenty of dissension, but he does not suffer fools gladly; nor should he. If Pat Frank is so confident of his analysis, he should submit it for publication to any of the peer-reviewed climate journals, and then see where the chips fall.
I would be happy to see an exchange here or at Open Mind between Pat Frank and Tamino/Grant Foster. It seems to me that at this point, Mr. Frank has some explaining to do in responding to the critique of Mr. Foster and the others who responded in detail at Open Mind.
@Charles (June 3, 2011 at 8:57 pm)
Tamino is very heavy-handed with censorship, even of benign comments.
Matt says:
June 3, 2011 at 2:39 pm
“The problem is: the climate system is driven by the interplay of multiple natural and multiple human forcings. In order to separate human and natural forcings, you need to meticulously account for these effects.”
The problem with that is: process of elimination only works when your knowledge of all alternatives is complete. Climate Science has only been researched seriously for a very few decades, and the Earth’s climate system is immensely complex. Based on your sober writing, I doubt you would claim that every potentially significant effect which could cause a ~60 year temperature cycle has been investigated and demonstrated to be insignificant. If you did, the only effect on my perspective would be to lower my opinion of your sagacity.
“You cannot just take the difference between a slope before and after some arbitrary year. That is nonsense.”
I think you are misinterpreting the exercise. The author is performing an experiment in which he accepts the IPCC argument that significant change occurred mid-century, and follows the path where that leads. And, given the presence of a 60-ish year cycle in the data, it leads to less climate sensitivity than the IPCC claims.
Pace Tamino and his ilk, there clearly is a ~60-year cyclical process in the data over the last century, evident by inspection. Is it a phantom of measurement error, or mere coincidence in timing between an early transient and the subsequent rise to significance of GHG forcing? Or is it the excitation of a fundamental mode of the system which began a century or more ago and has yet to damp out?
Given the third coincidence of peaking in the early part of this century, right on schedule, I would tend to suspect the latter. In fact, this is precisely how the output of such a mode, coupled in series with an integrator or a longer-cycle mode, might look when driven by white noise or any other random process within the bandwidth. I would suggest the author try out a fit with an amplitude-modulated sinusoid, which looks to me like it could be contrived to give a better fit.
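For anyone who wants to experiment with the amplitude-modulated fit suggested above, here is a rough sketch. The model form, the synthetic data, and the near-true starting guesses are all my own illustrative assumptions; real data would need a coarse period scan before a local fit like this:

```python
import numpy as np
from scipy.optimize import curve_fit

def am_model(t, c0, c1, a0, a1, period, phase):
    """Linear trend plus a cosine whose amplitude itself drifts linearly in time."""
    return c0 + c1 * t + (a0 + a1 * t) * np.cos(2 * np.pi * t / period + phase)

# Synthetic stand-in for an anomaly series (t = years since 1880), not the real data
rng = np.random.default_rng(1)
t = np.arange(0.0, 131.0)
y = 0.006 * t + (0.08 + 0.0005 * t) * np.cos(2 * np.pi * t / 60 - 1.0) + 0.05 * rng.standard_normal(t.size)

p0 = [0.0, 0.005, 0.1, 0.0, 60.0, -1.0]   # starting guesses chosen near the generating values
popt, _ = curve_fit(am_model, t, y, p0=p0)
print("fitted period:", round(popt[4], 1), "amplitude drift per year:", round(popt[3], 5))
```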
Leif Svalgaard says:
June 2, 2011 at 7:21 am
“Without physics, this is as much numerology as Frank’s.”
With incomplete knowledge of all significantly contributing physical processes, it’s all “numerology” at some level. When you do not know what is going on (and, don’t anyone try to tell me the climate establishment fully understands the lull in temperature rise of the last decade), you look at the data and try to tease out some order which can give you new directions in which to investigate.
Matt says:
June 3, 2011 at 2:39 pm
“I started reading the literature and I was shocked to discover that the work is very thorough.”
One last comment on this posting. No matter how brilliant the researchers or how “thorough” their work, they can still be hopelessly wrong. Ptolemaic astronomers had an incredibly thorough and deeply researched methodology which, contrary to most people’s perceptions, gave a reasonably good and repeatable description of the movement of heavenly bodies, with well-established predictive power. It was just completely and utterly wrong in its driving assumptions. These were not primitive cave dwellers. They were profoundly knowledgeable and intellectually vibrant men who were limited only by the state of knowledge of the day.
Climate is the rock against which the ship of 20th century reductionist-inductive (linear catholic logic) science is going to founder. This rock bears a striking resemblance to the head of Karl Popper.
Thank you very much for an excellent article! Granted, the causes of everything are not explained. But would anyone have criticized Tycho Brahe for his excellent work measuring the star positions? Perhaps some fine tuning on the numbers can be done. However now we need a ‘Kepler’ and ‘Newton’ to explain these graphs.
Werner Brozek says:
June 4, 2011 at 11:20 am
Perhaps some fine tuning on the numbers can be done. However now we need a ‘Kepler’ and ‘Newton’ to explain these graphs.
Initially Kepler fell into the same trap as Frank. Fitting crummy [limited] data to beautiful curves: http://www.georgehart.com/virtual-polyhedra/kepler.html
Leif Svalgaard says:
June 4, 2011 at 11:59 am
“Fitting crummy [limited] data to beautiful curves…”
Again, I think this is a misinterpretation. Frank is engaging in hypothesis testing. The IPCC says the data are good. Do the data, then, take us where the IPCC says we are going?
If the data are that crummy, then what information, if any, do they hold?
Bart says:
June 4, 2011 at 12:29 pm
Frank is engaging in hypothesis testing
His hypothesis, then, is that the curves and trend found for the data in the fitting window are also valid outside it, for which there is no evidence [especially not for the future part]. This might be valid if there were a theory that says it must be so. If no such theory is supplied, it is just numerology.
Bart says:
I think the IPCC has always been pretty clear in noting that the instrumental temperature record alone does not place very strong bounds on climate sensitivity. Rather, better empirical evidence is obtained by combining it with constraints from other events such as the last glacial maximum, the climate response to the Mt. Pinatubo eruption (which involves the instrumental temperature record, but just a small portion of it), … And these empirical constraints on climate sensitivity give a similar range to that found using climate models.
So, Frank’s post is really nothing new…If you make some assumptions regarding the instrumental temperature record, you can find a very low climate sensitivity; however, if you make other assumptions, you can find a very high climate sensitivity.
Leif Svalgaard says:
June 4, 2011 at 5:07 pm
“…for which there is no evidence…”
Kind of the entire AGW brouhaha in a nutshell, that.
The data may be crummy, but until we get the BEST we will have to use what we have. Scientists have always been forced to use less-than-perfect data; however, I will readily admit the climate data is worse than most.
With regard to explaining the graphs, unless I am mistaken, I believe Willis Eschenbach, with regard to his post http://wattsupwiththat.com/2011/05/14/life-is-like-a-black-box-of-chocolates/ may be able to go a long way toward explaining the spikes in the lower graphs of Figure 1. As for the sine or cosine curve part, I believe Bob Tisdale could take a good stab at explaining that. In terms of predicting the future climate, I believe the sine curve would have greater predictive value, although a good estimate of sunspots over the next decades, with help from Leif Svalgaard, should provide better forecasts than the IPCC projections.
As an experiment, I tried my ad hoc cosine-series approximation method on the full range of the HadCRUTv global mean temperature data. I believe this method, using the Microsoft Excel Solver utility, attempts to explain the observed data as a discrete number of minimum-amplitude cosine waveforms and, as such, is likely to be incomplete, since it does not necessarily find all sinusoids or account for random forcing events (including data-collection methodology changes).
I used a binary, log-periodic series of cosine periods from 5 to 1280 years. I first ran the optimization adjusting only the amplitudes (deg C) and the base dates (nearest cosine peak to the 1930.667 decimal year-date), and then I allowed it to optimize the periods as well. Each element of the series is calculated by subtracting the base date from the actual date and multiplying the result by two times pi() divided by the period (in decimal years) to create the argument for the cosine function. A temperature offset constant is also included in the sum of all elements. I used a method that forces a minimum-element-amplitude solution, to prevent unrealistic solutions with large mutually cancelling element amplitudes over the known data interval. The Data-to-Error ratio is ten times the log of the sum of the squares (SUMSQ()) of the original data divided by the sum of the squares of the approximation error.
The final solution seems to predict a temperature drop of 0.4 degrees C from now to 2040 and seems to indicate temperatures dropped 0.2 degrees C from 1835 to 1845. The predictive validity of this method depends on how much our climate depends on periodic processes. I note that periods close to one and two times the sunspot period do seem to be present. The elements with periods longer than the data interval (161.250 years) probably approximate the linear slope used in the main article.
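For readers without Excel, here is a rough Python analogue of the Solver procedure described above (scipy.optimize standing in for Solver). The period ladder follows the description, but the penalty weight, optimizer settings, and synthetic test series are my own assumptions, not a reproduction of the actual workbook:

```python
import numpy as np
from scipy.optimize import minimize

# Binary, log-periodic ladder of trial cosine periods (years), per the description above
periods = np.array([5, 10, 20, 40, 80, 160, 320, 640, 1280], dtype=float)

def model(params, t):
    """Offset plus a sum of cosines; params = [offset, amplitudes..., base peak dates...]."""
    n = len(periods)
    offset, amps, base = params[0], params[1:1 + n], params[1 + n:]
    return offset + (amps * np.cos(2 * np.pi * (t[:, None] - base) / periods)).sum(axis=1)

def penalized_sse(params, t, y, weight=0.1):
    """Squared error inflated by an amplitude penalty, mimicking the 'minimum energy'
    trick above that discourages large, mutually cancelling components."""
    n = len(periods)
    err = y - model(params, t)
    return np.sum(err ** 2) * (1.0 + weight * np.sqrt(np.sum(params[1:1 + n] ** 2)))

# Synthetic annual series standing in for HadCRUT (trend + 62-yr cycle + noise)
rng = np.random.default_rng(2)
t = np.arange(1850.0, 2011.0)
y = 0.004 * (t - 1850) + 0.12 * np.cos(2 * np.pi * (t - 1940) / 62) + 0.05 * rng.standard_normal(t.size)

x0 = np.concatenate([[0.0], np.full(len(periods), 0.01), np.full(len(periods), 1930.667)])
fit = minimize(penalized_sse, x0, args=(t, y), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-8})
print("penalized error:", fit.fun)
print("fitted amplitudes:", np.round(fit.x[1:1 + len(periods)], 3))
```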
Spector says:
June 5, 2011 at 12:27 pm
“I used a binary, log-periodic series of cosine periods from 5 to 1280 years. “
That’s pretty arbitrary. If you do a PSD, you can find the periods which best describe the data for the last century. Beyond that… how to choose? Analyze proxy data?
It definitely replicates the series of the last century. And, it captures the LIA as well. But, it falls apart at the MWP. In principle, you could always find a good replication over any given interval using any functional basis, so there is no particular reason to believe this has predictive power.
It does, however, highlight the fact that everything we see and have seen could easily be the effect of many steady state cyclical processes alternatingly interfering constructively and destructively.
Meant to say: “…so there is no particular reason to believe this has long term predictive power.” It’s probably not too far off for the immediate future.
RE: Bart: (June 5, 2011 at 3:47 pm)
Spector says:
June 5, 2011 at 12:27 pm
“If you do a PSD, you can find the periods which best describe the data for the last century. Beyond that… how to choose? Analyze proxy data?”
The plot is based solely on the HadCRUT3v data from Jan, 1850 to Mar, 2011 using the Microsoft Office 2007, Excel Solver utility to adjust the parameters for minimum square error. I forced a minimum amplitude of .001 deg C and required the periods to be sequential. To prevent unrealistic solutions, I also multiplied the error sum by one plus 0.1 times the square root of the sum of the squares of the trial amplitude factors. I believe that forcing a minimum energy solution reduces the likelihood that the approximation might be ill behaved at the end points. (Which it often is if I don’t.)
Given that the data interval was 161 years, I would be surprised if any predictability extended more than 40 years on either end. It seems to be treating our current warm interval as an enhanced repetition of the peaks of 1940 and 1880.
I based this technique on the fact that an FFT will not estimate the frequency of a small fraction of a sine wave contained in a multi-sample record, but if you ask an optimization program to find the best fitting sine curve, it may give you a good answer.
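That last point is easy to demonstrate. The sketch below (my own illustrative numbers, not Spector’s workbook) compares an FFT with a direct least-squares cosine fit on a record containing only about a quarter of a cycle:

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(161.0)                  # 161 annual samples, roughly the HadCRUT span
true_period = 600.0                   # only ~a quarter of a cycle is actually observed
y = np.cos(2 * np.pi * t / true_period)

# The FFT piles the energy into the lowest non-zero bin: "period ~ record length"
spec = np.abs(np.fft.rfft(y - y.mean()))
freqs = np.fft.rfftfreq(t.size, d=1.0)
print("FFT peak period:", round(1.0 / freqs[1:][np.argmax(spec[1:])], 1))

# A direct best-fit cosine can recover a period much longer than the record
cosine = lambda t, amp, period, phase: amp * np.cos(2 * np.pi * t / period + phase)
popt, _ = curve_fit(cosine, t, y, p0=[1.0, 400.0, 0.0])
print("fitted period:", round(popt[1], 1))
```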
Sorry to be silent for so long. You’ve all provided intelligent commentary, and I regret not having time to participate and attempt replies.
But I did have some time today, and posted a reply at Tamino’s critique. We’ll see what happens. Those of you who put credence there are encouraged to take a look, and participate.
Why post there? AFAIK the main issues raised there were first raised here, and this is a civilised, uncensored forum, unlike Tamino’s, which doesn’t deserve the patronage.
Leif, you wrote, “his hypothesis then is that the curves and trend found for the data in the fitting window are also valid outside…”
My hypothesis, first, was that the oscillation that appeared in the GISS 1999 anomalies, following addition of the SST anomalies to the land-only anomalies, reflected a net world ocean thermal oscillation. The cosine + linear fits proceeded from that hypothesis.
In the event, the cosine in the full fit had about the same period as the oscillation that appeared in the GISS data set after the SSTs were added.
Then, pace Bob Tisdale, the cosine period proved to be about the same as the PDO+AMO period noted by Joe D’Aleo and about the same as the persistent ocean periods of ~64 years reported by Marcia Wyatt, et al.
I took those correspondences — the appearance with SST, correspondence with the ocean thermal periods — to provide physical meaning to the oscillation in the global temperature anomaly data sets, represented by the cosine parts of the fits. This doesn’t seem unreasonable, and lifts the analysis above “numerology.”
Following the assignment of physical meaning, an empirical analysis such as the above must be hypothetically conservative and mustn’t ring in expectations from theory. If the early part of the 20th century showed warming generally accepted as almost entirely natural, then it is empirically unjustifiable to arbitrarily decide that the natural warming after 1950 is different than the natural warming before 1950.
That means the natural warming rate from the early part of the 20th century is most parsimoniously extrapolated into the later 20th century, absent any indicator of significant changes in the underlying climate mode.
The rest of my analysis follows directly from that. The net trend, after projecting the natural warming trend in evidence from the early 20th century, is that the later 20th century, through to 2010, warmed 0.03 C/decade faster than the early 20th century.
This excess rate may turn out to be wrong, when a valid theory of climate disentangles all the 20th century drivers and forcings. However, it is presently empirically justifiable.
The trend I extrapolated to 2100 wasn’t a prediction, but merely a projection, à la the IPCC, of what could happen if nothing changes between now and then. That, of course, is hardly to be expected, but at least I put that qualifier transparently in evidence. I.e., “Let’s suppose the last 50 years are an indication of how the climate system will respond to the next 100 years of a continued increase in GHG forcing.”
And so it goes. 🙂
Apologies for neglecting to close that link. 🙁
I second Alan’s motion. I always feel like I need to bathe after I look in over there.
Spector says:
June 5, 2011 at 6:46 pm
“…but if you ask an optimization program to find the best fitting sine curve, it may give you a good answer.”
I’m just saying, it would be nice to ask it to optimize something which could be pondered as having physical significance. You could use the proxy data. If you claim it’s crap, I won’t disagree. But, it might be interesting to see what a long term cyclic expansion predicts.
@Matt, @Leif Svalgaard,
Well, of course you are correct that it may not be “proper” to fit a sine to a dataset just because its tempting peaks and troughs more or less beg you to do so. When you really only have two cycles to go on, that isn’t really enough. You need more data to be sure.
The obvious source of more data is the Central England Temperature Record:-
http://en.wikipedia.org/wiki/File:CET_Full_Temperature_Yearly.png
You can see the same peaks and troughs in the Central England Temperature Record from 1880 to 2007, i.e. the same 60-yr cycle is apparent; but sadly, if you go back in time you can see that the cycle breaks down. However, if you take this chart as a means of disputing Pat Frank’s claims you are in trouble if you are hoping to see AGW, since this dataset clearly shows that there is nothing special about temperatures post-1950. Thanks to the LIA recovery, temperature records will likely get broken by 0.2 Celsius every 60 yrs or so, and for the UK the last time just happened to be in 2002, by just that amount.
Oh, by the way, it was 25 Celsius in Southern UK on Saturday – had a nice BBQ and set up the kids’ trampoline. Sadly, today it has clouded over and the wind is blowing from the north – it might reach 15 Celsius today if we are lucky. Something must have sucked all the CO2 out of the atmosphere over the weekend…
RE: Bart: (June 5, 2011 at 11:26 pm)
“You could use the proxy data. …”
Do you have a preferred realistic public proxy? I used the HadCRUT3v data because it has the longest official record based on measured data and is at least similar to one of the data sets used in the main article.
After reading the thoughtful responses of commenters here, and Tamino’s analysis (you may not like the tone, but the substance of his argument is valid), it seems like Frank needs to consider a thorough revision of his essay.
Spector says:
June 6, 2011 at 5:37 am
“Do you have a preferred realistic public proxy?”
Not really. They’re all dubious. But, at least including it in the analysis might give an idea of how sensitive the prediction going forward is to what was modeled coming before.
JOhn H says:
June 6, 2011 at 11:27 am
OK, I looked. And, I need a shower. His claims have no merit. Two full cycles is statistically significant. It is too much of a coincidence. In 1940, if you had said, “temperatures have risen in apparently cyclical fashion, and we should hit another peak in about the year 2000,” it would have been proper to say “there is not enough data to say that with any confidence.” But, when the second rise is confirmed on schedule, it is clear that there is something to it.
I looked further at his link here. Jeez, he’s using periodograms. How elementary and jejune. And he fails utterly to make his case. The higher peaks in the graph following “The periodogram looks like this:” are at too low a frequency to have any confidence in, given the time span of the data. The others are clearly not Cauchy peaks. Fail.
There is an additional point to make. There are some plots where he uses ridiculous piecewise fits and such, and says that, since these cannot be said to reflect the underlying processes, neither can the sinusoidal fits. But this ignores the ubiquity of cyclical processes in the entire panorama of natural processes in every field of science and engineering. This ubiquity comes about because A) trig functions form a functional basis and can be used in an expansion to describe any bounded process, and B) every physical process anywhere in the universe depends on vector projections, which are always proportional to a cosine function.
This is not numerology in any way, shape, or form. It is making the assumption that the functional form of the process we are observing is likely to be the same as that of every other natural process we have ever observed. This is why Fourier analysis is such a powerful tool for ferreting out the underlying principles governing any natural time series, and one of the first things an investigator should look at when attempting to do so.
A very powerful riposte, Bart, thanks.
Bart says:
June 6, 2011 at 1:59 pm
This is not numerology in any way, shape, or form. It is making the assumption that the functional form of the process we are observing is likely to be the same as that of every other natural process we have ever observed. This is why Fourier analysis is such a powerful tool for ferreting out the underlying principles governing any natural time series
The numerology is in the assumption. BTW, Fourier analysis on global temperatures [e.g. Loehle’s] shows no power at 60 years, or any other period for that matter:
http://www.leif.org/research/FFT%20Loehle%20Temps.png
Leif Svalgaard says:
June 8, 2011 at 8:32 am
Fourier analysis on global temperatures [e.g. Loehle’s] shows no power at 60 years, or any other period for that matter: http://www.leif.org/research/FFT%20Loehle%20Temps.png
For periods around 60 years.
Now, there is a curious sequence of peaks at higher frequencies: http://www.leif.org/research/FFT%20Loehle%20Temps-Freq.png
The spacing between the peaks is 0.0345 [in frequency per year]. This corresponds to a period of 29.0 years and is likely due to Loehle’s data being 30-yr averages sampled every year, creating appropriate fake periods. Again an example of how Fourier analysis misleads you.
“The numerology is in the assumption.”
“It is making the assumption that the functional form of the process we are observing is likely to be the same as that of every other natural process we have ever observed. “
Sorry. I don’t see it.
“BTW, Fourier analysis on global temperatures [e.g. Loehle’s] shows no power at 60 years, or any other period for that matter:”
This is a naive analysis. All you’ve got here is essentially a reflection of the offset and trend in the data and a bunch of noise. PSD estimation is a lot more involved than that. I almost posted the below, but decided people probably wouldn’t be interested. Now, I think I will go ahead.
If you would like me to take a crack at it, let me know where I can get the data.
——————————-
Some pointers about constructing a PSD estimate, for those who might be interested. PSDs are well-defined only for stationary processes. People sometimes use them for quantifying non-stationary processes, but it’s not generally a good idea for a variety of reasons. Thus, before performing a PSD on data with both stationary and non-stationary components, some pre-treatment is advisable.
For FFT based methods, detrending, or subtracting out other higher order polynomial fits, is often useful for diminishing the impact of non-stationary components. However, one must be aware that this does introduce bias into the PSD estimate, particularly at low frequencies.
A PSD is the Fourier transform of the autocorrelation function. A periodogram is an estimator of the PSD calculated as the magnitude squared of the FFT of the data, divided by the data record length. As such, it is a biased estimator, though the bias decreases for stationary processes as the length of the data record increases. Furthermore, it is highly variable, and the variance does not go down as the length of the data record increases. Averaging together periodograms of chunks of data is a common method employed to reduce the variance, but at the cost of greater bias. Bias becomes particularly bad when those chunks are shorter than the longest significant correlation time.
A better FFT method is first to compute the autocorrelation estimate. By inspection, you can then see where the function is well behaved, how long the longest correlation time is, and where the autocorrelation estimate starts to lose coherence. You window it to that time with an appropriate window function (see, e.g., the classic text by Papoulis) and then compute the PSD. This method is generally far superior to averaging windowed periodograms, where one goes in blind without knowing any of the correlation details.
The DFT (computed via the FFT) is actually a sampled version of a continuous function, the discrete-time Fourier transform, where the frequency samples are spaced proportional to 1/N, where N is the length of the input record. A simple method to sample with higher density is to “zero pad” the autocorrelation estimate past the point where the window function goes to zero.
Once one determines parameters for the higher frequency content of the signal, an ARMA (autoregressive moving-average) model can be constructed for it, and this can be used to aid more sophisticated estimation methods, as desired.
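A compact sketch of the autocorrelation, lag-window, transform route outlined above (essentially a Blackman-Tukey estimate). The lag cutoff, the Hann window choice, and the synthetic test series are my own illustrative assumptions:

```python
import numpy as np

def blackman_tukey_psd(x, max_lag, nfft=4096):
    """Sketch of the autocorrelation -> lag window -> FFT route described above.
    max_lag should be chosen by inspecting where the autocorrelation loses coherence."""
    x = np.asarray(x, dtype=float)
    n = np.arange(x.size)
    x = x - np.polyval(np.polyfit(n, x, 1), n)            # detrend (also removes the offset)
    acf = np.array([np.dot(x[:x.size - k], x[k:]) / x.size for k in range(max_lag + 1)])
    acf *= np.hanning(2 * max_lag + 1)[max_lag:]           # taper with a Hann lag window
    r = np.zeros(nfft)                                     # zero-padded, circularly symmetric
    r[:max_lag + 1] = acf
    r[-max_lag:] = acf[1:][::-1]
    return np.fft.rfftfreq(nfft, d=1.0), np.fft.rfft(r).real

# Usage on a synthetic annual series: a 62-yr cycle buried in white noise
rng = np.random.default_rng(3)
t = np.arange(300)
x = np.cos(2 * np.pi * t / 62) + rng.standard_normal(t.size)
f, p = blackman_tukey_psd(x, max_lag=120)
print("estimated peak period ~", round(1.0 / f[1:][np.argmax(p[1:])], 1), "years")
```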
Bart says:
June 8, 2011 at 11:57 am
This is a naive analysis. All you’ve got here is essentially a reflection of the offset and trend in the data and a bunch of noise.
The temperature reconstruction is so noisy [and uncertain] that a more sophisticated analysis is hardly worth the effort, but have a go at it. The data is here: http://www.ncasi.org/publications/Detail.aspx?id=3025
It isn’t pretty. I didn’t realize this was proxy reconstruction. But, I do discern peaks at 88, 62, and 23 years.
There are others, but these seem to have the most significant energy. A couple of apparent peaks also occur at 52 and 43 years, but they’re kind of ambiguous.
Could have sworn I posted back on this, but it has disappeared.
I did not realize you were looking at proxy data. Very messy. But, I am able to pick out the most significant peaks at 134, 88, 62, and 23 years. A couple more at 52 and 43 years are kind of ambiguous.
Well, now it’s there. I was looking back because I wanted to add the 134 year one in.
Bart says:
June 8, 2011 at 1:26 pm
It isn’t pretty. I didn’t realize this was proxy reconstruction. But, I do discern peaks at 88, 62, and 23 years.
The time resolution is in reality 30 years. The 30-yr averages were then re-sampled every year, but that does not really create any new data.
Bart says:
June 8, 2011 at 2:48 pm
I did not realize you were looking at proxy data. Very messy. But, I am able to pick out the most significant peaks at 134, 88, 62, and 23 years. A couple more at 52 and 43 years are kind of ambiguous.
for 30-year data, you cannot pick out anything below 2*30 years [remember Nyquist?]. “most significant” should not be conflated with ‘just the largest’ peaks. a peak can be the largest, yet not be significant.
A standard ‘naive’ method of getting a handle on significance is simply to calculate the FFT for the two halves of the data. Here is what you get: httt://www.leig.org/research/FFT%20Loehle%20Temps-2%20Halves.png
You can see the effect of the oversampling in the dips and peaks below 30 years. Above 30 [or 60] there are no consistent peaks. This is not rocket science.
httt://www.leif.org/research/FFT%20Loehle%20Temps-2%20Halves.png
Leif Svalgaard says:
June 8, 2011 at 4:20 pm
http://www.leif.org/research/FFT%20Loehle%20Temps-2%20Halves.png
I’m extra fat-fingered today
That’s not how filters work. It isn’t a sharp cutoff. A 30 year average has its first zero at 1/30 years^-1. The next one is at 1/15 years^-1, then at 1/10 years^-1, and so on. At 1/23 years^-1, the gain is about 0.2. So, all this means is that the energy in the component at 23 years is, in reality, 25X larger than it appears in my PSD. And, the center of the peak could be a little shifted by the filter lobe, so it might really be +/- a couple of years.
It is the resampling which allows me to see the 23 year cycle. Otherwise, it would have been aliased to 98.6 years.
It may be of interest to note that 20-to-23-year cyclic behavior commonly crops up in environmental variables, as “Spector” found above in his fit. There is also a significant, roughly 21-year cycle in the MLO CO2 measurements. Those measurements have significant sinusoidal components at roughly 1/4, 1/3, 1/2, 1, 3.6, 8.5, and 21 years.
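Those two numbers are easy to check. A short sketch treating the 30-year running mean as an ideal continuous boxcar filter (an idealization of mine) reproduces both figures quoted above:

```python
import numpy as np

T, P = 30.0, 23.0                                   # averaging length and cycle period, in years

# Gain of a T-year boxcar average at period P is |sinc(T/P)| (numpy's sinc = sin(pi x)/(pi x)),
# with zeros at frequencies 1/30, 1/15, 1/10 yr^-1 and so on, as stated above.
gain = abs(np.sinc(T / P))
print(f"gain at {P:.0f} yr: {gain:.2f}")            # ~0.20
print(f"power attenuation: {1 / gain**2:.0f}x")     # ~25x

# If the averages were also *sampled* only every 30 years, a 23-yr cycle would alias
f_alias = abs(1.0 / P - 1.0 / T)
print(f"aliased period: {1 / f_alias:.1f} years")   # ~98.6 years
```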
Leif… stop. Read my note above. You are incorrect.
‘“most significant” should not be conflated with ‘just the largest’ peaks’
Indeed. Which is why I wrote “these seem to have the most significant energy”.
Bart says:
June 8, 2011 at 7:12 pm
‘“most significant” should not be conflated with ‘just the largest’ peaks’
Indeed. Which is why I wrote “these seem to have the most significant energy”.
“Energy”? Perhaps you mean ‘power’? ‘Seem to have’? Either they have or they don’t. The 30-year average is a running average.
The 22-year peak is not stable. It occurs only in the first half of the data, not in the last half. All the peaks and valleys you see below 30 years are not real. The resampling does not help you here. I could resample with one-month resolution and study the annual variation, right?
Leif Svalgaard says:
June 8, 2011 at 7:42 pm
The 22-year peak is not stable. It occurs only in the first half of the data, not in the last half.
The FFT gives the amplitude of the sine wave. Your 22-yr period [when present, before 1000 AD] has an amplitude of 0.01C, which is way below the accuracy of the reconstruction. http://www.leif.org/research/FFT%20Loehle%20Temps%20Comp.png
As I said: numerology.
“Energy”? Perhaps you mean ‘power’?
It is average power, which is energy divided by the record interval. Conventionally, we usually refer to the result of integrating the PSD as “energy” to avoid ambiguity. How widespread that convention is, I really am not sure, so perhaps I should have explained it.
“The 22-year peak is not stable. It occurs only in the first half of the data, not in the last half.”
Nope, it’s still there. You just can’t see it because your analysis method is so lousy.
“Your 22-yr period … has an amplitude of 0.01C…”
This is a stochastic signal. Discussing “amplitude” is not really rigorous. In any case, as I explained, it is being significantly attenuated by the running average, so the actual signal is many times larger than what is observed.
“All the peaks and valleys you see below 30 years are not real.”
I’ve tried to explain it to you. Why are you insisting on something in an area in which you are not particularly proficient with someone who is? I feel like I’m arguing with Myrrh again.
“Discussing “amplitude” is not really rigorous.”
Let me try to explain this a little. What we are dealing with is a distributed parameter system. Distributed parameter systems are generally characterized by partial differential equations (e.g., equations of structural dynamics, Navier Stokes equations, etc…). Via functional analysis, we can determine certain eigenmodes, i.e., certain configurations (mode shapes) of the system which oscillate at particular sinusoidal frequencies in response to exogenous inputs.
For a given system, taken in isolation, there is generally a lowest frequency mode, which we call the fundamental mode, and various higher frequency modes which require steadily escalating energy input to excite (note: I may slip from time to time and refer interchangeably to the “mode” meaning the mode shape or the modal frequency – it is part of the jargon. It should be clear what I mean by the context). In general, the “bigger” the system, the lower the fundamental mode. Interaction of the various modes can create complex dynamics which alternatingly interfere constructively and destructively with one another.
Dissipation of energy leads to eventual damping of these responses. However, if a mode is continually fed by a wideband excitation source whose bandwidth encompasses the modal frequency, it can keep getting regenerated ad infinitum. Over time, this signal grows and fades. Depending on the rate of energy dissipation and the time span under observation, it can look like a steady state sinusoid, or it can look like a (generally nonuniformly) amplitude and phase modulated sinusoidal signal.
The climate is a distributed parameter system (or, perhaps more accurately, a series of overlapping piecewise continuous ones). It has certain modes which are excited by various energy inputs, from the Sun (electromagnetic radiation), from the Moon (tidal forces), from intergalactic cosmic rays, from internal heat dissipation, etc… We know some of these modes well: The PDO, the AMO, the ENSO… These are responses of the distributed parameter system of the Earth to wideband forcing(s). If you took away the forcings, they would gradually decay and die out.
For such a huge system as the climate system of the Earth, the fundamental modes are certain to be very, very long relative to our perceptions. But, there is ample energy to excite a plethora of higher frequency modes as well. And, of course, there are additionally steady state, near perfectly sinusoidally varying diurnal, monthly, seasonal, and longer term inputs, as well.
The constructive and destructive interference of all these modes, along with the steady state periodic excitations, form what we call “climate.”
PSD analysis is an excellent way to look for the modal frequencies and, once found, they may be observed to be quasi-steady state, or they may surge and fade. But, they will be recurring, because they are part of the physical system which defines, or constrains, or begets… however you want to say it… the climate system.
The 60 year cosine is a fair start to this sort of crude approximation, but why do you choose to fit a straight line? Because it’s straight? Not a very good start.
CO2 must have some effect according to basic radiation physics, even ignoring the IPCC’s attempts to multiply it up. Such a radiative forcing will affect the rate of change of temperature, not the temperature. If we approximate the CO2 level as increasing exponentially and then account for the saturation of the blocking effect (the absorption effect grows only logarithmically as CO2 goes up), we get a linear increase in the forcing. This acts to produce an increasing rate of change, i.e. accelerating warming, not a linear one. In fact this simple approximation gives a quadratic rise. It’s small, but it is increasing faster as it goes along. In fact this is why you see your increasing slopes in figure 4.
You need to redo your fits with cosine plus quadratic and see what it gives.
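A minimal MATLAB sketch of the kind of cosine-plus-quadratic fit suggested here (purely illustrative; the vectors year and T stand for the anomaly years and values, and the starting guesses, including the 60 year period, are placeholders):
% T: anomaly series in degC, year: matching calendar years (column vectors assumed)
t = year - year(1);
model = @(p,t) p(1)*cos(2*pi*t/p(2) + p(3)) + p(4) + p(5)*t + p(6)*t.^2;
cost = @(p) sum((T - model(p,t)).^2); % least-squares objective
p0 = [0.2; 60; 0; 0; 0.005; 0]; % rough guesses: amplitude, period, phase, offset, linear, quadratic
p = fminsearch(cost, p0); % base-MATLAB simplex search, no toolboxes needed
resid = T - model(p,t);
fprintf('period %.1f yr, linear %.4f C/yr, quadratic %.2e C/yr^2\n', p(2), p(5), p(6));
Whether the quadratic term comes out distinguishable from zero is exactly the error-estimate question raised next.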
But be warned: your residuals here (which you have not put a scale on in figure 1) are about +/-0.2C, on data that have a total range of only about +/-0.4C over the whole dataset. Any fits you do will only be weakly correlated with the data, and the margin for error in any magnitudes (like the magnitude of the cosine or quadratic terms) is quite large.
You need to try to produce an error estimate for any result you find. Any result without that is not scientific.
0.009 is tiny but you need to say something like 0.009 +/- 0.001 to give it meaning.
If it turns out to be 0.009 +/- 0.85 you get a better idea of how meaningful your answers are.
Bart says:
June 8, 2011 at 9:43 pm
“The 22-year peak is not stable. It occurs only in the first half of the data, not in the last half.”
Nope, it’s still there. You just can’t see it because your analysis method is so lousy.
Show me your analysis. ‘Nope’ doesn’t cut it.
It is there. The apparent energy (given the quality of the data) appears to vary, but this is in no way incompatible with the behavior which might be expected of random modal excitation. Moreover, a 21 year cycle (which is within a reasonable error bound) appears clearly in the 20th century direct measurements as well (see Spector @ June 5, 2011 at 12:27 pm).
Leif, your methods are poor. You use the FFT improperly. You do not understand aliasing. You do not understand transfer functions for FIR filters (the simplest of which is the sliding uniformly weighted average). You do not know what a PSD is. You do not understand stochastic processes. You are belligerent and accusatory with a guy who has been at this for over a quarter of a century, analyzing data and creating models which are employed in real world systems which you have almost certainly unwittingly used.
I see no value in continuing this conversation.
Bart says:
June 9, 2011 at 10:05 am
It is there. […] I see no value in continuing this conversation.
Show it.
Here is how Loehle describes his data:
“The present note treats the 18 series on a more uniform basis than in the original study. Data in each series have different degrees of temporal coverage. For example, the pollen-based reconstruction of Viau et al. (2006) has data at 100-year intervals, which is now assumed to represent 100 year intervals (rather than points, as in Loehle, 2007). Other sites had data at irregular intervals. This data is now interpolated to put all data on the same annual basis. In Loehle (2007), interpolation was not done, but some of the data had already been interpolated before they were obtained, making the data coverage inconsistent. In order to use data with non-annual coverage, some type of interpolation is necessary, especially when the different series do not line up in dating. This interpolation introduces some unknown error into the reconstruction but is incapable of falsely generating the major patterns seen in the results below. An updated version of the Holmgren data was obtained. Data on duplicate dates were averaged in a few of the series. Data in each series (except Viau, because it already represents a known time interval) were smoothed with a 29-year running centered mean (previously called a 30 year running mean). This smoothing serves to emphasize long term climate patterns instead of short term variability. All data were then converted to anomalies by subtracting the mean of each series from that series. This was done instead of using a standardization date such as 1970 because series date intervals did not all line up or all extend to the same ending date. With only a single date over many decades and dating error, a short interval for determining a zero date for anomaly calculations is not valid. The mean of the eighteen anomaly series was then computed for the period 16 AD to 1980 AD. When missing values were encountered, means were computed for the sites having data. Note that the values do not represent annual values but rather are based on running means.”
My poor understanding was enough to actually conclude that he used a 29-year running mean. Your mistake is to assume that the climate system responds to very many actual cycles [e.g. of 8.5 and 3.6 yrs] and that the proxy data is good enough to find anything less than 30 years.
I will give an example of what I am talking about. The following code is written using MATLAB. Hopefully, it should be transparent for users of other languages.
First, set up the constants governing a particular mode with a 23 year quasi-period (resonant frequency near 1/23 year^-1):
zeta = 0.001;
a=2*exp(-zeta*2*pi/23)*cos(2*pi/23);
b=exp(-2*zeta*2*pi/23);
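% (a and b place the poles of the recursion z^2 - a*z + b at r*exp(+/-i*w0),
% with w0 = 2*pi/23 and r = exp(-zeta*w0): a lightly damped resonance near a 23 year period)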
Define a data series representing the vibration of a slightly damped oscillating mode driven by Gaussian “white” noise:
x=zeros(1,1000);
for k = 3:1000
x(k) = a*x(k-1) - b*x(k-2) + randn;
end
We want to eliminate the initial transient response, so run it a few times, replacing the starting condition with the previous end condition:
x(1)=x(999);
x(2)=x(1000);
for k = 3:1000
x(k) = a*x(k-1) - b*x(k-2) + randn;
end
Now, plot “x”. What you should see is something that looks like a fairly steady oscillation with some small amplitude modulation. Now, reevaluate zeta as zeta = 0.01 and repeat. Now, you will see a lot more variation in the amplitude. Run enough cases, and you will see periods in which the oscillation virtually vanishes, only to be stirred up again by later random inputs. Try different values of zeta, and observe what it looks like. A raw FFT will start to show apparent splitting of the frequency line as zeta becomes larger. A properly executed PSD windowed over the appropriate correlation time will resolve the ambiguity.
The time constant is tau = 23/(2*pi*zeta). zeta = 1 is critical damping, at which point you should no longer see much in the way of coherent oscillation.
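One concrete way to carry out such a properly executed PSD, assuming MATLAB’s Signal Processing Toolbox is available for pwelch and hann, is a Welch estimate of the simulated series:
[P, f] = pwelch(x, hann(256), 128, 1024, 1); % 256-point Hann windows, 50% overlap, 1 sample per year
plot(f, 10*log10(P));
xlabel('cycles per year'); ylabel('PSD (dB)');
% a peak should appear near f = 1/23 = 0.0435 cycles per year, broadening as zeta grows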
“Your mistake…”
I have made no mistakes. I suggest you study up on the subject and stop digging your hole deeper.
“…is to assume that the climate system respond to very many actual cycles [e.g. of 8.5 and 3.6 yrs]…”
Those cycles are in the MLO CO2 data. It is a bad idea to respond when flustered. You tend to miss details.
“…and that the proxy data is good enough to find anything less than 30 years.”
Quality of the data is one issue. Ability to “see” particular frequencies is completely independent. Given the transmission characteristics of a 30 (or, 29, it makes little difference) year average, it is entirely possible to detect a 23 year cycle, as I have explained to the point of exhaustion.
Are we done here? I think we should be.
Just one final note: I’m not engaging in alchemy, or going off on some flight of fancy of my own here. This is all industry standard operating procedure when designing systems involving compliant structures (buildings, trusses, air frames, what have you) or fluid containment vessels (water distribution (plumbing), pumping stations, fuel tanks…). This is what Finite Element Analysis (surely, you have all heard that catchphrase) is all about: determining the modes of oscillation of distributed parameter (continuum) systems.
Every continuum system ever anywhere in the universe can be described in this fashion. The Earth and its climate are no exception.
Bart says:
June 9, 2011 at 11:55 am
I will give an example of what I am talking about.
Now smooth the data and show what you get.
Bart says:
June 9, 2011 at 12:19 pm
Those cycles are in the MLO CO2 data. It is a bad idea to respond when flustered. You tend to miss details.
Details brought up by you.
it is entirely possible to detect a 23 year cycle, as I have explained to the point of exhaustion.
But you have not shown the result. You claim to detect 23-yr in both halves of the data. Prove it.
Are we done here? I think we should be.
If you continue to evade the issue, then perhaps we should be.
Bart says:
June 9, 2011 at 12:32 pm
Every continuum system ever anywhere in the universe can be described in this fashion. The Earth and its climate are no exception
No doubt about that, but that you can describe them in this fashion, does not mean that those cycles actually exist as physical entities [which is the only thing of interest – otherwise it would just be numerology]. Remember the old joke about fitting an elephant.
“…does not mean that those cycles actually exist as physical entities…”
It would only be shocking if they did not. Along the lines of discovering that gravity is a repulsive force.
“Prove it.”
Prove it to yourself. Learn about the subject. For the record, it is undeniably visible. But, if you understood half of what I have been telling you, you would realize it makes no difference whatsoever to my thesis. It chagrins me to say it, but you’ve really gone off the deep end here, Leif.
“Now smooth the data and show what you get.”
How about you try this exercise. Generate the data as instructed. Then, pass it through a 29 point sliding average, and run your FFT on it.
Or, just generate a sinusoid with a 23 point period and pass that through a 29 point sliding average and plot the result. Do you still see the sinusoid? Of course you do, with an amplitude about 1/5 of the initial amplitude. As I’ve told you over, and over, and over, and over, and over, and….
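For anyone who would rather just run that check, a minimal MATLAB version (illustrative only):
t = (1:2000)'; % 2000 'years' of annual samples
s = cos(2*pi*t/23); % unit-amplitude sinusoid with a 23 year period
w = ones(29,1)/29; % 29-point sliding average
y = conv(s, w, 'same'); % smoothed series
fprintf('amplitude after smoothing: %.2f\n', max(abs(y(100:end-100))));
The printed amplitude comes out near 0.18 to 0.19, i.e. roughly 1/5 of the original, which is the attenuation figure used throughout this exchange.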
Bart says:
June 9, 2011 at 5:20 pm
Prove it to yourself. Learn about the subject. For the record, it is undeniably visible.
The hole you are in is that you claim that there is a 22-year cycle in both halves of the data. I have proven to my satisfaction there is not, so show your PSDs. If you do not know how to plot the data or link to your plot, email the (x,y) point values to me and I’ll show them for you.
it makes no difference whatsoever to my thesis.
Wrong attitude.
Bart says:
June 9, 2011 at 5:49 pm
Or, just generate a sinusoid with a 23 point period and pass that through a 29 point sliding average and plot the result. Do you still see the sinusoid? Of course you do,
I think I have isolated your problem: The Loehle data was not constructed by running a 29-point average over yearly data. The time resolution was much worse: of the order of 30 years or in some cases 100 years with data taken at irregular large intervals, interpolating between the gaps. Imagine you have 30 yearly values that are all the same [because you only have one actual data value], followed by another 30 years of equal [but likely different from the first 30 years] values, and so on. Instead of assuming a constant value, you could interpolate between the scattered points. The values have a large noise component [likely larger than the difference between adjacent 30-yr periods]. This is the data you have to deal with. You claim categorically that you have found a clear 22-yr period in the first half of the data [about a 1000 years] and also in the last half of the data [naturally with the same phase]. This is what I dispute and ask you to demonstrate.
Great analysis. Clearly there needs to be more research into feedback responses since computer models obviously couldn’t predict them all.
However, I would disagree about the continuous negative feedback having a high probability of being in place throughout the 21st century, and that the linearity in the data is likely to continue in the present fashion. Some things to consider are the possible positive feedbacks that would still work despite ongoing negative feedbacks, and although their co-interactions, if any, will likely be non-linear, the additional feedback processes are always an item to consider within any complex system. Some more important factors that could affect the future climate as it pertains to the analysis of a hypothetical negative-feedback inferred from your sinusoid-plus-trend correlation:
-Lag times between forcings and climate response. This includes both the immediate and long-term effects of various GHGs, solar forcing, oceans, oscillations, ice-melt patterns, etc. For example, the DIRECT immediate solar forcing appears to have a lag time of ~2.2 a (Scafetta and West, 2005).
-The cloud and water vapor feedbacks. This is a rather complex system: increased tropospheric WV from warmer SSTs would augment the greenhouse effect (Held and Soden, 2000), while recent higher convection in the tropical Pacific combined with a cooler stratosphere has removed this greenhouse gas from the upper levels, reducing overall warming (Rosenlof and Reid, 2008). However, this negative feedback effect only ramped up after 2000, meaning it may represent a tipping point toward negative feedbacks, or it may be inherently unstable and could reverse itself at any time.
-The CAUSE of post-1860 base warming. Since regular 60-year cycles appear to raise global temperatures by about 0.6C before hitting the peak and cooling by 0.3C, it is important to determine the underlying factor. Is it recovery from the LIA and coinciding ‘solar re-awakening’, or is something more in play here, such as some long-term ocean feedback, an extra forcing from GHGs, or a yet-undiscovered cause? If so, could this effect be weakening, and thus no longer contribute to most of the post-1970 warming, or have GHGs only begun to augment this effect? It is impractical to assume linearity, without knowing what causes it.
-Undetected positive feedbacks. This of course includes the additional release of GHGs from permafrost melting, pine beetle and fungus population growths, methane clathrate releases, peat bog fires, conflagrations in weakened forests caused by biome shifts, Arctic dipole anomalies resulting in colder winter northern hemisphere continents and thus lowered CO2 absorption in winter, and the like. Many computer models assume the positive feedbacks will outweigh the negative ones, which may be true, but we don’t actually know.
-Interactions between GHG-induced forcings and other anthropogenic factors such as soot, brown clouds, Arctic haze, the ozone holes, ground-level ozone and contrails. Many of these effects will change over time, as for example the ozone hole has strengthened the Antarctic polar vortex and thus caused surface cooling over East Antarctica, while ozone recovery will have other effects, as will the Arctic polar ozone anomaly, polar stratospheric clouds, noctilucent clouds and depth changes in the Arctic troposphere. Meanwhile, soot blocks out the sun and so may be delaying the GHG-induced warming until it is removed, but soot accelerates polar ice melting when it lands. Contrails have a similar effect, causing both cooling and greenhouse-type warming under various circumstances, and additionally change the water vapor feedback. Many of these factors may simply be delaying, removing altogether or immediately increasing the effects of GHG forcing and associated warming.
The lowest quoted figure I’ve seen to date for GHG-induced 20th century warming has been a contribution of 0.1C, which this article’s analysis does not contradict, but if the baseline warming cannot explain the warming post-1970 then the effect may be much greater, on the scale of 0.5C of GHG-induced warming. This effect, present and future, will depend on anthropogenic emissions and future feedback processes. It is very likely that climate sensitivity is variable over time, depending on possible ameliorating or augmenting factors such as background CO2 levels, direction of GHG change, rate of temperature and CO2 change, presence of potential feedback factors, ocean CO2 and oxygen, solar activity, ice extent, heat contribution of the ocean, forest cover, water vapor concentrations and others. In one example, the temperature and CO2 trends became decoupled during the Cretaceous, and this may occur sometime in the future. Sea level rise and temperature correlations may also be affected.
One more thing to consider is the CO2-absorption ability of the oceans, and how it is impacted by conditions such as temperature, salinity, pH, ocean current flow, oxygen content, atmospheric GHGs, bioprocesses, etc. Most of the negative feedbacks can be explained by two things: the biosphere, and the hydrosphere. Throughout geological history, CO2 has only had a long-term effect on climate, whereas temperature change is often likely to create a positive feedback by raising CO2 levels whenever global temperature warms. Under such circumstances as increasing CO2 when the Earth is completely ice-covered, the biosphere and rock-sequestration processes no longer work, allowing the warming effect to take place more drastically than otherwise. If the oceans become acidic and stagnant, any negative feedback processes will likely, excuse the pun, be negated. Climate change also seems to have an effect on volcanoes.
Inevitably, the melting of large ice volumes increases the sequestration of CO2, by reducing both salinity and temperature and thus increasing the ocean’s uptake ability for absorbing the GHGs, without necessarily having much of a positive effect on plankton populations. Both the populations of plankton and coral are decreasing drastically, and the recovery will likely be too slow to re-activate the negative feedback processes by absorbing the CO2 that they normally do. The result is likely to be an abundance of positive feedbacks, then a series of negative feedbacks taking their place, assuming that modern GHG emissions continue as usual before plateauing due to resource depletion. Some factors are likely to be linear, some exponential, and others oscillatory. Climate sensitivity may depend on the change itself. As the removal of all GHGs would require a reasonably large sensitivity to drop Earth’s temperatures ~50C lower than today, so it should be reasonable for the rapidity of current GHG increases and associated factors to influence this sensitivity. Of course, my guess is probably no better than computer models, which fail at holistic processing when they receive only 10% of the input required for the holistic process to work.
The one major positive feature of the article is that it refers to current analysis of warming rather than some oft-quoted graph of Phanerozoic climate proxies being unaffected by assumed long-term CO2 levels. The occurrence of glaciations likely depends on factors other than CO2 and solar output.
This is most likely not my longest comment on a climate blog, so there’s no need to credit me for taking this onto a skeptic (refrain from non-sequitur River in Africa tangents, please) website.
REFERENCES
http://www.annualreviews.org/doi/abs/10.1146%2Fannurev.energy.25.1.441
http://www.agu.org/pubs/crossref/2008/2007JD009109.shtml
http://www.fel.duke.edu/~scafetta/pdf/2005GL023849.pdf
Leif, I made no assumption about functional forms.
The decision to try a cosine fit stemmed from the observation of a sinusoid in the GISS (land+SST) minus GISS (land-only) difference anomalies, over 1880-2010. That difference sinusoid had a period of about 60 years and showed two full cycles. It’s clearly a sign that there is an oscillation within the SST anomalies.
There’s no numerology in the difference observation, and it justifies testing a cosine function in a fit to the entire (land + SST) anomaly data set. Whatever you surmise about Craig Loehle’s work, the label of numerology does not apply to the fits in Figure 1.
Ryan, “Well of course you are correct that it may not be ‘proper’ to fit a sine to a dataset just because its tempting peaks and troughs more or less beg you to do so.”
By now, given my responses, you ought to know that I didn’t use a cosine fit just because there were attractive peaks and troughs in the centennial air temperature anomalies. I had “more data,” namely the net oscillation that appeared in the (land+SST) minus (land-only) difference anomalies.
Regarding your comment about the CET, I have test-fit the Central England Temperature anomaly data set. It’s very noisy, but one can get a pretty good fit using a ~60 year period, plus a longer period of 289 years, and a positive linear trend. Starting in 1650, the ~60 year period again propagates nicely into the peaks and troughs at ~1880, ~1940, and ~2005 in the CET data, just as it did in the more limited 130 year instrumental anomalies.
The line, by the way, implies a net non-cyclic warming of 1.1 C over the 355 intervening years.
Pat Frank says:
June 9, 2011 at 7:53 pm
Whatever you surmise about Craig Loehle’s work, the label of numerology does not apply to the fits in Figure 1.
The numerology applies especially to your fits. I can fit a very nice sine wave to the Dow Jones index since 1998. It would be numerology in the same sense as yours is.
John H, Tamino’s critique centrally depends on invalid models. I’ll have more to say about that on his blog.
J. Simpson, “CO2 must have some effect according to basic radiation physics, even ignoring the IPCC’s attempts to multiply it up.”
Radiation physics tells us that added CO2 will put added energy into the climate system. It tells us nothing of what the climate will do with that energy, or how the climate will respond. To suppose the IPCC’s point of view about a change of temperature specifically in the atmosphere is to impose onto an empirical analysis the very theory being tested. This is to engage in a circular analysis.
Removing the empirically-justified oscillation from the total anomaly data left a positive trend that really is linear within the noise, and extending over the entire 130 year period. There’s no valid point in making an empirical analysis more complicated than the data themselves exhibit. Even dividing the data into early and late trends is a little more than a totally conservative approach to the data would permit. The most empirically conservative view of Figure 1 is that there has been no evident increase in the warming rate of the atmosphere for 130 years.
Leif. “I can fit a very nice sine wave to the Dow Jones index since 1998. It would be numerology in the same sense as yours is.”
Not correct, Leif. An oscillation is apparent in the GISS (land+SST) minus (land-only) difference anomalies. Likewise in the CRU (land+SST) minus GISS (land-only) difference anomalies.
Reference to a physical observable puts my analysis distinctly outside your numerical philosophy.
Pat Frank says:
June 9, 2011 at 7:53 pm
Whatever you surmise about Craig Loehle’s work, the label of numerology does not apply to the fits in Figure 1.
The numerology applies especially to your fits. I can fit a very nice sine wave to the Dow Jones index since 1997. It would be numerology in the same sense as yours is. Here is the DJI numerology 1997-2008: http://www.leif.org/research/DJI-1997-2008.png
A straight trend plus a sine curve. The fit is good [I only show the sine part], but has no meaning at all, pure numerology. And so is yours.
Here I have added the trend back in: http://www.leif.org/research/DJI-1997-2008-with-trend.png
Pat Frank says:
June 9, 2011 at 9:03 pm
Reference to a physical observable puts my analysis distinctly outside your numerical philosophy.
I can build a wall in my backyard where the height of the wall [each brick horizontally] is proportional to the DJI; then I have a physical observable. Without a reason or a plausible possible explanation, it would always be numerology. Balmer’s famous formula was numerology for a long time: Balmer noticed in 1885 that a single number had a relation to every line in the hydrogen spectrum that was in the visible light region. That number was 364.56 nm. Take any integer n greater than 2: n squared, divided by n squared minus 4, and multiplied by 364.56 gives the wavelength of another line in the hydrogen spectrum. Niels Bohr in 1913 ‘explained’ why the formula worked, but the real explanation came only in the 1920s with the advent of quantum mechanics.
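Balmer’s rule is easy to check numerically; a short MATLAB illustration using the 364.56 nm constant quoted above:
n = 3:6;
lambda = 364.56 * n.^2 ./ (n.^2 - 4) % nm: roughly 656.2, 486.1, 434.0, 410.2
% these are the familiar hydrogen Balmer lines, H-alpha through H-delta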
Leif, as I noted, my analysis was justified by a prior physical observable. Your numerical dismissal is ill-founded.
Pat Frank says:
June 9, 2011 at 9:36 pm
my analysis was justified by a prior physical observable. Your numerical dismissal is ill-founded.
Without a plausible reason or theoretical expectation, any correlation that appears between physical observables is numerology.
Leif Svalgaard says:
June 9, 2011 at 5:51 pm
“The hole you are in…”
I am in no hole.
“…email the (x,y) point values to me…”
I have no intention of revealing personal information over so trivial a matter. I really don’t give a rodent’s derriere if you believe me or not. Assume I’m lying if you like. My thesis is still compelling.
Leif Svalgaard says:
June 9, 2011 at 6:55 pm
“Wrong attitude.”
What is my thesis, Leif? Do you have any idea? Go back and read and reread until you understand it. Play around with the simple simulation model I gave to help you understand it.
Pat Frank says:
June 9, 2011 at 9:36 pm
Your analysis is legitimate, due to the ubiquity of sinusoidal inputs and modal responses to noise in every distributed parameter system in the universe, as I have painstakingly documented in the foregoing. Leif is quite simply wrong, but he has a burr in his saddle, and you are not going to satisfy him no matter what you do.
Pat Frank says:
June 9, 2011 at 9:03 pm
Leif. “I can fit a very nice sine wave to the Dow Jones index since 1998. It would be numerology in the same sense as yours is.”
The funny thing about this is, that is exactly what the quants on Wall Street do every day. And, they make obscene amounts of money doing it.
They’ve had recent setbacks, mainly because they do more than merely observe; they interact with the system based on their observations. This creates feedback. It became significant feedback in the recent decade, and it was not designed specifically to be stabilizing feedback. But, no investment house has liquidated its financial analysis department in response. And, won’t.
The government also does a lot of this kind of thing. How do you think they come up with “seasonally adjusted” economic statistics?
Leif, when a multi-decadal oscillation appears in the difference between two temperature anomaly data sets, one of which is land+SSTs and the other of which is land-only, and when the world oceans are known to exhibit multi-decadal thermal oscillations, one has a direct physical inference. Your numerical dismissal is still ill-founded.
However, to test this inference further, I made a difference between the cosine-alone portions of the cosine+linear fits to the GISS 1999 (land-only) and GISS 2007 (land+SST) data sets. The difference oscillation of the two fitted cosines alone, tracks very well through the oscillation representing the difference of the two full anomaly data sets. The appearance of this difference correspondence indicates these independently fitted cosines capture an oscillation in the original full data sets.
If anything, the oscillation expressing the difference between the full cosine+linear fits for 1999 (land-only) and 2007 (land+SST) tracks even better through the difference oscillation of the anomaly data sets themselves.
Both fit differences, like the original anomaly difference oscillation and its cosine fit, have periods of 60 years.
Bart says:
June 10, 2011 at 12:09 am
Assume I’m lying if you like.
If there were a statistically significant 22-year signal in the 2000 year temperature reconstruction, that would be an important result. You categorically claim [several times] that there is, based on your superior understanding of distributed parameter systems in the universe. All I ask is that you produce the evidence for that.
Pat Frank says:
June 10, 2011 at 12:30 am
Both fit differences, like the original anomaly difference oscillation and its cosine fit, have periods of 60 years.
Curve fitting without understanding of the physics is and has always been numerology. Just as Balmer’s formula until it was understood, or the Bode-Titius ‘law’ [ http://en.wikipedia.org/wiki/Titius%E2%80%93Bode_law ].
Leif, curve fitting data following a valid physical inference, in light of known physical phenomena, and in the context of incomplete physical theory, is not numerology and has never been.
Pat Frank says:
June 10, 2011 at 9:43 am
curve fitting data following a valid physical inference, in light of known physical phenomena, and in the context of incomplete physical theory, is not numerology and has never been.
Of course it is numerology. Not to say that numerology cannot be useful, like the example of Balmer’s formula shows. Why are you so upset about numerology? There was a time when the purported relationship between sunspots and geomagnetic disturbances was numerology. A century later we discovered the physical process that takes the relationship out of numerology and into physics. On the other hand, the Bode-Titius law is still numerology.
Leif – I think maybe you are laboring under a misapprehension that I claimed it was present and at equal strength in the 2nd half of the data. But, as I stated here: “The apparent energy (given the quality of the data) appears to vary, but this is in no way incompatible with the behavior which might be expected of random modal excitation.”
I then gave you a simple simulation model to give insight into how these processes vary in time. Try setting zeta = 0.01, and observe how the energy of oscillation surges and fades. This is fully compatible with random modal excitation. It only depends on how fast the energy of oscillation dissipates in the absence of reinforcing excitation.
Some modes, which have ready access to sympathetic energy sinks, drain quickly, and some do not. Those which do not, we tend to see as steadier quasi-periodic oscillations, and these have longer term predictive power. Oscillations at modal frequencies are not necessarily persistent, but they are recurring, due to random forcing input which will occasionally reinforce, and occasionally either fail to reinforce or actually weaken, the oscillations.
And, of course, there is the question of the quality of the data itself, which may have picked up a particular oscillation at some times, and missed it at others, or may have introduced apparent oscillations all its own. I judge that the ~21-23 year and ~60 year oscillations are real because similar periods of oscillation are picked up in the direct 20th century measurements as well.
Bart says:
June 10, 2011 at 12:11 pm
And, of course, there is the question of the quality of the data itself, which may have picked up a particular oscillation at some times, and missed it at others, or may have introduced apparent oscillations all its own.
You claimed a significant signal was present in both halves as visible in your PSDs. You have evaded showing the evidence for that. That settles the issue for me.
I judge that the ~21-23 year and ~60 year oscillations are real because similar periods of oscillation are picked up in the direct 20th century measurements as well.
That is an invalid analysis that tries to find power at 5, 10, 20, 40, etc years. And there is power at any period, so clearly Spector would pick up such periods. Show your PSD for the modern data. BTW, we expect there to be a 0.1 degree solar cycle effect in any case.
Leif PCA outside of physical theory, e.g., is numerology. Deriving a physically valid inference and fitting observational data using physical reasoning, in the context of known physical phenomena, is not numerology.
Why do you insist on disparaging semi-empirical work? Data always lead theory. Analyzing such data using physical reasoning is not numerology. It’s standard practice in science when the theory is incomplete.
For any who would like to get a handle on what I have been talking about, here is a good video to see the modal analysis of a tuning fork. You can find other discussions if you google “modal analysis”. Most hits tend to be in regard to structures. But, you can google “fluid modal analysis” and find some specific to fluids. And, you can look up rheology and modal analysis. That the vibration modes of the Earth’s physical composition should interact with and, to a significant extent, determine its climate should be self-evident (e.g., I would expect the vibration modes of the oceans to appear prominently in climate variables).
Leif Svalgaard says:
June 10, 2011 at 12:29 pm
“You claimed a significant signal was present in both halves as visible in your PSDs.”
I claimed an observable signal was present in both halves. You have latched onto this triviality like a pit bull, and have blinded yourself to all else. I’m sure others viewing our discussion have formed their own opinions of the validity of my arguments for better or for worse, and there is nothing more I can say now which will change their opinions, so I give up. You are a lost cause.
“That is an invalid analysis that tries to find power at 5, 10, 20, 40, etc years.”
Sigh… do you ever, you know, read what people write before forming your opinions?
“…and then I allowed it to optimize the periods as well.“
Pat Frank says:
June 10, 2011 at 12:31 pm
Why do you insist on disparaging semi-empirical work?
Who says that numerology is disparagement? Numerology is OK as long as you KNOW it is numerology. The problem comes when you begin to believe that your numerology is understanding.
Bart says:
June 10, 2011 at 12:41 pm
That the vibration modes of the Earth’s physical composition should interact with and, to a significant extent, determine its climate should be self-evident
You have misunderstood the whole issue, which was that the data [Loehle] from the outset has a very coarse sampling [and is not a running average of actual yearly data]. And still no demonstration of the 22-yr cycle in the PSDs for the two halves. Since 2000 years is almost a hundred 22-yr cycles, one could safely divide the span into three periods. I guess that you are no longer claiming that PSDs that you have already made show a significant 22-yr cycle. If so, that is fine with me, because I don’t see it either.
Leif Svalgaard says:
June 10, 2011 at 1:21 pm
“You have misunderstood the whole issue which was that the data [Loehle] from the outset has a very coarse sampling [and is not a running average of actual yearly data].”
Yet, that is precisely what you yourself claimed, in so many words:
What you were seeing was the sinc function response of a 29 or 30 year averaging filter, modulated by the content of the signal.
“Since 2000 years is almost a hundred 22-yr cycles, one could safely divide the span into three periods.”
I never said it was a steady state oscillation. I have gone to great lengths to explain why it would not generally be expected to be. All of this has apparently gone sailing right over your head.
“I guess that you are no longer claiming that PSDs that you have already made show a significant 22-yr cycle.”
I never did. I said that it was “there”, i.e., that it was observable. And, as it is attenuated by a factor of 1/5 due to the averaging taking place, that would indicate that it is, in fact, much more significant in reality.
“If so, that is fine with me, because I don’t see it either.”
There’s a lot you don’t see, because you are unfamiliar with spectral estimation methods, and your analysis is very crude.
Leif Svalgaard says:
June 10, 2011 at 1:21 pm
“Who says that numerology is disparagement?”
From dictionary.com: numerology — n
the study of numbers, such as the figures in a birth date, and of their supposed influence on human affairs
At the very least, you are guilty of gross hyperbole.
Bart says:
June 10, 2011 at 1:40 pm
What you were seeing was the sinc function response of a 29 or 30 year averaging filter, modulated by the content of the signal.
That is not the case. The data were not sampled every year and then averaged. The raw resolution is only one data point per 30 years [or in some cases 100 years], so no filtering occurred.
I never did. I said that it was “there”, i.e., that it was observable.
There is power at any and all frequencies; the question is whether it is significant.
And, as it is attenuated by a factor of 1/5 due to the averaging taking place
There is no averaging of higher sampling rate data. I might have expressed that clumsily, but what I have said over and over and over again is that the scarce, widely scattered data from many datasets were lumped into 30-year intervals.
But you have still not shown the PSDs, so you have no real support for your claims.
Bart says:
June 10, 2011 at 1:46 pm
From dictionary.com: numerology — n
the study of numbers, such as the figures in a birth date, and of their supposed influence on human affairs
Very many people are firm believers in that particular example. Some even think that the positions of the planets influence the climate and the sun. A better example of [useful] numerology is Balmer’s formula.
Balmer’s formula wasn’t numerology. It was phenomenological, made to represent a physical observable. Numerology has no particular connection to the physical. Phenomenological equations, by contrast, are used in physics all the time, either when theory is inadequate or when it is too complex to solve exactly.
Phenomenological approaches to data are classically the bridge that allows observables to be systematically examined when theory fails. When the phenomenological context is physical, the approach is entirely scientific.
Your use of “numerology” has been distinctly disparaging, Leif.
Leif Svalgaard says:
June 10, 2011 at 3:30 pm
“That is not the case. The data were not sampled every year and then averaged. The raw resolution is only one data point per 30 years [or in some cases 100 years], so no filtering occurred.”
Not only are you contradicting your earlier post, you are contradicting the source:
“Data in each series (except Viau, because it already represents a known time interval) were smoothed with a 29-year running centered mean (previously called a 30 year running mean).”
You even discerned the signature of the pattern of zeros of the transfer function yourself. Maybe, I should just sit back and let you argue it out with yourself?
The funny thing is, the cycle you should have been trying to cast aspersions upon is the ~60 year one, since that is what Pat used in his fit, and it apparently appears in both the 20th century direct measurement data, and in the proxy reconstruction of the last 2000 years. Instead, you threw away your credibility by trying to play gotcha’ games over something you did not understand.
Numerology, my fanny.
Pat Frank says:
June 10, 2011 at 4:16 pm
Numerology has no particular connection to the physical.
Any curve fitting to physical parameters is numerology when there is no theory or plausible expectation that the fit should occur.
Your use of “numerology” has been distinctly disparaging
I said that numerology was OK if you KNOW it is numerology. If you deny it is numerology, then it becomes dubious.
Bart says:
June 10, 2011 at 7:11 pm
Not only are you contradicting your earlier post, you are contradicting the source:
This is what the source says:
“The present note treats the 18 series on a more uniform basis than in the original study. Data in each series have different degrees of temporal coverage. For example, the pollen-based reconstruction of Viau et al. (2006) has data at 100-year intervals, which is now assumed to represent 100 year intervals (rather than points, as in Loehle, 2007). Other sites had data at irregular intervals. This data is now interpolated to put all data on the same annual basis.”
No contradiction.
Bart says:
June 10, 2011 at 7:19 pm
The funny thing is, the cycle you should have been trying to cast aspersions upon is the ~60 year one, since that is what Pat used in his fit, and it apparently appears in both the 20th century direct measurement data, and in the proxy reconstruction of the last 2000 years.
I showed that there is no such 60 year cycle in the 2000-yr series. Now you claim there is, so you have to provide PSDs to back up that claim as well in addition to the 22-yr cycle you also claim. We are still waiting for you to comply. If you cannot or will not, then you have no credible claims. It looks more and more like this being the case as you are evading bringing forward any evidence.
Leif Svalgaard says:
June 10, 2011 at 10:01 pm
“No contradiction.”
Say what? What are the sample rates of all the proxies? How syncopated are the samples for the sparse ones? Ice core measurements can have yearly samples. The resolution decreases with depth, but that is merely an effect of spatial filtering, which effectively is temporal filtering, since the layers accumulate in time. So, there is additional filtering beyond the 30 year sliding average, and the 23 year process is attenuated by even more than a factor of 5, making it even more significant in reality.
“I showed that there is no such 60 year cycle in the 2000-yr series.”
You showed nothing of the kind. Your analysis is crap. The 60 year spike has 10X more energy than the attenuated 23 year spike. If the proprietors of this web site wish to and can post it on this thread somehow, they can shoot me an e-mail and I will send them my PSD plot.
Bart says:
June 11, 2011 at 12:52 am
Say what? What are the sample rates of all the proxies? How syncopated are the samples for the sparse ones? Ice core measurements can have yearly samples.
Loehle does not give the original data. But his 2000-yr series is the average of 18 data series, each re-sampled to 1-yr resolution and those he does give. I have here plotted all of them. The heavy black curve is his average temperature reconstruction: http://www.leif.org/research/Loehle-Mean.png
Here are the individual data series [three to each plot]. It should be clear that the vast majority of the data is too coarse to preserve any cycles of less than 30 years: http://www.leif.org/research/Loehle-1-18.png
The 60 year spike has 10X more energy than the attenuated 23 year spike. If the proprietors of this web site wish to and can post it on this thread somehow, they can shoot me an e-mail and I will send them my PSD plot.
None of the ‘spikes’ are significant with the exception of the big 2000-yr wave. Cut the data in two halves, make a PSD for each half and you’ll see. The input data is simply not good enough to show a 22-year cycle even if present.
“It should be clear that the vast majority of the data is too coarse to preserve any cycles of less than 30 years…”
Anything that is in there will appear in the analysis. So, it does not matter what the “vast majority” does.
“None of the ‘spikes’ are significant…”
You are woefully, painfully wrong. In the raw data, the 88 year process has an RMS of 0.044 degC. The 62 year process 0.041 degC. The 23 year process 0.013 degC. Adjusting them for the attenuation of the 29 year filter, they should be about 0.05, 0.06, and 0.07 degC RMS, respectively.
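The adjustment amounts to dividing each raw RMS by the gain of a 29 year running average at the corresponding period; a minimal MATLAB check using the numbers quoted above:
N = 29; % length of the running average, years
P = [88 62 23]; % periods of the three processes, years
rmsRaw = [0.044 0.041 0.013]; % RMS values quoted above, degC
gain = abs(sin(pi*N./P) ./ (N*sin(pi./P))); % filter gain at each period (about 0.83, 0.68, 0.185)
rmsAdj = rmsRaw ./ gain % comes out near 0.05, 0.06, 0.07 degC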
“So, it does not matter what the “vast majority” does.”
Which is to say, it only serves to weight how prominent it will be in the result.
I’m not saying my results are gospel truth. I realize that the data are not particularly reliable. But, the quasi-cyclic processes at 88, 62, and 23 years are significant, and reinforce the hypothesis (expectation, really) that there should be modal signatures visible in the data. In the end, this lends support to Pat Frank’s method of analysis, and the conclusion that 20th century temperatures are largely explained by natural quasi-cyclic processes reinforcing constructively to drive them up to a local peak in the first decade of the 21st century.
“Cut the data in two halves, make a PSD for each half and you’ll see. “
This is an arbitrary, capricious, and irrelevant method of analysis of a stochastic signal. I have explained this over and over and over, and it simply does not penetrate. The hump is still there, albeit at lower energy. And, this is perfectly ordinary for a modal excitation with particular energy dissipation characteristics.
Bart says:
Adjusting them for the attenuation of the 29 year filter, they should be about 0.05, 0.06, and 0.07 degC RMS, respectively.
Which all are smaller than the standard error quoted by Loehle of 0.14 degC. And you have not shown the PSDs yet. Remember to split the data into two halves. What do you mean by ‘raw’ data? The 18 series that I plotted here:
http://www.leif.org/research/Loehle-1-18.png
Please note (because I don’t want anyone to be confused): the RMS of the 62 year process being 0.041 degC, and that of the 23 year process being 0.013 degC, means that the energy ratio is (0.041/0.013)^2 = 9.95, or about 10X, as I stated above. Adjusting for the weighting of the sliding average shows these processes have comparable RMS.
On this: “I realize that the data are not particularly reliable.” I also realize that the quasi-cyclic processes I observe could be artifacts of the way the data were constructed by Loehle. But, the 62 year and 23 year processes have similar periods to those noted in the 20th century data, so I suspect they are related.
“The hump is still there, albeit at lower energy. And, this is perfectly ordinary for a modal excitation with particular energy dissipation characteristics.”
It would be perfectly ordinary if it disappeared into incoherent noise altogether for a time. This is why I provided everyone with the simple simulation model to observe how such processes might behave.
“Which all are smaller than the standard error quoted by Loehle of 0.14 degC.”
Let’s suppose I have data with 0.14 degC independent standard error masking a bias of some value, and I perform a sample mean of 1024 data points. Can I discern the bias to less than 0.14 degC?
Yes. The square root of 1024 is 32, so my sample mean standard deviation is 0.0044.
I am away for the rest of the day. Temporary non-response should not be construed as acquiescence.
Bart says:
June 11, 2011 at 12:04 pm
It would be perfectly ordinary if it disappeared into incoherent noise altogether for a time. This is why I provided everyone with the simple simulation model to observe how such processes might behave.
1000 years is just not ‘for a time’. It is a very valid procedure to test if the signal is present in subsets of the data. Now, you might have a point if 88, 62, and 23 years were the only peaks in the PSDs, but they are not, so show the PSDs. The signal is what is important; don’t inflate it by squaring it to get the power. The 18 series are here: http://www.leif.org/research/Loehle-18-series.xls ; calculate the PSDs for each and report on what you find.
Leif, what I’m saying is that what you’re calling numerology isn’t numerology. It’s physical phenomenology, which is entirely within the scope of science.
You’re misappropriating “numerology,” and using it as a term to disparage a completely valid process in science in general, and the analysis I did in particular.
Bart says:
June 11, 2011 at 12:17 pm
Let’s suppose I have data with 0.14 degC independent standard error masking a bias of some value, and I perform a sample mean of 1024 data points. Can I discern the bias to less than 0.14 degC?
Yes. The square root of 1024 is 32, so my sample mean standard deviation is 0.0044.
Except the data points are not independent.
Pat Frank says:
June 11, 2011 at 12:42 pm
You’re misappropriating “numerology,” and using it as a term to disparage a completely valid process in science in general, and the analysis I did in particular.
The Titius-Bode ‘law’ is still a good example:
The law relates the semi-major axis, a, of each planet outward from the Sun in units such that the Earth’s semi-major axis is equal to 10: a = 4+n where n = 0, 3, 6, 12, 24, 48, …, each value of n after the 3 being twice the previous one.
You would call this “physical phenomenology”. It fitted well until Neptune was discovered, but then fell as the first case outside of the defining domain was found. The Titius-Bode law was discussed as an example of fallacious reasoning by the astronomer and logician Peirce in 1898. The planetary science journal Icarus does not accept papers on the ‘law’.
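The numbers behind that history are easy to lay side by side; a small MATLAB illustration (the observed semi-major axes are approximate, scaled so that Earth = 10):
n = [0 3 6 12 24 48 96 192 384];
a_bode = 4 + n; % 4 7 10 16 28 52 100 196 388
a_obs = [3.9 7.2 10 15.2 27.7 52.0 95.5 191.9 300.7]; % Mercury through Neptune, with Ceres at 27.7
[a_bode.' a_obs.'] % the agreement holds roughly through Uranus and fails badly at Neptune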
As I said, it is OK to do numerology as long as you KNOW it is that. Once you believe that you can extrapolate outside of the defining domain without having another reason than just the fit, it is no longer OK.
“Except the data points are not independent.”
It does not matter if the “data” are not independent. What matters is if the errors are independent. Or, more precisely, how they are correlated.
“1000 years is just not ‘for a time’.
In geological terms, it is the blink of an eye. Moreover, it did not just disappear. It is simply near the noise floor (though still observable).
“It fitted well until Neptune was” yada, yada, yada…
But, there is no physical reason to expect that the planets would be arranged thusly. There is ample reason, as I have solidly justified, to expect sinusoidal variations in variables describing natural physical processes.
Should have said “It is simply nearer the noise floor.” Taking a second look, apparent average power is reduced by about 1/2, so RMS is about 70% of what it was. But, the PSD itself is significantly more variable, because I have half the data to smooth, so this is not, by any means, a certainty.
You are so far off base here, Leif. You really have no idea what you are talking about. If I find a way to post the graphic where you can see it, you will be embarrassed (or should be).
Bart says:
June 11, 2011 at 6:53 pm
What matters is if the errors are independent
For running means they are not.
But, there is no physical reason to expect that the planets would be arranged thusly. There is ample reason, as I have solidly justified, to expect sinusoidal variations in variables describing natural physical processes.
You have not justified that at all, and one does not expect sinusoidal variations in all natural physical processes. Many [most] are irreversible, e.g. making an omelet, a tornado going through, a solar flare going off, or a star burning its fuel.
Bart says:
June 11, 2011 at 7:10 pm
You are so far off base here, Leif. You really have no idea what you are talking about. If I find a way to post the graphic where you can see it, you will be embarrassed (or should be).
I don’t seem to be getting through to you, so perhaps as you suggested it is not even worth trying.
“For running means they are not.”
That depends on the correlation of the raw data. If they are uncorrelated sample to sample, a single average of a sliding average filter is still uncorrelated with other averages with which it does not overlap. Furthermore, while the sample mean reduces uncertainty for uncorrelated data by the inverse square root of N, where N is the number of samples, the uncertainty in estimates of other properties can go down faster than this. For example, when performing a linear trend on data with uncorrelated errors, the uncertainty in the slope estimate goes down as the inverse of N to the 3/2’s power. When the thing you are trying to estimate has a definite pattern or signature which can be easily discerned from the noise, you can get very rapid reduction in uncertainty.
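As a quick illustration of that scaling of the slope uncertainty, a small Monte Carlo sketch in MATLAB (unit-variance white noise, unit sample spacing; purely illustrative):
Ns = [100 400 1600]; % sample sizes
sd = zeros(size(Ns));
for i = 1:numel(Ns)
    N = Ns(i);
    t = (1:N)';
    slopes = zeros(2000,1);
    for k = 1:2000
        p = polyfit(t, randn(N,1), 1); % fit a line to pure noise
        slopes(k) = p(1); % keep the fitted slope
    end
    sd(i) = std(slopes);
end
sd % each quadrupling of N shrinks the slope uncertainty by about 8x, i.e. 4^(3/2)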
“Many [most] are irreversible, e.g. making an omelet, a tornado going through, a solar flare going off, or a star burning its fuel.”
Poke your omelet with a fork. Does it not tend to jiggle in response at a particular frequency? Are tornadoes not spawned cyclically? Are the Sun and other stars really exceptions to this universal rule?
“I don’t seem to be getting through to you…”
What is getting through to me is that you haven’t studied PSD estimation, but you think it can’t be done any better than just running an FFT over the data. And, your presumption is that anyone who tells you differently is unworthy of any respect or credence.
Bart says:
June 12, 2011 at 3:07 am
That depends on the correlation of the raw data.
There are 18 samples. The number quoted [0.14] is not the standard deviation, but is already divided by the square root of 17.
this universal rule?
There is no such universal rule. You are confusing ‘natural’ vibrations where the frequency depends on the vibrating matter [and which are quickly damped out] and ‘forced’ vibrations where the frequency is that of an applied cyclic force [e.g. the solar cycle].
PSD estimation, but you think it can’t be done any better than just running an FFT over the data. And, your presumption is that anyone who tells you differently is unworthy of any respect or credence.
The PSD is no more than the FT of the autocorrelation of the signal. In a more innocent age long ago, the autocorrelation function was often used to tease out periodicities, e.g. http://www.leif.org/research/Rigid-Rotation-Corona.pdf and can, of course, today be used for the same purpose.
What I am evidently not getting through to you is the nature of the Loehle data. Here are the FFTs of series 1, 7, and 13: http://www.leif.org/research/Loehle-1-7-13.png. The peaks are artifacts of the construction of the data, e.g. the lack of power at 29, 29/2, 29/3, 29/4, etc. for series 7. The data is simply not good enough for demonstrating a 22-yr peak. It may not be good enough for a 60-yr peak either, in which case I may have been guilty of overreaching when I stated that there was no genuine 60-yr peak in the Loehle reconstruction.
The respect is undermined a bit by your use of words like ‘capricious’ and ‘crap’, but let that slide; your words speak for themselves.
Leif Svalgaard says:
June 12, 2011 at 9:16 am
“The number quoted [0.14] is not the standard deviation, but is already divided by the square root of 17.”
I am talking about temporal smoothing of the time series in order to pick out components of the signal.
“…which are quickly damped out… ”
They are not, in general, quickly damped out – damping is a function of energy dissipation, and energy dissipation depends on the availability of sympathetic sinks. When a lightly damped mode is excited by persistent random forcing, it gets persistently regenerated. Even the “forced” vibrations you speak of are actually excitations of extremely lightly damped overall system modes.
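A minimal sketch of that regeneration (the sample rate, resonant frequency, and damping ratio below are illustrative values, not parameters from any climate series): push a very lightly damped second-order mode with nothing but white noise and its spectrum keeps a sharp peak at the resonant frequency.

import numpy as np
from scipy import signal

fs = 100.0                 # sample rate, arbitrary units
f0, zeta = 1.0, 0.01       # resonant frequency and a very light damping ratio
w0 = 2 * np.pi * f0

# Discretize x'' + 2*zeta*w0*x' + w0**2*x = forcing via the bilinear transform.
b, a = signal.bilinear([w0 ** 2], [1.0, 2 * zeta * w0, w0 ** 2], fs)

rng = np.random.default_rng(2)
forcing = rng.standard_normal(200_000)     # persistent random forcing
x = signal.lfilter(b, a, forcing)          # the mode is continually re-excited

f, pxx = signal.welch(x, fs=fs, nperseg=4096)
print(f[np.argmax(pxx)])                   # the spectral peak sits near f0 despite purely random input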
“The PSD is no more than the FT of the autocorrelation of the signal.”
But, a PSD estimate formed by the Fourier transform of noisy data is NOT consistent (in the statistical sense) or well behaved. There are reams of literature on how to improve the estimation process, literature to which you choose to turn a blind eye. I gave you pointers on the subject here.
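A minimal illustration of the consistency point (white noise as the test signal and a 256-sample segment length are my own choices): the raw periodogram’s bin-to-bin scatter does not shrink as the record grows, while a segment-averaged (Welch-type) estimate steadily settles down.

import numpy as np
from scipy import signal

rng = np.random.default_rng(3)

for N in (1024, 8192, 65536):
    x = rng.standard_normal(N)                # white noise: the true PSD is flat
    _, raw = signal.periodogram(x)            # single-FFT estimate
    _, avg = signal.welch(x, nperseg=256)     # average over overlapping segments
    print(N, raw[1:].std() / raw[1:].mean(), avg[1:].std() / avg[1:].mean())
# The raw periodogram's relative scatter stays near 1 however long the record gets;
# the averaged estimate's scatter keeps falling as more segments become available.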
A proper PSD estimate clearly shows well-defined peaks at the frequencies associated with the 88-year, 62-year, and 23-year periods, as I have stated. And, your analysis is crap.
“Even the “forced” vibrations you speak of are actually excitations of extremely lightly damped overall system modes.”
Eventually, the energy of the universe will all be locked away where no more sinks are available, and the universal heat death will ensue.
“The respect is undermined a bit by your use of words like ‘capricious’ and ‘crap’…”
And, yours, by the use of words like “numerology”, and by categorical statements like “Fourier analysis on global temperatures [e.g. Loehle’s] show no power at 60 years, or any other period for that matter” when you have not performed a valid analysis.
capricious, adjective: 1. subject to, led by, or indicative of caprice or whim; erratic.
You seem to think it means something other than what it does.
“The data is simply not good enough for demonstrating a 22-yr peak.”
We can argue the quality of the data separately. The peak is there regardless. As I stated, I believe it is likely valid simply because a similar peak also appears in the 20th century data. That point about its validity is open to debate; the existence of the peak in a proper PSD estimate of the data is not.
“When a lightly damped mode is excited by persistent random forcing, it gets persistently regenerated.”
A rather dramatic example of this, taught to neophyte engineering students, is the excitation of the twisting mode of the Tacoma Narrows Bridge, which led to its ultimate collapse.
Indeed, the universality of modal excitation extends to the quantum world. The vibration modes of an atom are those of the bound states of its electrons. In classical mechanics, these modes would eventually decay through loss of energy. But they are continually excited by the quantum potential in the Hamilton-Jacobi equation, as presented by Bohm (pp. 28-29).
I’ve now posted a three-part response to Tamino’s second round of criticisms. We’ll see how it fares.
Leif, “The Titius-Bode ‘law’ … You would call this ‘physical phenomenology’.”
No, I wouldn’t. I’d call the Rydberg formula an example of physical phenomenology. It was made in the absence of an overriding explanatory theory, but was derived to describe observations while hewing as much as possible to known physics. Other examples include the linear free-energy relationships of organic chemistry, such as the Hammett equation.
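For reference, a minimal sketch of the formula itself (the constant is rounded): Rydberg’s generalization of Balmer’s relation reproduces the visible hydrogen lines from integers and one empirically fitted constant.

# Rydberg formula for hydrogen: 1/lambda = R_H * (1/n1**2 - 1/n2**2)
R_H = 1.0968e7                    # Rydberg constant for hydrogen, 1/m (rounded)

for n2 in (3, 4, 5, 6):           # Balmer series: transitions down to n1 = 2
    inv_lam = R_H * (1.0 / 2 ** 2 - 1.0 / n2 ** 2)
    print(n2, round(1e9 / inv_lam, 1))   # ~656.5, 486.3, 434.2, 410.3 nm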
Pat Frank says:
June 12, 2011 at 6:10 pm
“I’d call the Rydberg formula an example of physical phenomenology. It was made in the absence of an overriding explanatory theory, but was derived to describe observations while hewing as much as possible to known physics.”
It was made contrary to the explanatory theory of the day [Maxwell] and was not ‘hewing’ as much as possible to known physics. It was completely contrary to known physics. And it was numerology in its day.
Bart says:
June 12, 2011 at 1:50 pm
“‘capricious’ … You seem to think it means something other than what it does.”
In what meaning did you employ it?
Your infatuation with cyclomania is on your own account, disconnected from reality. BTW, the ever-faster-expanding Universe will always have a heat sink.
There will always be peaks; the question is whether they are significant in view of the data.
Pat Frank says:
June 12, 2011 at 6:10 pm
“I’d call the Rydberg formula an example of physical phenomenology. It was made in the absence of an overriding explanatory theory, but was derived to describe observations while hewing as much as possible to known physics.”
From http://www.chemteam.info/Electrons/Balmer-Formula.html :
“At the time, Balmer was nearly 60 years old and taught mathematics and calligraphy at a high school for girls as well as giving classes at the University of Basle. […] Balmer was devoted to numerology and was interested in things like how many sheep were in a flock or the number of steps of a Pyramid. He had reconstructed the design of the Temple given in Chapters 40-43 of the Book of Ezekiel in the Bible. How then, you may ask, did he come to select the hydrogen spectrum as a problem to solve?
One day, as it happened, Balmer complained to a friend he had “run out of things to do.” The friend replied: “Well, you are interested in numbers, why don’t you see what you can make of this set of numbers that come from the spectrum of hydrogen?” […] Many of the experimentally measured values were very, very close to Balmer’s values, within 0.1 Å or less. There was at least one line, however, that was about 4 Å off. Balmer expressed doubt about the experimentally measured value, NOT his formula!”
From http://www.owlnet.rice.edu/~dodds/Files231/atomspec.pdf :
“Although the formula was very successful, it was only numerology until the development of quantum mechanics led to a spectacularly successful explanation of all atomic spectra and many similar puzzles”
From http://www.theophoretos.hostmatrix.org/quantummechanics.htm :
“A Swiss school mathematics teacher, Johann Jakob Balmer, tried to find a formula involving whole numbers which would predict exactly the frequencies of the four prominently visible spectra lines of hydrogen; if he could, then he would have discovered the eidos underlying the hydrogen spectra lines. And he did find the formula in 1885 […] The formula was a feat of numerology, not of physics.”
And so on.
Leif Svalgaard says:
June 12, 2011 at 8:19 pm
“In what meaning did you employ it?”
Coupled with “arbitrary”, as in the legal phrase.
“There will always be peaks; the question is whether they are significant in view of the data.”
Exactly. Such behavior is the rule rather than the exception. So, when you see two full cycles of an evidently periodic process, as we do in the 20th century global temperature record, it is entirely reasonable to expect that this may be the expression of a major mode of the system which has recently been, or is still being, excited.
On the subject of “numerology”, everything we know about the natural world can be traced back to empirical measurements. Here is another example of successful empiricism: What do we call the transformation of Special Relativity? Why is it not called the “Einstein Transformation”?
Bart says:
June 13, 2011 at 8:53 am
Leif Svalgaard says:
“Coupled with ‘arbitrary’, as in the legal phrase.”
There is nothing arbitrary in dividing the data into two consecutive subsets.
“There will always be peaks; the question is whether they are significant in view of the data.”
“Exactly. Such behavior is the rule rather than the exception.”
No, most peaks are not significant, and especially not with this particular data. There are statistical methods for estimating the significance of the peaks. Try to use them.
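One conventional approach, sketched minimally below under an assumed white-noise null (the synthetic series, the 60-sample period, and the 99.9% threshold are illustrative choices, not an analysis of the Loehle data): compare each periodogram bin against the chi-squared distribution it would follow if the record were pure noise.

import numpy as np
from scipy import signal, stats

rng = np.random.default_rng(4)
N = 512
t = np.arange(N)
x = 0.6 * np.sin(2 * np.pi * t / 60) + rng.standard_normal(N)   # weak 60-sample cycle buried in noise

f, pxx = signal.periodogram(x)
noise_level = np.median(pxx[1:]) / np.log(2)                # robust estimate of the flat noise level
threshold = noise_level * stats.chi2.ppf(0.999, df=2) / 2   # 99.9% point of the chi-squared(2)/2 null

print(f[1:][pxx[1:] > threshold])   # should flag a frequency near 1/60 ~ 0.017 if the cycle beats the noise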
“On the subject of ‘numerology’, everything we know about the natural world can be traced back to empirical measurements.”
Numerology is using these empirical numbers without physical justification, as was clearly shown in the several links I gave about Balmer’s formula. Here is another example: the height of the Cheops pyramid is very close to a nano-Astronomical unit. Clearly, the Egyptians must have known the distance to the Sun to high accuracy.