“Earth itself is telling us there’s nothing to worry about in doubled, or even quadrupled, atmospheric CO2”

Readers may recall Pat Frank’s excellent essay on uncertainty in the temperature record. He emailed me about this new essay he posted on the Air Vent, suggesting I cover it at WUWT; I regret that it got lost in my firehose of daily email. Here it is now. – Anthony

Future Perfect

By Pat Frank

In my recent “New Science of Climate Change” post here on Jeff’s tAV, the cosine fits to differences among the various GISS surface air temperature anomaly data sets were intriguing. So, I decided to see what, if anything, cosines might tell us about the surface air temperature anomaly trends themselves.  It turned out they have a lot to reveal.

As a qualifier, regular tAV readers know that I’ve published on the amazing neglect of the systematic instrumental error present in the surface air temperature record. It seems certain that surface air temperatures are so contaminated with systematic error – at least (+/-)0.5 C – that the global air temperature anomaly trends have no climatological meaning. I’ve done further work on this issue and, although the analysis is incomplete, so far it looks like the systematic instrumental error may be worse than we thought. But that’s for another time.

Systematic error is funny business. In surface air temperatures it’s not necessarily a constant offset but a variable error. That means it not only biases the mean of a data set, but is also likely to have an asymmetric distribution in the data. Systematic error of that sort in a temperature series may enhance a time-wise trend or diminish it, or switch back-and-forth in some unpredictable way between these two effects. Since the systematic error arises from the effects of weather on the temperature sensors, the systematic error will vary continuously with the weather. The mean error bias will be different for every data set, and so will the distribution envelope of the systematic error.
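To make this concrete, here is a minimal, purely illustrative simulation (not from the original analysis; every number in it is invented) of how a weather-dependent systematic error can bias a fitted trend even though it is not a constant offset:

    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1880, 2011)
    true_anomaly = 0.0058 * (years - 1880)            # a 0.058 C/decade trend

    # Invented weather-dependent systematic error: the bias wanders with a
    # slow "weather" variable instead of staying a constant offset.
    weather = np.cumsum(rng.normal(0.0, 0.1, years.size))
    sys_error = 0.3 * np.tanh(weather / 5.0)          # bounded, asymmetric bias

    measured = true_anomaly + sys_error + rng.normal(0.0, 0.05, years.size)

    true_slope = 10.0 * np.polyfit(years, true_anomaly, 1)[0]
    meas_slope = 10.0 * np.polyfit(years, measured, 1)[0]
    print(f"true: {true_slope:.3f} C/decade, measured: {meas_slope:.3f} C/decade")

Depending on the random draw, the recovered trend can land well above or below the true one, which is exactly the trend enhancement or diminution described above.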

For right now, though, I’d like to put all that aside and proceed with an analysis that accepts the air temperature context as found within the IPCC ballpark. That is, for the purposes of this analysis I’m assuming that the global average surface air temperature anomaly trends are real and meaningful.

I have the GISS and the CRU annual surface air temperature anomaly data sets out to 2010. In order to make the analyses comparable, I used the GISS start time of 1880. Figure 1 shows what happened when I fit these data with a combined cosine function plus a linear trend. Both data sets were well-fit.

The unfit residuals are shown below the main plots. A linear fit to the residuals tracked exactly along the zero line, to 1 part in ~10^5. This shows that both sets of anomaly data are very well represented by a cosine-like oscillation plus a rising linear trend. The linear parts of the fitted trends were: GISS, 0.057 C/decade and CRU, 0.058 C/decade.
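For readers who want to reproduce this kind of fit, here is a minimal sketch (not the actual code behind Figure 1) using SciPy; the stand-in data generator should be replaced with the real GISS or CRU annual anomalies:

    import numpy as np
    from scipy.optimize import curve_fit

    def cos_plus_line(t, a, b, amp, period, phase):
        """Linear trend plus a single cosine oscillation."""
        return a + b * t + amp * np.cos(2.0 * np.pi * t / period + phase)

    # Stand-in series: replace with the real annual anomaly data.
    rng = np.random.default_rng(1)
    year = np.arange(1880, 2011)
    t = year - 1880
    anom = (cos_plus_line(t, -0.3, 0.0058, 0.1, 60.0, 0.5)
            + rng.normal(0.0, 0.1, year.size))

    p0 = [0.0, 0.006, 0.1, 60.0, 0.0]   # guesses: offset, C/yr, C, yr, rad
    popt, _ = curve_fit(cos_plus_line, t, anom, p0=p0)
    a, b, amp, period, phase = popt
    print(f"linear part: {10*b:.3f} C/decade, period: {period:.1f} yr, "
          f"amplitude: {abs(amp):.2f} C")

    # The (data minus fit) residual should carry no net trend, as in Figure 1.
    residual = anom - cos_plus_line(t, *popt)
    print(f"residual slope: {np.polyfit(t, residual, 1)[0]:.2e} C/yr")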

Figure 1. Upper: Trends for the annual surface air temperature anomalies, showing the OLS fits with a combined cosine function plus a linear trend. Lower: The (data minus fit) residual. The colored lines along the zero axis are linear fits to the respective residual. These show the unfit residuals have no net trend. Part a, GISS data; part b, CRU data.

Removing the oscillations from the global anomaly trends should leave only the linear parts of the trends. What does that look like?  Figure 2 shows this: the linear trends remaining in the GISS and CRU anomaly data sets after the cosine is subtracted away. The pure subtracted cosines are displayed below each plot.

Each of the plots showing the linearized trends also includes two straight lines. One of them is the line from the cosine plus linear fits of Figure 1. The other straight line is a linear least squares fit to the linearized trends. The linear fits had slopes of: GISS, 0.058 C/decade and CRU, 0.058 C/decade, which may as well be identical to the line slopes from the fits in Figure 1.

Figure 1 and Figure 2 show that to a high degree of certainty, and apart from year-to-year temperature variability, the entire trend in global air temperatures since 1880 can be explained by a linear trend plus an oscillation.

Figure 3 shows that the GISS cosine and the CRU cosine are very similar – probably identical given the quality of the data. They show a period of about 60 years and an amplitude of about (+/-)0.1 C. These oscillations are clearly responsible for the visually arresting slope changes in the anomaly trends after 1915 and after 1975.

Figure 2. Upper: The linear part of the annual surface average air temperature anomaly trends, obtained by subtracting the fitted cosines from the entire trends. The two straight lines in each plot are: OLS fits to the linear trends and, the linear parts of the fits shown in Figure 1. The two lines overlay. Lower: The subtracted cosine functions.

The surface air temperature data sets consist of land surface temperatures plus the SSTs. It seems reasonable that the oscillation represented by the cosine stems from a net heating-cooling cycle of the world ocean.

Figure 3: Comparison of the GISS and CRU fitted cosines.

The major oceanic cycles include the PDO, the AMO, and the Indian Ocean oscillation. Joe D’Aleo has a nice summary of these here (pdf download).

The combined PDO+AMO is a rough oscillation and has a period of about 55 years, with a 20th century maximum near 1937 and a minimum near 1972 (D’Aleo Figure 11). The combined ocean cycle appears to be close to another maximum near 2002 (although the PDO has turned south). The period and phase of the PDO+AMO correspond very well with the fitted GISS and CRU cosines, and so it appears we’ve found a net world ocean thermal signature in the air temperature anomaly data sets.

In the “New Science” post we saw a weak oscillation appear in the GISS surface anomaly difference data after 1999, when the SSTs were added in. Prior to and including 1999, the GISS surface anomaly data included only the land surface temperatures.

So, I checked the GISS 1999 land surface anomaly data set to see whether it, too, could be represented by a cosine-like oscillation plus a linear trend. And so it could. The oscillation had a period of 63 years and an amplitude of (+/-)0.1 C. The linear trend was 0.047 C/decade; pretty much the same oscillation, but a slower warming trend by 0.01 C/decade. So, it appears that the net world ocean thermal oscillation is teleconnected into the global land surface air temperatures.

But that’s not the analysis that interested me. Figure 2 appears to show that the entire 130 years between 1880 and 2010 has had a steady warming trend of about 0.058 C/decade. This seems to explain the almost rock-steady 20th century rise in sea level, doesn’t it?

The argument has always been that the climate of the first 40-50 years of the 20th century was unaffected by human-produced GHGs. After 1960 or so, certainly after 1975, the GHG effect kicked in, and the thermal trend of the global air temperatures began to show a human influence. So the story goes.

Isn’t that claim refuted if the late 20th century warmed at the same rate as the early 20th century? That seems to be the message of Figure 2.

But the analysis can be carried further. The early and late air temperature anomaly trends can be assessed separately, and then compared. That’s what was done for Figure 4, again using the GISS and CRU data sets. In each data set, I fit the anomalies separately over 1880-1940, and over 1960-2010.  In the “New Science of Climate Change” post, I showed that these linear fits can be badly biased by the choice of starting points. The anomaly profile at 1960 is similar to the profile at 1880, and so these two starting points seem to impart no obvious bias. Visually, the slope of the anomaly temperatures after 1960 seems pretty steady, especially in the GISS data set.

Figure 4 shows the results of these separate fits, yielding the linear warming trend for the early and late parts of the last 130 years.

Figure 4: The Figure 2 linearized trends from the GISS and CRU surface air temperature anomalies showing separate OLS linear fits to the 1880-1940 and 1960-2010 sections.

The fit results of the early and later temperature anomaly trends are in Table 1.

 

Table 1: Decadal Warming Rates for the Early and Late Periods.

Data Set    C/d (1880-1940)    C/d (1960-2010)    (late minus early)
GISS        0.056              0.087              0.031
CRU         0.044              0.073              0.029

“C/d” is the slope of the fitted lines in Celsius per decade.
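The Table 1 numbers are ordinary least-squares slopes over the two windows. Continuing from the fit sketch above (again, a sketch rather than the original code), the procedure looks like this:

    # Subtract only the cosine, leaving the linearized series of Figure 2.
    linearized = anom - amp * np.cos(2.0 * np.pi * t / period + phase)

    def decadal_slope(yr, series, start, stop):
        """OLS slope over the closed interval [start, stop], in C/decade."""
        mask = (yr >= start) & (yr <= stop)
        return 10.0 * np.polyfit(yr[mask], series[mask], 1)[0]

    early = decadal_slope(year, linearized, 1880, 1940)
    late = decadal_slope(year, linearized, 1960, 2010)
    print(f"1880-1940: {early:.3f} C/d, 1960-2010: {late:.3f} C/d, "
          f"late minus early: {late - early:.3f} C/d")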

So there we have it. Both data sets show the later period warmed more quickly than the earlier period. Although the GISS and CRU rates differ by about 12%, the changes in rate (data column 3) are essentially identical.

If we accept the IPCC/AGW paradigm and grant the climatological purity of the early 20th century, then the natural recovery rate from the LIA averages about 0.05 C/decade. To proceed, we have to assume that the natural rate of 0.05 C/decade was fated to remain unchanged for the entire 130 years, through to 2010.

Assuming that, then the increased slope of 0.03 C/decade after 1960 is due to the malign influences from the unnatural and impure human-produced GHGs.

Granting all that, we now have a handle on the most climatologically elusive quantity of all: the climate sensitivity to GHGs.

I still have all the atmospheric forcings for CO2, methane, and nitrous oxide that I calculated for my Skeptic paper (http://www.skeptic.com/reading_room/a-climate-of-belief/). Together, these constitute the great bulk of new GHG forcing since 1880. Total chlorofluorocarbons add another 10% or so, but that’s not a large impact, so they were ignored.
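The post does not reproduce the forcing expressions, but the standard simplified formula for CO2 forcing (Myhre et al., 1998) gives the flavor; note it yields ~3.7 W/m^2 for a doubling, close to the IPCC’s 3.8 W/m^2 quoted below:

    import numpy as np

    def co2_forcing(c_ppmv, c0_ppmv=280.0):
        """Simplified CO2 radiative forcing (Myhre et al. 1998), W/m^2."""
        return 5.35 * np.log(c_ppmv / c0_ppmv)

    print(f"doubled CO2:    {co2_forcing(560.0):.2f} W/m^2")   # ~3.71
    print(f"quadrupled CO2: {co2_forcing(1120.0):.2f} W/m^2")  # ~7.42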

All we need do now is plot the progressive trend in recent GHG forcing against the balefully apparent human-caused 0.03 C/decade trend, all between the years 1960-2010, and the slope gives us the climate sensitivity in C/(W-m^-2).  That plot is in Figure 5.

Figure 5. Blue line: the 1960-2010 excess warming, 0.03 C/decade, plotted against the net GHG forcing trend due to increasing CO2, CH4, and N2O. Red line: the OLS linear fit to the forcing-temperature curve (r^2=0.991). Inset: the same lines extended through to the year 2100.

There’s a surprise: the trend line shows a curved dependence. More on that later. The red line in Figure 5 is a linear fit to the blue line. It yielded a slope of 0.090 C/W-m^-2.
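A sketch of the Figure 5 construction (with an invented CO2-only stand-in for the forcing series; the post’s actual series also includes CH4 and N2O, so this toy slope comes out somewhat higher than the quoted 0.090):

    import numpy as np

    years = np.arange(1960, 2011)
    excess_T = 0.003 * (years - 1960)   # the 0.03 C/decade excess warming

    # Stand-in forcing: CO2 only, ~317 -> ~390 ppmv over 1960-2010,
    # via the simplified Myhre expression.
    co2 = np.linspace(317.0, 390.0, years.size)
    forcing = 5.35 * np.log(co2 / co2[0])

    sensitivity = np.polyfit(forcing, excess_T, 1)[0]
    print(f"fitted slope: {sensitivity:.3f} C per W/m^2")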

So there it is: every Watt per meter squared of additional GHG forcing, during the last 50 years, has increased the global average surface air temperature by 0.09 C.

Spread the word: the Earth climate sensitivity is 0.090 C/W-m^-2.

The IPCC says that the increased forcing due to doubled CO2, the bug-bear of climate alarm, is about 3.8 W/m^2. The consequent increase in global average air temperature is mid-ranged at 3 Celsius. So, the IPCC officially says that Earth’s climate sensitivity is 0.79 C/W-m^-2. That’s 8.8x larger than what Earth says it is.

Our empirical sensitivity says doubled CO2 alone will cause an average air temperature rise of 0.34 C above any natural increase. This value is 4.4x to 13x smaller than the range projected by the IPCC.
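Written out, using the IPCC’s 3.8 W/m^2 for doubled CO2 and its canonical 1.5-4.5 C projection range:

    \Delta T_{2\times\mathrm{CO_2}} = 0.090\ \mathrm{C/(W\,m^{-2})} \times 3.8\ \mathrm{W\,m^{-2}} \approx 0.34\ \mathrm{C}, \qquad \frac{1.5}{0.34} \approx 4.4, \qquad \frac{4.5}{0.34} \approx 13.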

The total increased forcing due to doubled CO2, plus projected increases in atmospheric methane and nitrous oxide, is 5 W/m^2. The linear model says this will lead to a projected average air temperature rise of 0.45 C. This is about the rise in temperature we’ve experienced since 1980. Is that scary, or what?

But back to the negative curvature of the sensitivity plot. The change in air temperature is supposed to be linear with forcing. But here we see that for 50 years average air temperature has been negatively curved with forcing. Something is happening. In proper AGW climatology fashion, I could suppose that the data are wrong because models are always right.

But in my own scientific practice (and the practice of everyone else I know), data are the measure of theory and not vice versa. Kevin, Michael, and Gavin may criticize me for that because climatology is different and unique and Ravetzian, but I’ll go with the primary standard of science anyway.

So, what does negative curvature mean? If it’s real, that is. It means that the sensitivity of climate to GHG forcing has been decreasing all the while the GHG forcing itself has been increasing.

If I didn’t know better, I’d say the data are telling us that something in the climate system is adjusting to the GHG forcing. It’s imposing a progressively negative feedback.

It couldn’t be the negative feedback of Roy Spencer’s clouds, could it?

The climate, in other words, is showing stability in the face of a perturbation. As the perturbation is increasing, the negative compensation by the climate is increasing as well.

Let’s suppose the last 50 years are an indication of how the climate system will respond to the next 100 years of a continued increase in GHG forcing.

The inset of Figure 5 shows how the climate might respond to a steadily increased GHG forcing right up to the year 2100. That’s up through a quadrupling of atmospheric CO2.

The red line indicates the projected increase in temperature if the 0.03 C/decade linear fit model were true. Alternatively, the blue line shows how global average air temperature might respond if the empirical negative feedback response is true.

If the climate continues to respond as it has already done, by 2100 the increase in temperature will be fully 50% less than it would be if the linear response model were true. And the linear response model produces a much smaller temperature increase than the IPCC climate model, umm, model.

Semi-empirical linear model: 0.84 C warmer by 2100.

Fully empirical negative feedback model: 0.42 C warmer by 2100.

And that’s with 10 W/m^2 of additional GHG forcing and an atmospheric CO2 level of 1274 ppmv. By way of comparison, the IPCC A2 model assumed a year 2100 atmosphere with 1250 ppmv of CO2 and a global average air temperature increase of 3.6 C.

So let’s add that: Official IPCC A2 model: 3.6 C warmer by 2100.

The semi-empirical linear model alone, empirically grounded in 50 years of actual data, says the temperature will have increased only 0.23 of the IPCC’s A2 model prediction of 3.6 C.

And if we go with the empirical negative feedback inference provided by Earth, the year 2100 temperature increase will be 0.12 of the IPCC projection.
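For the record, those two fractions are just:

    \frac{0.84}{3.6} \approx 0.23, \qquad \frac{0.42}{3.6} \approx 0.12.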

So, there’s a nice lesson for the IPCC and the AGW modelers about GCM projections: they are contradicted by the data of Earth itself. Interestingly enough, Earth contradicted the same crew, big time, at the hands of Demetris Koutsoyiannis, too.

So, is all of this physically real? Let’s put it this way: it’s all empirically grounded in real temperature numbers. That, at least, makes this analysis far more physically real than any paleo-temperature reconstruction that attaches a temperature label to tree ring metrics or to principal components.

Clearly, though, since unknown amounts of systematic error are attached to global temperatures, we don’t know if any of this is physically real.

But we can say this to anyone who assigns physical reality to the global average surface air temperature record, or who insists that the anomaly record is climatologically meaningful: The surface air temperatures themselves say that Earth’s climate has a very low sensitivity to GHG forcing.

The major assumption used for this analysis, that the climate of the early part of the 20th century was free of human influence, is common throughout the AGW literature. The second assumption, that the natural underlying warming trend continued through the second half of the last 130 years, is also reasonable given the typical views expressed about a constant natural variability. The rest of the analysis automatically follows.

In the context of the IPCC’s very own ballpark, Earth itself is telling us there’s nothing to worry about in doubled, or even quadrupled, atmospheric CO2.


337 Responses to “Earth itself is telling us there’s nothing to worry about in doubled, or even quadrupled, atmospheric CO2”

  1. Andy G55 says:

    This is the sort of REAL analysis I love to see.

    propa science !!!

    well done, mate !!

  2. Bob Tisdale says:

    As discussed on the thread of the tAV cross post, this post would have been better without the reference to the meaningless PDO+AMO dataset. That discussion starts with this comment…
    http://noconsensus.wordpress.com/2011/05/24/future-perfect/#comment-50454
    …which reads:

    Pat Frank: The AMO and PDO data cannot be combined as you’ve presented. The PDO and AMO are not similar datasets and cannot be added or averaged. The AMO is created by detrending North Atlantic SST anomalies. On the other hand, the PDO is the product of a principal component analysis of detrended North Pacific SST anomalies, north of 20N. Basically, the PDO represents the pattern of the North Pacific SST anomalies that are similar to those created by El Niño and La Niña events. If one were to detrend the SST anomalies of the North Pacific, north of 20N, and compare it to the PDO, the two curves (smoothed with a 121-month filter) appear to be inversely related:
    http://i52.tinypic.com/fvi92b.jpg

    Detrended North Pacific SST anomalies for the area north of 20N run in and out of synch with the AMO:
    http://i56.tinypic.com/t9zhua.jpg

  3. Ronaldo says:

    Andy G55 says:
    June 2, 2011 at 2:26 am

    Hear, hear!

    And not a fudge factor in sight.

    Very well done.

  4. Dr A Burns says:

    Well done.

  5. John Marshall says:

    According to some physicists Global Average Temperature is a meaningless concept so is not a valid proxy for climate change. I tend to agree.

    If more heat is fed into a system then there is more heat loss, at least that is what the study of thermodynamics tells us. As the surface is heated, the air above gains heat through conduction and is forced to rise by convection removing that heat to higher atmospheric levels forming clouds if enough water vapour is present. This will reduce solar radiation to the surface.

    So Dr. Roy Spencer could be correct.

  6. Shaun D says:

    I agree. This is real science. But I have no idea what it means.

  7. DJ says:

    Donald Brown would prefer that you believe the IPCC theory rather than your empirical scientific observations. Ethics, ya know. Even if the IPCC is almost an order of magnitude higher in its prediction than what calculations based on observations reveal.

    But 0.03C/decade?? Isn’t that in the noise?

  8. Jim Barker says:

    Very well done.

  9. JohnL says:

    Outstanding insight into how to tease something out of the temperature record. This is a real breakthrough precisely because it depends on a few known assumptions.

    A couple of questions follow from this work:

    1. Is it possible that the first half/second half difference is related to the urban heat island and other site-trend effects that Anthony has been documenting? In different terms, is it possible that GHG emissions and UHI are both coincident symptoms of a third variable, industrial growth and urbanization? This might explain why some of the other signals of GHG warming are missing.

    2. The lower atmosphere must have a large heat storage capacity due to the amount of energy involved in vaporizing water at the ocean surface. When 1 kg of water is evaporated at the surface of the ocean, it absorbs 1000 times the energy that it takes to raise 1 kg of water/water vapor by a degree. Raising the surface temperature of the oceans results in more evaporation which prevents or delays temperature increases. Same thing happens in reverse during cooling. This effect is independent of the cloud effects that Dr Spencer has been developing, although the higher humidity will enhance the opportunity for cloud formation.

  10. rbateman says:

    This analysis tells me that if the Earth’s warming out of the Little Ice Age stops and reverses, the climate sensitivity to manmade greenhouse gases won’t save us, given that the assumptions upon which the cosine fit relies are true.
    The key metric, then, is the sea levels.
    Downscope.

  11. tallbloke says:

    Excellent! Nice job Mr Frank.

  12. Lawi Odera says:

    really indepth research done…bravo

    please visit http://schizoidlawi.wordpress.com
    would appreciate comments :)

  13. Richard S Courtney says:

    Pat Frank:

    This is an interesting and informative analysis despite the caveats that you rightly provide. Thank you.

    Your analysis provides several important findings. I note that one of these is an indication of climate sensitivity of 0.090 C/W-m^-2 (which corresponds to a temperature increase of 0.37 Celsius for a doubling of CO2).

    This result is similar to the climate sensitivity that Idso determined from his 8 ‘natural experiments’. He reported:

    “Best estimate 0.10 C/W/m2. This corresponds to a temperature increase of 0.37 Celsius for a doubling of CO2.”

    His findings are summarised at, and his paper reporting the ‘natural experiments’ is linked from,
    http://members.shaw.ca/sch25/FOS/Idso_CO2_induced_Global_Warming.htm

    Richard

  14. Richard S Courtney says:

    Oops!

    Of course I intended to type
    (which corresponds to a temperature increase of 0.34 Celsius for a doubling of CO2).

    Sorry.

    Richard

  15. richard verney says:

    “Isn’t that claim refuted if the late 20th century warmed at the same rate as the early 20th century? That seems to be the message of Figure 2.”
    /////////////////////////////////////////////////
    I seem to recall that Phil Jones admitted that there was no statistical difference in the rate of warming in the first half of the 20th century and that in the second part (i.e., no difference in rate of warming between, say, 1910-1940 and 1970-1995). This admission has always been a killer, since CO2 could not be responsible for the warming in the first part of the 20th century, but miraculously and quite unexplainably, whatever caused that warming certainly stopped and CO2 suddenly kicked in post 1970s.
    This is a big hole in the AGW theory until such time as they can explain/identify what caused the warming in the first half of the 20th century and explain why that warming influence came to a halt and is not operative today.
    The fact that the rate of warming in the first half of the 20th century is the same as in the second half, coupled with the stalling/flattening off in warming post 1998, is a big hole in the CAGW theory. Clearly present day warming is not unprecedented and there is no evidence of runaway disaster.

    Interesting post, thanks.

  16. Alan the Brit says:

    Sound, common sense, well thought through, & logically applied, so it won’t be published in the MSM then!

    Warmistas still haven’t &, to my (relatively limited) knowledge, cannot answer the underlying question: when the atmosphere contained 20 times the amount of CO2 half a billion years ago that it does today, & there was no known climate catastrophe, why would it happen now, with a mere fraction of that amount of CO2 in the atmosphere? What mechanism has changed? Yes there were extinction level events, but none as yet linked to (falling levels of) CO2!

  17. Steve Keohane says:

    Excellent piece. I wondered if the recent ‘increased’ warming rate incorporated UHI, as JohnL mentions:
    JohnL says: June 2, 2011 at 3:37 am
    [...]
    A couple of questions follow from this work:

    1. Is it possible that the first half/second half difference is related to the urban heat island and other site-trend effects that Anthony has been documenting?

    If there is any UHI effect, then the CO2 sensitivity is even less than indicated.

  18. Phil Clarke says:

    The major assumption used for this analysis, that the climate of the early part of the 20th century was free of human influence, is common throughout the AGW literature.

    Er, no it is not. See figure 2.23 in IPCC AR4. Long lived Greenhouse Gas (LLGHG) forcing contributes about 0.35 W/m2 pre-1950. The rest of the warming (about 50%) was likely due to the increase in TSI. See IPCC ‘Understanding and Attributing Climate Change ‘

    “It is very unlikely that climate changes of at least the seven centuries prior to 1950 were due to variability generated within the climate system alone. A significant fraction of the reconstructed Northern Hemisphere inter-decadal temperature variability over those centuries is very likely attributable to volcanic eruptions and changes in solar irradiance, and it is likely that anthropogenic forcing contributed to the early 20th-century warming evident in these records.” http://www.ipcc.ch/publications_and_data/ar4/wg1/en/spmsspm-understanding-and.html

    Also, you cannot calculate sensitivity by simply dividing instantaneous delta-t by forcing as this ignores the considerable thermal inertia of the Earth system, especially the Oceans. You need to estimate the energy imbalance (e.g. from OHC numbers), but this gives you a value of around 3C for 2xCO2 ……

  19. richard telford says:

    “the entire trend in global air temperatures since 1880 can be explained by a linear trend plus an oscillation.”

    Your linear trend plus oscillation explains absolutely nothing. You have described the curve, not explained it. Explanations of the global air temperature record require physics, not numerology.

  20. Bloke down the pub says:

    If by 2100 the global temperature has only risen by 0.42 C, the faithful will still be claiming a second coming of warmth is due any moment.

  21. Kelvin Vaughan says:

    Shaun D says:
    June 2, 2011 at 2:52 am

    I agree. This is real science. But I have no idea what it means.

    It means, dare I say it, It’s not as bad as we thought!

  22. Peter Taylor says:

    Always good to see empirical studies on climate sensitivity – but they are fraught with difficulty. When IPCC first did their estimates, 1 watt/square metre of radiative forcing from CO2 and other GHGs (net of sulphurous cooling) looked to have produced 0.8 degrees C of global warming. Later, Keith Shine at Reading dropped that to about 0.4 in a well-argued paper (he was also a member of IPCC)….but all then assuming ALL the warming seen was AGW. In my own review in ‘Chill’ I argued from the data that about 80% appeared natural and that the sulphurous cooling was not global – the global chill (from dimming) was cloud, as was most of the warming, due to a 4% global reduction in cloud cover from 1983-2001 (International Satellite Cloud Climatology Project data) – which AGWs assume was a positive feedback to CO2 warming. John Christy is on record with a 75% natural estimate (to the BBC).

    So – Frank’s work is in line with these data – a 75% reduction on Shine’s 0.4 gives 0.1, and a figure for doubling of less than half a degree Celsius.

    However….all of these figures assume there is no time lag for surface air temperatures due to ocean storage and release…..personally I don’t think there is as much as often implied but it needs to be considered. We are left with a question: what is the mechanism for the long-term recovery from the LIA? It is a very steady trend underneath the oscillations.

  23. Scottish Sceptic says:

    Sorry, this is the kind of “reductio ad absurdum” which gave the alarmists a bad name. You have a total of two periods; you provide absolutely no analysis of the significance of the supposed 55-year cycle and don’t, e.g., show the correlation that could be obtained at other frequencies.

    The size of this “signal” is 0.1C when the size of the rest of the variation is about the same, which gives you a signal-to-noise level which is abysmally small.

    As I’ve said before, with a global temperature signal dominated by long term noise, it is highly likely that you will mistake natural variation for some kind of cycle or trend. Therefore you should be very robust in your analysis, and I would say the minimum is a statement of significance, either in the form of a statement of statistical significance of the difference in amplitude of this frequency compared to the general level of similar frequencies, or a statement of signal to noise.

  24. Patrick Kelly says:

    The link in the post to Demetris Koutsoyiannis is returning a 404.

  25. Alexander K says:

    Another example of good and solid science. Great to read this, which gives the lie to the inbuilt bias of the IPCC, whose mission is to scare us all into submission to deindustrialisation and hugely more expensive energy with their particular brand of nonscience.

  26. Alex the skeptic says:

    “It seems certain that surface air temperatures are so contaminated with systematic error – at least (+/-)0.5 C — that the global air temperature anomaly trends have no climatological meaning.”

    This is exactly what I have been saying to myself during these years of man-made global warming indoctrination: How can one say that the planet has warmed 0.6 C during the last 100 years (or thereabouts) by looking at thermometer readings taken 100 years ago and all between then and now? How accurate were these? And how accurate are the present readings? How can one be so sure and accurate, especially after the great weather station cull?

    Two more points:
    1. are thermometers read and recorded continuously, or at intervals, or daily and nightly (max and min)?

    2. Wouldn’t the integration of T over t into actual total planetary energy be a better measure of the thermodynamic state of the planet, rather than just averaged temperature statistics? I believe that we should measure total energy, not averaged temperature statistics. Most probably that was what Kevin Trenberth had in mind when he expected the oceans to warm up, and therefore to dip the thermometers in the oceans, only to declare later on that “it is a travesty that we cannot find the heat” (in the oceans).

  27. Andy G55 says:

    Ronaldo says:
    And not a fudge factor in sight

    Yep first thing I look at.. how many fudge factors..

    Now I like fudge, but it really doesn’t have any place in a proper analysis, and the AGW hypothesis is absolutely riddled with them.

  28. John Brookes says:

    Very persuasive, and I’m not on your side of the argument. So based on your graphs, we’ll be getting 25 – 30 years more cooling. Let’s wait and see.

  29. Det says:

    The observation and conclusion seem to be correct. However, with the current extent of urbanization and deforestation many of the natural carbon sinks like rain forests are gone.
    This is different from the past! The vegetation loves CO2 and will grow faster using the extra carbon food.
    We should be planting trees on a large scale again!

  30. phlogiston says:

    The relationship between theoretic and real CO2 forcing of global temperature is suggested by this palaeo data (thanks to Bill Illis):

    http://img801.imageshack.us/img801/289/logwarmingpaleoclimate.png

  31. Kelvin Vaughan says:

    Just a thought,

    If you average the orbital periods of the 9 planets you get 60 years.

  32. Leif Svalgaard says:

    Phil Clarke says: June 2, 2011 at 4:19 am
    “The rest of the warming (about 50%) was likely due to the increase in TSI.”

    There is no such corresponding increase in TSI. There is a solar cycle in TSI. Where is that in the temperature data?

  33. Deanster says:

    All I got to say is …… PUBLISH IT … in a peer reviewed journal before the next IPCC toilet paper roll comes out, such that this analysis can be included!

    At that point, insist that this work is included for review .. and if not … let the media campaign begin to further prove the IPCC has no interest in science.

  34. PaulD says:

    The analysis is interesting. I suspect the AGW crowd would argue that the analysis is incomplete because it does not account for the effects of aerosol and global dimming. If an increase in aerosols has muted the warming effect of greenhouse gases, then the climate sensitivity that you calculated would be underestimated during later part of the last century.

    I am aware that estimating the dimming effect of aerosols is fraught with difficulty. However, I think the AGW crowd will dismiss your analysis because it is not included in your simple model. I would be curious how you would respond to this criticism.

  35. Tim Folkerts says:

    Kepler came up with empirical laws for the motions of the planets, but it wasn’t until Newton came along with a theory for planetary orbits that Kepler’s laws became truly “science”. There are any number of other empirical fits that ended up in the dustbin of history because they didn’t hold up over longer periods.

    I have several concerns:
    1) the data is a small signal with a lot of noise.

    2) It is admitted that even much of the signal that is there could be an artifact of instrument problems.

    3) The fit does not work well before the period over which the fit was done. CRU data exists for a few decades before 1880. According to the model, the data should be dropping steeply before 1880, when in fact it is nearly steady.

    4) The years 1940-1960 are mysteriously left out of the analysis for the slopes. If those data points had not been thrown out, then the trend for the first half of the century would be noticeably less and the trend for the 2nd half noticeably greater. This would create major changes in the two major conclusions of the analysis (“Semi-empirical linear model: 0.84 C warmer by 2100; Fully empirical negative feedback model: 0.42 C warmer by 2100.”)

    5) There are no a priori calculations to support the conclusions. I have done a fair number of empirical fits to various data. I have found a fair number of trends that seem to be real but which later turn out to be just coincidence. Without theoretical backing, I have little confidence in in-depth analysis of highly noisy signals.

  36. AnonyMoose says:

    How much of the trends extracted from the GISS data are actually GISS adjustments?

  37. aaron says:

    Not quite sure how to ask this, but I’ll give it a shot.

    What is the noisiest (i.e., highest time increment resolution) data set available?

    I’d like to see an analysis of average daily global high temps, low temps, and average.

    My thinking is that GHG effects (sans feedbacks) should be most visible in the lows (a log fit), the average trend should show the GHG effect with feedbacks and ENSO, and the highs feedbacks and ENSO only.

  38. Paul Vaughan says:

    “So there we have it. Both data sets show the later period warmed more quickly than the earlier period.”

    Nonrandom patterns in the residuals panel of Figure 1 make it crystal clear that, while the simple cosine decomposition might be a useful introductory level starting point for general public discussion of multidecadal fluctuations, superior methods are needed to further the discussion.

    Thanks for the link to:
    http://tamino.wordpress.com/2011/03/02/8000-years-of-amo/
    (Tamino illustrating an impressive capacity for [at least occasional] balance …but certainly not being 100% perceptive about stat model assumptions.)

    Request of those conducting armchair attacks on Pat Frank, who has volunteered time to the community:

    Please present your alternative analyses. The community can then compare your approaches with Pat’s, side-by-side, on a level playing field.

  39. Ryan says:

    Fantastic post Mr Frank, very plausible and difficult to refute. However, eyeballing the GISS data after de-trending I would say that the increased rate of warming after 1950 was due to sudden unexpected cooling in the years 1945 to 1950, after which the global climate played catch-up with itself until the rate of change flattened out, just as you have observed. If this part of the analysis is correct then the data would suggest that there is no real warming trend at all attributable to GHGs (but there may be a tendency towards global cooling perhaps due to setting off large numbers of explosive devices in a relatively short time period). The CRU data does not show this discontinuity but this is likely due to their tendency to try and massage out such sudden changes in their data handling algorithms (which in itself shows the danger of producing data handling algorithms aimed at finding the very data you were hoping to find – they have removed all the discontinuities that contributed to a trend to leave themselves with a trend without the discontinuities).

  40. James Sexton says:

    @Phil Clarke says:
    June 2, 2011 at 4:19 am

    “It is very unlikely that climate changes of at least the seven centuries prior to 1950 were due to variability generated within the climate system alone. A significant fraction of the reconstructed Northern Hemisphere inter-decadal temperature variability over those centuries is very likely attributable to volcanic eruptions and changes in solar irradiance, and it is likely that anthropogenic forcing contributed to the early 20th-century warming evident in these records.” http://www.ipcc.ch/publications_and_data/ar4/wg1/en/spmsspm-understanding-and.html
    ==============================================
    lmao…..
    Phil, I agree. Pat’s numbers depend upon the assumption that the early warming was because of natural forcings or part of a natural cycle. I expected some detractors to bring that up. But, if you’re countering with some other assumptions based on nothing but pure speculation pulled from some posterior, you might as well have not stated anything. I’m sure Mr. Frank is enjoying a giggle or two at this.

    lol, seven centuries. Let me guess……. from the treeometers. Stop!!! I’m at work and my co-workers are looking at me funny for my seemingly spontaneous outburst of hysterical laughter!!!! lmao!

  41. Paul Vaughan says:

    TOWARDS A UNIFIED VIEW OF SO-CALLED “60 YEAR CYCLES”?

    Wyatt, M.G.; Kravtsov, S.; & Tsonis, A.A. (2011). Atlantic Multidecadal Oscillation and Northern Hemisphere’s climate variability. Climate Dynamics. doi: 10.1007/s00382-011-1071-8.

    Since (to my knowledge) there’s not yet a free version, see the conference poster and the guest post at Dr. R.A. Pielke Senior’s blog for the general idea:

    a) Wyatt, M.G.; Kravtsov, S.; & Tsonis, A.A. (2011a). Poster: Atlantic Multidecadal Oscillation and Northern Hemisphere’s climate variability.
    https://pantherfile.uwm.edu/kravtsov/www/downloads/WKT_poster.pdf

    b) Wyatt, M.G.; Kravtsov, S.; & Tsonis, A.A. (2011b). Blog: Atlantic Multidecadal Oscillation and Northern Hemisphere’s climate variability.
    http://pielkeclimatesci.wordpress.com/?s=Atlantic+Multidecadal+Oscillation+and+Northern+Hemisphere+Wyatt

  42. Leif Svalgaard says:

    Paul Vaughan says: June 2, 2011 at 7:03 am
    “TOWARDS A UNIFIED VIEW OF SO-CALLED “60 YEAR CYCLES”?…”

    Without physics, this is as much numerology as Frank’s. Dressing it up with ‘spatio-temporal’ mumbo-jumbo does not advance anything.

  43. aaron says:

    Sorry, the word daily should not be in there.

    I’m thinking the trend in high and low points. Eg., start at the right of the data plot, move to the left and connect to the next lower temp for lows.

    For highs, start at the left and move right, selecting the next higher point.

    For average, the 15 year moving average (15 years ~ the amount of data needed to observe a trend).

  44. Crispin in Waterloo says:

    Scottish Sceptic sez “The size of this “signal” is 0.1C when the size of the rest of the variation is about the same which gives you a signal to noise level which abysmally small.”

    I think that is an unstated point we should infer if we are thinking correctly about the UHI effect, instrument drift, GISS fiddling and so on. While the result is admirably demonstrative of a linear trend on a cosine (non-random) walk, your point is well taken: there is no statistically significant change in the ‘rate of change’ in the measured temperature over a century of records, let alone one that might be attributed to an elevation in CO2 caused by anthropogenic emissions.

    I think other contributors above have raised the essence of my comment: given the measurements we have taken in the way they have been, processed in the manner they endured, there is absolutely no detectable AGW signal. It may be real, and it may be there, but it is not detectable because it is lost down in the mud, possibly inseparable from it with the tools we have. The figure of 0.03 deg C is probably +- a much larger value as a result, were one to consider the instrument errors of about 0.5 deg C. 0.03 +- 0.30??

  45. Ryan says:

    Mr Frank, how about running your de-trended residuals through a discrete FFT like this one:-

    http://www.random-science-tools.com/maths/FFT.htm

    - to see if you can detect some higher frequency oscillations too. Looks like there is a strong signal with a period of about 3 years, superimposed on a lower frequency signal.
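    A minimal NumPy sketch of that check (the stand-in residual series below is illustrative only; substitute the actual Figure 1 residuals):

        import numpy as np

        # Stand-in residuals: a 3.5-yr cycle plus noise, one value per year.
        rng = np.random.default_rng(2)
        years = np.arange(1880, 2011)
        residual = (0.05 * np.sin(2.0 * np.pi * years / 3.5)
                    + rng.normal(0.0, 0.05, years.size))

        power = np.abs(np.fft.rfft(residual - residual.mean())) ** 2
        freq = np.fft.rfftfreq(residual.size, d=1.0)    # cycles per year
        for k in np.argsort(power[1:])[::-1][:3] + 1:   # top 3, skipping f=0
            print(f"period ~{1.0/freq[k]:.1f} yr, power {power[k]:.3g}")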

  46. Paul Vaughan says:

    @Leif Svalgaard (June 2, 2011 at 7:21 am)

    Rather than attacking ad infinitum, please present your ‘physical’ explanation.

  47. Dave X says:

    The phases of the cosines in fig 2 do not match the phases in fig 3. In Fig 2, the CRU peak is before 1940 while in fig 3 it is after. Which figure is correct?

  48. aaron says:

    James/Phil, it’s nowhere near 3 sigma, but there is a correlation of volcanic activity and 200-yr solar cycles.

  49. Phil Clarke says:

    Leif:

    White et al found that solar cycles cause a flux in sea surface temperatures
    http://www.agu.org/pubs/crossref/1997/96JC03549.shtml

    and Camp & Tung 2007 found a variance of about 0.2K in global temps attributable to the solar cycle:-

    “By projecting surface temperature data (1959-2004) onto the spatial structure obtained objectively from the composite mean difference between solar max and solar min years, we obtain a global warming signal of almost 0.2 °K attributable to the 11-year solar cycle. The statistical significance of such a globally coherent solar response at the surface is established for the first time.”

    http://www.amath.washington.edu/research/articles/Tung/journals/composite%20mean2.pdf

    And the rise in TSI 1900-1950 and subsequent flatline is shown in various places, notably Lean et al 2008

    “How natural and anthropogenic influences alter global and regional surface temperatures: 1889 to 2006”

    http://pubs.giss.nasa.gov/docs/2008/2008_Lean_Rind.pdf

  50. Leif Svalgaard says:

    Paul Vaughan says: June 2, 2011 at 7:47 am
    “Rather than attacking ad infinitum, please present your ‘physical’ explanation.”

    Pointing out that something is numerology can hardly be an ‘attack’. It has been clear for quite a while that there is a noisy 60-yr signal. Only more and better observational data can improve on this. I have no idea what the ‘physical’ explanation is [I would guess some natural period of the oceans], and don’t need any in order to see the numerology in your ‘unified view’.

  51. meab says:

    The difference in slope between the periods 1880-1940 (60 years) and 1960-2010 (50 years) doesn’t reflect AGW warming; it’s simply an artifact of the start and end times w.r.t. the 60 year sinusoidal period. 1880-1940 started and ended at the same phase of the oscillating component, both dates being just before the peak – therefore the early period linear fit is valid. However, for the late period 1960 was approaching the sinusoidal trough while 2010 was just after the peak – the late period linear fit is therefore completely non-valid. Conclusions regarding any difference in the linear slope between early and late time periods are therefore completely faulty.

    It would be much better to adjust the 2nd period to start at 1940 and end at 2000, a valid 60 year period starting and ending at the same phase angle. Of course the cosine/linear fit that leaves no obvious low frequency trend in the residuals already indicates that there won’t be any significant difference.

    A do-over is needed here.

  52. Leif Svalgaard says:

    Phil Clarke says: June 2, 2011 at 8:07 am
    “and Camp & Tung 2007 found a variance of about 0.2K in global temps attributable to the solar cycle”

    which is the amplitude of Frank’s 60-yr cycle, but does not show up in the residuals he plots. There should be an expected 0.07 degree solar-cycle effect.

    “And the rise in TSI 1900-1950 and subsequent flatline is shown in various places”

    There is no evidence for a rise in the background TSI. The flatlining starts when we begin actually measuring TSI.

  53. Dave X says:

    From eyeballing the residuals in fig 1, it looks like the next lower Fourier harmonic period of ~120 years would be of similar significance. It looks at least as significant as the discontinuous linear fit in fig 4 and would probably reduce the trend difference even further.

  54. stephen richards says:

    Fred Haynie has presented a very similar view of the climate cycles on his site. Worth a look. Search on his name.

  55. Curt says:

    Phil Clarke:

    You say, “you cannot calculate sensitivity by simply dividing instantaneous delta-t by forcing as this ignores the considerable thermal inertia of the Earth system, especially the Oceans.”

    In linear systems analysis, the response of a system with “inertia” to a ramp input does reach its own matching ramping slope after about one time constant of the system. Since Pat starts this part of his analysis in 1960, and the “ramp” of CO2 forcing started at least 100 years earlier, he is not ignoring ocean thermal inertia (more properly “capacitance”). And his 50-year period is hardly “instantaneous”.

    Now, under this analysis, if we were able to suddenly freeze the forcing at the current level, warming would continue for a while (how long is hotly debated). But the “slope analysis” Pat does is fundamentally valid (it’s done in engineering systems analysis all the time).

  56. bob says:

    It all boils down to “we are still recovering from the little ice age and will continue to recover from the little ice age through to 2100”

    I don’t buy it.

    I am pretty sure that the little ice age was at least in part caused by volcanic activity, which has since subsided quite a bit, as well as by several solar minima (Maunder and others)

    It is the forcings that matter, and the recovery from the little ice age doesn’t count as a forcing.

  57. juanslayton says:

    Probably a dumb question from a geezer whose last trig class was in Pomona High School in 1957: Is there a reason the periodicity is expressed as a cosine, rather than a sine?

  58. Doug Proctor says:

    Leif and Phil Clarke (and others) criticize, probably correctly, for reasons physicists of various sorts understand. The question for all of us, though, is whether the criticisms lead to an incorrect conclusion; the other question for the warmists is whether, were the alleged mistakes corrected, there would be a significant difference in how the conclusion looked. Skeptics need the answer; the warmists need the appearance as well (shades of the Hockey Stick, in reverse).

    Frank’s discussion is very straightforward and easy to follow regardless of errors of commission or omission. Could Leif’s and Phil’s comments be added in to the analysis? Could Leif/Phil not revise Frank’s post and show us how the revision affects the answer?

    The negative comments seem like shooting ourselves (the skeptics) in the foot. Frank’s post appears to show the same results as other, more lettered posters have done. He has certainly done what many of us technically educated but not climatologically vetted citizen-scientists (love that term) are doing: looking at the data and applying basic principles and reasoning to find how close the IPCC supermen come to our technical common sense. Not close, of course.

    In a previous WUWT post (I think it was) about how the IPCC’s various computations and adjustments can be considered a “black box”, it was said you can simulate the relationships inside bbs by looking at how the data-in compares to the data-out. The relationship, as demonstrated, is remarkably simple. As such, one doesn’t need, perhaps, all the detailed analyses that Leif and Phil suggest. Most cancel out or contribute only within the error band.

    I’ve done my own such analyses as Frank’s, and find the IPCC black box does not reflect simple considerations. If Leif and Phil are correct that we have to discount Frank’s work because he hasn’t got all the pieces right, and that this terminally weakens his conclusions, then all of us outside the authorities – the IPCC, Gavin and Jim – might as well roll over. But I think Frank’s approach has great value. He is not saying that the IPCC is 10% out. He is saying they are 80% or more out. So let Frank be 50% wrong, and Frank is still demonstrating that the IPCC is 40% out. That is still a deathblow as far as the non-natural theories go. No CAGW attributable to CO2. And are Leif-and-Phil’s criticisms even at the 50% error level?

    The two nicely demonstrated patterns here are that a 60-year cosine function lies atop a linear function. The IPCC model does not have a cosine function. They don’t have a linear function either. And while there may be a CO2 function prior to 1965, the models are relevant to the post-1965 period, and especially the post-1980 period. Both sides can forget the prior period when looking to the near future, and act as if CO2 increases began the day Al Gore found his mission in life. So such criticism (even though true) is irrelevant.

    The IPCC theme is that the past is not the predictor of the future, at least prior to 1965. The future is the product of the present. The skeptic theme is that the past is, indeed, the predictor of the future, though with some minor modification by the present. The pro- and con-CAGW arguments are rooted in this disconnect. “That was then, and this is now” is the fundamental break the warmists have from the skeptics. If the past is, indeed, a good predictor of the future, then Frank’s (mine and others’) simpler view is valid. The previous broad patterns continue into the future through the magic of our own “black box”. No, we don’t have the mechanisms, but to refute the IPCC we don’t need one, as our proof is in the observations, not in the beauty of the “projections”. The details that Leif-and-Phil’s are looking for are handled and hidden inside the box.

    This is a great post in large part because it demonstrates how adherence to basic principles while using the IPCC data lead to a significantly different climate energy scenario in hard numbers. The scenario is therefore falsifiable, something that, at least in the 10-year term, the IPCC “projections” are not.

    By Frank’s work, by 2022 (my estimate) the global temperature will have dropped by about 0.3C, over which time the IPCC says the temperature should have risen by another 0.22 – 0.30C. The two will then be apart by 0.5 to 0.6C, something terminal for the IPCC model. By 2015 the difference will be still within the error bands, but will be looking like 0.1C – in the wrong direction. We’re getting into the time-frame when Gavin and Jim look to retirement and accolades while they can.

    There needs to be an empirical consideration to criticisms such as we have here. We need to know if errors are terminal, moderate or cosmetic. This is the stuff that the generalist can understand, maybe even the MSM journalist. So let’s build on it.

  59. G. Karst says:

    This seems to be an ideal analysis, for the good folks at Lucia’s blackboard, to sink their teeth into.

    I certainly hope it gets published more widely than a few blogs… Is anyone considering a paper, so that it can be safely referred/cited to? GK

  60. Ryan says:

    Even before you do the cosine analysis, the gradient from 1910 to 1943 is just as steep as the gradient from 1970 to 2000. On that basis there is really no evidence that mankind is doing something unusual to the earth’s temperatures – you would have to demonstrate that the cause of post-war warming was different to the cause of pre-war warming. Otherwise the graphs are really showing just a 130 yr upward trend unaffected by modern technology.

  61. James Sexton says:

    bob says:
    June 2, 2011 at 9:09 am

    “It all boils down to “we are still recovering from the little ice age and will continue to recover from the little ice age through to 2100”

    I don’t buy it………..

    It is the forcings that matter, and the recovery from the little ice age doesn’t count as a forcing.”
    =========================================

    Bob, all that is fine, except, we will never, (I repeat for emphasis) never, come to an understanding of all of the forcings and the specific weights of each forcing that goes into our climatology. It’s a pipe-dream and a fool’s errand to go chasing such. It would be much easier to state we don’t know and move on.

  62. Tom_R says:

    >> Phil Clarke says:
    June 2, 2011 at 4:19 am

    it is likely that anthropogenic forcing contributed to the early 20th-century warming evident in these records.”

    Also, you cannot calculate sensitivity by simply dividing instantaneous delta-t by forcing as this ignores the considerable thermal inertia of the Earth system, especially the Oceans. … <<

    These two statements combine to say that early 20th century warming was caused by anthropogenic forcing from the 19th century. Do you really buy that?

  63. SteveSadlov says:

    To those who say, only the Western part of North America is experiencing unusually cold conditions, I present:

    http://www.ansa.it/web/notizie/regioni/valledaosta/2011/06/01/visualizza_new.html_842381390.html

  64. Ammonite says:

    Phil Clarke says: June 2, 2011 at 4:19 am
    Post Quote: “The major assumption used for this analysis, that the climate of the early part of the 20th century was free of human influence, is common throughout the AGW literature.”

    Phil Clarke response: “Er, no it is not. See figure 2.23 in IPCC AR4. Long lived Greenhouse Gas (LLGHG) forcing contributes about 0.35 W/m2 pre-1950…”

    Thank you Phil. Advice for general readers. Please check any post that describes a “central AGW tenet” or “major assumption” or “fundamental prediction” etc against the relevant IPCC chapter. If the claim is not well founded it is often a sign of strawman cometh. In this post the claim above is well off the mark.

  65. Sarah says:

    We are making the mistake of looking at real world data. As we keep getting told the real world data is wrong unless it matches the models, as it is only the models that are right.

    Please remember that a snowball earth for the next 10 thousand years is not inconsistent with CAGW, that would just be weather not climate but a clear sunny day in summer is climate not weather /sarc

  66. Dave X says:

    @”Probably a dumb question from a geezer whose last trig class was in Pomona High School in 1957: Is there a reason the periodicity is expressed as a cosine, rather than a sine?”

    Just tradition, based on FFT and maybe tidal analysis. A cosine curve is the same as a sine curve with a 90 degree phase shift, or A*cosine(w*t+p)=A*sine(w*t+p+pi/2).

    Maybe it also makes it easier to interpret the fitted phase term p in that it is the offset of the peak rather than of the zero crossing.

    In either case, fitting a single trig curve fits three parameters (or fudge factors): amplitude, frequency, and phase.

  67. Paul Vaughan says:

    @Leif Svalgaard (June 2, 2011 at 8:16 am)

    You owe Marcia Wyatt an apology.

  68. SteveSadlov says:

    Yesterday I noted the extremely cold conditions aloft, as an upper low came ashore in Western North America. I commented that with -27 deg C at 500mB, I was worried about twisters:

    http://www.sfgate.com/cgi-bin/article.cgi?f=/n/a/2011/06/01/state/n190225D61.DTL

    I wonder what will happen when that package of air (now over the four corners region) hits the Eastern US?

  69. Tim Folkerts says:

    By my own quick analysis, the best fit to the HADCRUT 3 data from 1880-2010 is

    T = -11.7464 + 0.00597*t + 0.1331*cos(0.1062*t + 1.1330), where t is the calendar year

    That gives 0.0597 C/decade and a period of 59.3 years. For the years, 1880 to 2010, there is an average deviation of 0.09 C between the fit and the data (ie the average of |Fit – Actual| is 0.09), with about half above and half below.

    The big problem is that before 1880, the fit is LOUSY! every single point from 1850 – 1883 is above the fit, by an average amount of 0.31 C. (And remember, that is using 5 free parameters to fit the data — slope, intercept, amplitude, period and phase angle.)

    And even at the end, I get 9/10 points in the last decade above the fit. 2011 will be interesting to watch. It started with a huge drop in global temperatures from the 2010 levels, but is jumping back up a bit. If the drop was a fluke, the dip will prove an anomaly; but if it persists, we may be on the downward leg that Pat Frank’s analysis would predict.
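    (Tim’s pre-1880 complaint can be outlined the same way. A sketch reusing model, p0 and curve_fit from the note above; load_hadcrut() is a hypothetical loader returning decimal years and anomalies, not a real library call.)

        # Hindcast check: fit on 1880-2010 only, then inspect 1850-1879 residuals.
        years, anom = load_hadcrut()        # hypothetical loader
        fit_win = years >= 1880
        params, _ = curve_fit(model, years[fit_win] - 1880.0, anom[fit_win], p0=p0)

        pre = years < 1880
        resid = anom[pre] - model(years[pre] - 1880.0, *params)
        print("mean pre-1880 residual: %+.2f C" % resid.mean())
        print("fraction above the fit: %.2f" % (resid > 0).mean())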

  70. Greg, Spokane WA says:

    Just in case anyone is still interested:

    Demetris Koutsoyiannis

  71. pochas says:

    To support this analysis I reiterate my earlier comment to WUWT, wherein I calculate a sensitivity of 0.121 C/(W/m²). That calculation ignored negative feedbacks from clouds, etc., whereas the present analysis would include those factors, so I would consider the two to be in agreement.

    http://wattsupwiththat.com/2010/10/25/sensitivity-training-determining-the-correct-climate-sensitivity/#comment-516753
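    (For scale, an illustrative back-of-envelope, not taken from pochas’s linked comment: with the commonly cited ~3.7 W/m² forcing for doubled CO2, a sensitivity of 0.121 C/(W/m²) implies roughly 3.7 × 0.121 ≈ 0.45 C per doubling, while the post’s 0.090 C/(W/m²) implies about 0.33 C, so the two figures are indeed in the same low ballpark.)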

  72. Leif Svalgaard says:

    Paul Vaughan says:
    June 2, 2011 at 10:43 am

    “You owe Marcia Wyatt an apology.”

    For what?

  73. John Finn says:

    “Spread the word: the Earth climate sensitivity is 0.090 C/W-m^-2.”

    So how do you explain the LGM, or the LIA and the MWP for that matter?

  74. Leif Svalgaard says:

    Tim Folkerts says:
    June 2, 2011 at 11:29 am

    “The big problem is that before 1880, the fit is LOUSY!”

    This is the usual problem with numerology: once you go outside the interval on which the original fit is based, the correlation breaks down.

  75. aaron says:

    Dave, does the convention make derivative analysis easier?

  76. vukcevic says:

    There is not much point delving into AMO, PDO and solar relationships unless science can establish what is causing what.
    The AMO and PDO ‘drivers’ I’ve identified from the available data are not perfectly synchronised, either among themselves or with solar activity.
    http://www.vukcevic.talktalk.net/DAP.htm
    However, one can’t escape the impression that since the 1900s (the period of reasonable data reliability) there is a loose relationship to the solar output; not perfect, but there is some commonality.
    Since none of the data I used are TSI-related, one can say that solar science is partially correct that ‘it is not TSI’.
    On the other hand, solar scientists do not have a monopoly on knowledge of the Sun–Earth link.

  77. SteveSadlov says:

    Fitting certain “data” = numerology. LOLOL!

  78. vukcevic says:

    Mr. Frank,
    Sin/cos correlation is usually dismissed as numerology (my personal experience).
    In this case, however, there is no need to use a cos function: just superimpose the magnetic flux of the true North Magnetic Pole (located in the Hudson Bay area until 1996/7) and you will get just as good a match.
    See the graph on the index page of my website:
    http://www.vukcevic.talktalk.net/

  79. Paul Vaughan says:

    @Leif Svalgaard June 2, 2011 at 11:37 am

    You’re out of your league facing off against Kravtsov & Tsonis.

  80. Leif Svalgaard says:

    Paul Vaughan says:
    June 2, 2011 at 1:32 pm

    “You’re out of your league facing off against Kravtsov & Tsonis”

    Numerology is numerology regardless who commits it. That is not to say that numerology cannot at times be useful.

  81. SteveSadlov says:

    http://www.squaw.com/uber-cam

    Too bad they closed. There’s more snow now than there was a month ago. They close, leaving all the snow to go to waste, for insurance reasons; plus, the flatlanders watching their boob tubes probably think there is no snow and everything is going tropical up there. After all, da man on da news sed dat dem tornados is doo to gwobo warmin’, can’t be no snow up dhere.

  82. aaron says:

    Just like models.

  83. bob says:

    James Sexton says:

    “Bob, all that is fine, except, we will never, (I repeat for emphasis) never, come to an understanding of all of the forcings and specific weights to each forcing that goes into our climatology. Its a pipe-dream and a fools errand to go chasing such. It would be much easier to state we don’t know and move on.”

    I am sorry, but that attitude belongs in the Medieval Warm Period.

    If we don’t try and understand what causes the climate to change, then we will never be able to answer the question of whether or not burning fossil fuels will be detrimental to the human race.

    I think a big problem before us is determining what the current forcings are. I don’t think we have a good measure of some of the important ones, such as the amount of aerosols currently being emitted.
    And without that, predictions on which way the climate will go in the future are fraught with peril.

  84. ecowho says:

    BTW, I’m involved in a debate (between the name-calling and pseudo-science put-downs) on the ‘Say Yes Australia’ FB page, and someone put in a reference to

    http://www.columbia.edu/~jeh1/mailings/2011/20110118_MilankovicPaper.pdf

    Anyone care to comment and contrast? I’d like to further the meaningful debate. Thanks

  85. Matt says:

    It is incredibly easy to see sinusoids and linear trends in any set of data. Absent a reason or an explanation, one cannot just subtract something like that out of the temp series. And to the extent that there is a natural warming trend “before human influence” (1900-1950), you can’t just assert that the natural trend would have continued indefinitely the same way over the next 50 years, and then just take the difference to be the man-made contribution. That is a moronic and naive assumption, absent any further information or theory.

    Real scientists who calculate climate sensitivity account for countless factors such as solar irradiance, volcanic activity, and orbital variations when they “subtract out” natural effects. They also account for many different human effects that push the climate in opposing directions (after all, the human impact is not monolithic).

    For your readers who find this article to be an exciting example of “real science!”, I suggest that they make a genuine effort to learn about the science of climate sensitivity rather than latching on to the first “scientificy” article on a political blog that reinforces their prior beliefs. Real science requires genuine skepticism and commitment to rigor, not sloppy contrarianism.

  86. tokyoboy says:

    Thanks for your impressive post, Dr. Frank.
    I feel, however, the whole picture may be drastically changed if you use the UAH-MSU satellite temperature data, which already has a 32-year history, instead of the error-prone surface station data.

  87. JimF says:

    @Leif Svalgaard says:
    June 2, 2011 at 1:49 pm: “…Numerology is numerology regardless who commits it. That is not to say that numerology cannot at times be useful….” I think that applies here. Pat Frank caught a fine trout; after gutting it, there was nothing left. Using the warmists’ bullshit data, he showed there’s nothing in that data.

    @Ammonite says:
    June 2, 2011 at 10:12 am: …Advice for general readers. Please check any post that describes a “central AGW tenet” or “major assumption” or “fundamental prediction” etc against the relevant IPCC chapter….” I’m sure that they state, somewhere in all their bloviation, that it is “likely” that pigs can fly. Likely, loosely defined, isn’t a scientific term, but then, the IPCC isn’t a scientific body.

    @Doug Proctor says:
    June 2, 2011 at 9:14 am: Good appeal to stop being the smartest guy in the classroom and actually pitch in and see if one can make a useful contribution to the discussion (in the case that the discussion isn’t widely viewed as idiotic).

  88. Paul Vaughan says:

    @tallbloke (June 2, 2011 at 8:54 am )

    Genuinely looking for some clarification on your essay…

    You say the tail doesn’t wag the dog, but then you go on to emphasize the importance of the sunshade in the sky (clouds). Despite drawing attention to decadal patterns in specific humidity, you appear to restrict your conceptualization to anomalies, ignoring annual & semi-annual heat-pump cycles and the related intertwining of circulation geometry, oceans, & sunshade (which Stephen Wilde so strongly emphasizes in recent months). Reading your essay helped me understand vukcevic’s perspective (ocean-centric), but other perspectives also reveal a climate-dog chasing its tail (i.e. looping spatiotemporal causation chain that moots debate), so a line of productive inquiry is to step back far enough from the neverending loop to see what drives changes in the rate & amplitude of “tail chasing” [...at scales supported by observation].

    So my question is (& I sure hope it’s obvious by now that this is where I was heading)…

    Do you disagree with Sidorenkov (2003 & 2005) on ice?

    My understanding (A G Foster comment in a somewhat-recent WUWT thread) is that NASA’s R. Gross is now pursuing exactly this (…which, as I hope you know, matches -LOD, AMO, PMO, etc. in multidecadal phase).

  89. Dave X says:

    @Aaron: The derivative of cos(x) is -sin(x), whereas the derivative of sin(x) is cos(x); so if the extra negative sign counts as a complication, the cosine convention actually makes derivative analysis slightly more complicated. On the other hand, since the integral of cos(x) is sin(x), the convention correspondingly simplifies an integral analysis.

    Still, in the Figure 2 CRU curve the period looks more like 70 years (pre-1940 to post-2000) than the 60-year period called out in Figure 3. Something is amiss.

  90. Pat Frank says:

    Thank-you very much, Anthony, for picking up my essay at Jeff’s tAV. It was a happy surprise to find it here today.

    Thanks also to everyone for your very thoughtful comments. I’m a little overwhelmed with all your responses, and with 88 of them so far to go through. I’m a little stuck for time just now, but hope to post replies this weekend.

    I’d like to acknowledge Bob Tisdale’s comment, though. As he mentioned, we’ve discussed the PDO+AMO periodicity described in my analysis. Those interested are encouraged to read the exchanges at tAV, at the link above.

    It’s clear though, that to get to a place that benefits us all, Bob, you’ll have to work out your differences directly with Joe D’Aleo and Marcia Wyatt, et al., and make the conclusion public.

    Later . . . :-)

  91. Vince Whirlwind says:

    Good luck getting this analysis published. I doubt even E & E would touch it with a bargepole. It’s so full of holes it makes the Titanic look watertight.

  92. Ryan says:

    @Matt

    “It is incredibly easy to see sinusoids and linear trends in any set of data.”

    Is it? If a data set shows a straight line then it clearly can’t fit a sinusoid. The point of this analysis is that the peaks and troughs in the data set are of similar amplitude to the underlying straight-line trend, so it is reasonable to see what happens mathematically if you fit the data to a sine.

    “Absent a reason or an explanation, one cannot just subtract something like that out of the temp series.”

    Fair enough, and a valid criticism of Pat Frank’s explanation here, since he really concentrates on a simple mathematical analysis. But actually there is a perfectly good theory underlying his analysis. There are two competing theories for observed variations in climate. One, held by the IPCC, is that AGW caused by a significant uptick in CO2 output after 1950 is causing a significant increase in the rate of warming after 1950. The other is held by the sceptic camp; since it denies that AGW has a serious impact on climate, it is safe to assume that the sceptic camp believes the climate is fairly “stable”.

    Now we have to be careful here, since “stable” can mean many things. It can mean “flatlining”, but it can also mean continuous oscillation and limit cycling (a limit cycle being the dramatic and rapid change from one condition to another; the ice age/interglacial oscillation is an example). Oscillation tends to occur in systems where there is negative feedback and energy storage. In any system there can be multiple sources of feedback and energy storage, and hence multiple oscillations at different frequencies and amplitudes superimposed on each other.

    Pat Frank identifies one possible source of energy storage as the ocean, since water has a very high specific heat capacity and can therefore store enormous amounts of energy while showing only a small rise in temperature. Given this scenario it is perfectly reasonable to look for sinusoidal oscillation within a climate data set and propose ocean heat storage as a possible cause of that oscillation. It is not pure “numerology”; it is fitting a function to a data set based on a hypothesis and seeing how good the fit is. The fit certainly looks as good as fitting a pure linear trend to the data, and the reasoning behind it is certainly no worse than fitting a pure linear trend to the post-1950 data, deriving a gradient from that trend and proclaiming it to be the climate sensitivity.

    “And to the extent that there is a natural warming trend “before human influence” (1900-1950), you can’t just assert that the natural trend would have continued indefinitely the same way over the next 50 years, and then just take the difference to be the man-made contribution.”

    No, you can’t. But doing the reverse and ignoring a trend that existed before 1950 is even worse. The fact is that the data in the range 1910 to 1943 have the same gradient as the data in the range 1970 to 2000 – so how can we say that the 1970–2000 trend is purely due to AGW? We can’t – the data don’t allow us to. The dataset is completely inconclusive. CRU and GISS have been wasting their time. We cannot say that the gradient after 1950 is in any way exceptional and therefore related, even in part, to AGW. You could perfectly well conclude from the dataset that AGW has no impact – in fact, since that is the default position, that would normally be the approach science would take; but proponents of AGW claim a special case here because they say the risks are very high (neglecting the risks of rolling back the great technological advances made in the West that are currently responsible for the survival of about 1 billion people).

    “That is a moronic and naive assumption, absent any further information or theory.”

    Pat Frank’s sine analysis is actually somewhat less moronic than fitting a straight line to the 1950–2000 data, deriving a gradient, and then proclaiming not only that the rise is due to AGW but also that it is likely accelerating. The dataset shows no such thing. Even a simple eyeballing of the data shows that there is not a pure linear trend, so subtracting a sine from the data to see where that leaves you is perfectly reasonable if you want to understand the real limits of the post-1950 gradient. Pat Frank is correct that, at the very least, the gradient after 1950 is hardly any worse than the gradient before 1950, when AGW was minimal (since the CO2 in the atmosphere before 1950 is held to have been stable) – and this is before we even get into the relatively small part of the acceleration that might be due to a sine oscillation in the climate with a period of 60 years. Furthermore, the most recent data, from 2000 to 2010, show deceleration, not acceleration, so they hardly support the theory that AGW is becoming the dominant contributor to temperature trends in the new century.

    “Real scientists who calculate climate sensitivity account for countless factors such as solar irradiance, volcanic activity, and orbital variations when they “subtract out” natural effects.”

    Shame you missed out cloud cover and wind direction. As an example, looking at the data for Lerwick, July 2002 was three Celsius warmer than July 2001. What the hell happened there? An enormous cow fart? I doubt it. I doubt it had anything at all to do with AGW, and yet that one month was 3 Celsius higher than a year previous. Smooth that month out over a whole year and it would still contribute a 0.3 Celsius increase in temperature for the whole year! In fact the difference over the whole year was much bigger than that, because all but one month in 2002 was warmer than in 2001 for Lerwick, and in each case by at least 1.2 Celsius. Why? Well, not because of CO2. Those thermometer readings were measuring a temperature anomaly, year to year, that had nothing to do with CO2. I’m guessing cloud cover: 2002 was sunny and 2001 was cloudy, would be my guess (and that fits my memory of 2002 as well). But maybe wind direction made a difference too. So when we look year to year at any location we can be 100% certain that differences in temperature have little to do with CO2 but are entirely due to cloud cover and wind direction. And yet, when we average out all these thermometric measurements of cloud cover and wind direction, we assume that what we are left with is the contribution due to CO2? That’s like measuring the speed of vehicles on a motorway/freeway over a 50-year period and concluding that bicycles must be getting faster.

    “For your readers who find this article to be an exciting example of “real science!”” – Well I don’t. There ain’t much science involved. The maths is OK however.

    “I suggest that they make a genuine effort to learn about the science of climate sensitivity rather than latching on to the first “scientificy” article on a political blog that reinforces their prior beliefs.”

    My prior belief was that AGW was real. My genuine effort to learn about the science of climate led me to ice core lies which led me to question what the “scientists” were saying. Since then I have seen a whole lot of other lies of which quite deliberate misinterpretation of thermometer data is one. I have come to the conclusion that climatology attracts a poor calibre of graduate – no big surprise there I guess since the bright sparks are in microbiology and nuclear physics.

    The conclusion is this: Pat Frank’s analysis is no more and no less invalid than the IPCC analysis. No surprise there. Thermometers in Stevenson screens at ground level can be used to measure cloud cover anomalies but not atmospheric temperature anomalies. Human development likes clouds, because clouds = rain = drinking water + irrigation. So that’s where the thermometers tend to be – in cloudy places. What you have above are two graphs showing how cloud cover has decreased slightly over the last 100 years. Worrying in itself perhaps, but it has no connection with AGW.

  93. Paul Vaughan says:

    @Vince Whirlwind (June 2, 2011 at 10:24 pm)

    Rather than blasting unsupported cheap shots from the safe cover of the periphery, please step right out into the open, volunteering to the community your alternative to Pat Frank’s approach.

  94. Joel Shore says:

    Here’s a critique of this post: http://tamino.wordpress.com/2011/06/02/frankly-not/

    REPLY: Heh, he’s got what he thinks is a clever label, “mathurbation”, which kills any rebuttal integrity right there. The faux Tamino, as self-appointed time-series policeman, would complain about a straight line with two data points if it appeared here, so it’s just the usual MO for him. I’ll leave it up to Pat Frank to respond if he wishes; my advice would be to provide an updated post here rather than there, because as we all know, and as has been demonstrated repeatedly, Grant Foster can’t tolerate any dissenting analysis/comments there.

    - Anthony

  95. Greg says:

    Ryan: “that’s where the thermometers tend to be – in cloudy places.”

    What? You lost me there.

  96. Dave X says:

    Ryan @ “Is it? If a data set shows a straight line then it clearly can’t fit a sinusoid.”

    It most certainly can fit, if you choose a sinusoid with a period on the order of 4+ times the length of the data. You fit a sinusoid by choosing a frequency w, calculating sin(w*t) and cos(w*t), and then doing the same old ordinary least squares process to fit the model Y(t) = b0 + b1*cos(w*t) + b2*sin(w*t). The amplitude of the sinusoid is then sqrt(b1^2 + b2^2) and the phase of a cosine would be atan2(b2, b1). Some harmonic analysis codes even use the infinitely slow frequency of zero to model a constant intercept term, so a sinusoid can even model a constant.

    Including the intercept term, the sinusoid/cosine model has 1+3 = 4 parameters compared to a linear model’s 1+1 = 2, but clearly it can fit the data at least as well as a straight line.

    Frank’s methodology might be good math, but whether it is good stats would depend on a residuals and validation analysis, which in the above seems to be limited to visual inspection and repeated assertions of “clearly.”
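    (Dave X’s recipe is concrete enough to sketch in a few lines of Python, numpy only; names are illustrative. For a fixed trial frequency w the model is linear in its coefficients, so ordinary least squares applies directly.)

        import numpy as np

        def fit_sinusoid(t, y, w):
            """OLS fit of y ~ b0 + b1*cos(w*t) + b2*sin(w*t) at fixed frequency w."""
            X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
            (b0, b1, b2), *_ = np.linalg.lstsq(X, y, rcond=None)
            amp = np.hypot(b1, b2)         # amplitude = sqrt(b1^2 + b2^2)
            phase = np.arctan2(b2, b1)     # so the fit reads b0 + amp*cos(w*t - phase)
            return b0, amp, phase

    (Scanning w over a grid and keeping the best-fitting value is the brute-force cousin of the periodogram that comes up later in the thread.)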

  97. Bob Shapiro says:

    “The anomaly profile at 1960 is similar to the profile at 1880, and so these two starting points seem to impart no obvious bias.”

    Yes, but since you’re using a cosine function to normalize your data, you really should pick similar points and durations along the curve. The 1880 start is near the top of the curve and spans 60 years (to 1940).

    However, your 1960 start is below the midpoint of the rising limb, and since your duration (to 2010) is 50 years, less than the 60-year period of your cosine, the start and end points bias the results. In this case the end point necessarily sits artificially higher on the curve, producing a greater slope.

    The upshot is that, while you showed a minor sensitivity for CO2, the unbiased 1960-2010 slope actually should show an even lower sensitivity.

    Otherwise, nice job, especially in using assumptions which give conservative results.
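    (Bob’s endpoint-bias point is easy to demonstrate: fit a straight line to a pure, trendless 60-year cosine over a 50-year window, and the recovered “trend” depends entirely on where the window starts. An illustrative sketch follows.)

        import numpy as np

        t = np.arange(0.0, 50.0)                # a 50-year window
        for start in (0.0, 15.0, 30.0, 45.0):   # years into the 60-yr cycle at window start
            y = 0.1 * np.cos(2 * np.pi * (t + start) / 60.0)  # pure cosine, zero true trend
            slope = np.polyfit(t, y, 1)[0]
            print("start %2.0f yr into cycle: spurious trend %+.3f C/decade"
                  % (start, slope * 10))

    (Only a window spanning whole periods, or starting and ending at matching phase, returns a slope near zero.)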

  98. SteveSadlov says:

    Look on the visible satellite on the side bar of this Blog:

    DISCUSSION…AS OF 9:30 AM PDT FRIDAY…MID AND HIGH CLOUDS ARE STREAMING OVER THE DISTRICT IN ADVANCE OF THE APPROACHING LATE- SPRING STORM. THE UPPER LOW CENTER…CURRENTLY LOCATED NEAR 40N/130W…IS DROPPING SOUTHWARD OVER THE COASTAL WATERS…AND IS DUE TO REMAIN OFF THE COAST UNTIL LATE SUNDAY WHEN IT IS PROGGED TO FINALLY SWING EASTWARD OVER CENTRAL CALIFORNIA. PLENTY OF MOISTURE HAS BEEN ENTRAINED INTO THIS SYSTEM…AND THE WAY THE SYSTEM WILL INTERACT WITH THE COAST IN TERMS OF OROGRAPHIC ENHANCEMENTS…THIS LOOKS LIKE A POTENTIALLY RECORD BREAKING EVENT FOR OUR AREA FOR EARLY JUNE.

    LATEST AMSU PRECIPITABLE WATER ESTIMATES GIVE WELL OVER AN INCH OF RAIN WRAPPED UP IN THIS SYSTEM. MODELS CONTINUE PREVIOUS TRENDS OF BRINGING LIGHT RAIN TO THE NORTH BAY TODAY…AND SPREADING SOUTH THROUGH THE GREATER SF BAY BY EVENING…THEN REACHING THE MONTEREY BAY AREA BEFORE MIDNIGHT. HEAVIEST RAIN IS EXPECTED OVERNIGHT TONIGHT INTO SATURDAY MORNING. BUT AS THE UPPER LOW IS FORECAST TO REMAIN WOBBLING OFF THE COAST THROUGH SUNDAY…SHOWER CHANCES WILL PERSIST THROUGH THE WEEKEND.

    CONFERRING WITH THE CALIFORNIA/NEVADA RIVER FORECAST CENTER ON QPFS…2-5 INCHES STORM TOTAL ARE POSSIBLE ACROSS THE WETTEST AREAS INCLUDING NORTH BAY HILLS…SANTA CRUZ MOUNTAINS…AND THE SANTA LUCIAS. INLAND LOWER AREAS COULD GET UPWARDS OF 1-2 INCHES TOTAL. ALTHOUGH THE BASINS CAN HANDLE THIS AMOUNT OF RAINFALL SPREAD OUT OVER TWO DAYS…THESE ARE STILL BIG NUMBERS GIVEN WHERE WE ARE IN THE CALENDAR. THUS…SOME RECORD RAINFALL AMOUNTS ARE HIGHLY LIKELY FOR JUNE.

    GIVEN THE PROXIMITY OF THE COLD UPPER LOW…THUNDERSTORMS ARE ALSO A POSSIBILITY…AND WILL ADD A SLIGHT CHANCE TO THE AFTERNOON FORECAST PACKAGE.

    SHOWERS TO END LATE SUNDAY AS THE UPPER LOW FINALLY EJECTS TO THE EAST. THE REST OF THE FORECAST PERIOD IS EXPECTED TO CONTINUE COOL AS A LONG-WAVE UPPER TROUGH REMAINS OVER THE WEST COAST. NOT RULING OUT FUTURE SHOWER CHANCES AS WELL…GIVEN THE PRESENCE OF THIS TROUGH.

    =================================

    Thank goodness this system has a very cold core. Otherwise, we would face a rather cataclysmic situation given the massive snow pack in the high country.

    Now for a quick primer regarding the Pacific / Hawaiian High. This feature, one of the famous semi permanent Semi Tropical / Horse Latitudes Highs, is normally well up into the mid latitudes by this time of year. But not this year. It is stuck in the tropics.

    Consider this. What is described here, given the relative extents and masses of the Pacific and Atlantic Oceans, is essentially a low frequency input signal being applied to the global climate circuit. Draw your own conclusions.

  99. Matt says:

    @ Ryan

    I appreciate the thoughtful response.

    “But doing the reverse and trying to ignore a trend that existed before 1950 is even worse. The fact is that the data in the range 1910 to 1943 has the same gradient as the data in the range 1970 to 2000 – how can we say that the trend 1970 – 2000 is purely due to AGW?”

    Nobody is ignoring the trend before 1950 and no one is saying that the trend from 1970 to 2000 is purely AGW. Read the 4th assessment IPCC report.

    The problem is: the climate system is driven by the interplay of multiple natural and multiple human forcings. In order to separate human and natural forcings, you need to meticulously account for these effects. You cannot just take the difference between a slope before and after some arbitrary year. That is nonsense.

    “My prior belief was that AGW was real. My genuine effort to learn about the science of climate led me to ice core lies which led me to question what the “scientists” were saying. ”

    I have the opposite story. I grew up an ardent “skeptic”. In grad school, I met some real climate scientists. At their encouragement, I started reading the literature, and I was shocked to discover that the work is very thorough. I was also surprised at how open the community was about its uncertainties, contrary to how I was raised. I am not a climate scientist and do not purport to be an expert. However, as an experimental particle physicist, I hope I can claim to see the difference between mature, rigorous scholarship and sloppy hand-waving. This article is sloppy hand-waving.

    “Pat Frank’s sine analysis is actually somewhat less moronic than fitting a straight line to the 1950 to 2000 data, deriving a gradient and then proclaiming not only that the rise is due to AGW but also that it is likely to be accelerating. The dataset shows no such thing. Even a simple eyeballing of the data shows that there is not a pure linear trend, so subtracting a sine from the data to see where that leaves you is perfectly reasonable if you want to understand the real limits of the post 1950 gradient.”

    Again, read the attribution (fingerprint) analyses. No one is following the procedure you have described; you are evoking a straw man for what climate science says about temperature trends and human impact. First, aerosols have a cooling effect that obscured the full impact of greenhouse gases for much of the 60s and 70s (pre-Clean Air Act). Second, most of the known natural climate forcing mechanisms plateaued and even reversed over the last 50 years of the 20th century. Given this change in natural forcings, it is certainly wrong to subtract the trend of the first 50 years from the trend of the second 50 years. It also suggests that the observed warming over much of the last 50 years is building on what would otherwise probably have been a cooling, absent human impact. One needs to understand the magnitude and direction of this natural trend before one can begin to separate out the human effect.

    “Shame you missed out cloud cover and wind direction. As an example, looking at the data for Lerwick in July 2002 we can see it was three Celsius higher than July 2001…”

    I only listed some of the factors, but these are accounted for in the climate literature. Water vapor is admittedly one of the most poorly understood of the feedbacks, but tremendous work is being directed at the question. I don’t know anything about your Lerwick story, but it sounds like an anecdote (the favorite tool of contrarians). Very large month-to-month fluctuations often occur at particular localities; they are meaningless to the global average temperature anomaly.

    “Since then I have seen a whole lot of other lies of which quite deliberate misinterpretation of thermometer data is one. ”

    What deliberate misinterpretation of temperature data?

    “I have come to the conclusion that climatology attracts a poor calibre of graduate – no big surprise there I guess since the bright sparks are in microbiology and nuclear physics.”

    Why do you come to this conclusion? I think the work of the climate science community is of a very high caliber. There is a cottage industry built around maligning the climate science establishment. This is the part of the whole skeptic thing that really turns me off. These personal attacks and accusations go far beyond academic discussions about the science. You really seem sincere and I strongly urge you to visit a local University and talk to actual publishing climate scientists. They will appreciate your tough questions, as long as they are coming from sincere curiosity and not with a rhetorical and cynical tone. You will be surprised at the experience.

    “The conclusion is this: Pat Frank’s analysis is no more and no less invalid than the IPCC analysis. ”

    Read the IPCC AR4 report from Working Group 1. Not the summaries, but the actual report. It is a really good summary of the state of climate science, despite all the attempts to paint it as a global liberal conspiracy. Even giving Frank the benefit of the doubt, this article is, at best, preliminary speculation. But I am afraid it isn’t even interesting speculation. This talk of sinusoids and slopes is repeated again and again in the contrarian rumor-mill, as if no one has thought about this stuff before. I’m sorry, but it is embarrassing that this guy would be so arrogant as to proclaim that people should “spread the word” of this calculation. It is such a rudimentary and flawed line of reasoning that it is utterly meaningless, and not in the same universe as the established attribution analyses.

  100. SteveSadlov says:

    Squaw Valley will reopen for 4th of July weekend.

  101. Arno Arrak says:

    Pat says: “…for the purposes of this analysis I’m assuming that the global average surface air temperature anomaly trends are real and meaningful.” You are totally wrong to assume that they are real and meaningful. They are not, as comparison with satellite temperature measurements will tell you. Obviously you have not read my book “What Warming?” or you would know that they fabricate temperature curves. One example is the period in the eighties and nineties that they show as a steady temperature rise, the so-called “late twentieth century warming.” I have proved that this warming does not even exist. What does exist in the eighties and nineties is a temperature oscillation, caused by the alternation of the El Nino and La Nina phases of ENSO, up and down by half a degree for twenty years, but with no rise until 1998. That is ten years after Hansen invoked global warming in front of the Senate in 1988. His testimony gave a kick-start to the present global warming craze, which turns out to have been founded on a non-existent warming. There is more – get my book from Amazon and read it. I can see why warmists want to ignore it, but there is no reason for someone who wants to learn the truth about global warming not to know what is in it.

  102. dc says:

    Well done indeed – some very impressive-sounding words like “oscillation”, “residuals” and “sensitivity”, nice curvy lines that fit the data properly, and, best of all, a conclusion that confirms my beliefs. I have no scientific training, let alone any understanding of climatology, but I know this must be real, empirical science (not like that IPCC rubbish). It gives the answer I want.

  103. fredb says:

    Anthony’s response to Joel Shore’s comment above commits the same blunder he accuses Tamino of — dismissal by rubbishing the integrity of the commenter. As I think I’ve said before here, “play the ball, not the man”

    That said, I really would love to see a response from Frank!

  104. Paul Vaughan says:

    SteveSadlov (June 3, 2011 at 10:14 am) wrote:

    “Now for a quick primer regarding the Pacific / Hawaiian High. This feature, one of the famous semi permanent Semi Tropical / Horse Latitudes Highs, is normally well up into the mid latitudes by this time of year. But not this year. It is stuck in the tropics.

    Consider this. What is described here, given the relative extents and masses of the Pacific and Atlantic Oceans, is essentially a low frequency input signal being applied to the global climate circuit. Draw your own conclusions.”

    -
    Requires too much thought for those who think in anomalies and can’t be bothered with changes of state of water. Looks like it will be decades before people clue in. Good to see evidence that there’s at least one person thinking — much appreciated.

  105. charles says:

    “… as we all know and has been demonstrated repeatedly, Grant Foster can’t tolerate any dissenting analysis/comments there.”

    That has not been my experience. He allows plenty of dissension, but he does not suffer fools gladly; nor should he. If Pat Frank is so confident of his analysis, he should submit it for publication to any of the peer-reviewed climate journals, and then see where the chips fall.

    I would be happy to see an exchange here or at Open Mind between Pat Frank and Tamino/Grant Foster. It seems to me that at this point, Mr. Frank has some explaining to do in responding to the critique of Mr. Foster and the others who responded in detail at Open Mind.

  106. Paul Vaughan says:

    @charles (June 3, 2011 at 8:57 pm)

    Tamino is very heavy-handed with censorship, even of benign comments.

  107. Bart says:

    Matt says:
    June 3, 2011 at 2:39 pm

    “The problem is: the climate system is driven by the interplay of multiple natural and multiple human forcings. In order to separate human and natural forcings, you need to meticulously account for these effects.”

    The problem with that is: process of elimination only works when your knowledge of all alternatives is complete. Climate Science has only been researched seriously for a very few decades, and the Earth’s climate system is immensely complex. Based on your sober writing, I doubt you would claim that every potentially significant effect which could cause a ~60 year temperature cycle has been investigated and demonstrated to be insignificant. If you did, the only effect on my perspective would be to lower my opinion of your sagacity.

    “You cannot just take the difference between a slope before and after some arbitrary year. That is nonsense.”

    I think you are misinterpreting the exercise. The author is performing an experiment in which he accepts the IPCC argument that significant change occurred mid-century, and follows the path where that leads. And, given the presence of a 60-ish year cycle in the data, it leads to less climate sensitivity than the IPCC claims.

    Pace Tamino and his ilk, there clearly is a ~60 year cyclical process in the data over the last century, evident by inspection. Is it a phantom of measurement error, or a mere coincidence in timing between an early transient and the subsequent rise to significance of GHG forcing? Or is it the excitation of a fundamental mode of the system which began a century or more ago and has yet to damp out?

    Given the third coincidence of a peak in the early part of this century, right on schedule, I would tend to suspect the latter. In fact, this is precisely how the output of such a mode, coupled in series with an integrator or a longer-cycle mode, might look when driven by white noise or any other random process within the bandwidth. I would suggest the author try a fit with an amplitude-modulated sinusoid, which looks to me like it could be contrived to give a better fit.

  108. Bart says:

    Leif Svalgaard says:
    June 2, 2011 at 7:21 am

    “Without physics, this is as much numerology as Frank’s.”

    With incomplete knowledge of all significantly contributing physical processes, it’s all “numerology” at some level. When you do not know what is going on (and, don’t anyone try to tell me the climate establishment fully understands the lull in temperature rise of the last decade), you look at the data and try to tease out some order which can give you new directions in which to investigate.

  109. Bart says:

    Matt says:
    June 3, 2011 at 2:39 pm

    “I started reading the literature and I was shocked to discover that the work is very thorough.”

    One last comment on this posting. No matter how brilliant the researchers or how “thorough” their work, they can still be hopelessly wrong. Ptolemaic astronomers had an incredibly thorough and deeply researched methodology which, contrary to most people’s perceptions, gave a reasonably good and repeatable description of the movement of the heavenly bodies, with well-established predictive power. It was just completely and utterly wrong in its driving assumptions. These were not primitive cave dwellers; they were profoundly knowledgeable and intellectually vibrant men who were limited only by the state of knowledge of their day.

  110. phlogiston says:

    Climate is the rock against which the ship of 20th century reductionist-inductive (linear catholic logic) science is going to founder. This rock bears a striking resemblance to the head of Karl Popper.

  111. Werner Brozek says:

    Thank you very much for an excellent article! Granted, the causes of everything are not explained. But would anyone have criticized Tycho Brahe for his excellent work measuring the star positions? Perhaps some fine tuning on the numbers can be done. However now we need a ‘Kepler’ and ‘Newton’ to explain these graphs.

  112. Leif Svalgaard says:

    Werner Brozek says:
    June 4, 2011 at 11:20 am

    “Perhaps some fine tuning on the numbers can be done. However now we need a ‘Kepler’ and ‘Newton’ to explain these graphs.”

    Initially Kepler fell into the same trap as Frank. Fitting crummy [limited] data to beautiful curves: http://www.georgehart.com/virtual-polyhedra/kepler.html

  113. Bart says:

    Leif Svalgaard says:
    June 4, 2011 at 11:59 am

    “Fitting crummy [limited] data to beautiful curves…”

    Again, I think this is a misinterpretation. Frank is engaging in hypothesis testing. The IPCC says the data are good. Do the data, then, take us where the IPCC says we are going?

    If the data are that crummy, then what information, if any, do they hold?

  114. Leif Svalgaard says:

    Bart says:
    June 4, 2011 at 12:29 pm

    “Frank is engaging in hypothesis testing”

    His hypothesis then is that the curves and trend found for the data in the fitting window are also valid outside, for which there is no evidence [especially not for the future part]. This might be valid if there is a theory that says that it must be so. If no such theory is supplied, it is just numerology.

  115. Joel Shore says:

    Bart says:

    “Again, I think this is a misinterpretation. Frank is engaging in hypothesis testing. The IPCC says the data are good. Do the data, then, take us where the IPCC says we are going?

    If the data are that crummy, then what information, if any, do they hold?”

    I think the IPCC has always been pretty clear in noting that the instrumental temperature record alone does not place very strong bounds on climate sensitivity. Rather, better empirical evidence is obtained by combining it with constraints from other events such as the last glacial maximum, the climate response to the Mt. Pinatubo eruption (which involves the instrumental temperature record, but just a small portion of it), … And these empirical constraints on climate sensitivity give a similar range to that found using climate models.

    So, Frank’s post is really nothing new…If you make some assumptions regarding the instrumental temperature record, you can find a very low climate sensitivity; however, if you make other assumptions, you can find a very high climate sensitivity.

  116. Bart says:

    Leif Svalgaard says:
    June 4, 2011 at 5:07 pm

    “…for which there is no evidence…”

    Kind of the entire AGW brouhaha in a nutshell, that.

  117. Werner Brozek says:

    The data may be crummy, but until we get the BEST we will have to use what we have. Scientists have always been forced to use less than perfect data; however, I will readily admit the climate data are worse than most.
    With regard to explaining the graphs, unless I am mistaken, I believe Willis Eschenbach, in his post http://wattsupwiththat.com/2011/05/14/life-is-like-a-black-box-of-chocolates/ , may be able to go a long way toward explaining the spikes in the lower graphs of Figure 1. As for the sine or cosine curve part, I believe Bob Tisdale could take a good stab at explaining that. In terms of predicting the future climate, I believe the sine curve would have greater predictive value, although a good estimate of sunspots over the next decades, with help from Leif Svalgaard, should provide better forecasts than the IPCC projections.

  118. Spector says:

    As an experiment, I tried my ad hoc cosine-series approximation method on the full range of the HadCRUT3v global mean temperature data. This method, using the Microsoft Excel Solver utility, attempts to explain the observed data as a discrete number of minimum-amplitude sinusoidal (actually cosine) waveforms; as such it is likely to be incomplete, as it does not necessarily find all sinusoids or account for random forcing events (including changes in data collection methodology).

    I used a binary, log-periodic series of cosine periods from 5 to 1280 years. I first ran the optimization adjusting only the amplitudes (deg C) and the base dates (the cosine peak nearest the decimal year-date 1930.667), and then I allowed it to optimize the periods as well. Each element of the series is calculated by subtracting the base date from the actual date and multiplying the result by two times pi() divided by the period (in decimal years) to create the argument for the cosine function. A temperature offset constant is also included in the sum of all elements. I used a method that forces a minimum element amplitude solution, to prevent unrealistic solutions with large mutually cancelling element amplitudes over the known data interval. The Data to Error ratio is ten times the log of the sum of the squares (SUMSQ()) of the original data divided by the sum of the squares of the approximation error.

    The final solution seems to predict a temperature drop of 0.4 degrees C from now to 2040, and seems to indicate temperatures dropped 0.2 from 1835 to 1845. The predictive validity of this method depends on how much our climate depends on periodic processes. I note that periods close to one and two times the sunspot period do seem to be present. The elements with periods longer than the data interval (161.250 years) probably approximate the linear slope used in the main article.

    Non-Optimized Periods
    
    Data to Error Ratio (dB):     7.348
    Offset Constant  (deg C):    -0.081
                            
    Element   Element   Element     Element
    Number    Period    Base Date   Amplitude
      9     1,280.000   2370.092    0.1275
      8       640.000   2111.027    0.2050
      7       320.000   2005.172    0.3065
      6       160.000   1873.217    0.2121
      5        80.000   1942.644    0.1219
      4        40.000   1933.768    0.0208
      3        20.000   1940.484    0.0393
      2        10.000   1930.322    0.0226
      1         5.000   1932.714    0.0085
                            
    
    Optimized Periods
                            
    Data to Error Ratio (dB):     7.856
    Offset Constant  (deg C):    -0.0540
                            
    Element   Element  Element     Element
    Number    Period   Base Date   Amplitude
      9     1,279.987  2041.910    0.0010
      8       639.818  2119.186    0.1757
      7       319.570  2042.161    0.2079
      6       159.023  2009.200    0.0533
      5       75.451   1926.991    0.0401
      4       61.120   1942.975    0.1112
      3       21.300   1940.845    0.0498
      2       10.187   1929.856    0.0291
      1        6.016   1932.170    0.0280
     
  119. Bart says:

    Spector says:
    June 5, 2011 at 12:27 pm

    “I used a binary, log-periodic series of cosine periods from 5 to 1280 years. “

    That’s pretty arbitrary. If you do a PSD, you can find the periods which best describe the data for the last century. Beyond that… how to choose? Analyze proxy data?

    It definitely replicates the series of the last century. And, it captures the LIA as well. But, it falls apart at the MWP. In principle, you could always find a good replication over any given interval using any functional basis, so there is no particular reason to believe this has predictive power.

    It does, however, highlight the fact that everything we see and have seen could easily be the effect of many steady-state cyclical processes, alternately interfering constructively and destructively.

  120. Bart says:

    Meant to say: “…so there is no particular reason to believe this has long term predictive power.” It’s probably not too far off for the immediate future.

  121. Spector says:

    RE: Bart (June 5, 2011 at 3:47 pm):

    “If you do a PSD, you can find the periods which best describe the data for the last century. Beyond that… how to choose? Analyze proxy data?”

    The plot is based solely on the HadCRUT3v data from Jan 1850 to Mar 2011, using the Microsoft Office 2007 Excel Solver utility to adjust the parameters for minimum square error. I forced a minimum amplitude of 0.001 deg C and required the periods to be sequential. To prevent unrealistic solutions, I also multiplied the error sum by one plus 0.1 times the square root of the sum of the squares of the trial amplitude factors. I believe that forcing a minimum-energy solution reduces the likelihood that the approximation will be ill behaved at the end points. (Which it often is if I don’t.)

    Given that the data interval was 161 years, I would be surprised if any predictability extended more than 40 years on either end. It seems to be treating our current warm interval as an enhanced repetition of the peaks of 1940 and 1880.

    I based this technique on the fact that an FFT will not estimate the frequency of a small fraction of a sine wave contained in a multi-sample record, but if you ask an optimization program to find the best fitting sine curve, it may give you a good answer.
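    (Spector’s Solver setup translates into a penalized least-squares problem. A rough Python sketch of the same idea, scipy assumed; the 0.1 amplitude penalty follows his description above, everything else is illustrative.)

        import numpy as np
        from scipy.optimize import minimize

        periods0 = 5.0 * 2.0 ** np.arange(9)    # 5, 10, ..., 1280 yr (binary log-periodic)
        n = periods0.size

        def model(theta, t):
            offset = theta[0]
            amps, bases, periods = np.split(theta[1:], 3)
            # Sum of cosines: a_i * cos(2*pi*(t - base_i)/period_i), plus an offset
            return offset + sum(a * np.cos(2.0 * np.pi * (t - b) / p)
                                for a, b, p in zip(amps, bases, periods))

        def objective(theta, t, y):
            sse = np.sum((y - model(theta, t)) ** 2)
            amps = theta[1:1 + n]
            # Inflate the error by 1 + 0.1*sqrt(sum of squared amplitudes), as described,
            # to discourage large mutually cancelling elements
            return sse * (1.0 + 0.1 * np.sqrt(np.sum(amps ** 2)))

        # With t, y holding decimal years and anomalies (e.g. HadCRUT3v):
        # theta0 = np.concatenate([[0.0], 0.1 * np.ones(n), 1930.667 * np.ones(n), periods0])
        # best = minimize(objective, theta0, args=(t, y), method="Nelder-Mead")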

  122. Pat Frank says:

    Sorry to be silent for so long. You’ve all provided intelligent commentary, and I regret not having time to participate and attempt replies.

    But I did have some time today, and have posted a reply to Tamino’s critique. We’ll see what happens. Those of you who put credence there are encouraged to take a look, and to participate.

  123. Alan Wilkinson says:

    Why post there? AFAIK the main issues raised there were first raised here and this is a civilised uncensored forum unlike Tamino’s which doesn’t deserve patronage.

  124. Pat Frank says:

    Leif, you wrote, “his hypothesis then is that the curves and trend found for the data in the fitting window are also valid outside…

    My hypothesis, first, was that the oscillation that appeared in the GISS 1999 anomalies, following addition of the SST anomalies to the land-only anomalies, reflected a net world ocean thermal oscillation. The cosine + linear fits proceeded from that hypothesis.

    In the event, the cosine in the full fit had about the same period as the oscillation that appeared in the GISS data set after the SSTs were added.

    Then, pace Bob Tisdale, the cosine period proved to be about the same as the PDO+AMO period noted by Joe D’Aleo and about the same as the persistent ocean periods of ~64 years reported by Marcia Wyatt, et al.

    I took those correspondences — the appearance with SST, correspondence with the ocean thermal periods — to provide physical meaning to the oscillation in the global temperature anomaly data sets, represented by the cosine parts of the fits. This doesn’t seem unreasonable, and lifts the analysis above “numerology.”

    Following the assignment of physical meaning, an empirical analysis such as the above must be hypothetically conservative and mustn’t ring in expectations from theory. If the early part of the 20th century showed warming generally accepted as almost entirely natural, then it is empirically unjustifiable to arbitrarily decide that the natural warming after 1950 is different than the natural warming before 1950.

    That means the natural warming rate from the early part of the 20th century is most parsimoniously extrapolated into the later 20th century, absent any indicator of significant changes in the underlying climate mode.

    The rest of my analysis follows directly from that. The net trend, after projecting the natural warming trend in evidence from the early 20th century, is that the later 20th century, through to 2010, warmed 0.03 C/decade faster than the early 20th century.

    This excess rate may turn out to be wrong, when a valid theory of climate disentangles all the 20th century drivers and forcings. However, it is presently empirically justifiable.

    The trend I extrapolated to 2100 wasn’t a prediction, but merely a projection, à la the IPCC, of what could happen if nothing changes between now and then. That, of course, is hardly to be expected, but at least I put that qualifier transparently in evidence. I.e., “Let’s suppose the last 50 years are an indication of how the climate system will respond to the next 100 years of a continued increase in GHG forcing.”

    And so it goes. :-)

  125. Pat Frank says:

    Apologies for neglecting to close that link. :-(

  126. Bart says:

    I second Alan’s motion. I always feel like I need to bathe after I look in over there.

    Spector says:
    June 5, 2011 at 6:46 pm

    “…but if you ask an optimization program to find the best fitting sine curve, it may give you a good answer.”

    I’m just saying, it would be nice to ask it to optimize something which could be pondered as having physical significance. You could use the proxy data. If you claim it’s crap, I won’t disagree. But, it might be interesting to see what a long term cyclic expansion predicts.

  127. Ryan says:

    @Matt, @Leif Svalgaard,

    Well, of course you are correct that it may not be “proper” to fit a sine to a dataset just because its tempting peaks and troughs more or less beg you to do so. When you really only have two cycles to go on, that isn’t really enough. You need more data to be sure.

    The obvious source of more data is the Central England Temperature Record:-

    http://en.wikipedia.org/wiki/File:CET_Full_Temperature_Yearly.png

    You can see the same peaks and troughs in the Central England Temperature Record from 1880 to 2007, i.e. the same 60-yr cycle is apparent; but sadly, if you go further back in time, you can see that the cycle breaks down. However, if you take this chart as a means of disputing Pat Frank’s claims, you are in trouble if you are hoping to see AGW, since this dataset clearly shows there is nothing special about temperatures post-1950. Thanks to the LIA recovery, temperature records will likely get broken by 0.2 Celsius every 60 years or so; for the UK the last time just happened to be in 2002, by just that amount.

    Oh by the way, it was 25Celsius in Southern UK on Saturday – had a nice BBQ and set up the kids trampoline. Sadly today it has clouded over and the wind is blowing from the north – it might reach 15Celsius today if we are lucky. Something must have sucked all the CO2 out of the atmosphere over the weekend……

  128. Spector says:

    RE: Bart: (June 5, 2011 at 11:26 pm)
    “You could use the proxy data. …”

    Do you have a preferred realistic public proxy? I used the HadCRUT3v data because it has the longest official record based on measured data and is at least similar to one of the data sets used in the main article.

  129. JOhn H says:

    After reading the thoughtful responses of commenters here, and Tamino’s analysis (you may not like the tone, but the substance of his argument is valid), it seems like Frank needs to consider a thorough revision of his essay.

  130. Bart says:

    Spector says:
    June 6, 2011 at 5:37 am

    “Do you have a preferred realistic public proxy?”

    Not really. They’re all dubious. But, at least including it in the analysis might give an idea of how sensitive the prediction going forward is to what was modeled coming before.

    JOhn H says:
    June 6, 2011 at 11:27 am

    OK, I looked. And, I need a shower. His claims have no merit. Two full cycles is statistically significant. It is too much of a coincidence. In 1940, if you had said, “temperatures have risen in apparently cyclical fashion, and we should hit another peak in about the year 2000,” it would have been proper to say “there is not enough data to say that with any confidence.” But, when the second rise is confirmed on schedule, it is clear that there is something to it.

    I looked further at his link here. Jeez, he’s using periodograms. How elementary and jejune. And he fails utterly to make his case. The higher peaks in the graph following “The periodogram looks like this:” are at too low a frequency to inspire any confidence, given the time span of the data. The others are clearly not Cauchy peaks. Fail.

  131. Bart says:

    There is an additional point to make. There are some plots where he uses ridiculous piecewise fits and such and says that, since these cannot be said to reflect the underlying processes, neither can the sinusoidal fits. But this ignores the ubiquity of cyclical processes across the entire panorama of natural processes in every field of science and engineering. This ubiquity comes about because A) trig functions form a functional basis and can be used in an expansion to describe any bounded process, and B) every physical process anywhere in the universe depends on vector projections, which are always proportional to a cosine function.

    This is not numerology in any way, shape, or form. It is making the assumption that the functional form of the process we are observing is likely to be the same as that of every other natural process we have ever observed. This is why Fourier analysis is such a powerful tool for ferreting out the underlying principles governing any natural time series, and one of the first things an investigator should look at when attempting to do so.

  132. Pat Frank says:

    A very powerful riposte, Bart, thanks.

  133. Leif Svalgaard says:

    Bart says:
    June 6, 2011 at 1:59 pm

    “This is not numerology in any way, shape, or form. It is making the assumption that the functional form of the process we are observing is likely to be the same as that of every other natural process we have ever observed. This is why Fourier analysis is such a powerful tool for ferreting out the underlying principles governing any natural time series”

    The numerology is in the assumption. BTW, Fourier analysis on global temperatures [e.g. Loehle's] show no power at 60 years, or any other period for that matter:
    http://www.leif.org/research/FFT%20Loehle%20Temps.png

  134. Leif Svalgaard says:

    Leif Svalgaard says:
    June 8, 2011 at 8:32 am

    “Fourier analysis on global temperatures [e.g. Loehle's] show no power at 60 years, or any other period for that matter: http://www.leif.org/research/FFT%20Loehle%20Temps.png”

    That should read: no power for periods around 60 years.

    Now, there is a curious sequence of peaks at higher frequencies: http://www.leif.org/research/FFT%20Loehle%20Temps-Freq.png
    The spacing between the peaks is 0.0345 [in cycles per year]. This corresponds to a period of 29.0 years and is likely due to Loehle’s data being 30-yr averages sampled every year, creating corresponding spurious periods. Again an example of how Fourier analysis can mislead you.
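    (Leif’s smoothing-artifact point can be demonstrated in a few lines: a 30-yr running mean imprints evenly spaced lobes, with nulls at multiples of 1/30 cycles per year, onto the spectrum of even pure white noise. An illustrative sketch follows.)

        import numpy as np

        rng = np.random.default_rng(1)
        f = np.fft.rfftfreq(2000, d=1.0)      # frequency axis, cycles per year
        power = np.zeros(f.size)
        for _ in range(200):                  # average over many noise realizations
            x = rng.normal(size=2000)         # annual white noise: no real cycles
            xs = np.convolve(x, np.ones(30) / 30.0, mode="same")  # 30-yr running mean
            power += np.abs(np.fft.rfft(xs)) ** 2
        power /= 200.0

        # The running mean's transfer function carves the flat noise spectrum into
        # lobes separated by ~1/30 cycles/yr; a casual reading of those peaks could
        # mistake them for genuine periodicities.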

  135. Bart says:

    “The numerology is in the assumption.”
    “It is making the assumption that the functional form of the process we are observing is likely to be the same as that of every other natural process we have ever observed. “

    Sorry. I don’t see it.

    “BTW, Fourier analysis on global temperatures [e.g. Loehle's] show no power at 60 years, or any other period for that matter:”

    This is a naive analysis. All you’ve got here is essentially a reflection of the offset and trend in the data and a bunch of noise. PSD estimation is a lot more involved than that. I almost posted the below, but decided people probably wouldn’t be interested. Now, I think I will go ahead.

    If you would like me to take a crack at it, let me know where I can get the data.

    ——————————-

    Some pointers about constructing a PSD estimate, for those who might be interested. PSDs are well-defined only for stationary processes. People sometimes use them for quantifying non-stationary processes, but it’s not generally a good idea for a variety of reasons. Thus, before performing a PSD on data with both stationary and non-stationary components, some pre-treatment is advisable.

    For FFT based methods, detrending, or subtracting out other higher order polynomial fits, is often useful for diminishing the impact of non-stationary components. However, one must be aware that this does introduce bias into the PSD estimate, particularly at low frequencies.

    A PSD is the Fourier transform of the autocorrelation function. A periodogram is an estimator of the PSD calculated as the magnitude squared of the FFT of the data, divided by the data record length. As such, it is a biased estimator, though the bias decreases for stationary processes as the length of the data record increases. Furthermore, it is highly variable, and the variance does not go down as the length of the data record increases. Averaging together periodograms of chunks of data is a common method employed to reduce the variance, but at the cost of greater bias. Bias becomes particularly bad when those chunks are shorter than the longest significant correlation time.
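
    In MATLAB terms, the raw periodogram is just a few lines (a minimal sketch; the stand-in data, record length, and variable names here are mine, purely for illustration):

    x = randn(1,512);            % stand-in data; substitute the series of interest
    N = length(x);
    X = fft(x - mean(x));        % remove the mean so it doesn't swamp low frequencies
    Pxx = abs(X).^2 / N;         % periodogram: squared FFT magnitude over record length
    f = (0:N-1)/N;               % frequency axis, cycles per sample
    plot(f(1:N/2), Pxx(1:N/2))   % show up to the Nyquist frequency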

    A better FFT method is first to compute the autocorrelation estimate. By inspection, you can then see where the function is well behaved, how long the longest correlation time is, and where the autocorrelation estimate starts to lose coherence. You window it to that time with an appropriate window function (see, e.g., the classic text by Papoulis) and then compute the PSD. This method is generally far superior to averaging windowed periodograms, where one goes in blind without knowing any of the correlation details.

    The FFT computes the discrete Fourier transform, which is a sampled version of a continuous function of frequency (the discrete-time Fourier transform), with frequency samples spaced at 1/N, where N is the length of the input record. A simple method to sample with higher density is to “zero pad” the autocorrelation estimate past the point where the window function goes to zero.
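
    Here is a minimal sketch of the windowed-autocorrelation estimate described above, with zero padding (the max lag M, the Hann-type lag window, and the padded length are illustrative choices, not prescriptions):

    x = randn(1,512);  x = x - mean(x);  N = length(x);
    M = 64;                                  % max lag, chosen where coherence is judged lost
    r = zeros(1,M+1);                        % biased autocorrelation estimate, lags 0..M
    for k = 0:M
        r(k+1) = sum(x(1:N-k).*x(1+k:N))/N;
    end
    w  = 0.5*(1 + cos(pi*(0:M)/M));          % Hann-type lag window (one common choice)
    rw = r.*w;
    Nfft = 1024;                             % zero padding samples the transform more densely
    rs = [rw, zeros(1,Nfft-2*M-1), fliplr(rw(2:end))];  % symmetric lag sequence
    Pbt = real(fft(rs));                     % windowed-autocorrelation PSD estimate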

    Once one determines parameters for the higher frequency content of the signal, an ARMA (autoregressive moving-average) model can be constructed for it, and this can be used to aid more sophisticated estimation methods, as desired.

  136. Leif Svalgaard says:

    Bart says: June 8, 2011 at 11:57 am
    This is a naive analysis. All you’ve got here is essentially a reflection of the offset and trend in the data and a bunch of noise.
    The temperature reconstruction is so noisy [and uncertain] that a more sophisticated analysis is hardly worth the effort, but have a go at it. The data is here: http://www.ncasi.org/publications/Detail.aspx?id=3025

  137. Bart says:

    It isn’t pretty. I didn’t realize this was proxy reconstruction. But, I do discern peaks at 88, 62, and 23 years.

    There are others, but these seem to have the most significant energy. A couple of apparent peaks also occur at 52 and 43 years, but they’re kind of ambiguous.

  138. Bart says:

    Could have sworn I posted back on this, but it has disappeared.

    I did not realize you were looking at proxy data. Very messy. But, I am able to pick out the most significant peaks at 134, 88, 62, and 23 years. A couple more at 52 and 43 years are kind of ambiguous.

  139. Bart says:

    Well, now it’s there. I was looking back because I wanted to add the 134 year one in.

  140. Leif Svalgaard says:

    Bart says: June 8, 2011 at 1:26 pm
    It isn’t pretty. I didn’t realize this was proxy reconstruction. But, I do discern peaks at 88, 62, and 23 years.
    The time resolution is in reality 30 years. The 30-yr averages were then re-sampled every year, but that does not really create any new data.

  141. Leif Svalgaard says:

    Bart says: June 8, 2011 at 2:48 pm
    I did not realize you were looking at proxy data. Very messy. But, I am able to pick out the most significant peaks at 134, 88, 62, and 23 years. A couple more at 52 and 43 years are kind of ambiguous.
    For 30-year data, you cannot pick out anything below 2*30 years [remember Nyquist?]. “Most significant” should not be conflated with ‘just the largest’ peaks. A peak can be the largest, yet not be significant.

  142. Leif Svalgaard says:

    Bart says: June 8, 2011 at 2:48 pm
    I did not realize you were looking at proxy data. Very messy. But, I am able to pick out the most significant peaks at 134, 88, 62, and 23 years. A couple more at 52 and 43 years are kind of ambiguous.
    For 30-year data, you cannot pick out anything below 2*30 years [remember Nyquist?]. “Most significant” should not be conflated with ‘just the largest’ peaks. A peak can be the largest, yet not be significant.
    A standard ‘naive’ method of getting a handle on significance is simply to calculate the FFT for the two halves of the data. Here is what you get: httt://www.leig.org/research/FFT%20Loehle%20Temps-2%20Halves.png
    You can see the effect of the oversampling in the dips and peaks below 30 years. Above 30 [or 60] there are no consistent peaks. This is not rocket science.

  143. Leif Svalgaard says:

    httt://www.leif.org/research/FFT%20Loehle%20Temps-2%20Halves.png

  144. Leif Svalgaard says:
    June 8, 2011 at 4:20 pm
    http://www.leif.org/research/FFT%20Loehle%20Temps-2%20Halves.png
    I’m extra fat-fingered today

  145. Bart says:

    That’s not how filters work. It isn’t a sharp cutoff. A 30 year average has its first zero at 1/30 years^-1. The next one is at 1/15 years^-1, then at 1/10 years^-1, and so on. At 1/23 years^-1, the gain is about 0.2. So, all this means is that the energy in the component at 23 years is, in reality, 25X larger than it appears in my PSD. And, the center of the peak could be a little shifted by the filter lobe, so it might really be +/- a couple of years.

    It is the resampling which allows me to see the 23 year cycle. Otherwise, it would have been aliased to 98.6 years.
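
    Both numbers can be checked directly from the transfer function of an N-point sliding average (a quick sketch in MATLAB; the Dirichlet-kernel form of the moving-average response is standard):

    N = 30;  f = 1/23;                        % 30-yr average, 23-yr cycle
    gain = abs(sin(pi*f*N)/(N*sin(pi*f)));    % comes out near 0.2, i.e. ~25X attenuation in energy
    Palias = 1/abs(1/23 - 1/30);              % alias of a 23-yr cycle under 30-yr sampling: ~98.6 years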

  146. Bart says:

    It may be of interest to note that 20 to 23 year cyclic behavior commonly crops up in environmental variables, as “Spector” found above in his fit. There is also a significant, roughly 21 year cycle in the MLO CO2 measurements. Those measurements have significant sinusoidal components at roughly 1/4, 1/3, 1/2, 1, 3.6, 8.5, and 21 years.

  147. Bart says:

    Leif… stop. Read my note above. You are incorrect.
    ‘“most significant” should not be conflated with ‘just the largest’ peaks’
    Indeed. Which is why I wrote “these seem to have the most significant energy”.

  148. Leif Svalgaard says:

    Bart says: June 8, 2011 at 7:12 pm
    ‘“most significant” should not be conflated with ‘just the largest’ peaks’
    Indeed. Which is why I wrote “these seem to have the most significant energy”.

    “Energy”? Perhaps you mean ‘power’? ‘Seem to have’? Either they have it or they don’t. The 30-year average is a running average.
    The 22-year peak is not stable. It occurs only in the first half of the data, not in the last half. All the peaks and valleys you see below 30 years are not real. The resampling does not help you here. I could resample with one-month resolution and study the annual variation, right?

  149. Leif Svalgaard says:
    June 8, 2011 at 7:42 pm
    The 22-year peak is not stable. It occurs only in the first half of the data, not in the last half.
    The FFT gives the amplitude of the sine wave. Your 22-yr period [when present, before 1000AD] has an amplitude of 0.01C, which is way below the accuracy of the reconstruction. http://www.leif.org/research/FFT%20Loehle%20Temps%20Comp.png
    As I said: numerology.

  150. Bart says:

    “‘Energy’? Perhaps you mean ‘power’?”

    It is average power, which is energy divided by the record interval. Conventionally, we usually refer to the result of integrating the PSD as “energy” to avoid ambiguity. How widespread that convention is, I really am not sure, so perhaps I should have explained it.

    “The 22-year peak is not stable. It occurs only in the first half of the data, not in the last half.”

    Nope, it’s still there. You just can’t see it because your analysis method is so lousy.

    “Your 22-yr period … has an amplitude of 0.01C…”

    This is a stochastic signal. Discussing “amplitude” is not really rigorous. In any case, as I explained, it is being significantly attenuated by the running average, so the actual signal is many times larger than what is observed.

    “All the peaks and valleys you see below 30 years are not real.”

    I’ve tried to explain it to you. Why do you insist on arguing, in an area where you are not particularly proficient, with someone who is? I feel like I’m arguing with Myrrh again.

  151. Bart says:

    “Discussing “amplitude” is not really rigorous.”

    Let me try to explain this a little. What we are dealing with is a distributed parameter system. Distributed parameter systems are generally characterized by partial differential equations (e.g., equations of structural dynamics, Navier-Stokes equations, etc…). Via functional analysis, we can determine certain eigenmodes, i.e., certain configurations (mode shapes) of the system which oscillate at particular sinusoidal frequencies in response to exogenous inputs.

    For a given system, taken in isolation, there is generally a lowest frequency mode, which we call the fundamental mode, and various higher frequency modes which require steadily escalating energy input to excite (note: I may slip from time to time and refer interchangeably to the “mode” meaning the mode shape or the modal frequency; it is part of the jargon, and it should be clear from context what I mean). In general, the “bigger” the system, the lower the fundamental mode. Interaction of the various modes can create complex dynamics which alternately interfere constructively and destructively with one another.

    Dissipation of energy leads to eventual damping of these responses. However, if a mode is continually fed by a wideband excitation source whose bandwidth encompasses the modal frequency, it can keep getting regenerated ad infinitum. Over time, this signal grows and fades. Depending on the rate of energy dissipation and the time span under observation, it can look like a steady state sinusoid, or it can look like a (generally nonuniformly) amplitude and phase modulated sinusoidal signal.

    The climate is a distributed parameter system (or, perhaps more accurately, a series of overlapping piecewise continuous ones). It has certain modes which are excited by various energy inputs, from the Sun (electromagnetic radiation), from the Moon (tidal forces), from intergalactic cosmic rays, from internal heat dissipation, etc… We know some of these modes well: The PDO, the AMO, the ENSO… These are responses of the distributed parameter system of the Earth to wideband forcing(s). If you took away the forcings, they would gradually decay and die out.

    For such a huge system as the climate system of the Earth, the periods of the fundamental modes are certain to be very, very long relative to our perceptions. But, there is ample energy to excite a plethora of higher frequency modes as well. And, of course, there are also steady state, near perfectly sinusoidally varying diurnal, monthly, seasonal, and longer term inputs.
    The constructive and destructive interference of all these modes, along with the steady state periodic excitations, form what we call “climate.”

    PSD analysis is an excellent way to look for the modal frequencies and, once found, they may be observed to be quasi-steady state, or they may surge and fade. But, they will be recurring, because they are part of the physical system which defines, or constrains, or begets… however you want to say it… the climate system.

  152. J. Simpson says:

    The 60 year cosine is a fair start to this sort of crude approximation, but why do you choose to fit a straight line? Because it’s straight? Not a very good start.

    CO2 must have some effect according to basic radiation physics, even ignoring the IPCC’s attempts to multiply it up. Such a radiative forcing will affect the rate of change of temperature, not the temperature. If we approximate the CO2 level as increasing exponentially and then account for the saturation of the blocking effect (the absorption is reduced logarithmically as CO2 goes up), we get a linear increase in the forcing. This acts to produce an increasing rate of change, i.e. accelerating warming, not a linear one. In fact, this simple approximation gives a quadratic rise. It’s small, but it is increasing faster as it goes along. In fact, this is why you see your increasing slopes in figure 4.
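
    In symbols, with α and k as unspecified positive constants, the argument sketched above reads:

    $$F(t) = \alpha \ln\frac{C(t)}{C_0}, \qquad C(t) = C_0 e^{kt} \;\Rightarrow\; F(t) = \alpha k\,t, \qquad \frac{dT}{dt} \propto F(t) \;\Rightarrow\; T(t) \propto \tfrac{1}{2}\alpha k\,t^2$$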

    You need to redo your fits with cosine plus quadratic and see what it gives.

    But be warned: your residuals here (which you have not put a scale on in figure 1) are about +/-0.2C, against data that have a total range of only about +/-0.4C over the whole dataset. Any fits you do will only be weakly correlated to the data, and the margin for error in any magnitudes (like the magnitude of the cosine or quadratic terms) is quite large.

    You need to try to produce an error estimate for any result you find. Any result without that is not scientific.

    0.009 is tiny but you need to say something like 0.009 +/- 0.001 to give it meaning.

    If it turns out to be 0.009 +/- 0.85, you get a better idea of how meaningful your answers are.

  153. Leif Svalgaard says:

    Bart says: June 8, 2011 at 9:43 pm
    “The 22-year peak is not stable. It occurs only in the first half of the data, not in the last half.”
    Nope, it’s still there. You just can’t see it because your analysis method is so lousy.

    Show me your analysis. ‘Nope’ doesn’t cut it.

  154. Bart says:

    It is there. The apparent energy (given the quality of the data) appears to vary, but this is in no way incompatible with the behavior which might be expected of random modal excitation. Moreover, a 21 year cycle (which is within a reasonable error bound) appears clearly in the 20th century direct measurements as well (see Spector @ June 5, 2011 at 12:27 pm).

    Leif, your methods are poor. You use the FFT improperly. You do not understand aliasing. You do not understand transfer functions for FIR filters (the simplest of which is the sliding uniformly weighted average). You do not know what a PSD is. You do not understand stochastic processes. You are belligerent and accusatory with a guy who has been at this for over a quarter of a century, analyzing data and creating models which are employed in real world systems which you have almost certainly unwittingly used.

    I see no value in continuing this conversation.

  155. Leif Svalgaard says:

    Bart says: June 9, 2011 at 10:05 am
    It is there. [...] I see no value in continuing this conversation.
    Show it.
    Here is how Loehle describes his data:
    “The present note treats the 18 series on a more uniform basis than in the original study. Data in each series have different degrees of temporal coverage. For example, the pollen-based reconstruction of Viau et al. (2006) has data at 100-year intervals, which is now assumed to represent 100 year intervals (rather than points, as in Loehle, 2007). Other sites had data at irregular intervals. This data is now interpolated to put all data on the same annual basis. In Loehle (2007), interpolation was not done, but some of the data had already been interpolated before they were obtained, making the data coverage inconsistent. In order to use data with non-annual coverage, some type of interpolation is necessary, especially when the different series do not line up in dating. This interpolation introduces some unknown error into the reconstruction but is incapable of falsely generating the major patterns seen in the results below. An updated version of the Holmgren data was obtained. Data on duplicate dates were averaged in a few of the series. Data in each series (except Viau, because it already represents a known time interval) were smoothed with a 29-year running centered mean (previously called a 30 year running mean). This smoothing serves to emphasize long term climate patterns instead of short term variability. All data were then converted to anomalies by subtracting the mean of each series from that series. This was done instead of using a standardization date such as 1970 because series date intervals did not all line up or all extend to the same ending date. With only a single date over many decades and dating error, a short interval for determining a zero date for anomaly calculations is not valid. The mean of the eighteen anomaly series was then computed for the period 16 AD to 1980 AD. When missing values were encountered, means were computed for the sites having data. Note that the values do not represent annual values but rather are based on running means.”

    My poor understanding was enough to actually conclude that he used a 29-year running mean. Your mistake is to assume that the climate system responds to very many actual cycles [e.g. of 8.5 and 3.6 yrs] and that the proxy data is good enough to find anything less than 30 years.

  156. Bart says:

    I will give an example of what I am talking about. The following code is written using MATLAB. Hopefully, it should be transparent for users of other languages.

    First, set up the constants governing a particular mode with a 23 year quasi-period (resonant frequency near 1/23 year^-1):

    zeta = 0.001;                            % damping ratio of the mode
    a = 2*exp(-zeta*2*pi/23)*cos(2*pi/23);   % AR(2) coefficients placing the poles at
    b = exp(-2*zeta*2*pi/23);                % radius exp(-zeta*2*pi/23), angle 2*pi/23 rad

    Define a data series representing the vibration of a slightly damped oscillating mode driven by Gaussian “white” noise:

    x = zeros(1,1000);
    for k = 3:1000
        x(k) = a*x(k-1) - b*x(k-2) + randn;  % lightly damped resonator driven by white noise
    end

    We want to eliminate the initial transient response, so run it a few times, replacing the starting condition with the previous end condition:

    x(1) = x(999);                           % carry the end state over as the new start
    x(2) = x(1000);
    for k = 3:1000
        x(k) = a*x(k-1) - b*x(k-2) + randn;
    end

    Now, plot “x”. What you should see is something that looks like a fairly steady oscillation with some small amplitude modulation. Now, reevaluate zeta as zeta = 0.01 and repeat. Now, you will see a lot more variation in the amplitude. Run enough cases, and you will see periods in which the oscillation virtually vanishes, only to be stirred up again by later random inputs. Try different values of zeta, and observe what it looks like. A raw FFT will start to show apparent splitting of the frequency line as zeta becomes larger. A properly executed PSD windowed over the appropriate correlation time will resolve the ambiguity.

    The time constant is tau = 23/(2*pi*zeta). zeta = 1 is critical damping, at which point you should no longer see much in the way of coherent oscillation.
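
    For a quick look at the spectral line in the simulated series (raw periodogram only, for brevity; the windowed-autocorrelation estimate described earlier in the thread resolves it more cleanly):

    N = length(x);
    Pxx = abs(fft(x - mean(x))).^2 / N;      % raw periodogram of the simulated mode
    f = (0:N-1)/N;
    plot(f(1:N/2), Pxx(1:N/2))               % peak near 1/23 = 0.0435 cycles/sample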

  157. Bart says:

    “Your mistake…”

    I have made no mistakes. I suggest you study up on the subject and stop digging your hole deeper.

    “…is to assume that the climate system respond to very many actual cycles [e.g. of 8.5 and 3.6 yrs]…”

    Those cycles are in the MLO CO2 data. It is a bad idea to respond when flustered. You tend to miss details.

    “…and that the proxy data is good enough to find anything less than 30 years.”

    Quality of the data is one issue. Ability to “see” particular frequencies is completely independent. Given the transmission characteristics of a 30 (or, 29, it makes little difference) year average, it is entirely possible to detect a 23 year cycle, as I have explained to the point of exhaustion.

    Are we done here? I think we should be.

  158. Bart says:

    Just one final note: I’m not engaging in alchemy, or going off on some flight of fancy of my own here. This is all industry standard operating procedure when designing systems involving compliant structures (buildings, trusses, air frames, what have you) or fluid containment vessels (water distribution (plumbing), pumping stations, fuel tanks…). This is what Finite Element Analysis (surely, you have all heard that catchphrase) is all about: determining the modes of oscillation of distributed parameter (continuum) systems.

  159. Bart says:

    Every continuum system ever anywhere in the universe can be described in this fashion. The Earth and its climate are no exception.

  160. Leif Svalgaard says:

    Bart says: June 9, 2011 at 11:55 am
    I will give an example of what I am talking about.
    Now smooth the data and show what you get.
    Bart says:
    June 9, 2011 at 12:19 pm
    Those cycles are in the MLO CO2 data. It is a bad idea to respond when flustered. You tend to miss details.
    Details brought up by you.
    it is entirely possible to detect a 23 year cycle, as I have explained to the point of exhaustion.
    But you have not shown the result. You claim to detect 23-yr in both halves of the data. Prove it.

    Are we done here? I think we should be.
    If you continue to evade the issue, then perhaps we should be.

  161. Leif Svalgaard says:

    Bart says: June 9, 2011 at 12:32 pm
    Every continuum system ever anywhere in the universe can be described in this fashion. The Earth and its climate are no exception
    No doubt about that, but that you can describe them in this fashion, does not mean that those cycles actually exist as physical entities [which is the only thing of interest - otherwise it would just be numerology]. Remember the old joke about fitting an elephant.

  162. Bart says:

    “…does not mean that those cycles actually exist as physical entities…”

    It would only be shocking if they did not. Along the lines of discovering that gravity is a repulsive force.

    “Prove it.”

    Prove it to yourself. Learn about the subject. For the record, it is undeniably visible. But, if you understood half of what I have been telling you, you would realize it makes no difference whatsoever to my thesis. It chagrins me to say it, but you’ve really gone off the deep end here, Leif.

  163. Bart says:

    “Now smooth the data and show what you get.”

    How about you try this exercise. Generate the data as instructed. Then, pass it through a 29 point sliding average, and run your FFT on it.

    Or, just generate a sinusoid with a 23 point period and pass that through a 29 point sliding average and plot the result. Do you still see the sinusoid? Of course you do, with an amplitude about 1/5 of the initial amplitude. As I’ve told you over, and over, and over, and over, and over, and….
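
    Spelled out (a sketch; the record length and the interior span used to read off the amplitude are arbitrary choices):

    t = 1:500;
    s = cos(2*pi*t/23);              % sinusoid with a 23-sample period
    h = ones(1,29)/29;               % 29-point sliding average
    y = conv(s, h, 'same');          % smoothed series
    max(abs(y(30:470)))              % about 0.19 away from the edges, i.e. roughly 1/5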

  164. Leif Svalgaard says:

    Bart says: June 9, 2011 at 5:20 pm
    Prove it to yourself. Learn about the subject. For the record, it is undeniably visible.
    The hole you are in is that you claim that there is a 22-year cycle in both halves of the data. I have proven to my satisfaction there is not, so show your PSDs. If you do not know how to plot the data or link to your plot, email the (x,y) point values to me and I’ll show them for you.

    it makes no difference whatsoever to my thesis.
    Wrong attitude.

  165. Leif Svalgaard says:

    Bart says: June 9, 2011 at 5:49 pm
    Or, just generate a sinusoid with a 23 point period and pass that through a 29 point sliding average and plot the result. Do you still see the sinusoid? Of course you do,
    I think I have isolated your problem: The Loehle data was not constructed by running a 29-point average over yearly data. The time resolution was much worse: of the order of 30 years or in some cases 100 years with data taken at irregular large intervals, interpolating between the gaps. Imagine you have 30 yearly values that are all the same [because you only have one actual data value], followed by another 30 years of equal [but likely different from the first 30 years] values, and so on. Instead of assuming a constant value, you could interpolate between the scattered points. The values have a large noise component [likely larger than the difference between adjacent 30-yr periods]. This is the data you have to deal with. You claim categorically that you have found a clear 22-yr period in the first half of the data [about a 1000 years] and also in the last half of the data [naturally with the same phase]. This is what I dispute and ask you to demonstrate.

  166. AstroH says:

    Great analysis. Clearly there needs to be more research into feedback responses since computer models obviously couldn’t predict them all.

    However, I would disagree that the continuous negative feedback has a high probability of being in place throughout the 21st century, and that the linearity in the data is likely to continue in the present fashion. Some things to consider are the possible positive feedbacks that would still work despite ongoing negative feedbacks, and although their co-interactions, if any, will likely be non-linear, the additional feedback processes are always an item to consider within any complex system. Some more important factors that could affect the future climate as it pertains to the analysis of a hypothetical negative-feedback inferred from your sinusoid-plus-trend correlation:

    -Lag times between forcings and climate response. This includes both the immediate and long-term effects of various GHGs, solar forcing, oceans, oscillations, ice-melt patterns, etc. For example, the DIRECT immediate solar forcing appears to have a lag time of ~2.2 a (Scafetta and West, 2005).
    -The cloud and water vapor feedbacks. This is a rather complex system: increased tropospheric WV from warmer SSTs would augment the greenhouse effect (Held and Soden, 2000), while recent higher convection in the tropical Pacific combined with a cooler stratosphere has removed this greenhouse gas from the upper levels, reducing overall warming (Rosenlof and Reid, 2008). However, this negative feedback effect only ramped up after 2000, meaning it may represent a tipping point toward negative feedbacks, or it may be inherently unstable and could reverse itself at any time.
    -The CAUSE of post-1860 base warming. Since regular 60-year cycles appear to raise global temperatures by about 0.6C before hitting the peak and cooling by 0.3C, it is important to determine the underlying factor. Is it recovery from the LIA and coinciding ‘solar re-awakening’, or is something more in play here, such as some long-term ocean feedback, an extra forcing from GHGs, or a yet-undiscovered cause? If so, could this effect be weakening, and thus no longer contribute to most of the post-1970 warming, or have GHGs only begun to augment this effect? It is impractical to assume linearity, without knowing what causes it.
    -Undetected positive feedbacks. This of course includes the additional release of GHGs from permafrost melting, pine beetle and fungus population growths, methane clathrate releases, peat bog fires, conflagrations in weakened forests caused by biome shifts, Arctic dipole anomalies resulting in colder winter northern hemisphere continents and thus lowered CO2 absorption in winter, and the like. Many computer models assume the positive feedbacks will outweigh the negative ones, which may be true, but we don’t actually know.
    -Interactions between GHG-induced forcings and other anthropogenic factors such as soot, brown clouds, Arctic haze, the ozone holes, ground-level ozone and contrails. Many of these effects will change over time, as for example the ozone hole has strengthened the Antarctic polar vortex and thus caused surface cooling over East Antarctica, while ozone recovery will have other effects, as will the Arctic polar ozone anomaly, polar stratospheric clouds, noctilucent clouds and depth changes in the Arctic troposphere. Meanwhile, soot blocks out the sun and so may be delaying the GHG-induced warming until it is removed, but soot accelerates polar ice melting when it lands. Contrails have a similar effect, causing both cooling and greenhouse-type warming under various circumstances, and additionally change the water vapor feedback. Many of these factors may simply be delaying, removing altogether or immediately increasing the effects of GHG forcing and associated warming.

    The lowest quoted figure I’ve seen to date for GHG-induced 20th century warming has been a contribution of 0.1C, which this article’s analysis does not contradict, but if the baseline warming cannot explain the warming post-1970 then the effect may be much greater, on the scale of 0.5C of GHG-induced warming. This effect, present and future, will depend on anthropogenic emissions and future feedback processes. It is very likely that climate sensitivity is variable over time, depending on possible ameliorating or augmenting factors such as background CO2 levels, direction of GHG change, rate of temperature and CO2 change, presence of potential feedback factors, ocean CO2 and oxygen, solar activity, ice extent, heat contribution of the ocean, forest cover, water vapor concentrations and others. In one example, the temperature and CO2 trends became decoupled during the Cretaceous, and this may occur sometime in the future. Sea level rise and temperature correlations may also be affected.

    One more thing to consider is the CO2-absorption ability of the oceans, and how it is impacted by conditions such as temperature, salinity, pH, ocean current flow, oxygen content, atmospheric GHGs, bioprocesses, etc. Most of the negative feedbacks can be explained by two things: the biosphere and the hydrosphere. Throughout geological history, CO2 has only had a long-term effect on climate, whereas temperature change is likely to create a positive feedback by raising CO2 levels whenever global temperature warms. Under such circumstances as increasing CO2 when the Earth is completely ice-covered, the biosphere and rock-sequestration processes no longer work, allowing the warming effect to take place more drastically than otherwise. If the oceans become acidic and stagnant, any negative feedback processes will likely, excuse the pun, be negated. Climate change also seems to have an effect on volcanoes.

    Inevitably, the melting of large ice volumes increases the sequestration of CO2, by reducing both salinity and temperature and thus increasing the ocean’s uptake ability for absorbing the GHGs, without necessarily having much of a positive effect on plankton populations. Both the populations of plankton and coral are decreasing drastically, and the recovery will likely be too slow to re-activate the negative feedback processes by absorbing the CO2 that they normally do. The result is likely to be an abundance of positive feedbacks, then a series of negative feedbacks taking their place, assuming that modern GHG emissions continue as usual before plateauing due to resource depletion. Some factors are likely to be linear, some exponential, and others oscillatory. Climate sensitivity may depend on the change itself. As the removal of all GHGs would require a reasonably large sensitivity to drop Earth’s temperatures ~50C lower than today, so should it be reasonable for the rapidity of current GHG increases and associated factors to influence this sensitivity. Of course, my guess is probably no better than computer models, which fail at holistic processing when they receive only 10% of the input required for the holistic process to work.

    The one major positive feature of the article is that it refers to current analysis of warming rather than some oft-quoted graph of Phanerozoic climate proxies being unaffected by assumed long-term CO2 levels. The likelihood of glaciations likely depends on factors other than CO2 and solar output.

    This is most likely not my longest comment on a climate blog, so there’s no need to credit me for taking this onto a skeptic (refrain from non-sequitur River in Africa tangents, please) website.

    REFERENCES
    http://www.annualreviews.org/doi/abs/10.1146%2Fannurev.energy.25.1.441
    http://www.agu.org/pubs/crossref/2008/2007JD009109.shtml
    http://www.fel.duke.edu/~scafetta/pdf/2005GL023849.pdf

  167. Pat Frank says:

    Leif, I made no assumption about functional forms.

    The decision to try a cosine fit stemmed from the observation of a sinusoid in the GISS (land+SST) minus GISS (land-only) difference anomalies, over 1880-2010. That difference sinusoid had a period of about 60 years and showed two full cycles. It’s clearly a sign that there is an oscillation within the SST anomalies.

    There’s no numerology in the difference observation, and it justifies testing a cosine function in a fit to the entire (land + SST) anomaly data set. Whatever you surmise about Craig Loehle’s work, the label of numerology does not apply to the fits in Figure 1.

  168. Pat Frank says:

    Ryan, “Well of course you are correct that it may not be “proper” to fit a sine to a dataset just because its tempting peaks and troughs more or less beg you to do so.”

    By now, given my responses, you ought to know that I didn’t use a cosine fit just because there were attractive peaks and troughs in the centennial air temperature anomalies. I had “more data,” namely the net oscillation that appeared in the (land+SST) minus (land-only) difference anomalies.

    Regarding your comment about the CET, I have test-fit the Central England Temperature anomaly data set. It’s very noisy, but one can get a pretty good fit using a ~60 year period, plus a longer period of 289 years, and a positive linear trend. Starting in 1650, the ~60 year period again propagates nicely into the peaks and troughs at ~1880, ~1940, and ~2005 in the CET data, just as it did in the more limited 130 year instrumental anomalies.

    The line, by the way, implies a net non-cyclic warming of 1.1 C over the 355 intervening years.

  169. Leif Svalgaard says:

    Pat Frank says: June 9, 2011 at 7:53 pm
    Whatever you surmise about Craig Loehle’s work, the label of numerology does not apply to the fits in Figure 1.
    The numerology applies especially to your fits. I can fit a very nice sine wave to the Dow Jones index since 1998. It would be numerology in the same sense as yours is.

  170. Pat Frank says:

    John H, Tamino’s critique centrally depends on invalid models. I’ll have more to say about that on his blog.

  171. Pat Frank says:

    J. Simpson, “CO2 must have some effect according to basic radiation physics, even ignoring the IPCC’s attempts to multiply it up.”

    Radiation physics tells us that added CO2 will put added energy into the climate system. It tells us nothing of what the climate will do with that energy, or how the climate will respond. To presuppose the IPCC’s point of view about a change of temperature specifically in the atmosphere is to impose onto an empirical analysis the very theory being tested. This is to engage in circular analysis.

    Removing the empirically-justified oscillation from the total anomaly data left a positive trend that really is linear within the noise, and extending over the entire 130 year period. There’s no valid point in making an empirical analysis more complicated than the data themselves exhibit. Even dividing the data into early and late trends is a little more than a totally conservative approach to the data would permit. The most empirically conservative view of Figure 1 is that there has been no evident increase in the warming rate of the atmosphere for 130 years.

  172. Pat Frank says:

    Leif. “I can fit a very nice sine wave to the Dow Jones index since 1998. It would be numerology in the same sense as yours is.

    Not correct, Leif. An oscillation is apparent in the GISS (land+SST) minus (land-only) difference anomalies. Likewise in the CRU (land+SST) minus GISS (land-only) difference anomalies.

    Reference to a physical observable puts my analysis distinctly outside your numerical philosophy.

  173. Leif Svalgaard says:

    Pat Frank says: June 9, 2011 at 7:53 pm
    Whatever you surmise about Craig Loehle’s work, the label of numerology does not apply to the fits in Figure 1.
    The numerology applies especially to your fits. I can fit a very nice sine wave to the Dow Jones index since 1997. It would be numerology in the same sense as yours is. Here is the DJI numerology 1997-2008: http://www.leif.org/research/DJI-1997-2008.png
    A straight trend plus a sine curve. The fit is good [I only show the sine part], but has no meaning at all, pure numerology. And so is yours.
    Here I have added the trend back in: http://www.leif.org/research/DJI-1997-2008-with-trend.png

  174. Leif Svalgaard says:

    Pat Frank says: June 9, 2011 at 9:03 pm
    Reference to a physical observable puts my analysis distinctly outside your numerical philosophy.
    I can build a wall in my backyard where the height of the wall [each brick horizontally] is proportional to the DJI, then I have a physical observable. Without a reason or plausible possible explanation, it would always be numerology. Balmer’s famous formula was numerology for a long time: Balmer noticed in 1885 that a single number had a relation to every line in the hydrogen spectrum that was in the visible light region. That number was 364.56 nm. When any integer higher than 2 was squared and then divided by itself squared minus 4, then that number multiplied by 364.56 gave a wavelength of another line in the hydrogen spectrum. Niels Bohr in 1913 ‘explained’ why the formula worked, but the real explanation came only in the 1920s with the advent of quantum mechanics.

  175. Pat Frank says:

    Leif, as I noted, my analysis was justified by a prior physical observable. Your numerical dismissal is ill-founded.

  176. Leif Svalgaard says:

    Pat Frank says: June 9, 2011 at 9:36 pm
    my analysis was justified by a prior physical observable. Your numerical dismissal is ill-founded.
    Without a plausible reason or theoretical expectation, any correlation that appears between physical observables is numerology.

  177. Bart says:

    Leif Svalgaard says:
    June 9, 2011 at 5:51 pm

    “The hole you are in…”

    I am in no hole.

    “…email the (x,y) point values to me…”

    I have no intention of revealing personal information over so trivial a matter. I really don’t give a rodent’s derriere if you believe me or not. Assume I’m lying if you like. My thesis is still compelling.

    Leif Svalgaard says:
    June 9, 2011 at 6:55 pm

    “Wrong attitude.”

    What is my thesis, Leif? Do you have any idea? Go back and read and reread until you understand it. Play around with the simple simulation model I gave to help you understand it.

    Pat Frank says:
    June 9, 2011 at 9:36 pm

    Your analysis is justified by the glaring fact that it is legitimate, due to the ubiquity of sinusoidal inputs and modal responses to noise in every distributed parameter system in the universe, as I have painstakingly documented in the foregoing. Leif is quite simply wrong, but he has a burr in his saddle, and you are not going to satisfy him no matter what you do.

  178. Bart says:

    Pat Frank says:
    June 9, 2011 at 9:03 pm

    Leif. “I can fit a very nice sine wave to the Dow Jones index since 1998. It would be numerology in the same sense as yours is.”

    The funny thing about this is, that is exactly what the quants on Wall Street do every day. And, they make obscene amounts of money doing it.

    They’ve had recent setbacks, mainly because they do more than merely observe; they interact with the system based on their observations. This creates feedback. It became significant feedback in the recent decade, and it was not designed specifically to be stabilizing feedback. But, no investment house has liquidated its financial analysis department in response. And, won’t.

    The government also does a lot of this kind of thing. How do you think they come up with “seasonally adjusted” economic statistics?

  179. Pat Frank says:

    Leif, when a multi-decadal oscillation appears in the difference between two temperature anomaly data sets, one of which is land+SSTs and the other of which is land-only, and when the world oceans are known to exhibit multi-decadal thermal oscillations, one has a direct physical inference. Your numerical dismissal is still ill-founded.

    However, to test this inference further, I took the difference between the cosine-alone portions of the cosine+linear fits to the GISS 1999 (land-only) and GISS 2007 (land+SST) data sets. The difference oscillation of the two fitted cosines alone tracks very well through the oscillation representing the difference of the two full anomaly data sets. This correspondence indicates that the independently fitted cosines capture an oscillation in the original full data sets.

    If anything, the oscillation expressing the difference between the full cosine+linear fits for 1999 (land-only) and 2007 (land+SST) tracks even better through the difference oscillation of the anomaly data sets themselves.

    Both fit differences, like the original anomaly difference oscillation and its cosine fit, have periods of 60 years.

  180. Leif Svalgaard says:

    Bart says: June 10, 2011 at 12:09 am
    Assume I’m lying if you like.
    If there were a statistically significant 22-year signal in the 2000 year temperature reconstruction, that would be an important result. You categorically claim [several times] that there is, based on your superior understanding of distributed parameter systems in the universe. All I ask is that you produce the evidence for that.

    Pat Frank says:
    June 10, 2011 at 12:30 am
    Both fit differences, like the original anomaly difference oscillation and its cosine fit, have periods of 60 years.
    Curve fitting without understanding of the physics is and has always been numerology. Just as Balmer’s formula until it was understood, or the Bode-Titius ‘law’ [ http://en.wikipedia.org/wiki/Titius%E2%80%93Bode_law ].

  181. Pat Frank says:

    Leif, curve fitting data following a valid physical inference, in light of known physical phenomena, and in the context of incomplete physical theory, is not numerology and has never been.

  182. Leif Svalgaard says:

    Pat Frank says: June 10, 2011 at 9:43 am
    curve fitting data following a valid physical inference, in light of known physical phenomena, and in the context of incomplete physical theory, is not numerology and has never been.
    Of course it is numerology. Not to say that numerology cannot be useful, like the example of Balmer’s formula shows. Why are you so upset about numerology? There was a time when the purported relationship between sunspots and geomagnetic disturbances was numerology. A century later we discovered the physical process that takes the relationship out of numerology and into physics. On the other hand, the Bode-Titius law is still numerology.

  183. Bart says:

    Leif – I think maybe you are laboring under a misapprehension that I claimed it was present and at equal strength in the 2nd half of the data. But, as I stated here: “The apparent energy (given the quality of the data) appears to vary, but this is in no way incompatible with the behavior which might be expected of random modal excitation.”

    I then gave you a simple simulation model to give insight into how these processes vary in time. Try setting zeta = 0.01, and observe how the energy of oscillation surges and fades. This is fully compatible with random modal excitation. It only depends on how fast the energy of oscillation dissipates in the absence of reinforcing excitation.

    Some modes, which have ready access to sympathetic energy sinks, drain quickly, and some do not. Those which do not, we tend to see as steadier quasi-periodic oscillations, and these have longer term predictive power. Oscillations at modal frequencies are not necessarily persistent, but they are recurring, due to random forcing input which will occasionally reinforce, and occasionally either fail to reinforce or actually weaken, the oscillations.

  184. Bart says:

    And, of course, there is the question of the quality of the data itself, which may have picked up a particular oscillation at some times, and missed it at others, or may have introduced apparent oscillations all its own. I judge that the ~21-23 year and ~60 year oscillations are real because similar periods of oscillation are picked up in the direct 20th century measurements as well.

  185. Leif Svalgaard says:

    Bart says: June 10, 2011 at 12:11 pm
    And, of course, there is the question of the quality of the data itself, which may have picked up a particular oscillation at some times, and missed it at others, or may have introduced apparent oscillations all its own.
    You claimed a significant signal was present in both halves as visible in your PSDs. You have evaded showing the evidence for that. That settles the issue for me.

    I judge that the ~21-23 year and ~60 year oscillations are real because similar periods of oscillation are picked up in the direct 20th century measurements as well.
    That is an invalid analysis that tries to find power at 5, 10, 20, 40, etc years. And there is power at any period, so clearly Spector would pick up such periods. Show your PSD for the modern data. BTW, we expect there to be a 0.1 degree solar cycle effect in any case.

  186. Pat Frank says:

    Leif, PCA outside of physical theory, e.g., is numerology. Deriving a physically valid inference and fitting observational data using physical reasoning, in the context of known physical phenomena, is not numerology.

    Why do you insist on disparaging semi-empirical work? Data always lead theory. Analyzing such data using physical reasoning is not numerology. It’s standard practice in science when the theory is incomplete.

  187. Bart says:

    For any who would like to get a handle on what I have been talking about, here is a good video to see the modal analysis of a tuning fork. You can find other discussions if you google “modal analysis”. Most hits tend to be in regard to structures. But, you can google “fluid modal analysis” and find some specific to fluids. And, you can look up rheology and modal analysis. That the vibration modes of the Earth’s physical composition should interact with and, to a significant extent, determine its climate should be self-evident (e.g., I would expect the vibration modes of the oceans to appear prominently in climate variables).

  188. Bart says:

    Leif Svalgaard says:
    June 10, 2011 at 12:29 pm

    “You claimed a significant signal was present in both halves as visible in your PSDs.”

    I claimed an observable signal was present in both halves. You have latched onto this triviality like a pit bull, and have blinded yourself to all else. I’m sure others viewing our discussion have formed their own opinions of the validity of my arguments for better or for worse, and there is nothing more I can say now which will change their opinions, so I give up. You are a lost cause.

    “That is an invalid analysis that tries to find power at 5, 10, 20, 40, etc years.”

    Sigh… do you ever, you know, read what people write before forming your opinions?

    …and then I allowed it to optimize the periods as well.

  189. Leif Svalgaard says:

    Pat Frank says: June 10, 2011 at 12:31 pm
    Why do you insist on disparaging semi-empirical work?
    Who says that numerology is disparagement? Numerology is OK as long as you KNOW it is numerology. The problem comes when you begin to believe that your numerology is understanding.

    Bart says:
    June 10, 2011 at 12:41 pm
    That the vibration modes of the Earth’s physical composition should interact with and, to a significant extent, determine its climate should be self-evident
    You have misunderstood the whole issue, which was that the data [Loehle] from the outset has a very coarse sampling [and is not a running average of actual yearly data]. And still no demonstration of the 22-yr cycle in the PSDs for the two halves. Since 2000 years is almost a hundred 22-yr cycles, one could safely divide the span into three periods. I guess that you are no longer claiming that the PSDs that you have already made show a significant 22-yr cycle. If so, that is fine with me, because I don’t see it either.

  190. Bart says:

    Leif Svalgaard says:
    June 10, 2011 at 1:21 pm

    “You have misunderstood the whole issue which was that the data [Loehle] from the outset has a very coarse sampling [and is not a running average of actual yearly data].”

    Yet, that is precisely what you yourself claimed, in so many words:

    The spacing between the peaks is 0.0345 [in frequency per year]. This corresponds to a period of 29.0 years and is likely due to Loehle’s data being 30-yr averages sampled every year…

    What you were seeing was the sinc function response of a 29 or 30 year averaging filter, modulated by the content of the signal.

    “Since 2000 years is almost a hundred 22-yr cycles, one could safely divide the span into three periods.”

    I never said it was a steady state oscillation. I have gone to great lengths to explain why it would not generally be expected to be. All of this has apparently gone sailing right over your head.

    “I guess that you are no longer claiming that PSDs that you have already made show a significant 22-yr cycle.”

    I never did. I said that it was “there”, i.e., that it was observable. And, as it is attenuated by a factor of 1/5 due to the averaging taking place, that would indicate that it is, in fact, much more significant in reality.

    “If so, that is fine with me, because I don’t it either.”

    There’s a lot you don’t see, because you are unfamiliar with spectral estimation methods, and your analysis is very crude.

  191. Bart says:

    Leif Svalgaard says:
    June 10, 2011 at 1:21 pm

    “Who says that numerology is disparagement?”

    From dictionary.com: numerology — n
    the study of numbers, such as the figures in a birth date, and of their supposed influence on human affairs

    At the very least, you are guilty of gross hyperbole.

  192. Leif Svalgaard says:

    Bart says: June 10, 2011 at 1:40 pm
    What you were seeing was the sinc function response of a 29 or 30 year averaging filter, modulated by the content of the signal.
    That is not the case. The data were not sampled every year and then averaged. The raw resolution is only one data point per 30 years [or in some cases 100 years], so no filtering occurred.

    I never did. I said that it was “there”, i.e., that it was observable.
    There is power at any and all frequencies, the thing is if it is significant.

    And, as it is attenuated by a factor of 1/5 due to the averaging taking place
    There is no averaging of higher sampling rate data. I might have expressed that clumsily, but what I have said over and over and over again is that the scarce, widely scattered data from many datasets were lumped into 30-year intervals.

    But you have still not shown the PSDs, so you have no real support for your claims.

    Bart says:
    June 10, 2011 at 1:46 pm
    From dictionary.com: numerology — n
    the study of numbers, such as the figures in a birth date, and of their supposed influence on human affairs

    That particular example very many people are firm believers in. Some even think that the positions of the planets influence the climate and the sun. A better example of [useful] numerology is Balmer’s formula.

  193. Pat Frank says:

    Balmer’s formula wasn’t numerology. It was phenomenological; made to represent a physical observable. Numerology has no particular connection to the physical. Phenomenological equations, by contrast, are used in physics all the time, either when theory is inadequate or when it is too complex to solve exactly.

    Phenomenological approaches to data are classically the bridge that allows observables to be systematically examined when theory fails. When the phenomenological context is physical, the approach is entirely scientific.

    Your use of “numerology” has been distinctly disparaging, Leif.

  194. Bart says:

    Leif Svalgaard says:
    June 10, 2011 at 3:30 pm

    “That is not the case. The data were not sampled every year and then averaged. The raw resolution is only one data point per 30 years [or in some cases 100 years], so no filtering occurred.”

    Not only are you contradicting your earlier post, you are contradicting the source:


    Data in each series were smoothed with a 30-year running mean.

    You even discerned the signature of the pattern of zeros of the transfer function yourself. Maybe I should just sit back and let you argue it out with yourself?

  195. Bart says:

    The funny thing is, the cycle you should have been trying to cast aspersions upon is the ~60 year one, since that is what Pat used in his fit, and it apparently appears in both the 20th century direct measurement data, and in the proxy reconstruction of the last 2000 years. Instead, you threw away your credibility by trying to play gotcha’ games over something you did not understand.

    Numerology, my fanny.

  196. Leif Svalgaard says:

    Pat Frank says: June 10, 2011 at 4:16 pm
    Numerology has no particular connection to the physical.
    Any curve fitting to physical parameters is numerology when there is no theory or plausible expectation that the fit should occur.
    Your use of “numerology” has been distinctly disparaging
    I said that numerology was OK if you KNOW it is numerology. If you deny it is numerology, then it becomes dubious.

    Bart says:
    June 10, 2011 at 7:11 pm
    Not only are you contradicting your earlier post, you are contradicting the source:
    This is what the source says:
    “The present note treats the 18 series on a more uniform basis than in the original study. Data in each series have different degrees of temporal coverage. For example, the pollen-based reconstruction of Viau et al. (2006) has data at 100-year intervals, which is now assumed to represent 100 year intervals (rather than points, as in Loehle, 2007). Other sites had data at irregular intervals. This data is now interpolated to put all data on the same annual basis.”
    No contradiction.

    Bart says:
    June 10, 2011 at 7:19 pm
    The funny thing is, the cycle you should have been trying to cast aspersions upon is the ~60 year one, since that is what Pat used in his fit, and it apparently appears in both the 20th century direct measurement data, and in the proxy reconstruction of the last 2000 years.
    I showed that there is no such 60 year cycle in the 2000-yr series. Now you claim there is, so you have to provide PSDs to back up that claim as well, in addition to the 22-yr cycle you also claim. We are still waiting for you to comply. If you cannot or will not, then you have no credible claims. It looks more and more like this is the case, as you are evading bringing forward any evidence.

  197. Bart says:

    Leif Svalgaard says:
    June 10, 2011 at 10:01 pm

    “No contradiction.”

    Say what? What are the sample rates of all the proxies? How syncopated are the samples for the sparse ones? Ice core measurements can have yearly samples. The resolution decreases with depth, but that is merely an effect of spatial filtering, which effectively is temporal filtering, since the layers accumulate in time. So, there is additional filtering beyond the 30 year sliding average, and the 23 year process is even more than 5X more significant in reality.

    “I showed that there is no such 60 year cycle in the 2000-yr series.”

    You showed nothing of the kind. Your analysis is crap. The 60 year spike has 10X more energy than the attenuated 23 year spike. If the proprietors of this web site wish to and can post it on this thread somehow, they can shoot me an e-mail and I will send them my PSD plot.

  198. Leif Svalgaard says:

    Bart says: June 11, 2011 at 12:52 am
    Say what? What are the sample rates of all the proxies? How syncopated are the samples for the sparse ones? Ice core measurements can have yearly samples.
    Loehle does not give the original data. But his 2000-yr series is the average of 18 data series, each re-sampled to 1-yr resolution, and those he does give. I have here plotted all of them. The heavy black curve is his average temperature reconstruction: http://www.leif.org/research/Loehle-Mean.png
    Here are the individual data series [three to each plot]. It should be clear that the vast majority of the data is too coarse to preserve any cycles of less than 30 years: http://www.leif.org/research/Loehle-1-18.png

    The 60 year spike has 10X more energy than the attenuated 23 year spike. If the proprietors of this web site wish to and can post it on this thread somehow, they can shoot me an e-mail and I will send them my PSD plot.
    None of the ‘spikes’ are significant with the exception of the big 2000-yr wave. Cut the data in two halves, make a PSD for each half and you’ll see. The input data is simply not good enough to show a 22-year cycle even if present.

  199. Bart says:

    “It should be clear that the vast majority of the data is too coarse to preserve any cycles of less than 30 years…”

    Anything that is in there will appear in the analysis. So, it does not matter what the “vast majority” does.

    “None of the ‘spikes’ are significant…”

    You are woefully, painfully wrong. In the raw data, the 88 year process has an RMS of 0.044 degC, the 62 year process 0.041 degC, and the 23 year process 0.013 degC. Adjusting them for the attenuation of the 29 year filter, they should be about 0.05, 0.06, and 0.07 degC RMS, respectively.
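
    For readers checking the arithmetic: the amplitude gain of an N-point running mean at period T (in samples) is the standard result |sin(pi*N/T) / (N*sin(pi/T))|, and applying it to the RMS values quoted above does reproduce the adjusted numbers. A minimal sketch; the 29-year window length and the RMS values are taken from the comment, everything else is the textbook formula.

        import numpy as np

        def running_mean_gain(N, period):
            """Amplitude gain of an N-point moving average at the given period."""
            f = 1.0 / period
            return abs(np.sin(np.pi * N * f) / (N * np.sin(np.pi * f)))

        N = 29  # sliding-average window length in years
        for period, rms in [(88, 0.044), (62, 0.041), (23, 0.013)]:
            g = running_mean_gain(N, period)
            print(period, round(g, 2), round(rms / g, 3))
        # prints gains ~0.83, 0.68, 0.19 and de-attenuated RMS ~0.053, 0.061, 0.070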

  200. Bart says:

    “So, it does not matter what the “vast majority” does.”

    Which is to say, it only serves to weight how prominent it will be in the result.

    I’m not saying my results are gospel truth. I realize that the data are not particularly reliable. But, the quasi-cyclic processes at 88, 62, and 23 years are significant, and reinforce the hypothesis (expectation, really) that there should be modal signatures visible in the data. In the end, this lends support to Pat Frank’s method of analysis, and the conclusion that 20th century temperatures are largely explained by natural quasi-cyclic processes reinforcing constructively to drive them up to a local peak in the first decade of the 21st century.

  201. Bart says:

    “Cut the data in two halves, make a PSD for each half and you’ll see. “

    This is an arbitrary, capricious, and irrelevant method of analysis of a stochastic signal. I have explained this over and over and over, and it simply does not penetrate. The hump is still there, albeit at lower energy. And, this is perfectly ordinary for a modal excitation with particular energy dissipation characteristics.

  202. Bart says:
    Adjusting them for the attenuation of the 29 year filter, they should be about 0.05, 0.06, and 0.07 degC RMS, respectively.
    Which all are smaller than the standard error quoted by Loehle of 0.14 degC. And you have not shown the PSDs yet. Remember to split the data into two halves. What do you mean by ‘raw’ data? The 18 series that I plotted here:
    http://www.leif.org/research/Loehle-1-18.png

  203. Bart says:

    Please note (because I don’t want anyone to be confused), the RMS of the 62 year process being at 0.041 degC, and the 23 year process being at 0.013 degC, means that the energy ratio is (0.041/0.013)^2 = 9.95 or about 10X, as I stated above. Adjusting for the weighting of the sliding average shows these processes have comparable RMS.

    On this: “I realize that the data are not particularly reliable.” I also realize that the quasi-cyclic processes I observe could be artifacts of the way the data were constructed by Loehle. But, the 62 year and 23 year processes have similar periods to those noted in the 20th century data, so I suspect they are related.

  204. Bart says:

    “The hump is still there, albeit at lower energy. And, this is perfectly ordinary for a modal excitation with particular energy dissipation characteristics.”

    It would be perfectly ordinary if it disappeared into incoherent noise altogether for a time. This is why I provided everyone with the simple simulation model to observe how such processes might behave.
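
    Bart’s simulation model itself is not reproduced in this thread, but the behavior he describes, a lightly damped mode persistently re-excited by random forcing, can be sketched as a second-order (AR(2)) process driven by white noise. All parameters here are illustrative only.

        import numpy as np

        rng = np.random.default_rng(0)
        period, damping, n = 60.0, 0.02, 2000      # ~60-yr mode, light damping
        w = 2.0 * np.pi / period
        # AR(2) coefficients of a sampled, noise-driven damped oscillator
        a1 = 2.0 * np.exp(-damping) * np.cos(w)
        a2 = -np.exp(-2.0 * damping)

        x = np.zeros(n)
        for k in range(2, n):
            x[k] = a1 * x[k - 1] + a2 * x[k - 2] + rng.standard_normal()
        # x surges and fades irregularly, and can sit near the noise floor for
        # stretches, yet its PSD keeps a broad hump near 1/60 cycles per year.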

  205. Bart says:

    “Which all are smaller than the standard error quoted by Loehle of 0.14 degC.”

    Let’s suppose I have data with 0.14 degC independent standard error masking a bias of some value, and I perform a sample mean of 1024 data points. Can I discern the bias to less than 0.14 degC?

    Yes. The square root of 1024 is 32, so my sample mean standard deviation is 0.0044.
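
    That arithmetic is easy to verify numerically, along with the caveat about independence raised just below; a sketch, where the AR(1) correlation of 0.9 is illustrative only.

        import numpy as np

        rng = np.random.default_rng(0)
        N, sigma, trials = 1024, 0.14, 4000

        # Independent errors: sample-mean scatter shrinks as sigma / sqrt(N)
        m = rng.normal(0.0, sigma, (trials, N)).mean(axis=1)
        print(m.std())                     # ~0.0044, as stated

        # Correlated (AR(1)) errors: the shrinkage is far weaker
        phi = 0.9
        e = rng.normal(0.0, sigma * np.sqrt(1.0 - phi**2), (trials, N))
        x = np.zeros((trials, N))
        for k in range(1, N):
            x[:, k] = phi * x[:, k - 1] + e[:, k]
        print(x.mean(axis=1).std())        # roughly sqrt((1+phi)/(1-phi)) ~ 4.4x larger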

  206. Bart says:

    I am away for the rest of the day. Temporary non-response should not be construed as acquiescence.

  207. Bart says:
    June 11, 2011 at 12:04 pm
    It would be perfectly ordinary if it disappeared into incoherent noise altogether for a time. This is why I provided everyone with the simple simulation model to observe how such processes might behave.
    1000 years is just not ‘for a time’. It is a very valid procedure to test if the signal is present in subsets of the data. Now, you might have a point if 88, 62, and 23 years were the only peaks in the PSDs, but they are not, so show the PSDs. The signal is what is important; don’t inflate it by squaring it to get the power. The 18 series are here: http://www.leif.org/research/Loehle-18-series.xls. Calculate the PSDs for each and report on what you find.
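
    The split-halves test proposed here takes only a few lines; a sketch with synthetic data standing in for the reconstruction, and scipy.signal.periodogram as my choice of estimator (not necessarily what either party used).

        import numpy as np
        from scipy.signal import periodogram

        rng = np.random.default_rng(0)
        t = np.arange(2000)                              # 2000 'years'
        y = 0.1 * np.cos(2 * np.pi * t / 60) + 0.2 * rng.standard_normal(t.size)

        for half in (y[:1000], y[1000:]):
            f, P = periodogram(half, fs=1.0, detrend='linear')
            k = 1 + np.argmax(P[1:])                     # skip the zero-frequency bin
            print(1.0 / f[k])                            # strongest period, in years
        # A genuine ~60-yr cycle should surface near 60 in both halves.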

  208. Pat Frank says:

    Leif, what I’m saying is that what you’re calling numerology isn’t numerology. It’s physical phenomenology, which is entirely within the scope of science.

    You’re misappropriating “numerology,” and using it as a term to disparage a completely valid process in science in general, and the analysis I did in particular.

  209. Bart says:
    June 11, 2011 at 12:17 pm
    Let’s suppose I have data with 0.14 degC independent standard error masking a bias of some value, and I perform a sample mean of 1024 data points. Can I discern the bias to less than 0.14 degC?
    Yes. The square root of 1024 is 32, so my sample mean standard deviation is 0.0044.

    Except the data points are not independent.

  210. Pat Frank says:
    June 11, 2011 at 12:42 pm
    You’re misappropriating “numerology,” and using it as a term to disparage a completely valid process in science in general, and the analysis I did in particular.
    The Titius-Bode ‘law’ is still a good example:
    The law relates the semi-major axis, a, of each planet outward from the Sun in units such that the Earth’s semi-major axis is equal to 10: a = 4 + n, where n = 0, 3, 6, 12, 24, 48, … and each value of n [except the first two] is twice the previous one.
    You would call this “physical phenomenology”. It fitted well until Neptune was discovered, but then failed at the first case outside of the defining domain. The Titius-Bode law was discussed as an example of fallacious reasoning by the astronomer and logician Peirce in 1898. The planetary science journal Icarus does not accept papers on the ‘law’.
    As I said, it is OK to do numerology as long as you KNOW it is that. Once you believe that you can extrapolate outside of the defining domain without having another reason than just the fit, it is no longer OK.
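
    For concreteness, here is the ‘law’ against modern semi-major axes; the values, in AU, are from standard tables, and the comparison is a sketch only.

        # Titius-Bode: a = 0.4 + 0.3 * 2**k AU, with Mercury taking the k -> -inf term
        bodies = [("Mercury", 0.39, 0.4), ("Venus", 0.72, 0.7), ("Earth", 1.00, 1.0),
                  ("Mars", 1.52, 1.6), ("Ceres", 2.77, 2.8), ("Jupiter", 5.20, 5.2),
                  ("Saturn", 9.54, 10.0), ("Uranus", 19.19, 19.6), ("Neptune", 30.07, 38.8)]
        for name, actual, law in bodies:
            print(f"{name:8s} actual {actual:6.2f} AU   law {law:5.1f} AU")
        # Striking agreement out to Uranus; a gross failure at Neptune, the first
        # case found outside the domain that defined the fit.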

  211. Bart says:

    “Except the data points are not independent.”

    It does not matter if the “data” are not independent. What matters is if the errors are independent. Or, more precisely, how they are correlated.

    “1000 years is just not ‘for a time’.

    In geological terms, it is the blink of an eye. Moreover, it did not just disappear. It is simply near the noise floor (though still observable).

    “It fitted well until Neptune was” yada, yada, yada…

    But, there is no physical reason to expect that the planets would be arranged thusly. There is ample reason, as I have solidly justified, to expect sinusoidal variations in variables describing natural physical processes.

  212. Bart says:

    Should have said “It is simply nearer the noise floor.” Taking a second look, apparent average power is reduced by about 1/2, so RMS is about 70% of what it was. But, the PSD itself is significantly more variable, because I have half the data to smooth, so this is not, by any means, a certainty.

    You are so far off base here, Leif. You really have no idea what you are talking about. If I find a way to post the graphic where you can see it, you will be embarrassed (or should be).

  213. Bart says:
    June 11, 2011 at 6:53 pm
    What matters is if the errors are independent
    For running means they are not.

    But, there is no physical reason to expect that the planets would be arranged thusly. There is ample reason, as I have solidly justified, to expect sinusoidal variations in variables describing natural physical processes.
    You have not justified that at all, and one does not expect sinusoidal variations in all natural physical processes. Many [most] are irreversible, e.g. making an omelet, a tornado going through, a solar flare going off, or a star burning its fuel.

    Bart says:
    June 11, 2011 at 7:10 pm
    You are so far off base here, Leif. You really have no idea what you are talking about. If I find a way to post the graphic where you can see it, you will be embarrassed (or should be).
    I don’t seem to be getting through to you, so perhaps as you suggested it is not even worth trying.

  214. Bart says:

    “For running means they are not.”

    That depends on the correlation of the raw data. If they are uncorrelated sample to sample, a single average of a sliding average filter is still uncorrelated with other averages with which it does not overlap. Furthermore, while the sample mean reduces uncertainty for uncorrelated data by the inverse square root of N, where N is the number of samples, the uncertainty in estimates of other properties can go down faster than this. For example, when performing a linear trend on data with uncorrelated errors, the uncertainty in the slope estimate goes down as the inverse of N to the 3/2 power. When the thing you are trying to estimate has a definite pattern or signature which can be easily discerned from the noise, you can get very rapid reduction in uncertainty.
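
    The N^(-3/2) scaling claimed here is standard least-squares theory and can be checked numerically; a sketch, using np.polyfit for the fit and white noise as an assumption.

        import numpy as np

        rng = np.random.default_rng(0)
        trials, sigma = 3000, 1.0
        for N in (100, 400):           # quadrupling N should shrink the error ~8x
            t = np.arange(N)
            slopes = [np.polyfit(t, rng.normal(0.0, sigma, N), 1)[0]
                      for _ in range(trials)]
            # Theory: std(slope) = sigma * sqrt(12 / (N * (N**2 - 1))) ~ sqrt(12) / N**1.5
            theory = sigma * np.sqrt(12.0 / (N * (N**2 - 1)))
            print(N, np.std(slopes), theory)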

    “Many [most] are irreversible, e.g. making an omelet, a tornado going through, a solar flare going off, or a star burning its fuel.”

    Poke your omelet with a fork. Does it not tend to jiggle in response at a particular frequency? Are tornadoes not spawned cyclically? Are the Sun and other stars really exceptions to this universal rule?

    “I don’t seem to be getting through to you…”

    What is getting through to me is that you haven’t studied PSD estimation, but you think it can’t be done any better than just running an FFT over the data. And, your presumption is that anyone who tells you differently is unworthy of any respect or credence.

  215. Bart says:
    June 12, 2011 at 3:07 am
    That depends on the correlation of the raw data.
    There are 18 samples. The number quoted [0.14] is not the standard deviation, but is already divided by the square root of 17.

    this universal rule?
    There is no such universal rule. You are confusing ‘natural’ vibrations where the frequency depends on the vibrating matter [and which are quickly damped out] and ‘forced’ vibrations where the frequency is that of an applied cyclic force [e.g. the solar cycle].

    PSD estimation, but you think it can’t be done any better than just running an FFT over the data. And, your presumption is that anyone who tells you differently is unworthy of any respect or credence.
    The PSD is no more than the FT of the autocorrelation of the signal. In a more innocent age long ago, the autocorrelation function was often used to tease out periodicities, e.g. http://www.leif.org/research/Rigid-Rotation-Corona.pdf and can, of course, today be used for the same purpose.
    What I’m not getting through to you is the nature of the Loehle data. Here are the FFT of series 1, 7, and 13. http://www.leif.org/research/Loehle-1-7-13.png. The peaks are artifacts of the construction of the data, e.g. the lack of power at 29, 29/2, 29/3, 29/4, etc for series 7. The data is simply not good enough for demonstrating a 22-yr peak. And may not be good enough either for a 60-yr peak, in which case I may have been guilty of overreaching when stating that there was no genuine 60-yr peak in the Loehle reconstruction.
    The respect is undermined a bit by your use of words like ‘capricious’ and ‘crap’, but let that slide, your words speak for themselves.
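
    The remark above that the PSD is the Fourier transform of the autocorrelation is the Wiener-Khinchin relation, and for a finite series it can be checked directly: the periodogram equals the Fourier transform of the circular autocorrelation. A minimal numpy sketch:

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.standard_normal(256)

        # Periodogram: squared magnitude of the Fourier transform
        P_direct = np.abs(np.fft.fft(x))**2 / x.size

        # Wiener-Khinchin: Fourier transform of the circular autocorrelation
        X = np.fft.fft(x)
        acf = np.fft.ifft(X * np.conj(X)).real / x.size
        P_from_acf = np.fft.fft(acf).real

        print(np.allclose(P_direct, P_from_acf))   # True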

  216. Bart says:

    Leif Svalgaard says:
    June 12, 2011 at 9:16 am

    “The number quoted [0.14] is not the standard deviation, but is already divided by the square root of 17.”

    I am talking about temporal smoothing of the time series in order to pick out components of the signal.

    “…which are quickly damped out… ”

    They are not, in general, quickly damped out – damping is a function of energy dissipation, and energy dissipation depends on the availability of sympathetic sinks. When a lightly damped mode is excited by persistent random forcing, it gets persistently regenerated. Even the “forced” vibrations you speak of are actually excitations of extremely lightly damped overall system modes.

    “The PSD is no more than the FT of the autocorrelation of the signal.”

    But, a PSD estimate formed by the Fourier transform of noisy data is NOT consistent (in a statistical sense) or well behaved. There are reams of literature on this subject, on how to improve the estimation process, to which you choose to turn a blind eye. I gave you pointers on the subject here.

    A proper PSD estimate clearly shows well defined peaks at the 88 year, 62 year, and 23 year associated frequencies as I have stated. And, your analysis is crap.
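
    The statistical point being invoked here is standard: the raw periodogram of a noisy series has a standard deviation comparable to its mean at every frequency, no matter how long the record, while segment-averaged estimators such as Welch’s method (available in scipy.signal) trade resolution for a consistent estimate. A minimal illustration, with white noise as the test signal:

        import numpy as np
        from scipy.signal import periodogram, welch

        rng = np.random.default_rng(0)
        x = rng.standard_normal(4096)            # white noise: the true PSD is flat

        f1, P1 = periodogram(x, fs=1.0)
        f2, P2 = welch(x, fs=1.0, nperseg=256)   # ~31 averaged, overlapping segments

        print(P1[1:].std() / P1[1:].mean())      # ~1: scatter as large as the level
        print(P2[1:].std() / P2[1:].mean())      # several times smaller after averaging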

  217. Bart says:

    “Even the “forced” vibrations you speak of are actually excitations of extremely lightly damped overall system modes.”

    Eventually, the energy of the universe will all be locked away where no more sinks are available, and the universal heat death will ensue.

  218. Bart says:

    “The respect is undermined a bit by your use of word like ‘capricious’ and ‘crap’…”

    And, yours, by the use of words like “numerology”, and by categorical statements like “Fourier analysis on global temperatures [e.g. Loehle's] show no power at 60 years, or any other period for that matter” when you have not performed a valid analysis.

  219. Bart says:

    capricious adjective 1. subject to, led by, or indicative of caprice or whim; erratic:

    You seem to think it means something other than what it does.

  220. Bart says:

    “The data is simply not good enough for demonstrating a 22-yr peak.”

    We can argue the quality of the data separately. The peak is there regardless. As I stated, I believe it is likely valid simply because a similar peak also appears in the 20th century data. This latter point on its validity is open to debate. The existence of the peak in a proper PSD estimate from the data is not.

  221. Bart says:

    “When a lightly damped mode is excited by persistent random forcing, it gets persistently regenerated.”

    A rather dramatic example of this, taught to neophyte engineering students, is the excitation of the twisting mode of the Tacoma Narrows Bridge, which led to its ultimate collapse.

  222. Bart says:

    Indeed, the universality of modal excitation extends to the quantum world. The vibration modes of an atom are those of the bound states of electrons. In classical mechanics, these modes would eventually decay through loss of energy. But, they are continually excited by the quantum potential in the Hamilton-Jacobi equation, as presented by Bohm (pages 28, 29).

  223. Pat Frank says:

    I’ve now posted a three-part response to Tamino’s second round of criticisms. We’ll see how it fares.

  224. Pat Frank says:

    Leif, “The Titius-Bode ‘law’ … You would call this “physical phenomenology”.”

    No, I wouldn’t. I’d call the Rydberg formula an example of physical phenomenology. It was made in the absence of an over-riding explanatory theory, but derived to describe observations while hewing as much as possible to known physics. Other examples include linear free energy relationships in organic chemistry, such as the Hammett equation.

  225. Pat Frank says:
    June 12, 2011 at 6:10 pm
    I’d call the Rydberg formula an example of physical phenomenology. It was made in the absence of an over-riding explanatory theory, but derived to describe observations while hewing as much as possible to known physics.
    It was made contrary to the explanatory theory of the day [Maxwell] and was not ‘hewing’ as much as possible to known physics. It was completely contrary to known physics. And was numerology in its day.

    Bart says:
    June 12, 2011 at 1:50 pm
    ‘capricious’ You seem to think it means something other than what it does.
    In what meaning did you employ it?

    Your infatuation with cyclomania shall stand for your own account, disconnected from reality. BTW, the ever faster expanding Universe will always have a heat sink.

    There will always be peaks, the question is if they are significant in view of the data.

  226. Pat Frank says:
    June 12, 2011 at 6:10 pm
    I’d call the Rydberg formula an example of physical phenomenology. It was made in the absence of an over-riding explanatory theory, but derived to describe observations while hewing as much as possible to known physics.

    From http://www.chemteam.info/Electrons/Balmer-Formula.html :
    “At the time, Balmer was nearly 60 years old and taught mathematics and calligraphy at a high school for girls as well as giving classes at the University of Basle. [...] Balmer was devoted to numerology and was interested in things like how many sheep were in a flock or the number of steps of a Pyramid. He had reconstructed the design of the Temple given in Chapters 40-43 of the Book of Ezekiel in the Bible. How then, you may ask, did he come to select the hydrogen spectrum as a problem to solve?

    One day, as it happened, Balmer complained to a friend he had “run out of things to do.” The friend replied: “Well, you are interested in numbers, why don’t you see what you can make of this set of numbers that come from the spectrum of hydrogen?” [...] Many of the experimentally measured values were very, very close to Balmer’s values, within 0.1 Å or less. There was at least one line, however, that was about 4 Å off. Balmer expressed doubt about the experimentally measured value, NOT his formula!”

    From http://www.owlnet.rice.edu/~dodds/Files231/atomspec.pdf :
    “Although the formula was very successful, it was only numerology until the development of quantum mechanics led to a spectacularly successful explanation of all atomic spectra and many similar puzzles”

    From http://www.theophoretos.hostmatrix.org/quantummechanics.htm
    “A Swiss school mathematics teacher, Johann Jakob Balmer, tried to find a formula involving whole numbers which would predict exactly the frequencies of the four prominently visible spectra lines of hydrogen; if he could, then he would have discovered the eidos underlying the hydrogen spectra lines. And he did find the formula in 1885 [..] The formula was a feat of numerology, not of physics.”

    And so on.

  227. Bart says:

    Leif Svalgaard says:
    June 12, 2011 at 8:19 pm

    “In what meaning did you employ it?”

    Coupled with “arbitrary”, as in the legal phrase.

    “There will always be peaks, the question is if they are significant in view of the data.”

    Exactly. Such behavior is the rule rather than the exception. So, when you see two full cycles of an evidently periodic process, as we do in the 20th century global temperature record, it is fully reasonable to expect that this may be the expression of a major mode of the system which has been recently, or is still being, excited.

    On the subject of “numerology”, everything we know about the natural world can be traced back to empirical measurements. Here is another example of successful empiricism: What do we call the transformation of Special Relativity? Why is it not called the “Einstein Transformation”?

  228. Bart says:
    June 13, 2011 at 8:53 am
    Leif Svalgaard says:
    Coupled with “arbitrary”, as in the legal phrase.
    There is nothing arbitrary in dividing the data into two consecutive subsets.

    “There will always be peaks, the question is if they are significant in view of the data.”
    Exactly. Such behavior is the rule rather than the exception.

    No, most peaks are not significant, and especially not with this particular data. There are statistical methods for estimating the significance of the peaks. Try to use them.

    On the subject of “numerology”, everything we know about the natural world can be traced back to empirical measurements.
    Numerology is using these empirical numbers without physical justification. This was clearly shown in the several links I gave about Balmer’s formula. Here is another one: the height of the Cheops pyramid is very close to a nano-Astronomical unit [1 nano-AU is about 149.6 m; the pyramid’s original height was about 146.6 m]. Clearly, the Egyptians must have known the accurate distance to the Sun.

  229. Pat Frank says:

    Leif, you’ve now defined “numerology” as “using these empirical numbers without physical justification,” which shows your use of “numerology” is merely a pejorative as you have applied it to my analysis.

    From the very first, the cosine fit was physically justified by the difference anomaly oscillation traced to sea surface temperatures, and their multidecadal oscillations.

    Your criticism is physically groundless, Leif, and your unflagging continuance in it has become indistinguishable from an insistent personal contrarianism. Sorry to say.

  230. Pat Frank says:
    June 13, 2011 at 11:30 am
    From the very first, the cosine fit was physically justified by the difference anomaly oscillation traced to sea surface temperatures, and their multidecadal oscillations.
    What is not justified is the assumption that whatever relationship you find is valid outside of the domain you used to find it. The assumption that it is, is the numerology, because you have no theory to suggest that there is a specific mechanism at work, with an estimate of the period to expect.

  231. Bart says:

    Leif Svalgaard says:
    June 13, 2011 at 9:22 am

    “There is nothing arbitrary in dividing the data into two consecutive subsets.”

    There is. PSD estimates are biased by the finite length of the data window. When you shorten the data window for no particular reason, you degrade the estimate, especially at low frequencies. Even stationary, ergodic processes do not necessarily behave consistently within an arbitrarily small data window.

    “No, the most peaks are not significant…”

    Yes, they are. Laughably, absurdly so. You just don’t have the tools to see it.

    “…and especially not with this particular data.”

    Which particular data? The proxy historical data, or the 20th century data? The peaks are most definitely significant in the former. The only valid argument against them is that they are of dubious provenance. However, having just completed a PSD analysis of the latter, I can tell you there are significant peaks at frequencies corresponding to periods of roughly 64, 22, 9.6, and 1.0 years.

    That peaks of 62-64 years and 22-23 years appear in both the historical data and the 20th century measurements gives me good reason to believe they are due to the same modal excitation, and are recurring processes.

    “…the height of the Cheops pyramid is very close to a nano-Astronomical unit.”

    If lengths of a nanoAU appeared ubiquitously in every natural and man-made formation we ever saw, I’d not only say your analogy were valid, I’d say you were onto something.

  232. Bart says:

    Leif Svalgaard says:
    June 13, 2011 at 11:38 am

    “What is not justified is the assumption that whatever relationship you find is valid outside of the domain you ysed to find it.”

    To an extent, I agree with you on this. This is a random process. It is correlated such that you can say what is likely to happen in the future, but you can never say with absolute assurance that it shall happen. There may, additionally, be other processes which assert themselves in the years ahead.

    However, at this time, the hypothesis of a strong and recurring 60 year quasi-periodic process having been responsible for the apparent upsurge in global temperatures in the latter half of the 20th century cannot be lightly dismissed. It is at least as well founded as the idea that, because CO2 is going up, and temperatures went up, CO2 is responsible for the rise. More so, because the CO2 hypothesis does not explain the recent pause.

  233. Bart says:

    Pat Frank says:
    June 12, 2011 at 5:55 pm

    Very good. I’m surprised he allowed your mauling of his arguments to appear.

  234. Bart says:
    June 13, 2011 at 12:05 pm
    Even stationary, ergodic processes do not necessarily behave consistently within an arbitrarily small data window.
    These were not ‘arbitrarily’ small. Each window has about a thousand points in it, and would show anything significant. Your PSD on the 20th century data shows that 150 points are enough.

    Yes, they are. Laughably, absurdly so. You just don’t have the tools to see it.
    You have not shown anything; you just make the same absurd claims over and over.

    Which particular data? The proxy historical data, or the 20th century data?
    Both.

    “…the height of the Cheops pyramid is very close to a nano-Astronomical unit.”
    If lengths of a nanoAU appeared ubiquitously in every natural and man-made formation we ever saw, I’d not only say your analogy were valid, I’d say you were onto something.

    Most people would not be so dumb as to suggest that two cycles of 60 years means that one is on to something.

    Bart says:
    June 13, 2011 at 12:13 pm
    It is at least as well founded as the idea that, because CO2 is going up, and temperatures went up, CO2 is responsible for the rise. More so, because the CO2 hypothesis does not explain the recent pause.
    This is a fallacious argument of the same kind as “a stone cannot fly, you cannot fly, ergo you are a stone”.

  235. Bart says:

    Leif Svalgaard says:
    June 13, 2011 at 1:44 pm

    “Your PSD on the 20th century data shows that 150 points are enough.”

    Apples and oranges. The 20th century data is much better behaved, and does not require as much smoothing.

    “You have not shown anything. Just make the same absurd claims over and over.”

    Ditto for you. You have not shown anything meaningful. I have asked WUWT if I could send them a jpeg of the plot. They have not responded, so I guess that means they cannot. But, you would be amazed at what a professional in these matters can do.

    “Most people would not be so dumb as to suggest that two cycles of 60 years means that one is on to something.”

    Most people are dumb.

    ‘This is a fallacious argument of the same kind as “a stone cannot fly, you cannot fly, ergo you are a stone”.’

    You lost me there, chief. Take a few breaths, and try again.

  236. Bart says:

    The 20th century data are much better behaved, and do not require as much smoothing.

  237. Bart says:

    “You lost me there, chief. Take a few breaths, and try again.”

    Maybe you are saying that the argument “because CO2 is going up, and temperatures went up, CO2 is responsible for the rise” is fallacious. Indeed, it is. That was the point.

  238. Bart says:
    June 13, 2011 at 1:56 pm
    The 20th century data is much better behaved, and does not require as much smoothing.
    You are suggesting that a thousand points are not enough…

    But, you would be amazed at what a professional in these matters can do.
    I’m amazed how wrong a professional can be…

    Most people are dumb.
    Most people do not display it as vividly.

    You lost me there, chief. Take a few breaths, and try again.
    For the slow ones: just because one argument is wrong does not mean that another one is right.

  239. Bart says:
    June 13, 2011 at 1:56 pm
    I have asked WUWT if I could send them a jpeg of the plot.
    http://photobucket.com/ is your friend. You can place the figures there.

  240. Bart says:

    Leif Svalgaard says:
    June 13, 2011 at 2:11 pm

    “You are suggesting that a thousand points are not enough…”

    I am doing more than suggesting it. I am telling you. It is not enough to resolve the low frequency region nearly as well as twice the number of data points does. Why does that surprise you, when you have been arguing so strenuously that the data are lousy?

    “For the slow ones: just because one argument is wrong, does not mean the another one is right.”

    For the slowest of the slow, that was not the argument. The argument was: “However, at this time, the hypothesis of a strong and recurring 60 year quasi-periodic process having been responsible for the apparent upsurge in global temperatures in the latter half of the 20th century cannot be lightly dismissed.” This argument has been amply justified in the foregoing thread.

    If all you’ve got left are personal attacks, when you have been so thoroughly discredited on this subject, we really have reached the end of the conversation. Here is a primer on PSD estimation which you may find illuminating.

  241. Bart says:

    Leif Svalgaard says:
    June 13, 2011 at 2:20 pm

    “You can place the figures there.”

    Why didn’t you say so? Here you go.

  242. Bart says:
    June 13, 2011 at 2:36 pm
    It is not enough to get nearly as good resolution of the low frequency region as twice the number of data points is. Why does that surprise you, when you have been arguing so strenuously that the data are lousy?
    Do it anyway. After all it is almost 50 periods of the 23-yr peak.

    The argument was:
    Your argument involved CO2.

    If all you’ve got left are personal attacks, when you have been so thoroughly discredited on this subject
    I am always willing to learn, but all you can do is use words like ‘crap’, ‘lousy’, ‘woefully wrong’, ‘discredited’, etc. So who is the attacker?

  243. Pamela Gray says:

    Bart, you use the example of CO2 not explaining the pause to argue that, since your thesis does explain it, yours is the stronger argument. But please consider that there are other explanations for the pause in the trend. Do you dismiss those in preference to your thesis?

  244. Pat Frank says:

    Leif, you wrote, “What is not justified is the assumption that whatever relationship you find is valid outside of the domain you used to find it. The assumption that it is, is the numerology…

    I made no such assumption. My entire analysis concerned the 130 year anomaly trend and the oscillation that is apparently within it. The assumption of extension is your misinference.

    You’ve made the same mistake as Tamino, but have applied it differently. Tamino thinks that unless one can extrapolate a data oscillation into the future, the oscillation is not present in the data at all. This is prima facie nonsense, and Tamino also reveals that he has apparently never heard of beat frequencies.

  245. Bart says:

    Leif Svalgaard says:
    June 13, 2011 at 3:10 pm

    “Do it anyway.”

    I have told you what I get. I have explained why this is a meaningless test. I do not see any point in pursuing it any further.

    “your argument was involving CO2.”

    No, that is the AGW argument. I was comparing and contrasting this argument with that one.

    “So who is the attacker?”

    We both have been guilty. I have been very frustrated with your willingness to go out on such a weak limb to justify your intransigence. Prior to this thread, I would not have expected it of you. But, for my part, I apologize for the heated language.

    Pamela Gray says:
    June 13, 2011 at 3:46 pm

    “Do you dismiss those in preference to your thesis?”

    I dismiss them because they appear ad hoc and epicyclic, an attempt to shoehorn a predetermined verdict into an existing set of rebellious data.

    I know that systems governed by partial differential equations always respond to random inputs based on their eigenmodes. I know that a set of partial differential equations “describe how the velocity, pressure, temperature, and density of a moving fluid are related” (i.e., the oceans and the atmosphere). I know that I see, in particular, 20-ish and 60-ish year peaks in measured and proxy global temperature PSDs.

    I think this should be a serious contender for explaining the temperature record of the 20th century.

  246. Pat Frank says:

    Thanks for your support, Bart. It’s not a happy experience posting there.

  247. Pat Frank says:
    June 13, 2011 at 4:26 pm
    You’ve made the same mistake as Tamino, but have applied it differently. Tamino thinks that unless one can extrapolate a data oscillation into the future, the oscillation is not present in the data at all. This is prima facie nonsense, and Tamino also reveals that he has apparently never heard of beat frequencies.
    Of course the wave is in the data. On the other hand, if one cannot extend the wave into the future, then it has little interest. Extending it is the numerology. Now, if you tell me that your wave has no predictive power, then, of course, you are off the hook as far as numerology is concerned, but then your wave is not really of interest anymore.

    Bart says:
    June 13, 2011 at 4:52 pm
    I have told you what I get. I have explained why this is a meaningless test. I do not see any point in pursuing it any further.
    You are missing a teaching moment. And your explanation is no good.

    No, that is the AGW argument. I was comparing and contrasting this argument with that one.
    I don’t see why dragging AGW into it has any meaning. You were saying that because you believe AGW is wrong, you must be right. This is fallacious.

  248. Bart says:

    Leif Svalgaard says:
    June 13, 2011 at 5:16 pm

    “You are missing a teaching moment.”

    You appear to be suffering from a delusion as to who is the teacher, and who is the pupil here. I seriously doubt you could reiterate what my explanation was with any coherence. If this is how you see things, we are most decidedly done.

    Pat Frank says:
    June 13, 2011 at 5:11 pm

    Chin up, Buckaroo. You said it yourself: these guys don’t even know what beats are. To those who flung their feces, you never had a chance of reaching them anyway. But, others more thoughtful and less vocal were undoubtedly impressed. Small moves, Pat. Small moves.

  249. Bart says:
    June 13, 2011 at 6:12 pm
    “You are missing a teaching moment.”
    You appear to be suffering from a delusion as to who is the teacher, and who is the pupil here.

    I think you have this backwards, but if you don’t want to, so be it.

    Your 88-year cycle is larger than the 62 and 23-year cycles. Where is it in the modern data?
    On your plot, the 88-year cycle has a power of 0.31 deg^2, which would mean a clear signal of 1 degree; no such signal is to be seen in the original data.

  250. Pat Frank says:
    June 13, 2011 at 4:26 pm
    Now, if you tell me that your wave has no predictive power, then, of course, you are off the hook as far as numerology is concerned. Is this what you are claiming? You sort of went quiet on this.

  251. Bart says:
    June 13, 2011 at 6:12 pm
    “You are missing a teaching moment.”
    Looking at autocorrelations of Loehle’s data I find: http://www.leif.org/research/Loehle-Autocorrelations.pdf
    I see no power [above the noise] below ~100 years. If you could stop your attacks ['delusions', etc] just for a minute and explain what I see, that would be progress.

  252. Pat Frank says:

    Leif, you wrote, “Of course the wave is in the data.” That implies you have acceded to the analysis in my post.

    “On the other hand, if one cannot extend the wave into the future, then it has little interest.” Not correct. See below.

    “Extending it is the numerology.” So, that means you withdraw your “numerology” diagnosis with respect to the analysis here for example, or here or here, and now agree you improperly applied it to an analysis clearly restricted to the 130-year anomaly trend.

    Now, if you tell me that your wave has no predictive power,…” I’ll tell you the same thing I told Tamino, when he made that mistake: It’s an empirical analysis. We’ll just have to wait for 10 years or so to see if the oscillation persists. That’s the test, isn’t it.

    then, of course, you are off the hook as far as numerology is concerned,…” I was always off the hook as far as numerology is concerned. Your diagnosis rested on a misperception. You’ve now tacitly agreed that you inferred what was not in evidence.

    “… but then your wave is not really of interest anymore.” Also not correct. It’s of interest because the oscillation is apparently in the full 130 year temperature record. It has strongly influenced the shape of the anomaly trend. Remove the oscillation, and our understanding of the trend, and of the global average air temperature history of the climate since 1880, is profoundly affected. So is our understanding of the climate sensitivity. That was the whole result of the analysis. How could you have missed it so thoroughly?

    Of course, it all depends on whether the temperature anomalies are accurate. For the sake of the analysis I began by assuming the IPCC/CRU/UK Met/GISS position on the veracity of the global average temperature anomaly record. However, I’ve shown they’re wrong (pdf) about that and that the anomaly record is climatologically useless. See also my paper in the upcoming E&E 22(4), out later this month. It’s peer-reviewed, as was the previous paper, and will be open access.

  253. Pat Frank says:

    Leif, do you understand the difference between an empirical data analysis and a prediction from theory?

    Experimental scientists (and engineers) must work all the time with data for which there is no complete theoretical description. That means you have to make a phenomenological analysis, and then do more experiments to see how the results turn out. Those results inform one about the extensive (or intensive) power of the phenomenological model. From that, it may even become possible to develop a theory. This is standard practice, but you seem completely unfamiliar with the process. Haven’t you ever done it? It’s one of the most important pathways to theory extension or development.

  254. Pat Frank says:
    June 13, 2011 at 8:32 pm
    Leif, you wrote, “Of course the wave is in the data.” That implies you have acceded to the analysis in my post.
    One does not accede to data. Anyone can see the wave by eye. No analysis needed.

    “Extending it is the numerology.” So, that means you withdraw your “numerology” diagnosis with respect to the analysis here for example, or here or here,and now agree you improperly applied it to an analysis clearly restricted to the 130-year anomaly trend.
    You do not understand that the numerology applies if you assume that the wave extends beyond the data.

    We’ll just have to wait for 10 years or so to see if the oscillation persists. That’s the test, isn’t it.
    Without a plausible mechanism, it would still be numerology, and we would not know whether it will persist even further.

    misperception. You’ve now tacitly agreed that you inferred what was not in evidence.
    Talking about misperception! I infer nothing, just tell you that assuming the wave persists past its domain is numerology.

    “… but then your wave is not really of interest anymore.” Also not correct. It’s of interest because the oscillation is apparently in the full 130 year temperature record. It has strongly influenced the shape of the anomaly trend.
    No, it has not influenced the trend. It is part of the trend.

    Remove the oscillation, and our understanding of the trend and of the global average air temperature history of the climate since 1880, are profoundly affected.
    Since the wave is just a description of the observed data, removing it would be removing the data, and that would indeed be profound.

    So is our understanding of the climate sensitivity. That was the whole result of the analysis. How could you have missed it so thoroughly?
    The wave is not ‘our understanding’ of anything. It is just a description of the observations [smoothed and simplified]. This should not be lost on anybody.

    the anomaly record is climatologically useless.
    This includes your 60-yr wave?

  255. Pat Frank says:
    June 13, 2011 at 8:46 pm
    Experimental scientists (and engineers) must work all the time with data for which there is no complete theoretical description.
    The operative word here is ‘complete’. If there is ‘some’ theory [even crude], then the description is no longer numerology. If there is no theory, the description is still numerology. That is the distinction. This you should readily embrace and understand. Leave out ‘complete’ and read your sentence again. I don’t know of any reputable engineering firm that constructs things out of stuff for which there is no understanding at all.
    An example is prediction of the sunspot cycle. Hathaway’s was pure numerology because he said that he had no idea how it worked [and it actually failed for SC24]. Our prediction [which seems to be borne out] was based [however crudely] on the theoretical expectation that stronger polar fields would lead to enhanced dynamo action and thus more sunspots. We have no ‘complete’ understanding, but we have ‘some’, and that is enough. To say that the climate system might have internal cycles [of unknown origin] is not enough of a theoretical understanding to move your analysis out of numerology. Your wave might fail [as Hathaway’s did]; you say we’ll just have to wait and see. This means that the wave has no predictive capability and might fail at any moment, as Hathaway’s did.

  256. Pat Frank says:
    June 13, 2011 at 8:46 pm
    It’s one of the most important pathways to theory extension or development.
    and one man’s numerology [e.g. Balmer's] might lead to another man’s insight [e.g. Bohr's], so what is your problem? Your numerology might lead to another man’s breakthrough understanding.

  257. Bart says:

    Leif Svalgaard says:
    June 13, 2011 at 8:00 pm

    “On your plot, the 88-year cycle has a power of 0.31 deg^2″

    You are so far out of your depth in this subject, Leif, and you refuse to learn.

    Height is not a significant quantity in a PSD, only area under the curve. Here’s a hint: watch your units.

    We. Are. Done, you and I.
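
    The units point at issue here: a power spectral density carries units of deg^2 per unit frequency, so a peak’s height depends on the analysis bandwidth, while the area under the peak estimates the variance it contributes. A sketch with scipy.signal.welch; the signal and numbers are illustrative only.

        import numpy as np
        from scipy.signal import welch

        rng = np.random.default_rng(0)
        t = np.arange(4096)
        x = np.sin(2 * np.pi * t / 64) + rng.standard_normal(t.size)

        for nperseg in (256, 1024):
            f, P = welch(x, fs=1.0, nperseg=nperseg)   # deg^2 per (cycle/yr)
            df = f[1] - f[0]
            print(nperseg, P.max(), P.sum() * df)
        # The peak height roughly quadruples at the finer resolution, while the
        # integrated area stays near the true variance (0.5 + 1.0).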

  258. Pat Frank says:

    Leif, are you suggesting there’s no physics-theoretical reason for supposing that global SSTs have an oscillatory component?

    Notice the spectral analysis of 820 years of SST-driven precipitation in Yellowstone, in Figure 1 of this paper. It has ~20 year and ~60 year peaks, and overall shows results very much like Bart’s PSD of HadCruT3v. Figure 2 and Figure 4 in this paper are also worth a look.

    Finally, there’s Chen, G., B. Shao, Y. Han, J. Ma, and B. Chapron (2010) “Modality of semiannual to multidecadal oscillations in global sea surface temperature variability,” J. Geophys. Res. Oceans, 115, C03005, doi:10.1029/2009JC005574, here, which discusses interdecadal oscillations (IDO) of climate and from the abstract, “it is revealed that a canonical modal spectrum of decadal‐to‐centennial SST variability constitutes four most distinct oscillations with periodicities at 9.0, 13.0, 21.2, and 62.2 years, which are naturally defined as primary modes and are, respectively, termed as the subdecadal mode, the quasidecadal mode, the interdecadal mode, and the multidecadal mode (modes S, Q, I, and M).” Note the final two modes repeat Bart’s result and the Yellowstone precipitation periods.

  259. Bart says:

    “If you could stop your attacks… just for a minute and explain what I see that would be progress.”

    Maybe, if you had asked questions, instead of making accusations, you would have received a better reception.

  260. Pat Frank says:

    Leif, and your insistent abuse of language in this context should embarrass a scientist.

  261. Pat Frank says:

    By the way, Leif, note that the 62.2 year multidecadal mode M is also virtually identical to the period derived from the cosine fits to the 130-year GISS and CRU anomalies.

  262. Bart says:

    “…results very much like Bart’s PSD of HadCruT3v…”
    “…final two modes repeat Bart’s result…”

    Hooah! Thanks, Pat.

  263. Bart says:

    One more question will I answer.

    Leif Svalgaard says:
    June 13, 2011 at 8:00 pm

    “Your 88-year cycle is larger than the 62 and 23-year cycles. Where is it in the modern data?”

    Possibilities:

    A) Data record too short to resolve – I used the last 100 years of HADCRUT3v data
    B) Power too weak to be observed – These are not, generally, steady state sinusoids. They surge, and they decline. They may be relatively constant over decades or centuries. They may exhibit apparent beats. They may fade to nothing for an extended interval, then leap up again some time later. They are random processes.
    C) It may be an artifact of Loehle’s construction – remember, I agreed that Loehle’s data might not be particularly good, but I stated I believed in the ~60 year and ~20 year processes because I saw them repeated in the 20th century data.

  264. Pat Frank says:
    June 13, 2011 at 10:49 pm
    Leif, are you suggesting there’s no physics-theoretical reason for supposing that global SSTs have an oscillatory component?
    The authors themselves state: “The actual physical mechanisms that explain the associations between the North Atlantic Ocean and the hydro-climate of North America are still unknown”.

    Pat Frank says:
    June 13, 2011 at 10:52 pm
    Leif, and your insistent abuse of language in this context should embarrass a scientist.
    “and one man’s numerology [e.g. Balmer's] might lead to another man’s insight [e.g. Bohr's], so what is your problem? Your numerology might lead to another man’s breakthrough understanding” is ‘abuse of language’??

    Bart says:
    June 13, 2011 at 10:52 pm
    “If you could stop your attacks… just for a minute and explain what I see that would be progress.”
    Maybe, if you had asked questions, instead of making accusations, you would have received a better reception.

    Well, no reasonable reaction from your side. I had hoped for better.

  265. Pat Frank says:

    Leif, and so, “The actual physical mechanisms that explain the associations between the North Atlantic Ocean and the hydro-climate of North America are still unknown.” means that there is no physical-theoretic explanation for the thermal oscillations of the ocean (as opposed to the association with land hydrology)? You’re projecting meaning that’s not in evidence.

    Oscillation theory for ENSO
    Theory of inertial oscillations in rotating incompressible liquids.
    Theory of oceanic thermal oscillations as ocean basin resonant modes activated by random atmospheric stimulation, which specifically mentions 20-year and 60-year modes.

    F. Primeau (2002) “Long Rossby Wave Basin-Crossing Time and the Resonance of Low-Frequency Basin Modes” J. Phys. Oceanogr., 32, 2652–2665. The full paper

    Yes, it’s abuse of language. Numerology is evidently your attributive fixation; it’s not a physically justified data analysis.

  266. Pat Frank says:
    June 14, 2011 at 10:17 am
    Theory of oceanic thermal oscillations as ocean basin resonant modes activated by random atmospheric stimulation, which specifically mentions 20-year and 60-year modes.
    There is no understanding of why those particular numbers should be observed. And no claim that they are strictly periodic [which would mean they can be used for prediction]. On the contrary, they are excited by random stimulation.

    Yes, it’s abuse of language. Numerology is evidently your attributive fixation; it’s not a physically justified data analysis.

    Over at http://wattsupwiththat.com/2011/06/13/solar-activity-still-driving-in-the-slow-lane I say:
    Leif Svalgaard says:
    June 14, 2011 at 8:44 am
    “At this point the L&P finding has the nature of numerology in the sense that we do not have a mechanism to explain it [or even make it plausible]. We cannot just extrapolate into the future. We can, of course, [as we do] say that IF it continues, then such and such. The main obstacle is that we do not know how a sunspot forms.”

    So your numerology is in good company.

  267. Pat Frank says:

    I’m not reassured, Leif. Numerology is just your idiosyncratic nomenclature.

    Bart has already pointed out that random stimulation excites normal modes. Normal modes are resonant to a given system, and are intrinsic. The observed modes are a function, in part, of the boundary conditions imposed by the ocean basins.

    Here’s the abstract from Primeau’s paper. Note especially the last paragraph: “The ability of long-wave low-frequency basin modes to be resonantly excited depends on the efficiency with which energy fluxed onto the western boundary can be transmitted back to the eastern boundary. This efficiency is greatly reduced for basins in which the long Rossby wave basin-crossing time is latitude dependent.

    “In the singular case where the basin-crossing time is independent of latitude, the amplitude of resonantly excited long-wave basin modes grows without bound except for the effects of friction. The speed of long Rossby waves is independent of latitude for quasigeostrophic dynamics, and the rectangular basin geometry often used for theoretical studies of the wind-driven ocean circulation is such a singular case for quasigeostrophic dynamics.

    “For more realistic basin geometries, where only a fraction of the energy incident on the western boundary can be transmitted back to the eastern boundary, the modes have a finite decay rate that in the limit of weak friction is independent of the choice of frictional parameters. Explicit eigenmode computations for a basin geometry similar to the North Pacific but closed along the equator yield basin modes sufficiently weakly damped that they could be resonantly excited.”

    Further theory: LaCasce, J. H., Joseph Pedlosky (2002) “Baroclinic Rossby Waves in Irregular Basins” . J. Phys. Oceanogr., 32, 2828–2847, here

    From the abstract: “Full analytical solutions are derived to elucidate the response in irregular basins, specifically in a (horizontally) tilted rectangular basin and in a circular one. When the basin is much larger than the (internal) deformation radius, the basin mode properties depend profoundly on whether one allows the streamfunction to oscillate at the boundary or not, as has been shown previously. With boundary oscillations, modes occur that have low frequencies and, with scale-selective dissipation, decay at a rate less than or equal to that of the imposed dissipation. These modes approximately satisfy the long-wave equation in the interior.”

  268. Pat Frank says:
    June 14, 2011 at 11:36 am
    I’m not reassured, Leif. Numerology is just your idiosyncratic nomenclature.
    I gave you several links to Balmer’s numerology and showed that it is standard terminology, not just my invention.

    Bart has already pointed out that random stimulation excites normal modes. Normal modes are resonant to a given system, and are intrinsic.
    Find me papers that calculate 20 and 60 years as the periods of these normal modes. The various links you have presented that mention those periods have them as barely significant above the red noise level. Show me a paper that tries to predict the climate based on those periods.

  269. Bart says:

    It is very interesting to see all the references Pat has dug up. Good to know people are looking into this. I had no knowledge that they were. It was just so blatantly obvious to me that the climate system ought to exhibit this type of behavior. It really is ubiquitous and universal.

    The modes are driven by random excitation, but that does not mean they do not have predictive value. It is too early to say how much because, so far as I am aware, a good model has not been developed. However, when a mode is as charged up and near its peak as this ~60 year one appears to be, it is a good bet that it will dominate the dynamics in the near term, and we are likely to see a distinct downturn in the global temperature measure, whatever it is actually measuring, in the not too distant future.

    Once that is seen and recognized, and enough resources are directed toward performing a modal survey and quantifying the drivers, then we could develop much better predictive tools.

  270. Pat Frank says:

    Bart, given the theoretical context that turned up and the literature precedent finding 20- and 60-year cycles in long term climate records, it seems to me, suddenly, that your PSD analysis of HadCRU plus the cosine fits, put into that total context, could be turned into a GRL submission.

    What do you think? If you’re interested, email me at pfrank830 *at* earthlink * dot* net.

    Leif, you’re just shifting your ground on what you require. You wanted a theoretical description of ocean oscillatory modes. You have it. Now you want a complete theory of climate.

  271. Pat Frank says:
    June 14, 2011 at 3:55 pm
    Leif, you’re just shifting your ground on what you require. You wanted a theoretical description of ocean oscillatory modes. You have it. Now you want a complete theory of climate.
    No, I wanted a justification [no matter how crude] for the particular values 22 and 62. Everybody knows that bodies of fluids have modes. This is like Bart’s silly notion that everything in the Universe is cyclic, as if that justifies every marginal peak we see. I also wanted papers that use these particular values to predict climate, i.e. that try to take the numerology further.

  272. Pat Frank says:

    Leif, the empirical studies I linked above show the 22-year and 60-year periods appear in long climate records. That’s justification.

  273. Pat Frank says:
    June 14, 2011 at 8:15 pm
    Leif, the empirical studies I linked above show the 22-year and 60-year periods appear in long climate records. That’s justification.
    But applying the periods as if they were cycles and have predictive power is numerology. Even people that push the periods talk about ‘regime shifts’ when the cycles suddenly fail.

  274. Pat Frank says:

    Leif, “But applying the periods as if they were cycles and have predictive power is numerology.”

    The periods show up in long-term climate data, Leif. Applying the HadCRU fit results as a predictive hypothesis is entirely justified in terms of the longer historical data. This view is further justified by noting the similar oscillation that enters the anomaly record with the SSTs.

    To test the hypothesis, one must wait to see if the period repeats. Given the theoretical treatments that imply thermal oscillations in the global ocean, treating the observed recent periodicity as a hypothesis concerning recent thermal behavior is empirical science, not numerology.

    In your descriptive use of numerology, you didn’t distinguish between relating the dimensions of Cheops to the solar system and cosine fits that can be justified both by reference to theory and by periodicities found in independent data sets. Your use of the word is promiscuous and inaccurate.

  275. Pat Frank says:
    June 15, 2011 at 4:35 pm
    Your use of the word is promiscuous and inaccurate.
    In humans, promiscuity refers to undiscriminating casual sex with many sexual partners, ????
    Anyway, you might not like it, but my use is the standard use.

    predictive hypothesis
    This is where you fail. Even if it by accident should pan out, you have no assurance that it will again and again and again … Each time it might fail [undergoing regime shift from the random stimulations]

  276. Pat Frank says:

    Leif, your use is your standard of use, nothing more.

    Promiscuity more generally means indiscriminate, or without care. Its use has just been vulgarized by the religious.

    Your use of “even if by accident” shows that your comments about the analysis have been tendentious. You’ve discounted a verifying result before it could even happen.

    Resonant modes are always on-period, despite random stimulation. However, if there are many resonant modes, they will beat against one another. The observable is the beat frequency. As you know, a beat frequency can show net phase changes, and even go through null periods. An analysis like Bart’s PSD analysis, however, will always show the same underlying resonances. That is the meaning of the frequency analysis produced by McCabe, et al. for the 820 year Yellowstone precipitation record. The record itself (part 1 of Figure 1) is a mess. The underlying resonances only show up after spectral analysis.
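
    The beat behavior described here is concrete in a few lines: two modes that each stay exactly on period sum to a carrier modulated by a slow envelope that passes through nulls. The 55- and 70-year periods below are illustrative, not fitted values.

        import numpy as np

        t = np.arange(0.0, 600.0, 0.5)               # years
        y = np.cos(2 * np.pi * t / 55) + np.cos(2 * np.pi * t / 70)

        # cos(w1 t) + cos(w2 t) = 2 cos((w1 - w2) t / 2) * cos((w1 + w2) t / 2)
        df = 1.0 / 55 - 1.0 / 70
        envelope = 2.0 * np.cos(np.pi * df * t)
        print(1.0 / df)    # ~257 yr between successive envelope nulls
        # The summed record drifts in phase and nearly vanishes near the nulls,
        # yet a spectral analysis still recovers the two underlying periods.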

  277. Leif Svalgaard says:

    Pat Frank says, June 15, 2011 at 6:03 pm: "Leif your use is your standard of use, nothing more."

    I have given you many examples of the standard use.

    "without care"

    I care much about this, to wit, this dialog.

    "You've discounted a verifying result before it could even happen."

    I have pointed out that if the period is real, it is verified by the hundreds or even thousands of years of record before present. It only has true predictive power if it does not need verification that it is still on track every time. Since it does need that, it is just numerology.

  278. Leif Svalgaard says:

    Pat Frank says, June 15, 2011 at 6:03 pm: "Your use of 'even if by accident' shows that your comments about the analysis have been tendentious. You've discounted a verifying result before it could even happen."

    I'm on a NASA panel to predict sunspot cycles. There are [by now] about a hundred predictions that have been submitted for consideration by the panel. Only two of those were based on physical theory; all the rest were numerology. The two physics-based predictions were far apart: one of the largest cycles ever, or the smallest in a hundred years. The panel discounted all the numerology and, having reached an impasse, produced two predictions [although NASA wanted only ONE number]: one high and one low. Later, a new physics-based prediction surfaced and was low, so the panel eventually went with only the low prediction [although with typical human CYA-attitude it did not dare to make it quite as low as the two low predictions]. Because the low predictions were based on solid physics [although it had to be calibrated using past cycles and cannot yet be calculated from first principles] we are confident that they will be correct [within a reasonable error bar]. The previous panel, charged with predicting cycle 23, failed because they believed in extrapolating the numerology of the time and did not consider the physics-based prediction [admittedly based on poorer data than today's].

  279. Pat Frank says:

    Leif, irrelevant. Our disagreement concerns empirical data analysis as valid scientific practice. Within a theoretical context, I might add.

    The empirical models of your colleagues had scientific standing, provided by the known theoretical context of solar cycles.

    Or was it that during your panel discussions, you dismissed your hundred or so colleagues for practicing Cheops-analogized number-associationism?

  280. Leif Svalgaard says:

    Pat Frank says, June 15, 2011 at 11:12 pm: "Our disagreement concerns empirical data analysis as valid scientific practice."

    Nonsense, all data analysis is empirical and some is even valid. That is not the issue. The issue is whether, just because you find some cycles empirically, you can blindly extrapolate them into the future for prediction. And you cannot.

    "The empirical models of your colleagues had scientific standing, provided by the known theoretical context of solar cycles."

    Most did not, as they were not guided by theory but simply by statistical curve fitting, and thus were useless for prediction because they are just numerology. To wit, their predictions were all over the map, with an even larger spread than actual solar cycles have ever had.

    "Or was it that during your panel discussions, you dismissed your hundred or so colleagues for practicing Cheops-analogized number-associationism?"

    Everyone on the panel was clear on what was numerology and what was not. And some of the 'predictions' were almost as ill-founded as the pyramid speculations. The clearest example of applied numerology [in that field] is the failed prediction of cycle 23. We decided not to repeat that disaster. You see, billions of dollars and even lives hang on our prediction being at least in the right ballpark. This is serious business.

  281. Bart says:

    Thought I'd poke in and see if anything was still being written. I see Leif is still clutching at straws. Now, the method is useless because we don't know if a comet might slam into the Earth in a few months.

    We do these sorts of things all the time in practice. You model the modes in a Kalman Filter, quantify the spectral densities of the drivers, prime the filter with the measurements collected up to the present, then predict forward. Your propagated covariance tells you how uncertain your estimates become over time. It’s routine.
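
    For concreteness, here is a bare-bones sketch of that pipeline (Python/NumPy; a single hypothetical mode with made-up period, damping, and noise levels, not a fitted model):

    import numpy as np

    dt = 1.0                          # one sample per year
    period, tau = 60.0, 90.0          # hypothetical modal period and decay time, years
    w0 = 2.0 * np.pi / period
    zeta = 1.0 / (tau * w0)           # light damping; time constant is 1/(zeta*w0)

    # State [x, xdot]; Euler-discretized damped oscillator driven by process noise.
    F = np.array([[1.0, dt],
                  [-w0**2 * dt, 1.0 - 2.0 * zeta * w0 * dt]])
    Q = np.diag([0.0, 1e-4])          # spectral density of the random driver
    H = np.array([[1.0, 0.0]])        # we measure x only
    R = 0.05                          # measurement noise variance

    def predict_forward(zs, n_ahead):
        # Prime the filter with the measurements, then coast forward.
        x, P = np.zeros((2, 1)), np.eye(2)
        for z in zs:
            x, P = F @ x, F @ P @ F.T + Q              # time update
            K = P @ H.T / ((H @ P @ H.T).item() + R)   # Kalman gain
            x = x + K * (z - (H @ x).item())           # measurement update
            P = (np.eye(2) - K @ H) @ P
        path, sigma = [], []
        for _ in range(n_ahead):                       # pure prediction: covariance grows
            x, P = F @ x, F @ P @ F.T + Q
            path.append((H @ x).item())
            sigma.append(((H @ P @ H.T).item() + R) ** 0.5)
        return path, sigma             # estimate and growing 1-sigma uncertainty per step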

  282. Leif Svalgaard says:

    Bart says, June 17, 2011 at 6:29 pm: "We do these sorts of things all the time in practice. You model the modes in a Kalman Filter, quantify the spectral densities of the drivers, prime the filter with the measurements collected up to the present, then predict forward."

    And still it is numerology, even if routine. If it works, you are happy; if not, you shrug your shoulders and call it a regime shift.

  283. Leif Svalgaard says:

    Bart says, June 17, 2011 at 6:29 pm: "We do these sorts of things all the time in practice. You model the modes in a Kalman Filter, quantify the spectral densities of the drivers, prime the filter with the measurements collected up to the present, then predict forward."

    Here are monthly sunspot numbers. Use the data up to 1964 and perform a routine forward prediction for 1965-2020 [or further if you want]: http://sidc.oma.be/DATA/monthssn.dat

  284. Pat Frank says:

    Leif, your first item about, ‘blind extrapolation,’ is irrelevant because I never did that. Your argument there is a red herring.

    Your item 2 about ‘guided by theory,’ is also a red herring because an empirical data analysis can be informed by theory, as opposed to deduced from theory. Your ‘guided by theory’ is ambiguous, and lends an apparent gravity without actually meaning anything.

    I'd like to see the responses of your colleagues, if you had told them their work was no better than extrapolations from the dimensions of Cheops.

    Interesting that you’d highlight billions of dollars and lives to distinguish the importance of your work.

    Bart's analysis provides an independent validation of the cosine fits to the anomaly trend. Further, the precedent in the literature of the 60-year thermal cycles that appear in long-range climate data validates the physical presence of this cycle in the mechanism of climate. This record again validates the PSD results and the fits. Put together, the precedent and the analysis make a powerful argument that there's no evidence of an accelerated warming trend in the 130-year record, and so no evidence of AGW.

    Do you really think that billions of dollars and lives are not at risk in the use of a specious AGW to leverage the destruction of cheap energy?

  285. Pat Frank says:

    Leif, your challenge to “perform a routine forward prediction” is again irrelevant. The question always was whether the analysis showed periods in existent data. It was never that the analysis had predictive value.

    Surely more than once I pointed out that one has to wait for the appearance of future data to see whether the patterns found using an empirical analysis propagate forward.

    More and more, your objection is couched in irrelevancies.

  286. Leif Svalgaard says:

    Pat Frank says, June 18, 2011 at 3:56 pm: "Leif, your first item about, 'blind extrapolation,' is irrelevant because I never did that. Your argument there is a red herring."

    Analysis of past data is not numerology; believing that the periods have predictive power without knowing why is numerology.

    "I'd like to see the responses of your colleagues, if you had told them their work was no better than extrapolations from the dimensions of Cheops."

    Some were not any better.

    "Interesting that you'd highlight billions of dollars and lives to distinguish the importance of your work."

    That is disingenuous, as NASA and people concerned with space assets commissioned the panel precisely for that reason. This has nothing to do with me or my work. BTW, NASA decided not to decommission the Hubble Space Telescope based on our forecast.

    "Do you really think that billions of dollars and lives are not at risk in the use of a specious AGW to leverage the destruction of cheap energy?"

    I don't think that your analysis and Bart's will lead to savings of billions of dollars or any lives saved, because the AGW threat is political and not scientific.

  287. Leif Svalgaard says:

    Pat Frank says, June 18, 2011 at 3:56 pm: "I'd like to see the responses of your colleagues, if you had told them their work was no better than extrapolations from the dimensions of Cheops."

    While not from the panel predictions, this paper was presented at the SORCE 2008 meeting:
    http://lasp.colorado.edu/sorce/news/2008ScienceMeeting/

    She argued as strenuously as you for the power of cycles:
    http://lasp.colorado.edu/sorce/news/2008ScienceMeeting/posters/P4_01_Lynch_Poster.pdf
    She even had a theory [of sorts].

  288. Leif Svalgaard says:

    Pat Frank says, June 18, 2011 at 4:01 pm: "Leif, your challenge to 'perform a routine forward prediction' is again irrelevant."

    That was for Bart, who routinely does that. I'm interested in what he might find. We may invite him to be on the next solar cycle prediction panel, if he scores well.

    "Surely more than once I pointed out that one has to wait for the appearance of future data to see whether the patterns found using an empirical analysis propagate forward."

    As I have said many times, if you claim that your analysis has no predictive power and was only meant to fit the past, then you are, of course, off the hook for numerology.

  289. Pat Frank says:

    Tamino has taken to snipping out my replies, made in defense of my analysis. His prior repertoire of science-relevant criticisms included "idiot." Those, such as fredb or charles, who wanted me to respond directly to Tamino, and who wanted a debate without personal attacks, should notice Tamino's progression: criticism, personal attack, and finally censorship.

    Here's the post Tamino snipped out (PJKlar had accused me of fraud): PJKlar, in my original post I wrote this: "For right now, though, I'd like to put all that aside and proceed with an analysis that accepts the air temperature context as found within the IPCC ballpark. That is, for the purposes of this analysis I'm assuming that the global average surface air temperature anomaly trends are real and meaningful."

    Towards the end, I wrote this: "Clearly, though, since unknown amounts of systematic error are attached to global temperatures, we don't know if any of this is physically real."

    Given those explicit qualifiers, how you can see the analysis as any sort of fraud is beyond understanding.

    Ray Ladbury, you wrote, "How, pray, is temperature supposed to rise unless there is a net input of energy?"

    If the atmosphere and the global ocean are coupled oscillators, can thermal energy pass from one to the other without any net external energy input? We both know the answer to that question.

    You also wrote, "A conservative approach is one that is 1) consistent with known physics, 2) consistent with known evidence."

    The physics of climate is not well-known and what is known is certainly not well resolved in climate models. However, there is theory that describes resonant modes in ocean basin energy flux, activated by random atmospheric stimulation.

    For evidentiary precedent, Chen, et al. (2010), "Modality of semiannual to multidecadal oscillations in global sea surface temperature variability," J. Geophys. Res., 115, C03005, discuss interdecadal oscillations (IDO) in the SST record. They found four main periodicities, including an interdecadal oscillation of 62.2 years, virtually identical to the period implied by the cosine fit to the CRU anomalies.

    Further, McCabe, et al. 2008 did a frequency analysis of an 820 year record of drought in Montana, linked to SSTs, and found a prominent 60-year signal; see their Figure 1.

    So now, here is a PSD analysis of HadCRUT3v (click over to Figure 1). It shows the same ~60 year period as found in the 820-year Montana precipitation record and as the multidecadal period noted by Chen, et al.

    MartinJB, note that the unfit residuals themselves show no significant excursions away from zero over their whole length. They are both linear and of virtually zero slope everywhere. How is it incorrect to conclude, therefore, that the fit accounts for the signal?

    Barton, A ~60 year oscillation was noted to enter the GISS complete global anomaly set when the ocean temperatures were added into land-only temperatures. So, in the event, the fit did not represent an arbitrary cosine, but found one that exhibited the same periodicity as was induced by entry of the marine temperatures.

    I've fit the 1999 GISS land-only temperature anomalies using the same cosine+linear strategy. The difference between the two fits, GISS (land+marine) minus GISS (land-only), produces an oscillation that goes right through the difference oscillation of the data sets themselves.

    The observation that an oscillatory signal appears in the anomaly record with the marine temperatures empirically justifies the strategy of fitting with a cosine. Look also at the similar 60-year cycles I referenced in the reply to Ray Ladbury. They also justify the result in terms of a known SST period.
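
    For anyone who wants to reproduce the strategy, here is a minimal sketch (Python/SciPy; "years" and "anom" stand for any annual anomaly series, and the starting guesses are illustrative, not results):

    import numpy as np
    from scipy.optimize import curve_fit

    def cos_plus_line(t, a, b, amp, period, t0):
        # linear trend plus a single cosine, the same functional form as the fits above
        return a + b * t + amp * np.cos(2.0 * np.pi * (t - t0) / period)

    def fit_anomalies(years, anom):
        p0 = [0.0, 0.006, 0.1, 60.0, 1900.0]   # offset, slope (C/yr), amplitude (C), period (yr), phase origin
        popt, pcov = curve_fit(cos_plus_line, years, anom, p0=p0)
        resid = anom - cos_plus_line(years, *popt)
        return popt, resid   # a trendless, near-zero residual means the form captures the signal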

  290. Bart says:

    I’m interested in what I might find, too. But, it’s going to take some time, with all the demands on my time and the other projects in which I am engaged, and gearing up for family summer vacation and all. Perhaps in a month or two, I will let you know by responding to Leif and/or Pat in another thread.

  291. Leif Svalgaard says:

    Bart says, June 18, 2011 at 7:37 pm: "I'm interested in what I might find, too. But, it's going to take some time"

    Of course, no rush. A solar cycle lasts over 10 years. Let me know if you have questions about the data.

  292. Leif Svalgaard says:

    Pat Frank says, June 18, 2011 at 5:46 pm: "Tamino has taken to snipping out my replies, made in defense of my analysis. His prior repertoire of science-relevant criticisms included 'idiot.'"

    Well, actually not that far from what you guys have been calling me :-)

  293. Bart says:

    “Tamino has taken to snipping out my replies…”

    What a shock. The guy’s a mediocrity who can’t stand the heat when someone exposes it.

    “Well, actually not that far from what you guys have been calling me.”

    I know full well you are no idiot, Leif. But, you have been obdurate in this thread. I have just been trying to push you into taking a deeper look. The greatest distinction from Tamino's MO, though, is that we have given reasons why you have been wrong.

    "A solar cycle lasts over 10 years."

    Preliminary PSD analysis informs me that the solar cycle is governed by two quasi-periodic processes with periods of roughly T1 = 20 and T2 = 23.6 years. The sunspot count appears to reflect the energy of these combined processes, which necessarily has apparent periods of 0.5*T1, 0.5*T2, T1*T2/(T2+T1), and T1*T2/(T2-T1) years, or 10 years, 11.8 years, 10.8 years, and 131 years. This latter appears as a quasi-beat period in the data. I say “quasi-” because these are not rigidly defined periods of steady state sinusoids, but mean periods of random excitation of resonance phenomena.

    All these numbers are preliminary, rough estimates, mind you. There are many other steps which would be needed to nail them down precisely.
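
    For anyone checking the arithmetic, the four apparent periods follow directly from T1 and T2 (Python; just the sum and difference combinations, nothing more):

    T1, T2 = 20.0, 23.6
    print(0.5 * T1)              # 10.0 years
    print(0.5 * T2)              # 11.8 years
    print(T1 * T2 / (T2 + T1))   # ~10.8 years (sum combination)
    print(T1 * T2 / (T2 - T1))   # ~131 years (difference, the quasi-beat period)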

  294. DirkH says:

    Pat Frank says:
    June 18, 2011 at 3:56 pm
    “Do you really think that billions of dollars and lives are not at risk in the use of a specious AGW to leverage the destruction of cheap energy?”

    Hear hear.

  295. Bart says:

    “…two quasi-periodic processes with periods of roughly T1 = 20 and T2 = 23.6 years…”

    Interestingly, I tossed this out assuming it was probably common knowledge. A brief web search tells me it may not be. But, the spikes in the PSD tell me it is a reasonable presumption, as they appear at all four of the frequencies associated with the periods 0.5*T1, 0.5*T2, T1*T2/(T2+T1), and T1*T2/(T2-T1). Furthermore, the fact that the solar magnetic field reverses orientation tells me that the fundamental periods are, indeed, these T1 and T2.

    What about it, Leif? Is this new?

  296. Leif Svalgaard says:

    Bart says, June 19, 2011 at 12:36 pm: "What about it, Leif? Is this new?"

    Yes, it is, but in a negative sense. There are hundreds of sophisticated analyses by acclaimed professionals that do amazing things [so they say]. Fortunately they all find different results, and none to my knowledge have claimed that "solar cycle is governed by two quasi-periodic processes with periods of roughly T1 = 20 and T2 = 23.6 years."

    The other periods [10 years, 11.8 years, 10.8 years] regularly crop up, but they are just a splitting of the basic ~11 year period by long-term amplitude modulation. From a physical point of view it is not accepted that there should be two such periodic processes governing the solar cycle. Although we don't have a complete understanding of the cycle, we are not morons with no clue whatsoever, either. The basic physics is well understood; what we lack are observations of the interior of the sun [these may be forthcoming soon, though] so that we can solve the equations with correct boundary conditions.

  297. Bart says:

    I put a plot to illustrate the concept here. It’s not a perfect match – lots of work needs to be done to nail down the parameters of the model – but it shows how the PSD of the SSN data might come about.

    As per Papoulis (for me, 2nd edition, page 233), if a Gaussian process with autocorrelation function R(t) is squared, the resulting autocorrelation function, which I will call Q(t), is

    Q(t) = R(0)^2 + 2*R(t)^2

    The PSD of this is the Fourier transform of the constant R(0)^2 (a spike at zero frequency) plus the scaled convolution, in the frequency domain, of the Fourier transform of R(t) (the PSD of the underlying process) with itself (i.e., multiplication in the time domain results in convolution in the frequency domain, and vice versa).
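
    A quick numerical spot-check of that identity (Python/NumPy; the boxcar low-pass filter is just an arbitrary way to manufacture a correlated, unit-variance Gaussian series):

    import numpy as np

    rng = np.random.default_rng(0)
    n, maxlag = 200_000, 30
    x = np.convolve(rng.standard_normal(n), np.ones(10) / np.sqrt(10.0), mode="same")

    R = np.array([np.mean(x[:n - k] * x[k:]) for k in range(maxlag)])    # autocorrelation of x
    x2 = x * x
    Q = np.array([np.mean(x2[:n - k] * x2[k:]) for k in range(maxlag)])  # raw autocorrelation of x^2
    print(np.max(np.abs(Q - (R[0]**2 + 2.0 * R**2))))   # small, up to sampling noise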

  298. Bart says:

    “Yes, it is, but in a negative sense.”

    Really? Apparently, from what you say, none of the old stuff works. This is new. Maybe you should give it a try.

    ‘…none to my knowledge have claimed that “solar cycle is governed by two quasi-periodic processes with periods of roughly T1 = 20 and T2 = 23.6 years.”’

    Should have said "dominated by, in the recorded era," rather than "governed by".

  299. Leif Svalgaard says:

    Bart says, June 19, 2011 at 1:35 pm: "I put a plot to illustrate the concept here. It's not a perfect match – lots of work needs to be done to nail down the parameters of the model – but it shows how the PSD of the SSN data might come about."

    But it does not uniquely determine the model. I would guess there is an infinitude of ways the PSD of the SSN can come about.

  300. Leif Svalgaard says:

    Bart says, June 19, 2011 at 1:35 pm: "I put a plot to illustrate the concept here. It's not a perfect match – lots of work needs to be done to nail down the parameters of the model – but it shows how the PSD of the SSN data might come about."

    The idea was to test your routine Kalman filter prediction for SSN from cycle 20 to SC25, and from SC23 to SC25.

  301. Leif Svalgaard says:

    Bart says, June 19, 2011 at 1:41 pm:
    "'Yes, it is, but in a negative sense.'
    Really? Apparently, from what you say, none of the old stuff works. This is new. Maybe you should give it a try."

    It is no more new than any of the failed old ones. Yours fits right in with them.

  302. Bart says:

    No, it should be unique. I would just need a lot more time to nail it down. I mean, I’ve had one day, and this has been researched for decades (centuries, really). Gimme a break!

  303. Bart says:

    “The idea was to test your routine Kalman filter prediction for SSN from cycle 20 to SC25, and from SC23 to SC25.”

    It’s routine, but it’s also a lot of work, and I usually get paid big bucks for it. I may work on it a bit from time to time, and eventually come up with the solution, but it’s not going to be anywhere in the near term. Meanwhile, I’ve given you some valuable information which, if you want to know what is really going on, you should pursue.

  304. Leif Svalgaard says:

    Bart says, June 19, 2011 at 1:56 pm: "No, it should be unique. I would just need a lot more time to nail it down. I mean, I've had one day, and this has been researched for decades (centuries, really). Gimme a break!"

    I am giving you a break. I was responding to your direct question.

  305. Leif Svalgaard says:

    Here is another example of cyclomania run amok: http://arxiv.org/abs/1105.3885v1

  306. Leif Svalgaard says:

    Bart says, June 19, 2011 at 1:59 pm: "Meanwhile, I've given you some valuable information which, if you want to know what is really going on, you should pursue."

    I do not consider it worthwhile, so you have to convince me by doing it.

  307. Bart says:

    Leif Svalgaard says:
    June 19, 2011 at 1:54 pm

    "It is no more new than any of the failed old ones. Yours fits right in with them."

    Wow. “Hidebound” doesn’t even begin to describe that reception.

    Fine. Solar dynamics do not really interest me, and I don’t really care. If you want to continue deluding yourself, that’s your affair. I’m out.

  308. Leif Svalgaard says:

    Bart says, June 19, 2011 at 1:59 pm:
    "'The idea was to test your routine Kalman filter prediction for SSN from cycle 20 to SC25, and from SC23 to SC25.'
    It's routine, but it's also a lot of work"

    The computer does the work. How much tuning and adjusting do you need to do 'by hand'?

  309. Leif Svalgaard says:

    Bart says, June 19, 2011 at 2:13 pm:
    "Wow. 'Hidebound' doesn't even begin to describe that reception.
    Fine. Solar dynamics do not really interest me, and I don't really care. If you want to continue deluding yourself, that's your affair. I'm out."

    You are not the first one to bow out when the going gets rough.

  310. Bart says:

    “You are not the first one to bow out when the going gets rough.”

    Not rough. Futile. And, for no reward whatsoever.

    Clearly, the periods associated with the solar magnetic field reversals are ~20 and ~23.6 years. These are the fundamental modal oscillation periods. They have an obvious relationship with the modal oscillations seen in the SSN data. If you cannot, or will not, acknowledge that basic fact, when it is staring at you right in front of your eyes… what is the point?

  311. Bart says:

    “I do not consider it worthwhile, so you have to convince me by doing it.”

    I do not have to convince you of anything. I think I would have better odds of convincing a rock to flip itself over. I’m done doing pro bono work on this. Anyone who wants more has to negotiate a contract.

  312. Pat Frank says:

    Leif, that’s entirely unfair.

  313. Pat Frank says:

    By the way, Bart, when I posted the reply at Tamino’s, I remembered to add a following post acknowledging your production of the PSD analysis mentioned in the part directed to Ray Ladbury. But Tamino snipped that, too.

  314. Leif Svalgaard says:

    Bart says, June 19, 2011 at 3:26 pm: "Clearly, the periods associated with the solar magnetic field reversals are ~20 and ~23.6 years. These are the fundamental modal oscillation periods. They have an obvious relationship with the modal oscillations seen in the SSN data. If you cannot, or will not, acknowledge that basic fact, when it is staring at you right in front of your eyes… what is the point?"

    The spurious periods you find 20 and 23.6 are not physical and do not correspond to anything; the Sun does not oscillate like that.

    Bart says, June 19, 2011 at 3:33 pm: "I'm done doing pro bono work on this."

    Aren't we all?

    "Anyone who wants more has to negotiate a contract."

    Not worth it.

    Pat Frank says, June 19, 2011 at 4:38 pm: "Leif, that's entirely unfair."

    The worth of an opinion or theory is how well it predicts. If Bart refuses to predict using his 'fundamental modes' unless we pay him money to do so, he is indeed out.

  315. Leif Svalgaard says:

    Pat Frank says, June 19, 2011 at 4:38 pm: "Leif, that's entirely unfair."

    Here is a selection of some of the choice words used: lousy, accusatory, flustered, deep end, lost cause, crude, gross hyperbole, woefully capricious, off base, embarrassed, crap, intransigence, delusion, refuse to learn, abuse, shifting your ground, clutching at straws, distinguish the importance of your work, obdurate, Hidebound, deluding, Futile.

    Unfair?

  316. Carla says:

    Leif Svalgaard says:
    June 19, 2011 at 2:11 pm
    "Here is another example of cyclomania run amok:"
    ~
    The use of the word cycle makes my stomach turn these days. But cycles are relative to their time and space.

    But speaking of "run amok," I recently learned that during one of our sun's minimum periods, way back in six-hundred-something, there was no earthly cooling. (Who said that, lol?) It fits my superficial crankiness just fine to see an increase in VLISM densities with a warm ionization level. But later on down the road of time, it seems like those little puffy cloudlets start becoming cooler, with lower levels of ionization.

    Like saying time in a cloud, or was that time in a bottle..

  317. Leif Svalgaard says:

    Carla says, June 19, 2011 at 7:06 pm: "Like saying time in a cloud, or was that time in a bottle.."

    Chacun à son goût [each to his own taste].

  318. Bart says:

    Leif Svalgaard says:
    June 19, 2011 at 6:39 pm

    “The spurious periods you find 20 and 23.6 are not physical and do not correspond to anything.”

    The physical basis of the solar cycle was elucidated in the early twentieth century by George Ellery Hale and collaborators, who in 1908 showed that sunspots were strongly magnetized (this was the first detection of magnetic fields outside the Earth), and in 1919 went on to show that the magnetic polarity of sunspot pairs:

    Is always the same in a given solar hemisphere throughout a given sunspot cycle;
    Is opposite across hemispheres throughout a cycle;
    Reverses itself in both hemispheres from one sunspot cycle to the next.

    So, how long does it take for the magnetic polarity to reset to its original configuration? Do you ever look before you leap?

  319. Bart says:

    More… "Hale's observations revealed that the solar cycle is a magnetic cycle with an average duration of 22 years. However, because very nearly all manifestations of the solar cycle are insensitive to magnetic polarity, it remains common usage to speak of the '11-year solar cycle'."

  320. Leif Svalgaard says:

    Bart says, June 19, 2011 at 7:40 pm: "Reverses itself in both hemispheres from one sunspot cycle to the next."

    That does not mean that the two cycles are part of a 22-yr oscillation. And they are not. The solar dynamo has a time scale of 11 years; that the polarities change between the cycles is just a result of the polar fields changing halfway through the cycle.

    "So, how long does it take for the magnetic polarity to reset to its original configuration? Do you ever look before you leap?"

    There is no 'original configuration'. As the sunspots from one cycle decay, their magnetic fields are carried towards the poles by a slow circulation [that some think takes 40 years]. Upon arrival, the new magnetic flux first cancels the old flux there [which has opposite polarity] and then builds up a new polar field over the next several years that serves as the 'seed' for the next cycle. The actual reversal at each pole is almost instantaneous: one month the field is positive [but small], the next it is negative [and small]; in a sense it is a smooth continuous change. Here you could benefit from knowledge from a professional. Here you can see the first modern measurements of the polar fields: http://www.leif.org/research/The%20Strength%20of%20the%20Sun's%20Polar%20Fields.pdf

    "very nearly all manifestations of the solar cycle are insensitive to magnetic polarity, it remains common usage to speak of the '11-year solar cycle'"

    And that is the correct physical interpretation. Each cycle is a unit that gives rise to the next, which in turn gives rise to the next, and so on for eons.

    Spare us the "leap before you look" nonsense; it is not becoming of a gentleman.

  321. Leif Svalgaard says:

    Bart says, June 19, 2011 at 7:40 pm: "So, how long does it take for the magnetic polarity to reset to its original configuration?"

    Here is the polar magnetic field over the past several cycles:
    http://www.leif.org/research/Solar-Polar-Fields-1966-now.png

    The polarities of the sunspots reverse in a more complicated manner: several years before the spots of a given cycle finally disappear near the equator, new spots with reversed polarity appear at higher latitudes, so the sun has spots from both cycles for several years around minimum. If you count time from the first spot of a cycle to the last spot of that cycle, the length of the cycle is about 17 years.

  322. Bart says:

    The underlying process is 20 and 23.6 years for the interval from the mid-1700s to today. The sunspot number is a measurement of the magnitude of that process. The observed harmonics fall precisely where they should, at 10 years, 11.8 years, 10.8 years, and 131 years.

    A gentleman gives credit where it is due, and does not deny what has been laid out right before his eyes in order to affect omniscience.

  323. Leif Svalgaard says:

    Bart says, June 19, 2011 at 9:01 pm: "The underlying process is 20 and 23.6 years for the interval from the mid-1700s to today."

    Sorry to say, but there are no such underlying cycles.

    "A gentleman gives credit where it is due"

    No credit is due; you have been misled by your opinion of your own brilliance.

    Now do the Kalman prediction to regain some credibility.

  324. Leif Svalgaard says:

    Bart says, June 19, 2011 at 9:01 pm: "The underlying process is 20 and 23.6 years for the interval from the mid-1700s to today."

    Many people have tried various numerology on this. One frequent [past] commenter here is Vukcevic, pushing a SSN formula that provides a generally poor but at least approximate fit to the observed SSN. His formula goes something like this [it is hard to quote correctly, because it has been a moving target]: SSN = 100*abs(cos(pi/2 + 2*pi*(t-1941)/A) + cos(2*pi*(t-1941)/B)), where t is time in years. The two constants A and B are A = 23.7 and B = 19.9 years, which are, respectively, (A) twice Jupiter's orbital period and (B) the time between conjunctions of Jupiter and Saturn ( http://www.astrofuturetrends.com/id69.html ). This combination of A and B crops up from time to time in fringe studies and does provide a crude fit to the SSN over the past 2-3 centuries, so it can be said to provide an approximate numerological description of the cycle. All our observations of the real sun, and the understanding we have gained of how it works, show, however, that the fit is spurious. There are no physical processes in the sun that work that way [the Sun is not an oscillator]. And as I pointed out, the real length of the solar cycle is about 17 years.
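
    For concreteness, the quoted formula in a few lines of Python [A and B as given above; this merely reproduces the numerology, it does not endorse it]:

    import numpy as np

    A, B = 23.7, 19.9    # years: ~2x Jupiter's period; Jupiter-Saturn conjunction interval
    def vuk_ssn(t):
        return 100.0 * np.abs(np.cos(np.pi / 2 + 2.0 * np.pi * (t - 1941) / A)
                              + np.cos(2.0 * np.pi * (t - 1941) / B))

    t = np.arange(1750.0, 2011.0)
    ssn_approx = vuk_ssn(t)   # crude ~10-11 yr peaks under a century-scale beat envelope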

  325. Bart says:

    Leif Svalgaard says:
    June 19, 2011 at 9:55 pm

    “Sorry to say, but there are no such underlying cycles.”

    These are NOT cycles except in loose parlance. They are modal excitations.

    Once again, you have gone out on a limb, and are completely, obviously, and utterly wrong. You ought to look up the meaning of the term I used: hidebound. This definition at dictionary.com is particularly aptly metaphoric:

    “3. (of trees) having a very tight bark that impairs growth ”

    You have developed an attitude that you know everything, but in the realm of Fourier analysis, your insights and prejudices have been demonstrated to be consistently inadequate, and it is impairing your growth.

    “…the fit is spurious.”

    The reasons may or may not be spurious, but the effect is clearly there. You’re telling me something doesn’t exist while I’m looking right at it.

    Vukcevic's formula is of the right form, but suffers from the assumption of steady-state sinusoidal oscillation. This is a modal oscillation driven by a random input. I see time constants of most likely somewhere between 80 and 100 years, based on the width of the lobes. Nailing down the model is the greater part of the Kalman Filter effort.

    “…the Sun is not an oscillator…”

    I have not said it is. A more apt analogy is a glob of jelly sitting on the table of a railway dining car, which jiggles fitfully at a frequency determined by the surface tension due to random wideband acceleration excitations of the vehicle. (Please do not go off on a tangent about surface tension having nothing to do with the solar cycle – I am making an analogy, not elucidating principles of solar dynamics.)

    “And as I pointed out, the real length of the solar cycle is about 17 years.”

    Your waveform crosses zero at about 1968, 1990, and looks likely to cross again about 2012 – 22 years each time. Where are you getting this 17?

  326. Bart says:

    For any who may be interested, treating the Sun Spot Number as proportional to the magnitude of the sum of processes at 20 and 23.6 years, rather than the squared magnitude as I did previously, gives a better correspondence of the model with the data. I updated my plot here accordingly.

  327. Leif Svalgaard says:

    Bart says, June 20, 2011 at 8:42 am: "They are modal excitations."

    Which are just damped oscillations. Here is a tutorial for you on modal excitations: http://www.sem.org/PDF/IMAC2008_Modal_Excitation_Tutorial_revF.pdf
    You need a force to drive the excitations and we have not detected any such forces with the periods you find. Vuk [and others] believes the forcing is Jupiter and Saturn, but the energy and coupling mechanisms are not there, so this is not taken seriously. Just another example of spurious correlation.

    "This is a modal oscillation driven by a random input"

    Oscillation again. Or just another loose term?

    "'…the fit is spurious.' The reasons may or may not be spurious, but the effect is clearly there."

    You have this backwards. The effect is spurious because the reasons are not there.

    "'And as I pointed out, the real length of the solar cycle is about 17 years.' Your waveform crosses zero at about 1968, 1990, and looks likely to cross again about 2012 – 22 years each time."

    It crosses every ten years in 1968, 1980, 1990, 2000, 2012. And it is not a 'waveform'. There is no wave, just polar fields that come and go every ten years, driven by movements of small-scale magnetic flux from dead sunspots drifting towards the poles.

    "Where are you getting this 17?"

    http://www.nature.com/nature/journal/v333/n6175/abs/333748a0.html
    ftp://ftp.nso.edu/outgoing/users/altrock/ESC24.pdf [section 4]
    http://iopscience.iop.org/0004-637X/685/2/1291/74871.text.html [section 2]
    etc.

    "I see time constants of most likely somewhere between 80 and 100 years, based on the width of the lobes."

    There is general acceptance of the existence of an approximately 11-yr solar 'cycle' with an amplitude modulation of 80-100 years. There is a large body of literature with various physical theories accounting for this.

  328. Bart says:

    “You need a force to drive the excitations and we have not detected any such forces with the periods you find. “

    Wideband noise will do it. A two mode model such as this will generate the observed behavior with the proper phasing. A Kalman Filter run backwards and forwards on the data set will properly initialize the states, and a projection forward with error bounds from the propagated covariance can easily be generated (but, it takes time, and I have a job and a life).

    “Vuk [and others] believes the forcing is Jupiter and Saturn, but the energy and coupling mechanisms are not there…”

    I believe you. But, it is a misapprehension to believe that you need a driver at a specific frequency. All you need is an input to energize the mode.

    “Oscillation again. Or just another loose term?”

    Definition 2:

    oscillate os·cil·late (ŏs’ə-lāt’)
    v. os·cil·lat·ed , os·cil·lat·ing , os·cil·lates
    1. To swing back and forth with a steady, uninterrupted rhythm.
    2. To vary between alternate extremes, usually within a definable period of time.

    “You have this backwards.”

    I do not.

    “It crosses every ten years in 1968, 1980, 1990, 2000, 2012.”

    But, it crosses with the same +/- slope every 22 years. You pick out the period the same as you would for a sine wave.

    “There is no wave, just polar fields that come and go every ten years…”

    wave-form [weyv-fawrm]
    –noun, Physics.
    the shape of a wave; a graph obtained by plotting the instantaneous values of a periodic quantity against time.

    “There is general acceptance of the existence of an approximately 11-yr solar ‘cycle’ with an amplitude modulation of 80-100 years.”

    Time constants are not periods. At the link I gave above, the time constant of the first block is 1/(zeta1*omega1), and similarly for the second block. These indicate the time associated with a 63% decay in the amplitude of the damped sinusoid which would ensue if you took away all excitations.
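
    To make this concrete, here is a bare-bones simulation of the two-mode, noise-driven picture (Python/NumPy; every parameter is a rough placeholder per the discussion above, not a fitted value):

    import numpy as np

    def noise_driven_mode(period, tau, n_years, dt=0.1, seed=0):
        # Damped oscillator excited only by wideband noise (Euler-Maruyama).
        rng = np.random.default_rng(seed)
        w0 = 2.0 * np.pi / period
        zeta = period / (2.0 * np.pi * tau)   # so the time constant 1/(zeta*w0) equals tau
        x = v = 0.0
        out = []
        for k in range(int(n_years / dt)):
            a = -2.0 * zeta * w0 * v - w0**2 * x + rng.standard_normal() / np.sqrt(dt)
            v += a * dt
            x += v * dt
            if k % int(round(1.0 / dt)) == 0:
                out.append(x)                 # sample once per year
        return np.array(out)

    s = noise_driven_mode(20.0, 90.0, 260) + noise_driven_mode(23.6, 90.0, 260, seed=1)
    ssn_like = np.abs(s)   # magnitude of the combined process as the SSN-like proxy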

  329. Bart says:

    “All you need is an input to energize the mode.”

    All you need is an input which will energize the mode. Wideband noise is sufficient.

  330. Bart says:

    “…of the damped sinusoid which would ensue if you took away all excitations.”

  331. Bart says:

    Here is what I am talking about. This run of the model just happened to be randomly initialized such that it gave a result eerily similar to the real SSN data.

  332. Leif Svalgaard says:

    Bart says, June 20, 2011 at 8:42 am: "They are modal excitations."

    Here I explain why they are not: http://www.leif.org/research/Toy-Solar-Cycles.pdf

  333. Bart says:

    Leif Svalgaard says:
    June 20, 2011 at 10:51 pm

    Awful. Just, awful.

  334. Leif Svalgaard says:

    Bart says, June 21, 2011 at 8:53 am: "Awful. Just, awful."

    I guess you didn't like the smackdown…

  335. Dave X says:

    The period of the green sinusoidal curve in panel two of figure one does not match the period of the orange curve in panel two of figure two. Which figure is in error?
