Stockwell asks: Is the Atmosphere Still Warming?

Guest post by Dr. David Stockwell

I suspect that the only really convincing evidence against global warming would be a sustained period of no warming, or of outright cooling — arguments over climate sensitivity and feedbacks are too esoteric.

I have followed the recent global temperatures with some excitement, and started to prepare a follow-up to a previous article I wrote on the failure of global temperature to meet AGW expectations.

The Nature publication “Recent Climate Observations Compared to Projections” by Rahmstorf, Hansen and others in 2007 claimed that an up-tick in a graph showed that “global temperatures were increasing faster than expected”, and consequently climate change would be worse than expected. In “Recent Climate Observations: Disagreement with Projections”, using their methodology and two additional years’ data, the up-tick was shown to be an artefact of inadequate smoothing of the effects of a strong El Nino. Perhaps this rebuttal played some part in subsequent revisions of Rahmstorf’s graph with longer smoothing, which had the unfortunate effect (for him) of removing the up-tick, so they could no longer claim that “global temperatures were increasing faster than expected”.

Can we answer the question “Is the Atmosphere still warming” in a reasonable way?

From the field of econometrics comes the empirical fluctuation process (EFP), available to programmers in an R package called strucchange – developed by the brilliant Achim Zeileis to analyse such things as changes in exchange rates. The idea is to test the null hypothesis that the slope parameter m for a section of a series has not changed over time:

H0: m1 = m2 versus the alternative H1: m1 not equal to m2

The idea is to move a window of constant width over the whole sample period, and compare local trends with the overall distribution of trends. The resulting process should not fluctuate (deviate from zero) too much under the null hypothesis and—as the asymptotic distributions of these processes are well-known—boundaries can be computed, which are only crossed with certain probability. If, on the other hand, the empirical process shows large fluctuations and crosses the boundary, there is evidence that the data contains a structural change in the parameter. The peaks can be dated and segmented regression lines fit between the breaks in slope.
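The moving-window idea can be sketched in a few lines. This is only an illustrative re-implementation in Python, not the strucchange algorithm itself; the window width and the simple standardization against the spread of local slopes are my own assumptions for the sketch:

```python
import numpy as np

def moving_slopes(y, width):
    """OLS slope of y against time within each window of `width` points."""
    t = np.arange(width, dtype=float)
    t = t - t.mean()                      # center time so slope = sum(t*y)/sum(t^2)
    denom = (t ** 2).sum()
    return np.array([(t * (y[i:i + width] - y[i:i + width].mean())).sum() / denom
                     for i in range(len(y) - width + 1)])

def fluctuation_process(y, width):
    """Standardized deviation of each local slope from the full-sample slope.
    Large excursions suggest the slope parameter is not constant over time."""
    slopes = moving_slopes(y, width)
    full = moving_slopes(y, len(y))[0]    # slope fitted over the whole sample
    return (slopes - full) / (slopes.std(ddof=1) + 1e-12)

# Toy example: a warming trend that flattens halfway through the record.
rng = np.random.default_rng(0)
y = np.concatenate([0.02 * np.arange(100), np.full(100, 1.98)]) + rng.normal(0, 0.1, 200)
proc = fluctuation_process(y, width=40)
print(proc.min(), proc.max())
```

On data like this toy series, the process swings positive while the trend is steeper than average and negative once it flattens, which is the same qualitative signature described for the real temperature series.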

I applied the strucchange function EFP to the five official global temperature data sets (CRU, GISS, NOAA, UAH and RSS) from 1978 using the latest values in 2011, and to mean global sea level. The results for the global temperature are below:

Figure 1. Fluctuation process, structural change model and information measures determining the number of structural breaks for the five global temperature data-sets (CRU, GISS, NOAA, UAH and RSS).

The fluctuation process (top panel) crosses the upper significance boundary a number of times, indicating that the trend parameter is unstable. For example, it crosses in 1998, coincident with the strong El Nino, and then relaxes. Most recently, three of the five data sets are at the lower boundary, indicating that at least the CRU, NOAA and RSS datasets have shifted away from the overall warming trend since 1978.

The middle panel shows the structural break model for the CRU data, with the optimal number of breaks given by the minimum of the Bayesian Information Criterion (BIC) (bottom panel). The locations of the breaks are coincident (with a lag) with major events: the ultra-Plinian (stratosphere-reaching) eruptions of El Chichón and Mt Pinatubo, the 1997–98 Super El Nino, and the Pacific Decadal Oscillation (PDO) phase change in 2005.
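A toy version of that model selection can make the logic concrete. This sketch considers only zero breaks versus a single break, with the break date found by brute-force minimization of residual sum of squares; the parameter counts in the BIC penalty are my own simple accounting, and the real strucchange search handles multiple breaks via dynamic programming:

```python
import numpy as np

def rss_linear(y):
    """Residual sum of squares of an OLS line fitted to y against time."""
    t = np.arange(len(y), dtype=float)
    coef = np.polyfit(t, y, 1)
    return ((y - np.polyval(coef, t)) ** 2).sum()

def best_single_break(y, trim=0.15):
    """Try every admissible break date and keep the one minimizing total RSS
    of the two segments.  The outer `trim` fraction at each end is excluded,
    since the test has low power there (as discussed in the article)."""
    n = len(y)
    lo, hi = int(trim * n), int((1 - trim) * n)
    cands = [(rss_linear(y[:k]) + rss_linear(y[k:]), k) for k in range(lo, hi)]
    return min(cands)   # (total RSS, break index)

def bic(rss, n, n_params):
    """Gaussian-likelihood BIC for a model with the given residual sum of squares."""
    return n * np.log(rss / n) + n_params * np.log(n)

rng = np.random.default_rng(1)
y = np.concatenate([0.02 * np.arange(100), np.full(100, 1.98)]) + rng.normal(0, 0.1, 200)
rss1, k = best_single_break(y)
print("break near index", k)
print("BIC, no break:", bic(rss_linear(y), len(y), 3))
print("BIC, one break:", bic(rss1, len(y), 6))
```

For this synthetic series, which genuinely changes slope at its midpoint, the one-break model wins on BIC and the estimated break date lands near the true change, mirroring the selection shown in the bottom panel of Figure 1.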

Sometimes these types of models are sensitive to the start and end point, so I re-ran the analysis with data from 1950. Figure 2 is the resulting structural break model for CRU. While the fluctuation process did not show the same degree of recent downtrend, the structural break model is similar to the shorter series in Figure 1, except the temperatures since 1998 are fit with a single flat segment.

The temperature is plotted over random multiple AR(1) simulations, showing the temperature has ranged between the extremes of an AR(1) model over the period.
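Those AR(1) envelopes can be reproduced in outline. A minimal Python sketch, using the coefficient (about 0.7) and innovation SD (about 0.1 C) that the author quotes later in the comments; the number of simulations and series length are arbitrary choices for illustration:

```python
import numpy as np

def ar1_paths(n, a=0.7, sd=0.1, n_sims=1000, seed=0):
    """Simulate many AR(1) series Y_t = a*Y_{t-1} + e_t, with e_t ~ N(0, sd^2)."""
    rng = np.random.default_rng(seed)
    e = rng.normal(0.0, sd, size=(n_sims, n))
    y = np.zeros((n_sims, n))
    for t in range(1, n):
        y[:, t] = a * y[:, t - 1] + e[:, t]
    return y

# Envelope of pure autocorrelated noise: the min/max across simulations at
# each time step gives the gray band the realized temperatures are compared to.
sims = ar1_paths(n=400)
envelope = sims.min(axis=0), sims.max(axis=0)
print(float(envelope[0].min()), float(envelope[1].max()))
```

Plotting the observed (detrended) series over this band shows whether it ever escapes the range that plain autocorrelated randomness can produce.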

Figure 2. Linear vs. segmented regressions for the global temperature dataset CRU, with the timing of significant climatic events.

Another indicator of global temperature is mean global sea level, both barometric-adjusted and non-adjusted. Global sea levels tell the same story as atmospheric temperature, with a significant deceleration in the rate of sea level rise around the PDO shift in 2005.

Figure 3. The fluctuation process, structural break model and information measures for global mean sea level, both barometric and non-barometric adjusted.

By these objective criteria, there does appear to be a structural change away from the medium-term warming trend. Does this mean global warming has stopped?

What are the arguments that warming continues unabated?

Easterling and Wehner in their article “Is the climate warming or cooling?” lambasted “Numerous websites, blogs and articles in the media [that] have claimed that the climate is no longer warming, and is now cooling” for “cherry picking” the recent data. They examined the distribution of 10-year slopes in both the realized and modelled global temperature, and argued that because there were only a small number of flat 10-year periods, the long-term warming trend is intact.

Both E&W and EFP agree that there is a small chance of flat temperatures for 10 years (EFP says around 5%) during a longer-term warming trend. What E&W are saying is that given a small chance at any one time, the chance of flat temperatures at some time over, say, the last 50 years is much higher. This doesn’t alter the fact that to an observer during any of those decades when temperature was flat (as now) there would still be a 5% chance of a break in the long-term trend.

Breusch and Vahid (2008, updated in 2011) chimed in with “Global Temperature Trends”, stating “there is no significant evidence for a break in trend in the late 1990s”, and “There is nothing to suggest that anything remarkable has happened around 1998.” As hard as I looked, I could not find any estimates of significance to back up the claim.

The statement is even more puzzling because the last 15% or so at the ends of a series is typically not tested for breaks, due to the low power of the test on the diminishing number of data points. The late 1990s fall within that outer 15%. Breaks the size of the 1976 break would not have been detected in their data.

Of course, there are a variety of other observations of the Earth’s radiative balance and ocean heat content supporting the “no warming” claim, by top researchers such as Douglass and Loehle. There does not appear to be any credible empirical evidence from the AGW camp that the atmosphere is still warming.

I suspect that as in “Recent Climate Observations” where climate scientists were fooled into thinking that “climate change will be worse than expected” by the steep up-tick in global temperatures during a strong El Nino, they have also been fooled by a steep but longer-term up-tick in global temperatures associated with a positive phase of the PDO.

This entry was posted in ENSO, PDO. Bookmark the permalink.

98 Responses to Stockwell asks: Is the Atmosphere Still Warming?

  1. Bob Tisdale says:

    David Stockwell, how are you defining PDO?

  2. Brian H says:

    None are so “easily fooled” as those who desperately want to be!

    But I think Anthropogenic Global Foolishness is definitely receding.

  3. rbateman says:

    There is nothing to prevent a big La Nina from stepping down the global temps.
    A lot of folks who have been stressing out over natural changes will find their hot air balloons gone cold, and the dizzying heights at which they soar, with their wild predictions, will mean a rapid descent to a hard and unforgiving surface.

  4. Why does Figure 3, for global sea level, have an axis labelled “Temperature (C)”?

    REPLY: It is segmented, top and bottom two different graphs, with the top graph sub-segmented for comparison. – Anthony

  5. John Kehr says:

    The AGW projections are so vague that they are practically meaningless. The projections are so well mixed in with the climate noise that it is impossible to separate any anthropogenic signal from variability.

    You showed the rate of warming regressions for time periods, but a moving rate warming provides some interesting results.

    http://theinconvenientskeptic.com/2011/04/2000-years-of-rate-of-temperature-change/

    The current warming period is normal in both the magnitude and duration. What is most interesting is that there have been weaker periods of cooling over the past 400 years, well before any anthropogenic signal could have been impacting the climate.

    Trying to focus on a 30 year period to predict climate is exactly why the warmists are so far off base. Their entire science is based on the current warming being caused by CO2. All of their efforts are for naught if the warming is 100% natural.

  6. Murray, Oops – that should be sea level. I used the same routine on both sets of data.

  7. Bob, I got the PDO dates from wikipedia. I don’t actually use the values anywhere, as I’m only interested in it as an ‘event’ modifying the slope.

  8. Robert of Texas says:

    Your analysis of the data is based on some amount of trust in the data being correct. My biggest skepticism of Alarmist Global Warming AGW (not natural global warming) is that the data is too messy and inaccurate to draw any meaningful conclusion from it.

    I suspect strongly that the urban heat island effect is underestimated. I suspect strongly that the closing of temperature stations has a larger effect than estimated. I find it curious that global warming has seemed to slow down or even stop about the time skeptics started really paying attention to how data is gathered – so I suspect the data since around 2000 is more accurate than the previous data. I am very suspicious of how the raw data is handled and bias accounted for.

    Scientists studying global warming should be addressing all of these factors – instead some seem intent on avoiding a sensible discussion of these factors.
    So while I find your analysis interesting, without some confidence in the data your conclusions about THIS data are not convincing, or even meaningful. You can’t analyze bad data and get meaningful results.

    I do not mean any of this to be disrespectful of you and your work. I just need to trust the data first.

  9. So, the middle graph of Figure 3 says the temperature has risen from -30C in 1995 to +30C in 2010? I don’t remember it being that cold in 1995. 60C sounds like a HUGE amount of gorebull warming, or am I totally missing something here?

  10. JamesS says:

    Not being a statistician — or even very good at the stats courses I took as a geology major — whenever I read an article like this that talks about a new statistical tool, I wonder “What did they test this against for correctness”?

    So: what DO these stats wallahs run their brilliant new schemes against to see if they make sense at all? What are the sanity checks? Do they take a known curve or linear function and run it against them to see if the slope of the slope, or whatever they’re analyzing, stays constant over the range?

    When I read these articles that basically say scientist X used a new tool to determine thus about so, my first thought isn’t about the results — it’s more along the lines of “Is that new-fangled meter stick really a meter long?”, or thoughts to that effect.

    How DO we know these new tools are useful, and not just telling us things we want to see?

  11. Thanks David, ignore my second post, timing issues.

  12. tango says:

    the only option is to move the temp recorders closer to the air conditioner units. why JUST do it, I am not going to get off the gravy train just do what you are told

  13. P. Solar says:

    Interesting analysis, but why is there no legend on fig.1 and fig.3?

    You present 5 data sets in fig.1 but there is no legend, no indication in the caption, not even an explanation in the accompanying text to say which is which.

    Same in fig.3; I’m interested to see how the barometric adjustments affect the data, but which is which?

    Also in fig.3 the middle segment y axis is labelled “temperature (C)” from -30 to +30. Clearly this is supposed to be a plot of mean sea level, not temperature.

    The ‘click to enlarge’ images are nice and clear but then fig.3 loses its caption; it has no title.

    This is incredibly sloppy work from a PhD (unless the doctorate was in sociology or something, I have not checked.).

  14. Robert of Texas: While you could be right, a definite structural change downward would be incontrovertible evidence that something is wrong with the theory of AGW, no matter what the deficiencies in the data, wouldn’t it?

    JamesS: If you read some of the background material from the links I provide, these methods have been known to statistics for a long time and are well researched. Contrast that with the ‘novel’ statistical approaches that demonstrate the so-called climate breakthroughs. It’s chalk and cheese. Steve McIntyre has always been on about this. They need to use methods that have been around in the stats literature and are well understood — what I have tried to do here — not roll their own for every new breakthrough.

  15. P. Solar says:

    Also what is the gray scribble on fig.2 ? Anybody’s guess.

  16. George E. Smith says:

    Well I am not much of a statistician; so I don’t play the lotteries. I figure that if I buy a ticket, I might win, and I might not; so its a 50:50 shot. If I don’t buy a ticket its a 0:100 shot that I don’t win.

    I figure if I played the next million lotteries, that I am going to not win the vast majority of them; but I could win the next one; but only if I buy a ticket. It’s like I bought one ticket, and somebody else bought every other ticket, and he didn’t win; I did !

    So much for the statistics.

    So back to your premise “is the ATMOSPHERE still warming?” (my emphasis).

    It would seem to me that the answer to that question is predicated on the assumption that we even know, or can know the Temperature of the atmosphere. We certainly can’t say if it is still warming until we know that we can even measure it.

    I’m not aware that we have the means to measure it. Well I know for sure that Mother Gaia, knows exactly what the Temperature of the atmosphere is; she has a thermometer in each and every atmospheric atom/molecule, so for sure she knows what the (average) Temperature is; and at any instant of time too, so ergo, the average over any interval of time one could wish for.

    But then she has no means of telling us what the Temperature is. And it’s for darn sure, that we don’t have nearly enough thermometers in the atmosphere to measure the Temperature. Well there’s that little problem of sampled data system theory, called the Nyquist Sampling Theorem. And we are sampling a field of two variables; time and space. And we don’t take enough observations often enough to satisfy the Nyquist criterion for either one of those two variables; let alone for both of them simultaneously.

    And remember that, one only has to fall short of the Nyquist prescribed sampling rate by a factor of two to have the reconstruction of the sampled continuous function produce aliasing noise at zero frequency; which means that not only can we not reconstruct the continuous two-variable function, but we can’t even obtain the correct average without aliasing noise.

    Other than that; it is an interesting question that you ask. In the long run, the properties of the H2O molecule will ensure that the answer to your question is NO !

  17. Bob Tisdale says:

    David Stockwell says: “Bob, I got the PDO dates from wikipedia. I don’t actually use the values anywhere, as only interested in it as an ‘event’ modifying the slope.”

    In looking at your Figure 1, the PDO switch that you’ve highlighted is the only event that lags the structural break.

    Regardless, since the PDO is an abstract form of the North Pacific SST anomalies and does not represent them, there’s no mechanism for it to modify the slope. The North Pacific SST anomalies peaked in 2005 and have been dropping since then.

  18. Werner Brozek says:

    “Both E&W and EFP agree that there is a small chance of flat temperatures for 10 years (EFP says around 5%)”

    I believe it was once stated that the IPCC was 90% confident that CO2 was to blame for global warming. And even if we assume for argument’s sake that the temperatures were flat for only 10 years, does that mean the IPCC is now only 5% certain CO2 is to blame for global warming?

  19. P. Solar: Tough crowd. I have sent in a new Fig 3 to replace the old one.

  20. James Sexton says:

    Great job, Dr. Stockwell! I thank you for posting this! You saved me a ton of work and I don’t think my approach would have been as comprehensive as yours.

    For those that wish a simpler view, similar to what I’m taking from the information,
    go here, http://suyts.wordpress.com/2011/04/12/rss-going-negative/

    No, it isn’t a blog per se, just some informative graphs that show all of the recent “warming”. Hopefully, some people can come to the understanding that when a “scientist” speaks of “recent” warming on a global scale, that scientist is being entirely disingenuous. (Think about the explanation for the recent snowy years.)

  21. P. Solar says:

    The third segment of fig 1 and fig 3 have no label on the vertical axis. What does this represent?

    Just for informational purposes , since axis ticks are usually marked with dimensionless quantities like 0,1,2… so the axis label should also be dimensionless, eg. temperature/C or MSL/mm .

    As always when showing climate related data, no indication of the uncertainty of the data is given.

    Perhaps that’s all we can expect from environmental “science”.

  22. P Solar, the gray scribbles are: “The temperature is plotted over random multiple AR(1) simulations, showing the temperature has ranged between the extremes of an AR(1) model over the period.”

    The uncertainty of the breaks is indicated by the red bars over the x axis of the middle panels.

  23. BravoZulu says:

    No, S for brains. The atmosphere isn’t still warming.

  24. Elmer says:

    Murray, did you ever get an answer to your -30C question? It’s a good one. It’s probably in hundredths of a degree Celsius.

  25. Alan says:

    I’m sorry, but could we stop “analyzing” the temperatures within such a narrow time frame window?… rising, or falling, or flat over a month, a year, or even a decade? It’s silly, populist, and non-scientific to me. In this planet’s history of climate and geology, anything less than a couple of centuries is just noise. Or weather. Are you bullish or bearish on stocks for this year, based on an analysis of the S&P 500 index yesterday between 2:30 p.m. and 2:32 p.m.?

  26. Elmer: The sea level is in mm. There is a replacement graph with the correct axis label coming.

    BravoZulu: Not sure who you are addressing, but the idea is to approach the question in a principled way, not an opinionated way. The EFP indicates that we are getting very close to 95% confidence that the atmosphere isn’t warming.

  27. P. Solar says:

    It is unclear from your text and the graphs how you get from the EFP plot to your linear breakpoints. It would appear from the paper on strucchange that dates where the EFP breaks the boundary indicate a change in structure.

    Now looking at fig. 1, the most obvious dates are 1988 and 1997.5; you chose to break at 1999, and 1988 falls exactly halfway between two of your break points.

    It appears from what is presented here that the EFP test and BIC has been used to suggest 4 break points but that the break points that were chosen were not derived from the results of the EFP plot.

    Maybe you need to explain how you chose the break-points and how (or indeed if) these relate to the efp results.

  28. P. Solar says:

    David Stockwell says:
    April 13, 2011 at 6:08 pm

    P Solar, the gray scribbles are: “The temperature is plotted over random multiple AR(1) simulations, showing the temperature has ranged between the extremes of an AR(1) model over the period.”

    What AR model is that? How was it chosen , what does it mean? Why is it relevant?
    Just showing a scribble plot “model” that has about the same amplitude does not prove/show anything unless you explain it.

    The uncertainty of the breaks are indicated by the red bars over the x axis of the middle panels.

    No, I meant the experimental uncertainty of the data. It’s an old-fashioned concept that used to exist in the hard sciences.

    What you show with the red bars is the variable time lag between your apparently arbitrary breakpoints and the “events” you highlight.

    Thanks for the reply.

  29. James Sexton says:

    Ric Werme says:
    April 13, 2011 at 5:58 pm

    http://www.facebook.com/event.php?eid=148669818520369

    According to their Facebook page, 3,116 FB members will be attending, 2,426 may be attending. I clicked “See all” and it only showed me a few. Trying to friend them all would take a while.
    =========================================

    If you find one that wants to play, let the rest of us WUWT FB members know, wouldja?

  30. Katherine says:

    Sometimes these types of models are sensitive to the start and end point, so I re-ran the analysis with data from 1950.

    If you re-run the analysis with data starting from 1930 or 1920 would the warming be similar?

  31. Jeff L says:

    I like this cross-disciplinary approach / analysis.
    Key observations :
    1) The current segment is way flatter than any of the other segments
    2) It quantifies what is visually obvious

    Very intriguing !

  32. P Solar: The Zeileis approach, that I am trying to emulate, uses the EFP to determine if the slope parameter is unstable. If it is, by exceeding the 95%CL, then you proceed with the exploratory segmentation technique to get a model of how the trends might have been changing.

    The EFP is not used for the dating of the breaks, which is just a brute force search based on the change in residuals for break vs no break.

    EFP says it is not correct to fit a linear regression to the temperature — because it appears to be a process with structural changes, much like an exchange rate might change due to some government intervention.

  33. Alan+Katherine on time frame: “I’m sorry but could we stop “analyzing” the temperatures within such a narrow time frame window?…”

    It’s OK when you ask a specific question. The change in slope from 2000 to 2010 is a significant departure from the steep slope from 1978 to 2010. The change in slope from 2000 to 2010 is not significant relative to the lower slope from 1950, or from 1900 for that matter.

    That is why I say that there appears to be a significant departure from the 1978 slope, and of course, this is the first period that is supposed to be mostly caused by CO2 according to the IPCC. But CO2 is still increasing while temperature is not. Why?

  34. tommoriarty says:

    “…subsequent revisions of Rahmstorf’s graph with longer smoothing, which had the unfortunate effect (for him) of removing the up-tick…”

    Rahmstorf must be getting used to the idea of updated data taking all the fun out of his predictions. See, for example…

    http://climatesanity.wordpress.com/2010/11/17/rahmstorf-2009-part-9-applying-three-corrections/

  35. Smokey says:

    David Stockwell says:

    “…I wrote on the failure of global temperature to meet AGW expectations.”

    As Prof Richard Feynman explains regarding any new hypothesis: “We compare it directly with observation to see if it works.” If the hypothesis doesn’t agree with experiment, if it doesn’t agree with observation, it’s wrong.

    The AGW hypothesis [and especially the Catastrophic AGW hypothesis] do not agree with experiment or observation. Therefore, they are both wrong.

    The scientific method is ignored by the purveyors of the failed CAGW Hypothesis Conjecture. Their model is wrong, because it does not agree with observation. There is no runaway global warming, nor even warming. Despite a hefty ≈40% increase in CO2, temperatures are declining. Thus, the CO2=CAGW [and CO2=AGW] hypotheses are falsified; they do not agree with observation.

    Who are you gonna believe? Michael Mann? Or Richard Feynman?

  36. Smokey: That’s my view. If you haven’t had your favourite theories demolished a few dozen times, you are not a scientist.

  37. mike g says:

    @Alan

    Well, Alan, then I’ve got just the videos for you. They’re of Bob Carter doing exactly what you are asking for. And, doing it very convincingly. No wonder media down under has him in the cross hairs.

    And you can get the other three parts from Lucy’s fine site:

    http://www.greenworldtrust.org.uk/Science/Curious.htm

  38. P Solar on AR(1): This takes some explaining and is not essential, but the literature on such tests (read B&V linked in the article) compares the realized temps to a presumed model of random variation. If you take the AR(1) model of the detrended temps (i.e. Y_t = a*Y_{t-1} + e), then the a coefficient is about 0.7 and the SD of e is about 0.1C. Simulate that tons of times to get a feel for the spread of results due to pure, autocorrelated randomness. That’s the gray area in Figure 2.

    Most of the literature hovers around the 95%CL for tests of the realized temperature against random AR models (of various flavours).

  39. Thanks, Dr. Stockwell. I appreciate your honesty.

    Have you seen this knocking of heads between two environmental scientists on global warming? – http://college-ethics.blogspot.com/2011/04/two-environmentalists-knock-heads.html

    I’d be interested in you thoughts on this.

  40. Alice: I am more concrete about that sort of thing. The most basic assumption of linear regression is that the slope parameter m in y=mx+c is constant. It’s not. So the model is wrong. I fit a model with segments for each of the different slopes. Temperature is not increasing anymore. Either it increased from 1978 because of CO2 but has ‘maxed out’, or it was caused by the sun all along.

  41. Lady Life Grows says:

    David Stockwell says:
    April 13, 2011 at 8:10 pm
    “If you haven’t had your favourite theories demolished a few dozen times, you are not a scientist.”

    I had a favorite T-shirt I got from a roommate that I called my “science T-shirt.” It showed a honey dripper and said “oh, Lord, make my words as sweet as honey, for tomorrow I may have to eat them.”

    But I don’t feel sweet as honey when I consider the practical consequences of the AGW hysteria. Aside from fighting photosynthesis and risking extinctions, the econazis are burning so much corn as ethanol that world food stocks have declined and the Arab countries are rioting. Tens to hundreds of thousands have been killed so far, and it may touch off larger wars that could kill millions or even billions of people.

    That program is so inefficient that it would not reduce warming even if the hysteria were right. So maybe we can get the US congress to have mercy on the world’s poor and stop ethanol subsidies. We need to cut plenty anyway, and that is one good place.

  42. P. Solar says:

    David Stockwell says:
    April 13, 2011 at 7:36 pm

    P Solar: The Zeileis approach, that I am trying to emulate, uses the EFP to determine if the slope parameter is unstable. If it is, by exceeding the 95%CL, then you proceed with the exploratory segmentation technique to get a model of how the trends might have been changing.

    The EFP is not used for the dating of the breaks, which is just a brute force search based on the change in residuals for break vs no break.

    EFP says it is not correct to fit a linear regression to the temperature — because it appears to be a process with structural changes, much like an exchange rate might change due to some government intervention.

    EFP says it is not correct to fit a linear regression to the temperature … but then you do. Including those areas that were found to be the most unstable, right across 1988 for example?

    In fact it’s hard to see what use you make of efp once you’ve plotted it.

    So the key work you do here is the “brute force” optimisation of segments, a process you don’t describe.

    What is the rationale for the discontinuities you introduce? What happens if you go to 8 or 10 segments? Your optimisation does not seem to look beyond 5. That may well be just a local minimum, and a pretty shallow one at that.

    I agree with the idea that post-2003 has been pretty flat but you don’t need a complex analysis to see that.

    The rest of it just does not ring true for me. I don’t think this kind of method will be accepted as proving anything , apart from by some of the less critical minds here.

  43. Bob K. says:

    I am sure this: “H0: m1 = m2 VERSES the alternative H1: m1 not equal to m2″ was meant to read: “H0: m1 = m2 VERSUS the alternative H1: m1 not equal to m2″.

  44. Bob: “In looking at you Figure 1, the PDO switch that you’ve highlighted is the only event that lags the structural break.”

    I wouldn’t take the location of the PDO shift as exact. When I say PDO I mean all those processes that PDO represents, that seem to explain the multi-decadal temperature fluctuations so well.

  45. P Solar: “EFP says it is not correct to fit a linear regression to the temperature … but then you do. ”

    EFP says it is not correct to fit a SINGLE linear regression to the temperature. So you need a model with more than one slope. Which is what the method does.

    The EFP justifies the use of more than one slope. It’s a critical step. The next step is dating of the breaks and is more exploratory — there are other criteria than BIC. But BIC should be a global optimum — it would always be greater with more parameters, I think, unless it is a really weird type of series (I could imagine one, I suppose, if it was highly periodic).

    The rationale for a break, in the case of one test of breaks, is the supremum ratio of the sum of squares for a single line vs. a broken line, and is an F statistic: F = RSS1/RSS2. However, this is not the only statistic either.

    You really want to go to the literature if you want the details, as it’s technical.
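That sup-F search can be sketched concretely. The scaling below follows the standard Chow-test form of the statistic, not necessarily strucchange’s exact internals or the RSS1/RSS2 ratio quoted above, so treat it as an illustration of the idea only:

```python
import numpy as np

def rss_line(y):
    """Residual sum of squares of a single OLS line fitted to y against time."""
    t = np.arange(len(y), dtype=float)
    return ((y - np.polyval(np.polyfit(t, y, 1), t)) ** 2).sum()

def sup_f(y, trim=0.15):
    """sup-F test: the maximum Chow F statistic over all admissible break dates,
    excluding the outer `trim` fraction at each end where power is low."""
    n = len(y)
    rss0 = rss_line(y)                               # single line, no break
    best = 0.0
    for k in range(int(trim * n), int((1 - trim) * n)):
        rss1 = rss_line(y[:k]) + rss_line(y[k:])     # two-segment (broken) fit
        f = ((rss0 - rss1) / 2) / (rss1 / (n - 4))   # 2 extra params, 4 total
        best = max(best, f)
    return best

rng = np.random.default_rng(2)
flat = rng.normal(0, 0.1, 120)                                   # no break
broken = np.r_[np.zeros(60), np.full(60, 0.5)] + rng.normal(0, 0.1, 120)
print(sup_f(flat), sup_f(broken))
```

A series with a genuine level shift produces a sup-F far above that of pure noise, which is how a break is declared significant before its date is reported.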

  46. RoHa says:

    “Stockwell asks: Is the Atmosphere Still Warming?”

    And is his answer “No”?

  47. Katherine says:

    David Stockwell says:
    That is why I say that there appears to be a significant departure from the 1978 slope, and of course, this is the first period that is supposed to be mostly caused by CO2 according to the IPCC. But CO2 is still increasing while temperature is not. Why?

    Thanks for the clarification, Dr. Stockwell. I think I understand your article better now.

  48. “I agree with the idea that post-2003 has been pretty flat but you don’t need a complex analysis to see that. The rest of it just does not ring true for me. I don’t think this kind of method will be accepted as proving anything , apart from by some of the less critical minds here.”

    When I hear the words “pretty flat” or “pretty close” I immediately think “He must be a climate scientist”. As to whether it rings true for you, or me — who cares? The case for a segmented fit is this:

    1. The model y=mx+c is WRONG as m is not a constant. Therefore you must use a model that allows m to fluctuate (and this goes for AR(1) with a constant drift term too).

    2. The segmentation method “discovers” exogenous causes such as volcanic eruptions, without any knowledge of the timing of these events. So the method is interesting as it tells you something that you didn’t assume.

    Now it could be that including TSI as a dependent variable explains enough of the variation that the EFP no longer crosses the boundary, then the case for a segmented model would not be so strong — I will have to look into that.

    3. The method provides a way to evaluate the trends at the ends of a series without the problems that moving averages have. This is an important issue as we are most interested in what is happening at the ends, as that is the most recent data.
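    The case for a segmented fit can be illustrated with a toy example. This is a hypothetical Python sketch (Python rather than the post's R; the break year and slopes are invented for illustration) of fitting an independent level and slope per segment:

```python
import numpy as np

def segmented_fit(t, y, breaks):
    """Fit an independent line (level + slope) in each segment.
    Returns fitted values and per-segment slopes."""
    edges = [t[0]] + list(breaks) + [t[-1] + 1]
    fitted = np.empty_like(y, dtype=float)
    slopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (t >= lo) & (t < hi)
        X = np.column_stack([np.ones(mask.sum()), t[mask]])
        beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        fitted[mask] = X @ beta
        slopes.append(float(beta[1]))
    return fitted, slopes

# Hypothetical series: warming 1978-1997, flat from 1998
t = np.arange(1978, 2011, dtype=float)
y = np.where(t < 1998, 0.015 * (t - 1978), 0.3)
fitted, slopes = segmented_fit(t, y, breaks=[1998])
# slopes[0] recovers the 0.015/yr rise, slopes[1] the flat period
```

    Allowing the slope to change at the break is exactly the freedom that the single model y = mx + c denies.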

  49. rogerthesurf says:

    Does anyone know, according to the IPCC, how soon it will be before we are fully affected by the 7 meter sea level rise caused by the melting of the Greenland ice cap?

    Cheers

    Roger

    http://www.rogerfromnewzealand.wordpress.com

  50. cohenite says:

    “When I hear the words “pretty flat” or “pretty close” I immediately think “He must be a climate scientist”. As to whether it rings true for you, or me — who cares?”

    Best put-down of the week.

    And this will be music to TC’s ears:

    “The most basic assumption of linear regression is that the slope parameter m in y=mx+c is constant. It’s not. So the model is wrong.”

  51. Jimbo says:

    Hansen's promises come true:

    http://stevengoddard.wordpress.com/2011/04/14/hiding-the-la-nina-at-giss/

    What is up with GISS????

  52. peter2108 says:

    Frequently we hear climate science castigated for publishing results that are not reproducible because computer programs and even data are not available. I would like to reproduce the graphs that David Stockwell has provided, and indeed make some of my own in the same way. But he does not give me enough information to do that. To be sure, I am very much a tyro at statistics – but I do have ‘R’ installed on my PC. I can also obtain various temperature records. I’m interested particularly in how the breaks were identified after EFP gave a verdict that a simple linear model was inappropriate. [Aside: "The model y=mx+c is WRONG as m is not a constant." This is not true; the model is wrong, but 'm' is a constant.]

    All that aside it is a really excellent post – many thanks.

    PS Which colour line is UAH data?

  53. Bob Tisdale says:

    David Stockwell says: “When I say PDO I mean all those processes that PDO represents, that seem to explain the multi-decadal temperature fluctuations so well.”

    I know of no processes represented by the PDO. It’s an aftereffect of ENSO.

    A simple explanation of how the PDO patterns are formed in the North Pacific: The Eastern North Pacific SST anomalies (north of 20N) rise during an El Niño due to changes in atmospheric circulation, coastally trapped Kelvin waves, etc. The El Niño also slows or reverses trade winds in the western tropical Pacific, so less warm water is spun up into the Kuroshio Extension and the SST anomalies in the western and central North Pacific can drop. Warm SST anomalies in the eastern North Pacific and cool SST anomalies in the central and western North Pacific result in a positive PDO. During a La Niña, the Eastern North Pacific can cool, and due to the increased strength of the trade winds, more warm water than normal (leftover from the El Nino and created by the increase in Downward Shortwave Radiation over the tropical Pacific) is spun up to the Kuroshio Extension. That causes higher than normal SST anomalies in the western and central North Pacific. The pattern with the cool SST anomalies in the east and warm anomalies in the central and west is represented by a negative PDO.

  54. Bob Tisdale says:

    Jimbo says: “What is up with GISS????” And you linked Steve Goddard’s post that starts with the sentence, “Incredibly, GISS has higher temperature anomalies now than they did in the middle of last year’s El Nino summer.”

    http://stevengoddard.wordpress.com/2011/04/14/hiding-the-la-nina-at-giss/

    There was no El Nino last summer. The average June-July-August 2010 NINO3.4 SST anomaly was about -1 deg C. That’s well into La Nina range. While I understand he’s trying to show the divergence between GISS and the other datasets, starting the post off with an incorrect statement doesn’t help. Then there’s the other very obvious question: why did he exclude the NCDC data?

  55. John Finn says:

    I’m not sure what the significance of this analysis is.

    David Stockwell’s hypothesis tests

    H0: m1 = m2 versus the alternative H1: m1 not equal to m2

    Of course m1 is not equal to m2. Why would it be? Any underlying trend (due to AGW or other causes) is bound to be amplified or attenuated over various timescales by natural factors – particularly ENSO. For example, the Pinatubo eruption in 1991 would have the effect of reducing the trend of any 10 year period which ended in the early 1990s but increasing any trend beginning in the late 1980s. Include 1998 and adjacent trends will be markedly different.

    As for the recent “flat” period

    1. Since 2002 solar activity has fallen (from max->min). This may be responsible for a drop of ~0.1 deg due to the widely accepted fall in TSI over the solar cycle.
    2. Since 2002 the Multivariate ENSO Index has fallen significantly.

    The combination of these 2 factors could easily account for a 0.2 deg per decade trend. I’d hang on a few more years before calling the end of global warming.

  56. Dave Springer says:

    The salient question is whether the atmosphere has warmed at all from anthropogenic CO2. Theoretically it should have! In the real world I have my doubts. We have reliable temperature records from continental Antarctica dating back to 1955. There is no warming going on there. None at all. Zero. Zip. Zilch. Nada. In fact the Amundsen-Scott station at the south pole has shown a 0.17°C/decade cooling trend from 1958 to 2000, although when that’s combined with the other stations the cooling is statistically insignificant.

    Now dig this. Antarctica is special because it has by far the driest air on the earth. There is essentially no water vapor there to interfere with the greenhouse effect of carbon dioxide. There is no anthropogenic soot that reaches it to darken the snow and thus raise the surface temperature. There are no anthropogenic land use changes to muddy up the picture. There are no improperly sited thermometers or urban heat islands there. CO2 concentration in Antarctica mirrors the rise observed at Mauna Loa. In short Antarctica is the perfect place to observe CO2 induced warming and there ain’t no warming there.

    I’m not disputing the observations of significant warming over land surfaces in the northern hemisphere over the past century but I’m more than a little skeptical about it being due to anthropogenic CO2 given that the best place to observe its effect in isolation shows no significant change in temperature over the past 60 years.

  57. Gary Swift says:

    David Stockwell said:

    “Either it increased from 1978 because of CO2 but has ‘maxed out’, or it was caused by the sun all along.”

    Or the Kyoto protocol is working and warming will resume when it expires? The team will thank you for proving that recent mitigation efforts have been more successful than hoped. This is a grand day for Al Gore; somebody call the Nobel committee. Or maybe it’s the windmills that are saving us? /sarc

    All kidding aside, I agree with above comments suggesting that the record isn’t long enough to eliminate noise. This analysis does do a good job of showing the effects of decadal noise on the trends though. The next 30 years of satellite observations should be very illuminating. If we can only hold back the knee jerk reactions long enough to get better evidence…

  58. George E. Smith says:

    Well Dave, I almost agree with you.

    Wiki says this about the Atacama Desert in Chile, reputedly the driest place on earth; quote: “Aridity
    Some parts of Atacama Desert, especially surroundings of the abandoned Yungay town[9] (in Antofagasta Region, Chile) are arguably the driest places on Earth,[10] and are virtually sterile because they are blocked from moisture on both sides by the Andes mountains and by the Chilean Coast Range. […] The average rainfall in the Chilean region of Antofagasta is just 1 millimetre (0.04 in) per year. Some weather stations in the Atacama have never received rain. Evidence suggests that the Atacama may not have had any significant rainfall from 1570 to 1971.[5] It is so arid that mountains that reach as high as 6,885 metres (22,589 ft) are completely free of glaciers and, in the southern part from 25°S to 27°S, may have been glacier-free throughout the Quaternary, though permafrost extends down to an altitude of 4,400 metres (14,400 ft) and is continuous above 5,600 metres (18,400 ft). Studies by a group of British scientists have suggested that some river beds have been dry for 120,000 years.[11] […]”

    Now because of its Temperature, the actual molecular abundance of the average atmosphere in Antarctica may be lower than the Atacama’s; but after all, there had to be SOMETHING to put all of that ice there on Antarctica; whereas the Atacama has no ice, even where it is cold enough.

    But I will give you this point anyway. The Temperature over most of Antarctica is so low that even if CO2 was as greenhouse-potent as kindling wood, and even if there were ten times as much of it, it still couldn’t raise the surface Temperature enough to put any significant amount of additional water in the atmosphere.

    In other words; you cannot start your barbecue in Antarctica with the CO2 kindling wood theory; and therefore I agree with you; Antarctica proves the point that it cannot be CO2 that got the greenhouse warming going in the first place.

    The notion that earth would be a frozen ice ball, sans CO2 (in the atmosphere) is a figment of the phony energy budget cartoon of Kevin Trenberth (et al).

    At 340.5 W/m^2 average global TSI of KT’s model, you might be able to show that a frozen ice ball will remain frozen; but that is not what happens in Mother Gaia’s laboratory. She sees the whole 1362 W/m^2 of TSI acting locally over part of the tropics at all times, and with that blow torch going; particularly in a sky devoid of clouds, and with low moisture (or CO2); there’s not a snowball’s chance in hell, that the sun won’t burn through any ice and evaporate water into the atmosphere. That is one of the benefits, of having H2O be largely transparent to the main energy carrying portion of the solar spectrum.

    I’m absolutely convinced beyond any shadow of a doubt, that earth with its entire water mass frozen solid; and with nary a single molecule of CO2 or ANY other GHG in the atmosphere, would recover to its present condition given just the present sun and earth orbit. Maybe sans CO2, it might be slightly less cloudy than it is now, but nobody would notice the Temperature change. And of course you would never ever get to that frozen ice ball in the first place; unless you shut the sun off for a very long time. Of course that’s stretching a point a bit, since sans CO2 there would be no life on earth, so no biosphere to interact with the weather/climate system.

    But I agree with you Dave, Antarctica demonstrates the utter impotence of CO2.

  59. David, it is evident that atmospheric and sea temps are not rising and haven’t been since the close of Earth’s great year, which is when they spiked in July 1998.

  60. Richard M says:

    “Theoretically it should have!”

    Maybe, maybe not. I’m still waiting for any of the physicists out there to quantify the cooling effect of CO2. Yes, we know it intercepts radiation from the ground and sends some of it back towards earth. But, it can also be heated by the atmosphere itself and send some of that energy to space. So, is one effect greater than the other? Or, do they balance out like so many things in nature?

  61. thingadonta says:

    the decline in warming is worse than we thought.

  62. P. Solar says:

    David Stockwell says:

    P. Solar says:

    “I agree with the idea that post-2003 has been pretty flat but you don’t need a complex analysis to see that. The rest of it just does not ring true for me. I don’t think this kind of method will be accepted as proving anything , apart from by some of the less critical minds here.”

    When I hear the words “pretty flat” or “pretty close” I immediately think “He must be a climate scientist”. As to whether it rings true for you, or me — who cares? The case for a segmented fit is this:

    That is exactly the sense in which I used those terms. I meant that although I agree with the idea that it “appears” temps have been “pretty flat” (ie I’m basically in agreement with your conclusion) I can only say that in vague, hand-waving terms.

    I would welcome some objective method (that cannot be dismissed as hand-picking start and end points) that could back up my impression of no warming.

    I think that is what you are trying to do and I support that effort. I just remain unconvinced by your method (not) shown here. Maybe because you have not presented it well.

    3. The method provides a way to evaluate the trends at the ends of a series without the problems that moving averages have. This is an important issue as we are most interested in what is happening at the ends, as that is the most recent data.

    I agree that is what is needed and for those reasons. Sadly you do not explain what your method is.

    EFP says it is not correct to fit a SINGLE linear regression to the temperature. So you need a model with more than one slope. Which is what the method does.

    It is quite obvious that a single slope is a gross simplification. It does not follow that several linear slopes are adequate. There are known pseudo-cyclic tendencies, and some would suggest an exponential growth term ;) . You are making an arbitrary and unfounded assumption that several discontinuous slopes are better.

    The rationale for a break, in the case of one test for breaks, is the supremum of the ratio of the sum of squares for a single line vs. a broken line, and is an F statistic: F = RSS1/RSS2. However, this is not the only statistic either.

    You really want to go to the literature if you want the details, as it's technical.

    It is difficult to “go to the literature” when you don’t say what you are doing, but you are not understanding my question. You are applying some abstract statistical test that says two slopes with a discontinuous jump have smaller residuals than one straight line.

    What I am asking is how you think you can apply this to the physical system you are looking at. Suggesting a model with a step discontinuity in global temperature clearly does not apply to the physical world. You cannot suggest that the slopes you derive have any physical meaning. This seems to make the result rather pointless.

    You need to impose a further condition on your maths to ensure that the end of one slope is continuous with the next. I would not expect to be able to do an “optimised” segmentation of this data with fewer than about 10 segments, some of which would be negative. You then have to re-apply your original test to see what EFP says about each of the new linear fits you have done. If you think EFP is relevant to dismiss one straight line, why do you not use it to test your suggested improvements?

    It’s good to see someone involved in environmental science that is not hooked up in the AGW hysteria but that does not stop me being critical of what you present here.

    Sorry, that's a bit long. The main point is that I don’t think you explain what you’ve done, so it’s impossible to evaluate your method, and I don’t think your discontinuous model has any physical meaning, whatever the slopes or method used.

  63. P. Solar says:

    “As to whether it rings true for you, or me — who cares?”

    Sadly, in the complete absence of any proper explanation of what you did to get your results that’s all I can say. Maybe I should have said: undisclosed statistical methods – who cares?

  64. P.Solar: Yours are all good questions to go into. Looking at this question:
    “how you think you can apply this to the physical system you are looking at. Suggesting a model with a step discontinuity in global temperature clearly does not apply to the physical world.”

    This needs to be considered in the comparison to an alternative. In the case of a volcanic eruption, it seems quite well modeled by a sudden change in level. Another approach of fitting a straight line through it is not right, as it ignores the known perturbation. Other smooth approaches could have more parameters, or their own problems near the end points.

    Similarly, the increase in temperature from the 1997-98 El Nino was so fast, it is also well approximated by an abrupt change in level.

    A model is, after all, only an abstraction. Allowing changes in both level and slope allows you to represent discrete events, within a system that tends to trend, with a relatively small number of parameters.

    I have thought about segmented models (without changes in level) but here you are making another assumption. These climatic events (volcanos and big El Nino) are relatively very quick.

    What it is saying, without getting too complicated, is: there are very fast changes (level), and there are slow changes (trend). The fast changes are caused by known events (volcanos and big ENSO events).

    There is a limit to the number of parameters that can be used for a given length of data, and this is suggested by the minimum of the BIC.
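    That BIC trade-off can be sketched generically. The following Python illustration (not the strucchange dating algorithm; the data, noise level, and break placements are assumptions) computes BIC = n·ln(RSS/n) + k·ln(n) for segmented fits with 0, 1, and 2 breaks:

```python
import numpy as np

def bic_segmented(t, y, breaks):
    """BIC of a piecewise-linear fit with the given break points.
    k counts 2 parameters per segment plus the break positions."""
    edges = [t[0]] + list(breaks) + [t[-1] + 1]
    rss = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (t >= lo) & (t < hi)
        X = np.column_stack([np.ones(m.sum()), t[m]])
        beta, *_ = np.linalg.lstsq(X, y[m], rcond=None)
        r = y[m] - X @ beta
        rss += float(r @ r)
    n = len(t)
    k = 2 * (len(breaks) + 1) + len(breaks)
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(1)
t = np.arange(100, dtype=float)
y = np.where(t < 50, 0.01 * t, 0.5) + rng.normal(0, 0.05, 100)  # one true break
b0 = bic_segmented(t, y, [])        # no break
b1 = bic_segmented(t, y, [50])      # the true break
b2 = bic_segmented(t, y, [33, 66])  # too many breaks, wrongly placed
```

    The one-break model gives the lowest BIC: it captures the structural change, while extra segments mostly add penalty.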

    I say “who cares about what you or I feel” not to be rude, but to say that the point is a repeatable, defensible method, so I think we are on the same page actually. This is not “my method”, but a well-used method that the interested reader can follow up from the links I provided, a full description being beyond the scope of this post.

    Your suggestion to “why do you not use it to test your suggested improvements” seems a good test of the residuals.

    As to cyclicity, there is a new R package that builds on strucchange called bfast that removes periodicity first, and also tests for a change in the amplitude of periodicity. The result is no different for global temperatures, as the periodicity is relatively very small. However, I have run it on monthly Arctic ice extent data and it is very interesting. Here you really do need to remove the cyclical component first as it's very pronounced. There have been some very sudden changes in Arctic ice that it identifies, and it's quite different to global temperatures.

    If people are interested and Anthony is OK with it I can write up a post with those results too.

  65. peter2108:

    The code is roughly as follows, where D is a time series of temperature. I can send all the code if you want to contact me.


    library(strucchange)                      # provides efp() and breakpoints()
    e <- efp(D ~ time(D), type = "Rec-MOSUM") # empirical fluctuation process
    plot(e)                                   # boundary crossings indicate structural change
    bp <- breakpoints(D ~ time(D))            # date the breaks
    fm1 <- lm(D ~ breakfactor(bp) / time(D))  # separate level and slope per segment
    plot(D)
    lines(ts(fitted(fm1), start = start(D), frequency = frequency(D)), col = 2, lwd = 2)

  66. P. Solar says:

    David Stockwell says:
    “I have thought about segmented models (without changes in level) but here you are making another assumption. These climatic events (volcanos and big El Nino) are relatively very quick.”

    I don’t really get your point here. What am I assuming?

    There are many changes in *both* directions which are “relatively very quick”. In fact there are slopes in both directions, both long and short, that do not get modelled in the same way. All get modelled by steady rises and discontinuous drops. That seems to indicate an artefact of your method rather than the data.

    We still don’t know your method so I can’t comment further on why that may be. You seem to steer around explaining what you do by vague references to the literature and “beyond the scope of” comments.

    Your claim of a “minimum” in the BIC seems very contentious based on the limited plot you showed. Did you try running up to 10 segments to see if that minimum becomes more well defined?

    Maybe I’m overlooking something but I don’t see the links you say explain your method. You only use efp to say one straight line is invalid. What you do instead seems rather opaque.

    If I’ve missed something please point out my error and how you applied it to what you have done.

    There are many changes in *both* directions which are “relatively very quick”. In fact there are slopes in both directions, both long and short, that do not get modelled in the same way. All get modelled by steady rises and discontinuous drops.

    The 1998 El Nino gets modelled by an abrupt rise.

    I don’t see the links you say explain your method.

    I attempted to emulate the method used in the paper “Testing, Monitoring, and Dating Structural Changes in Exchange Rate Regimes” linked to in the post. Here it is again

    http://eeecon.uibk.ac.at/~zeileis/papers/Zeileis+Shah+Patnaik-2010.pdf

    BIC seems very contentious based on the limited plot
    I am not going to argue with BIC; it's been around since the beginning (http://en.wikipedia.org/wiki/Bayesian_information_criterion).

  68. P. Solar says:

    David Stockwell says:

    BIC seems very contentious based on the limited plot
    I am not going to argue with BIC; it's been around since the beginning (http://en.wikipedia.org/wiki/Bayesian_information_criterion).

    I’m not asking you to argue with BIC but I am asking you not to chop out half of what I say and then offer some irrelevant reply to something I never said in place of answering what I did say.

    Your claim of a “minimum” in the BIC seems very contentious based on the limited plot you showed. Did you try running up to 10 segments to see if that minimum becomes more well defined?

    Your blatant avoidance of the question and disingenuous editing of my post seem to indicate you did not, and that you do not have any good reason for pretending that graph has a “minimum” in anything but the most banal literal meaning.

    It seems you are as incapable of holding a meaningful discussion as doing meaningful science.

  69. P Solar: For discussion on the limitations of BIC see Bai and Perron 2003, http://onlinelibrary.wiley.com/doi/10.1002/jae.659/pdf, the section Computation and Analysis of Multiple Structural Change Models.

    I think I answered your question already: But BIC should be a global optimum — it would always be greater with more parameters I think, unless it is a really weird type of series (I could imagine one I suppose if it was highly periodic).

    I don’t think a “double-dip” is likely with a BIC type index.

    P Solar: If I do have concerns with the BIC, it would be that the optimum has too many breaks in the case of auto-correlated data. If it was used in a situation where robustness of the model was important (not the case in an exploratory study) I would consider using fewer breaks than suggested. However, I am using the suggested method here.

  71. P. Solar says:

    Dr Stockwell,

    BIC is a mathematically derived method based on the assumption that the data has an exponential-class distribution. That may be true for the example exchange-rate data in between changes in structure due to central bank intervention etc. (to take the case explored in the paper).

    The climate data you are looking at has a pseudo-cyclic element, to a grossly simplified extent like a rectified sine wave. The size of this part of the signal is larger than the linear trends everyone is trying to fit to the data.

    That in itself would seem to indicate that applying BIC to this data is not valid. It’s like applying OLS to data with significant uncertainty in the x-data. The derivation assumes minimal x uncertainty, and if you ignore that criterion you still get a least-squares fit, but the slope is wrong. This may or may not be obvious to the eye, depending upon the actual data.

    This sort of fundamental criterion cannot be waved aside, even by saying it’s “just an exploratory study”.

    P Solar: One validation I have done is to test the significance of 1 break by simulating random AR(1) series with and without a known break. The F value for the model with one break, which occurs in 1998, exceeds the 95% CL. So I know the real distribution of the F value for a one-break model. This is the model that is flat from 1998, so I have confidence that the change in slope is a real feature of the data, and this has nothing to do with BIC.
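    A re-creation of that kind of validation might look like the following Python sketch (the AR(1) parameters, shift size, and number of simulations are illustrative assumptions, not the author's original settings): build a null distribution of the one-break F statistic from AR(1) series with no break, then check that a series with a genuine break exceeds the simulated 95% critical value.

```python
import numpy as np

def ar1(n, phi, sigma, rng):
    """AR(1) noise: x[t] = phi * x[t-1] + eps[t]."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal(0.0, sigma)
    return x

def rss_line(t, y):
    """Residual sum of squares of an OLS straight-line fit."""
    X = np.column_stack([np.ones_like(t), t])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ b
    return float(r @ r)

def f_one_break(y, k):
    """F statistic for one break at index k versus a single line."""
    t = np.arange(len(y), dtype=float)
    n = len(y)
    rss1 = rss_line(t, y)
    rss2 = rss_line(t[:k], y[:k]) + rss_line(t[k:], y[k:])
    return ((rss1 - rss2) / 2) / (rss2 / (n - 4))

rng = np.random.default_rng(0)
n, k = 120, 60
# Null distribution: AR(1) series with no break
null_f = np.array([f_one_break(ar1(n, 0.5, 0.1, rng), k) for _ in range(200)])
crit = np.quantile(null_f, 0.95)  # empirical 95% critical value
# Series with a genuine level shift at k
shifted = ar1(n, 0.5, 0.1, rng)
shifted[k:] += 0.5
f_obs = f_one_break(shifted, k)
```

    Simulating the null from AR(1) noise, rather than using tabulated F critical values, is what accounts for the autocorrelation inflating the statistic.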

    What is meant by exploratory is that whether it is a 3 or 4 or 5 break model is not really important for this study. The validation, I think, comes about from the independent identification of abrupt features like eruptions. Moreover, the flatness of the period since 1997-8 is not affected by the number of breaks, as the additional breaks tend to occur during the 70’s and 80’s. So it's a robust feature that is not really affected by the use of BIC.

    The BIC is not unique, there are other measures used with pros and cons, so you could get a different number of breaks with AIC for example. But since we do not ‘know’ how many breaks there should be, there is always going to be a level of uncertainty there.

    All of the common error distributions come from the exponential class, so I don’t understand your objection. That includes normal, Poisson, etc, all with exponential tails. It would be strange if they did not. The exponential refers to the errors, not to the shape of the model, or whether it is cyclical.

    I am not sure if this answers your question. I look for validation through robust, independent predictions, and trying to get the main features right, so there are choices about what things I think might undermine the result. If you can say how the choice of 3, 4, or 5, or 10 breakpoints might undermine the result, maybe I’ll get it.

    If the BIC gave 10 breakpoints as a maximum and they were random, with no coincidence with any abrupt event, then that would be a problem. As it is, I think there could be a few more breakpoints in there (Pinatubo is not recognized in the 1950-2010 series for example).

  73. P. Solar says:

    re exponential class distributions: the “errors” here are the deviations from the straight line that is being attempted as a model. If there is a strong pseudo-cyclical element to the data, this will be equally present in the “errors”. That is why I was suggesting that I don’t think the errors can be even approximately described as conforming to the exponential class.

    The point it appears you are missing is that these are not simple experimental errors, which may often conform to some kind of exponential distribution; they are real physical variations.

    While I agree that the idea of an objective method “discovering” known features would be a good indication, I think attribution of your discoveries here is dangerously close to seeing what you would like to see and could be easily attacked.

    If the model “discovers” events both before and after they happen, by largely varying margins, and misses other equally important features like the dips in 1988 and 2007, I’m not sure this is much more than chance.

    I don’t think this would stand up to a sceptical criticism, and science (apart from climatology of course) demands scepticism.

    I do think the EFP shows that one single slope is not valid because there are changes in structure, notably in 1987/88 and after 1995. Unfortunately what you do after that fails to detect the 88 break point and runs straight through it.

    In fact your EFP result contradicts the latter part of what you do.

    One possible reason for this is that BIC is not applicable to this data.

  74. eadler says:

    Dr Stockwell says:
    I suspect that as in “Recent Climate Observations” where climate scientists were fooled into thinking that “climate change will be worse than expected” by the steep up-tick in global temperatures during a strong El Nino, they have also been fooled by a steep but longer-term up-tick in global temperatures associated with a positive phase of the PDO.

    http://atmoz.org/blog/2008/05/14/timescale-of-the-pdo-nao-and-amo/

    From the University of Washington, we see that the PDO is defined as “leading PC of monthly SST anomalies in the North Pacific Ocean, poleward of 20N. The monthly mean global average SST anomalies are removed to separate this pattern of variability from any “global warming” signal that may be present in the data.”

    So based on this definition, the PDO cannot be a direct contributor to the trend of the average global SST.

    The El Nino index is a natural variation that does contribute to the global average temperature, and so do sunspot cycles and the aerosol emissions from volcanoes. When these sources of natural variation are analyzed to determine their effects on global average temperature, and their impact is subtracted from the global average temperature, a long-lasting trend emerges since 1975.

    http://tamino.wordpress.com/2011/01/20/how-fast-is-earth-warming/

    The consistency of the trend that one gets is remarkable:

  75. P. Solar says:

    The most obvious, dominant feature of the temperature records you have shown in fig 1 is the roughly periodic “humps” of about 3.5 years duration. So any attempt at average slopes on the scale you are attempting must be synchronised to these features, otherwise the result will be dependent on the phase of this cycle where you start/end your sample.

    The form of the troughs is much better defined than that of the peaks, and so is a more reliable criterion. This would often be close (though different) to the intervals you have derived.

    From figure 1: corresponding periods would be 1978?, 1985.5, 1992.5, 2000, 2008.
    Superimposed on this is the sunspot cycle which, although not dominant, probably contributes to the minima around 1985 and 97 and the maximum around 2003.

    Any attempt at fitting straight lines is rather doomed to be wrong, however fancy or abstract the technique as I think you have shown with EFP.

    A paper you may find interesting looked at trends in Antarctic peninsula base (Gomez dome).

    E. R. Thomas et al (with UEA co-author)
    GEOPHYSICAL RESEARCH LETTERS, VOL. 36, L20704, doi:10.1029/2009GL040104, 2009

    “Ice core evidence for significant 100-year regional warming on the Antarctic Peninsula”

    They do a PC analysis based on isotope ratios at Gomez, then diverge into some speculative and subjective commentary based on climate models to provide the now obligatory “unprecedented” mantra to ensure further funding.

    However, if you study the first half of the paper where they were looking at real data and doing real science it seems to be both rigorous and objective. The result is a century scale warming that peaked around 2000.

    Despite their trying to spin this as proof of warming in continental Antarctica, the climate at Gomez is still basically that of the peninsula (as noted in the paper) and dominated by surrounding ocean.

    Though comments are made about the similarly chaotic nature of economic and climate data, and your approach is interesting, I think figure 1 alone shows there is too much periodic behaviour in climate to take the analogy too far.

    If you’re looking for a long term trend masked by significant cyclic phenomena, you need to remove the cyclic element first, otherwise all attempts at straight lines will be corrupted. That is what you show by EFP.

    Maybe running a 7 year gaussian would reveal the form you are looking for without sacrificing too much of the recent data.
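    The suggested Gaussian smoother can be sketched as follows (a generic Python illustration; the FWHM convention, kernel truncation at 3 sigma, and the synthetic trend-plus-cycle series are assumptions, not a fixed standard):

```python
import numpy as np

def gaussian_smooth(y, width_years, per_year=12):
    """Smooth a monthly series with a Gaussian kernel whose full width
    at half maximum is roughly `width_years` years."""
    sigma = width_years * per_year / 2.355  # FWHM -> sigma, in samples
    half = int(3 * sigma)                   # truncate the kernel at 3 sigma
    k = np.exp(-0.5 * (np.arange(-half, half + 1) / sigma) ** 2)
    k /= k.sum()
    # 'valid' mode avoids edge artefacts but shortens the series,
    # which is exactly the end-point problem discussed in the thread.
    return np.convolve(y, k, mode="valid")

months = np.arange(480)  # 40 years of monthly data
y = 0.001 * months + 0.2 * np.sin(2 * np.pi * months / 42)  # trend + ~3.5-yr cycle
smooth = gaussian_smooth(y, 7)
```

    A 7-year window strongly attenuates the ~3.5-year humps and passes a linear trend through unchanged, but at the cost of losing roughly a decade of data at each end.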

  76. P. Solar says:

    Perhaps fit a line to each solar cycle: 1985-1995, 1995-2009. Then look at the EFP for each segment.

  77. P. Solar says:

    Sorry 1997 http://www.leif.org/research/Active%20Region%20Count.png

    thanks to Anthony’s encyclopedic web site, I did not have to look far.
    ;)

  78. DirkH says:

    P. Solar says:
    April 17, 2011 at 4:40 am
    “The most obvious, dominant feature of temperature records you have shown in fig 1 is the roughly periodic “humps” of about 3.5 years duration. So any attempt at average slopes of the scale you are attempting must be synchronised to these features otherwise the result will be dependant on the phase of this cycle where you start/end your sample. ”

    Just let the window glide and show the gradient of the resulting trend for each window instance as a continuous curve.

  79. eadler says:

    The right way to do this is to try to fit the known sources of natural variation plus a straight line trend to the global average temperature data. Then subtract out the part of the temperature that is due to the natural variation and see if a trend remains. The natural sources of variation are volcanoes, El Nino Index, and the sunspot cycle.

    http://tamino.wordpress.com/2011/01/20/how-fast-is-earth-warming/

    When this is done, the result shows the global average temperature having a strong upward trend since 1975.
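    In outline the fit is just ordinary least squares on a design matrix of trend plus covariates. A toy Python sketch, with synthetic stand-ins for the real ENSO, volcanic and sunspot indices (this is not Tamino’s actual code):

```python
import numpy as np

# Build a synthetic "temperature" series: a linear trend plus stand-ins for
# natural variation (an ENSO-like and a solar-like oscillation) plus noise.
rng = np.random.default_rng(0)
n = 40
t = np.arange(n, dtype=float)
enso = np.sin(2 * np.pi * t / 4.2)       # stand-in for an ENSO index
solar = np.sin(2 * np.pi * t / 11.0)     # stand-in for the sunspot cycle
temp = 0.017 * t + 0.10 * enso + 0.05 * solar + 0.02 * rng.standard_normal(n)

# Fit intercept + trend + covariates by least squares, then subtract the
# fitted natural terms to leave the trend (plus residual) behind.
X = np.column_stack([np.ones(n), t, enso, solar])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
adjusted = temp - X[:, 2:] @ beta[2:]
print(f"recovered trend: {beta[1]:.4f} per year")
```

    With the natural terms removed, the least-squares slope recovers the underlying trend that was put in.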

  80. eadler says:

    Dr Stockwell says,

    I suspect that as in “Recent Climate Observations” where climate scientists were fooled into thinking that “climate change will be worse than expected” by the steep up-tick in global temperatures during a strong El Nino, they have also been fooled by a steep but longer-term up-tick in global temperatures associated with a positive phase of the PDO.
    I think that you are mistaken here. Quoting from the 2007 paper you referenced the authors write:

    The global mean surface temperature increase (land and ocean combined) in both the NASA GISS data set and the Hadley Centre/Climatic Research Unit data set is 0.33°C for the 16 years since 1990, which is in the upper part of the range projected by the IPCC. Given the relatively short 16-year time period considered, it will be difficult to establish the reasons for this relatively rapid warming, although there are only a few likely possibilities. The first candidate reason is intrinsic variability within the climate system. A second candidate is climate forcings other than CO2: Although the concentration of other greenhouse gases has risen more slowly than assumed in the IPCC scenarios, an aerosol cooling smaller than expected is a possible cause of the extra warming. A third candidate is an underestimation of the climate sensitivity to CO2 (i.e., model error).

    Accusing them of being fooled by the sources of natural variability that you mentioned, the PDO and El Nino, is clearly not justified.

    In fact the definition of the PDO, which is derived from the average Sea Surface Temperature of the Northern Pacific Ocean, excludes the possibility that the index could contribute directly to warming of the global oceans.

    http://jisao.washington.edu/pdo/PDO.latest

    Updated standardized values for the PDO index, derived as the leading PC of monthly SST anomalies in the North Pacific Ocean, poleward of 20N. The monthly mean global average SST anomalies are removed to separate this pattern of variability from any “global warming” signal that may be present in the data.

  81. A program that removes the seasonal (cyclical) component first is bfast (BFAST: Breaks For Additive Seasonal and Trend).

    BFAST integrates the decomposition of time series into trend, seasonal, and remainder components with methods for detecting and characterizing abrupt changes within the trend and seasonal components. BFAST can be used to analyze different types of satellite image time series and can be applied to other disciplines dealing with seasonal or non-seasonal time series, such as hydrology, climatology, and econometrics. The algorithm can be extended to label detected changes with information on the parameters of the fitted piecewise linear models.

    I tried it with these data and while it recognises and removes the seasonal component, the piecewise linear model is the same as strucchange, because the seasonal amplitude is small relative to the overall rise from 1950 to 2000. This is not the case for Arctic Ice, where the seasonal component is large relative to the multi-decadal change, and that result is interesting.
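    For reference, the seasonal-removal step can be sketched in a few lines of Python; this is a bare classical additive decomposition, not the actual BFAST algorithm, and the function name is mine:

```python
import numpy as np

def remove_seasonal(y, period=12):
    """Subtract the mean seasonal cycle (classical additive decomposition).

    Each phase of the cycle is averaged over all years; the centred seasonal
    pattern is then removed, leaving trend plus remainder.
    """
    y = np.asarray(y, dtype=float)
    seasonal = np.array([y[p::period].mean() for p in range(period)])
    seasonal -= seasonal.mean()          # centre so the trend is unchanged
    return y - seasonal[np.arange(len(y)) % period]
```

    BFAST goes further by also allowing breaks within the seasonal component itself, which this crude version cannot do.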

    Of course, this will only fit short-period cyclic components, as the long period will only show half cycles in this data, which is what I understand you are saying. Of course, a half cycle can be approximated with a piecewise fit, and the phase changes in the PDO should be the breaks that relate to the cyclical component. The only other breaks are “extraordinary”, like volcanoes and the “super El Nino”.

    I would see it as an advantage that you can model a longer cycle with piecewise fits, but not necessarily assume a long cycle. Whether the flattening from 2000 is due to the phase of the cyclic component, or a special feature of global warming (maxing out), is not resolved by the approach. My concern was specifically whether it was rational to say that the atmosphere had stopped warming. That would be supported in both the cyclical and the special case.

  82. cohenite says:

    eadler says trust Tamino; Lucia does better:

    http://rankexploits.com/musings/2011/hadley-march-anomaly-0-318c-up/

    In respect of the furphy that PDO is only natural variation and can’t produce a trend see:

    http://www.cgd.ucar.edu/cas/adai/papers/MonahanDai_JC04.pdf

    http://www.atmos.ucla.edu/~sun/doc/Sun_Yu_JCL_2009.pdf

  83. P. Solar says:

    Here is a quick run of HadCrut3 (rather than atmosphere) with a 10-year-window, 2.5-sigma Gaussian filter.

    I’d say we are definitely at the peak of the ~60 year cycle. Looking at the peaks (c. 1878, 1960, 2002) we see that the IPCC’s 0.7°C/century is a con job. They use the century time-scale as if it’s a good idea because it’s a round number. Looking at the data, we see it’s a clever ploy to measure trough to peak without actually saying so.

    Measuring peak to peak, we see the rise over each cycle to be 0.2 then 0.4°C, a total of 0.6°C over 125 years. That’s 0.48°C/century.

    Now let’s be Al Gorey for a minute and assume (for no good reason) that GW is run-away exponential growth. The next cycle will give us 0.8°C in 62 years. That’s 1.3°C per century assuming run-away exponential growth. Not 4 or 5 or 6°C, but 1.3°C.

    Now let’s assume (because the IPCC tells us that “most” GW is due to man, most of that is CO2, and CO2 is increasing at a steady linear rate) a linear progression: 0.6°C in the next cycle, that’s 1°C/century. That’s not going to frighten the horses.

    But CO2 is well beyond the linear regime, and a steady increase will have progressively less effect. So let’s now be equally silly and assume the IPCC are right about the importance of CO2; we should probably expect between 0.2 and 0.4°C over the next 62 years.

    The way cycles 24 and 25 are projected, we may not be that lucky.

  84. P. Solar says:

    OK, let’s play silly buggers with wordpress filtering out hrefs. How about this?
    [IMG]http://i55.tinypic.com/e7b8z7.png[/IMG]

    [Reply: WUWT uses HTML (angle brackets), not BBCode (square brackets). That may be the problem. Also, no need for an IMG tag, just paste the link with a space before & after, like this: http://i55.tinypic.com/e7b8z7.png ~dbs, mod.]

  85. P. Solar says:

    You may use these HTML tags and attributes:

    yeah we all get lied to


    If any mods know what wordpress *does* allow please edit the post. I give up.

    [Post edited; you didn't close the "a" tag. Also, you don't have admin privileges, so the link won't load. ~dbs]

  86. P. Solar says:

    http://i55.tinypic.com/e7b8z7.png pretty please?

    [See? It's E-Z! ~dbs.]

  87. P. Solar says:

    Phew.

    By the way, the IPCC’s “the latter half of the 20th c.” is an equally disingenuous move to measure peak to trough. If they’d used “the last 60 years” they would have had 0.4 instead of 0.6, averaged over a longer period.

    That masterly bit of the illusionist’s trade allows them to nearly double the warming trend without actually lying about it.

  88. P Solar: The thing that is coming out of this discussion is how much we can all read into something like a simple graph of temperature. I’ve done plenty of graphs of periodics like that, and while there is a signal there, there are also the abrupt changes and the trends to contend with. To avoid taking one set of blinkers off and putting another on, perhaps bfast is better (1. remove seasonality and seasonal breaks, 2. remove linear breaks).

    The “60 year” cycle, from what I have seen, is quasi-periodic, meaning it’s not really a definite period, but to me looks more like a drifting between two extremes, a bit like a snowboarder in a half-pipe. So if it gets to one extreme and “turns around”, then the periodicity is well represented by breaks, as I am sure you have seen in the figure of the PDO cycle represented as a zig-zag.

    Seems like I am going to have to expand the motivation for a break model in any paper like this that I submit.

  89. eadler: Accusing them of being fooled by the sources of natural variability that you mentioned, the PDO and El Nino, is clearly not justified.

    It’s an old trick: listing your disclaimers and then advocating the alarmist line anyway. Their conclusion is:

    Previous projections, as summarized by IPCC, have not exaggerated but may in some respects even have underestimated the change, in particular for sea level.

    If they were not fooled, why then did Rahmstorf consider it necessary to increase the length of the smoothing for the same figure in the Copenhagen Synthesis Report?

  90. eadler:
    The right way to do this is to try to fit the known sources of natural variation plus a straight line trend to the global average temperature data.

    Tamino’s analysis is nice and persuasive, but keep in mind that a straight line fit does not admit the possibility of a break in the trend. A series can have a significant long-term trend AND a significant recent break down from the trend. To ask a question, you need a model that is capable of answering it. The proof T gives that temperatures have not turned down since 1998 or 2004 is the calibrated eyeball test.

    My approach is not to assume a single linear trend that you say is the correct way. You cannot test for a break in a linear trend, if you simply assume it.

    And BTW, my analysis differs from T’s in a number of respects. I am saying that three of the temperature series are approaching the 95% CL; it’s going to take a year or two more of no warming to become really significant. I am also not factoring out the sun, as I am only asking if global warming has stopped, not whether the effect of CO2 has stopped (a different question).
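    To make the point concrete, here is a toy Python comparison (not the strucchange EFP machinery) of a single straight line against a model that is allowed one break; only the latter can even report a break date:

```python
import numpy as np

def fit_with_optional_break(t, y, min_seg=5):
    """Compare a single line against the best two-segment fit by RSS.

    Returns (single-line RSS, best two-segment RSS, break index).  A crude
    illustration only: a model with no break parameter cannot report one.
    """
    def rss(ts, ys):
        X = np.column_stack([np.ones_like(ts), ts])
        beta, *_ = np.linalg.lstsq(X, ys, rcond=None)
        resid = ys - X @ beta
        return float(resid @ resid)

    single = rss(t, y)
    best_k, best = None, np.inf
    for k in range(min_seg, len(t) - min_seg):
        two = rss(t[:k], y[:k]) + rss(t[k:], y[k:])
        if two < best:
            best_k, best = k, two
    return single, best, best_k
```

    On a series that rises and then flattens, the two-segment fit wins decisively and dates the flattening, while a single-trend model simply averages through it.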

  91. eadler says:

    cohenite says:
    April 17, 2011 at 5:13 pm

    eadler says trust Tamino; Lucia does better:

    http://rankexploits.com/musings/2011/hadley-march-anomaly-0-318c-up/

    The link you provided to Lucia is irrelevant to Tamino’s analysis. Lucia’s compares data to multi-model projections and includes a fudge factor for noise. Tamino’s analysis has nothing to do with multi-model simulations and noise.

    In respect of the furphy that PDO is only natural variation and can’t produce a trend see:

    http://www.cgd.ucar.edu/cas/adai/papers/MonahanDai_JC04.pdf

    http://www.atmos.ucla.edu/~sun/doc/Sun_Yu_JCL_2009.pdf

    I don’t see the relevance of either the SUN_YU or the MonahanDai paper to PDO. Both are about ENSO, which I recognize is a component of global temperature variation. There is no mention of PDO in either paper.

    Are you trying to pull a fast one here? Did you actually look at these papers?

  92. eadler says:

    David Stockwell says:
    April 17, 2011 at 9:10 pm

    eadler:
    The right way to do this is to try to fit the known sources of natural variation plus a straight line trend to the global average temperature data.

    Tamino’s analysis is nice and persuasive, but keep in mind that a straight line fit does not admit the possibility of a break in the trend. A series can have a significant long-term trend AND a significant recent break down from the trend. To ask a question, you need a model that is capable of answering it. The proof T gives that temperatures have not turned down since 1998 or 2004 is the calibrated eyeball test.

    My approach is not to assume a single linear trend that you say is the correct way. You cannot test for a break in a linear trend, if you simply assume it.

    And BTW, my analysis differs from T’s in a number of respects. I am saying that three of the temperature series are approaching the 95% CL; it’s going to take a year or two more of no warming to become really significant. I am also not factoring out the sun, as I am only asking if global warming has stopped, not whether the effect of CO2 has stopped (a different question).

    The purpose of science is to understand the natural and human-caused forces driving the climate. This beats reliance on purely statistical analysis, which does not have any rigorous predictive function. We know that the naturally occurring driving forces, volcanoes and solar radiation, drive imbalances of radiation that warm the earth, and that the temperature will respond as a result. We also know that there are oscillations in the surface temperature of the Pacific Ocean, the El Nino/La Nina conditions, that also strongly affect the surface temperature. Using what we know to extract the long-term trend after these known oscillating and impulse forces are removed is a better way to determine whether a trend exists than a simple statistical analysis that is totally devoid of scientific content.

    The evidence is that there is a good fit and a long-term trend can be extracted. Climate scientists believe that the trend is probably due to a combination of positive radiative forcing due to GHGs and a decrease in aerosols since the 1970s.

  93. P. Solar says:

    David Stockwell says: “To avoid taking one set of blinkers off and putting another on, perhaps bfast is better (1. remove seasonality and seasonal breaks, 2. remove linear breaks).”

    Well, I don’t know what all this has to do with blinkers. Any data processing distorts the data, so it is important to ensure the method is applicable to the data (does not assume some quality of the data that is not the case) and that the method does not in some way presume the result.

    I’m not familiar with bfast; what are the “linear breaks” referring to? I don’t really see any breaks in the temperature records you used here. Several have already been adjusted, spin-washed and homogenised to the point where even their accuracy is doubtful.

    You seem particularly interested in some volcanic events as discontinuities in the data. I don’t go with that. There is a discernible trace of these events, but it is far from being a step function, either in reality or in the record, and they are not of a magnitude or duration that requires a separate analysis.

    For example, Mt Pinatubo had a limited effect that is visible for a couple of years. This hardly amounts to a regime change. It is a small perturbation. The long-term impact of these kinds of events (or the absence thereof) is part of what makes up the long-term trend. I don’t see merit in treating them as discontinuities.

    The PDO flip in 1976 may qualify as a systematic change of behaviour, but again it was not huge (and is outside the period you are examining). It is possible that 1995-1998 was also a change of structure. I think that one may be worth looking at to see if it fits that kind of analysis.

    If you think the ~62 year cycle is better modelled as up and down slopes, I see nothing wrong with that approach (or at least not that it is less valid than a cyclic analysis). What I had trouble with was the discontinuous drops and linear rises in your post, since I found the lack of negative segments was not representative of the data and was indicative of a flaw in the method.

  94. P. Solar says:

    eadler says:

    Climate scientists believe that…

    Climate scientists believe; real scientists prove.

  95. cohenite says:

    eadler, ENSO is PDO:

    http://www.esrl.noaa.gov/psd/people/gilbert.p.compo/Newmanetal2003.pdf

    I read the papers, and when I say I prefer Lucia to Tammy, that’s what I mean; stop looking for an argument.

  96. Ranger Joe says:

    I was under the impression that increased cosmic radiation warming the upper atmosphere would trigger an increase in cloud cover and precipitation down below. Like the biosphere popping a protective umbrella. There is no paradox here. They still can’t explain why the solar corona is a million degrees…while the solar surface is 10,000 degrees.
