Graphing Lesson Part 2 – "Crest to Crest"

By Steve Goddard

Earlier in the month I wrote an article showing the trend in Arctic ice since 2002.

I took a lot of criticism from people for not measuring “crest to crest” or “trough to trough.”

Anyone schooled in the analysis of cyclical data knows that one must measure crest-to-crest or trough-to-trough, to maintain some semblance of symmetry about the x-axis.

It is time now to see how serious people are about their belief systems. We have passed the 2010 El Niño peak, and can see what the “real” trend is since the cyclical El Niño peak of 1998.

http://www.woodfortrees.org/plot/rss/from:1998/last:2010/plot/rss/from:1998/last:2010/trend

Hansen claims:

“Global warming on decadal timescales is continuing without let-up … we conclude that there has been no reduction in the global warming trend of 0.15-0.2C/decade that began in the late 1970s.”

Talk about cherry-picking! Look at his start point. He chose the worst case trough to crest to measure his trend.

Question for readers. Is Hansen correct, or does he need some serious graphing lessons? Below are the trend graphs from 1998-present for all four sources. GISS is way out of line.

Icarus
August 1, 2010 2:57 pm

Richard M said (August 1, 2010 at 8:55 am):

Icarus, please show exactly where that pipeline energy is hiding. Be specific.

It’s in the current global energy imbalance (i.e. radiative energy received from the sun exceeds that radiated back to space from the Earth). That will take some decades to equilibrate, even if CO2 doesn’t rise any more (which is extremely unlikely).

John Finn
August 1, 2010 4:12 pm

Icarus says:
August 1, 2010 at 2:57 pm

Richard M said (August 1, 2010 at 8:55 am):
Icarus, please show exactly where that pipeline energy is hiding. Be specific.


It’s in the current global energy imbalance (i.e. radiative energy received from the sun exceeds that radiated back to space from the Earth). That will take some decades to equilibrate, even if CO2 doesn’t rise any more (which is extremely unlikely).
Ok – thanks for that, Icarus, but I think most of us are aware that the claimed imbalance is due to an imbalance between incoming solar radiation and outgoing LW radiation from the “top of the atmosphere”. The point is that this extra energy must be accumulating somewhere. The problem is that no-one seems to know where. This is a comment by Kevin Trenberth:

> The fact is that we can’t account for the lack of warming at the moment
> and it is a travesty that we can’t. The CERES data published in the
> August BAMS 09 supplement on 2008 shows there should be even more
> warming: but the data are surely wrong. Our observing system is inadequate.
I think Richard M might have been asking you to be more specific about where the “hidden heat” might be hiding.
PS If you do happen to come across it please let Kevin Trenberth know.

barry
August 2, 2010 12:09 am

Talk about cherry-picking! Look at his start point. He chose the worst case trough to crest to measure his trend.

There is no oscillation in the temperature record in 30-year periods. There is nothing even remotely recognizable as a ‘peak and trough’ in the instrumental record on decadal scales. One could argue there is an oscillation in the data (PDO), but it does not account for the long-term trend. There is year-by-year oscillation in the temp record – global temps tend to be warmer during Northern Hemispheric summer – but Hansen took annual averages, and did not, and could not, make the same mistake Goddard did on sea ice data.
Hansen’s 30-year trend (of temperature) is far more statistically significant than a 9-year trend (of Arctic sea ice). How soon we forget that statistical significance tests fail for the global temperature trend from 1995 to 2009 – 15 years (if you’re reading this post in 2011, that may no longer be the case). That factoid was made much of at Watts Up With That, but it seems to stop no one here from prognosticating on climate trends with much shorter time periods. This lack of consistency hardly inspires confidence.
(Aside: sea ice trends probably don’t need as many years as climate trends to achieve statistical significance, but 9 years is statistically insignificant for either metric)
While we’re on the subject of cherry-picking, I ran the regression function on sea ice data at wood for trees, matching the (erroneous) method used by Steve Goddard, who runs the plot from mid-2002 to present. From mid-2001 to present, there was a shallower upwards trend. From mid-2000 to present, a flat trend. Every year back thereafter (starting at 2000, then 1999, 1998 etc), the trend to present was of reduced sea ice. Selecting 2002 as a start point is cherry-picking – or in the words much expressed at WUWT re Jones and temp trends since 1995, it’s not ‘significant’.
There are sound reasons to compute global temps for the last 30 years – can cross-check with satellite record and with sea ice record, solar output has been flat or slightly waning in that period, the flat or cooling mid-century temps may have been a product of aerosols from industry (cleaned up since then by regulations on emissions), and the time period is statistically robust. Readers could be led to believe, however, that Jim Hansen doesn’t talk about pre-70s global temperature. That’s just wrong.
(I could be wrong, but I think the first link in the article goes to the wrong WUWT page. Shouldn’t it be: http://wattsupwiththat.com/2010/07/02/arctic-ice-increasing-by-50000-km2-per-year)

Alexej Buergin
August 2, 2010 1:59 am

” Spector says:
August 1, 2010 at 9:51 am
RE:Alexej Buergin: (August 1, 2010 at 7:56 am) “O! say can you see that I have written something in brackets?”
I believe this system treats anything inside angle-brackets to be HTML tag coding, such as I used above to italicize the quoted text. Invalid or disallowed tags are ignored.”
It seems that in England both ( ) and [ ] are called brackets, in the US only [ ]. I used ( ) and they stayed there.

Alexej Buergin
August 2, 2010 2:15 am

The title of Goddard’s contribution is “crest to crest”. Most people agree that this is the best (or least bad) way to calculate a trend. February (maybe April) of the El Niño-year 1998 is the first peak, and March (maybe July) of the El Niño-2010 is the second one (UAH). So copy the monthly UAH-data into an Excel file, let it calculate the trend, then you get an indication of the warming during the last decade (or 12 years), and nobody can accuse you of cherry-picking.
Phil Jones must have used CRUtemp, of course, when he says there was no significant warming during the last 15 years.

August 2, 2010 3:16 am

Alexej Buergin says:
August 2, 2010 at 1:59 am
“It seems that in England both ( ) and [ ] are called brackets, in the US only [ ]. I used ( ) and they stayed there.”
In English English ( ) are properly called parentheses, [ ] are properly brackets, { } are properly braces, and < > are improper!
However, in common or lazy English English ( ) are often called brackets or round brackets, [ ] are often called square brackets, <> are called angle brackets, and { } are called … er … those wiggly thingies!

woodfortrees (Paul Clark)
August 2, 2010 5:05 am

Just to correct/clarify the use of “last” on WFT here. “last:N” takes the last N samples counted back from the current date – so, for example, last:120 gives the last ten years’ data measured from the time the viewer fetches the graph (not the time you first made it!). The combination of “from” and “last” doesn’t make sense, and only works here because 2010 is read as a huge number of samples, so it doesn’t limit the interval already set.
I think Steven meant to use “to:2010” in the initial graphs, which would fix the end-point even if the graph was re-viewed in 2011, 2020 or whatever. Remember that WFT uses decimal years and does not include the ‘to’ end-point: technically left-closed, right-open = [from,to). So “to:2010” gives December 2009 as the last sample. That means if you do from:2000/to:2010 you get the expected 120 samples.
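The [from, to) convention described above can be sketched in a few lines. This is an illustrative model of the behaviour as Paul describes it, not WFT’s actual code; the timestamp convention year + month/12 (January = .0, December = .9167) is an assumption for the sketch:

```python
# Sketch of WoodForTrees-style interval selection: monthly samples at
# year + month/12, with a left-closed, right-open window [from, to).

def monthly_samples(start_year, end_year, timestamps):
    """Return the decimal-year timestamps falling in [start_year, end_year)."""
    return [t for t in timestamps if start_year <= t < end_year]

# Twenty years of monthly timestamps on offer: 2000.0, 2000.0833, ...
timestamps = [2000 + k / 12 for k in range(240)]

# from:2000/to:2010 -> 120 samples, last one December 2009 (2009.9167)
window = monthly_samples(2000, 2010, timestamps)
```

With this convention from:2000/to:2010 yields exactly 120 samples and the 2010.0 (January 2010) sample is excluded, matching the behaviour described above.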
Icarus/Paul Birch: The OLS C++ code is available on the site – if you think there is a real problem there please contact me privately.
Cheers
Paul

woodfortrees (Paul Clark)
August 2, 2010 5:22 am

Apologies, it was “RW”, not “Icarus” who was discussing the OLS accuracy. I find it hard to remember names even usually, initials and pseudonyms doubly so!
After re-reading the comments above, I’m a bit confused what (if any) issues in WFT are being reported – can you clarify?

Alexej Buergin
August 2, 2010 5:24 am

” Icarus says:
August 1, 2010 at 9:25 am
Alexej Buergin: Why do you prefer the 1998 – 2010 data rather than the whole data set?”
This is what I tried to answer at 2:15 a.m.

Icarus
August 2, 2010 5:31 am

Alexej Buergin:
Is 12 years sufficient to determine a trend? Wouldn’t 30 years be better? Why or why not? What particular reason might you have for choosing 1998 – 2010 to determine a trend rather than 1980 – 2010?

August 2, 2010 5:37 am

Icarus says: [ … ]
Your questions have been answered in detail numerous times here. I suggest you read the WUWT archives for a few weeks to get up to speed on the subject.

Paul Birch
August 2, 2010 6:34 am

woodfortrees (Paul Clark) says:
August 2, 2010 at 5:22 am
“After re-reading the comments above, I’m a bit confused what (if any) issues in WFT are being reported – can you clarify?”
OK. Look at http://www.woodfortrees.org/plot/sine:10/from:1902.5/to:1912.501/trend/plot/sine:10/from:1902.5/to:1912.5/trend/plot/sine:10/to:1950
From 1902.5 to 1912.5 the trend should be zero, but comes out significantly negative (green line). Adding a teeny bit to the end time shouldn’t make much difference but obviously does (red line). It would appear that entering 1912.5 causes the last sample at 1912.5 to be lost. At best, this is confusing.
It also appears that the (series 1) red line vanishes if one of the later series overlays it. This is also confusing when one is playing around with the first series (well, it confused me!). It might be better to have the plotting priority the other way round (earlier series have preference).
There was also another case where the trend line did not appear, but the scale of the graph suddenly changed (presumably because the trend line would have stuck over the top), but unfortunately I’ve forgotten what parameters I used and can’t replicate it (mea culpa).

barry
August 2, 2010 8:25 am

Phil Jones must have used CRUtemp, of course, when he says there was no significant warming during the last 15 years.

That’s not what he said, that is what the press and skeptical blogsites erroneously reported he said.
He said that the warming between 1995 and 2009 was not statistically significant. A lot of people didn’t understand what that qualifier meant – basically that the confidence level of the trend was slightly less than 95%. It’s a warming trend, but there is a non-negligible chance that the trend could be zero.
By the end of the year, the temperatures since 1995 will have statistical significance. Then some eejit will say, “there has been no significant warming since 1996,” again confusing a statistical term with its common usage.

woodfortrees (Paul Clark)
August 2, 2010 8:26 am

Hi Paul,
Thanks for this. I think the problem comes – as you’ve realised – from the half-open interval, and whether the final sample is included or not. Entering exactly 1912.5 means that June 1912 (≈1912.42) is the last sample, so the interval falls just short of the full cycle that began at 1902.5 – hence the slight negative trend. 1912.501 includes the July 1912 sample at 1912.5, and the trend is then zero, as you’d expect.
I think you may be trying to make what is a system designed for discrete monthly data fit a pure mathematical ideal with infinite precision. OK, you could argue for a closed interval (so 1912.5 would be included) but for every benefit that brings, another gotcha springs up – for example, that from:2000/to:2010 would have 121 samples.
It’s also inevitable that lines plotted on top of each other will cover each other up. Last on top seems like the most logical given I had to choose…
I’m happy that you don’t think the OLS algorithm itself is bad, as originally claimed!
Cheers
Paul

Paul Birch
August 2, 2010 9:09 am

woodfortrees (Paul Clark) :
I hadn’t realised that the datapoints were monthly (though now you mention it, that makes sense). This makes the decimal data entry rather messy. I wonder if it might be better to use months there, i.e., 1900.0 (December 1899) to 1900.11 (November 1900). Then I would suggest using the end dates inclusively, but weighting the end samples so that there are still 12 samples a year. Thus 1900.0 – 1910.0 would give only half weight to the 1900.0 and 1910.0 samples, full weight to the rest. Whereas 1900.05 – 1910.05 (in the base-12 method) would give zero weight to 1900.0 and 1910.1, but full weight to 1900.1 and 1910.0 (and everything between).
Whatever you decide, I think you need to show clearly, somewhere on the working page, what conventions you’re using – and warn users about any traps they’re likely to fall into. You may already do that somewhere on site, but I couldn’t find it.

Alexej Buergin
August 2, 2010 10:33 am

” Icarus says:
August 2, 2010 at 5:31 am
Is 12 years sufficient to determine a trend? Wouldn’t 30 years be better? Why or why not? What particular reason might you have for choosing 1998 – 2010 to determine a trend rather than 1980 – 2010?”
12 years is long enough to calculate the trend for these 12 years if beginning and end are comparable. 30 years is not long enough to say something significant about climate.
If you have a sine with a period of 12 years, 12 years from crest to crest would be OK.
30 years could go from trough to crest or from crest to trough, and that is not OK.

Alexej Buergin
August 2, 2010 10:42 am

” barry says:
August 2, 2010 at 8:25 am
He said that the warming between 1995 and 2009 was not statistically significant. A lot of people didn’t understand what that qualifier meant – basically that the confidence level of the trend was slightly less than 95%. It’s a warming trend, but there is a non-negligible chance that the trend could be zero.”
A sociologist, a physicist and a mathematician went to France and saw a black sheep.
Sociologist: “The sheep in France are black”.
Physicist: “There is a black sheep in France”.
Barry the mathematician: “There is at least one sheep in France, which is black on at least one side”.

Spector
August 2, 2010 10:59 am

RE: Paul Birch: (August 2, 2010 at 9:09 am) “I hadn’t realised that the datapoints were monthly (though now you mention it, that makes sense). This makes the decimal data entry rather messy.”
Just for reference, the simple formula I use in Microsoft Excel to create decimal dates from year and month numbered data is:
Yeardate = [year] + ([month] – 0.5)/12
I usually do not try to calculate the exact mid-month date on the assumption that this precision is lost in the noise.
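Spector’s Excel formula translates directly into a one-line function (Python used here purely for illustration):

```python
def decimal_date(year, month):
    """Mid-month decimal date: year + (month - 0.5)/12, per the Excel
    formula above. month is 1-12."""
    return year + (month - 0.5) / 12

# June 1912 lands at about 1912.458; January 2000 at about 2000.042.
```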

barry
August 2, 2010 12:18 pm

Alexej, I have no idea what your analogy is supposed to mean, but here is the trend 1995 – 2009 based on HadCRUt.
http://www.woodfortrees.org/plot/hadcrut3vgl/from:1995/plot/hadcrut3vgl/from:1995/to:2010/trend
About 0.1C per decade – a stronger warming trend than that of the 20th century as a whole. In common terms, that is ‘significant’.
In mathematical terms, the time period just fails statistical significance – the confidence level for this plot is slightly less than 95%. That is what Jones was saying. By 2011, the trend from 1995 will achieve statistical significance, and unless there is a huge plummet in temps for the rest of this year, the trend will have increased a bit.
12 years is way too short to achieve statistical significance with respect to global temperature trends.
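The significance test barry is describing can be sketched with an OLS slope and its standard error. This is a simplified sketch: it assumes independent residuals and uses the normal critical value 1.96, whereas real analyses of temperature data use the t-distribution and correct for autocorrelation:

```python
import math

def trend_with_significance(ys, crit=1.96):
    """OLS slope per step, its standard error (assuming independent
    residuals), and whether the slope differs from zero at ~95%
    confidence. Simplified sketch only."""
    n = len(ys)
    ts = list(range(n))
    tbar = (n - 1) / 2
    ybar = sum(ys) / n
    sxx = sum((t - tbar) ** 2 for t in ts)
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) / sxx
    intercept = ybar - slope * tbar
    resid = [y - (intercept + slope * t) for t, y in zip(ts, ys)]
    s2 = sum(r * r for r in resid) / (n - 2)  # residual variance
    se = math.sqrt(s2 / sxx)                  # standard error of the slope
    tstat = slope / se if se > 0 else float("inf")
    return slope, se, abs(tstat) > crit

# A made-up series with a clear trend plus bounded "noise":
ys = [3 * k + (-1) ** k for k in range(100)]
slope, se, significant = trend_with_significance(ys)
```

The point of the qualifier “statistically significant” is exactly this ratio of slope to its standard error: a short record means a larger standard error, so the same slope can fall just below the 95% threshold.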

barry
August 2, 2010 10:26 pm

By lucky chance, the issue just being discussed has been blogged about.
http://www.skepticalscience.com/Has-Global-Warming-Stopped.html
And for those interested in the mathematics:
http://moregrumbinescience.blogspot.com/2009/01/results-on-deciding-trends.html
That site has a number of posts about the approach to working out statistically valid time periods for climate (20 – 30 years). It’s maths, not politics.

Alexej Buergin
August 3, 2010 12:55 am

” barry says:
August 2, 2010 at 12:18 pm
Alexej, I have no ideas what your analogy is supposed to mean”
It means that most people here know what “statistically significant” means, so you do not have to recite the whole definition every time. And that mathematicians are complicated people by nature.
And Goddard’s point is that one should start in 1998 (crest), not in 1995, anyway.

Icarus
August 3, 2010 6:00 am

Alexej Buergin:
We all know that 12 years is not long enough for a trend, for obvious reasons. If you want to do a ‘crest to crest’ trend then it makes much more sense to pick 1983 to 2010 which will give you a trend of 0.18C per decade.
Agreed?

AJ
August 3, 2010 10:53 am

Icarus says:
July 31, 2010 at 2:45 pm
……
The trouble is, there is around another 0.4 to 0.5C of warming in the pipeline due to the lag in ocean warming, just from ~390ppm of CO2, which takes us to 1.2C above pre-industrial.
……
The amount of unrealized temperature gain can be modeled as a bank account. We are depositing carbon forcings (delta ln(co2)) and nature is withdrawing from this unrealized account and transferring it to the realized account.
This model is known in financial terms as the accumulated value of a continuously compounding annuity with continuous payments. The accumulated value factor is: (exp(RT)-1)/R, where R is a rate (negative in this case) and T is time.
Now because R is negative, as T goes to infinity, the exp(RT) term goes to 0 and the accumulated value converges to -1/R.
To estimate R, I calculated that there is about an 80-day lag in SSTs from seasonal solar forcings. I’ll skip the math, but this gives R = -1.23 on a yearly basis. This means that there can be no more than 0.83 years’ worth of carbon deposits left in the unrealized temperature gain account. In effect, all CO2 deposits up to last year have been fully realized, and this would leave only one or two hundredths of a degree in the pipeline.
BTW, I’m not just making this up. The exp(RT) term was used by none other than Sir Isaac Newton himself in his “Law of Cooling” and has been confirmed by countless small-scale experiments. Does this scale up to the global level? Not sure, but I haven’t seen anything that refutes it. The 0.4 to 0.5C that you quote probably comes from the complicated computations hidden in the climate models.
Also, I have done the same exercise assuming a more realistic linearly increasing deposit rate, and the conclusion is the same. The amount in the pipeline converges to a very small amount. That is probably why Trenberth can’t find it.
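AJ’s accumulated-value factor is easy to check numerically. This just evaluates his formula with his figures; whether R = -1.23/yr is the right rate for the climate system is his assumption, not something established here:

```python
import math

def accumulated_value_factor(R, T):
    """Accumulated value of a continuously compounding annuity with
    continuous payments: (exp(R*T) - 1) / R."""
    return (math.exp(R * T) - 1) / R

R = -1.23  # per year; AJ's estimate from an ~80-day SST lag

# Because R is negative, exp(R*T) -> 0 as T grows, and the factor
# converges to -1/R ~ 0.81 years' worth of "deposits".
limit = -1 / R
```

So under this model at most about 0.8 years of carbon “deposits” can remain unrealized, which is the basis of AJ’s claim that only a hundredth or two of a degree is left in the pipeline.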
HTH,
AJ

sky
August 3, 2010 1:16 pm

The trend given by linear regression is strictly valid only if all the residuals are independently distributed random variates. Cyclical components in the time series (e.g., annual cycle) introduce strongly autocorrelated residuals and produce similar oscillatory behavior in the trend of any fixed length computed on a running basis. This plays havoc with intuitive notions of trend. The most effective way of eliminating such cyclical components is NOT to “anomalize” the series by subtracting an annual “norm” estimated from a pitifully short stretch of record (as is the case with JAXA data) but to do a running yearly average. This simply filters out the annual cycle and all of its harmonics without having to estimate them.
But even then, the indications obtained from short records remain highly tenuous, because of irregular multidecadal and longer oscillations. No record shorter than twice the longest such oscillation can provide a semblance of determining whatever SECULAR trend there may be in the underlying process. In most cases, records of such length are simply unavailable and all references to trend are limited to just that particular stretch of record.

sky
August 3, 2010 1:26 pm

AJ says:
August 3, 2010 at 10:53 am
“I’m not just making this up. The exp(RT) term was used by none other than Sir Issac Newton himself in his “Law of Cooling” and has been confirmed by countless small scale experiments. ”
Nicely done! The whole idea of “heat in the pipeline” comes from unrealistically long time constants attributed to the climate system and to the model calculations of TOA “energy imbalance.” Meanwhile, as ERBES revealed, with all sorts of data massaging being necessary to bring measured discrepancies between different satellites within 10W/m^2 of each other, the premise that the present balance is accurately known empirically is a myth.