Crowdsourcing a Temperature Trend Analysis

[Image: WB4 (Image Credit: WoodForTrees.com)]

By Werner Brozek, edited and with introduction by WUWT regular “Just The Facts”

Your help is needed in building a regular temperature trend analysis for WUWT. With much attention being focused on how much warming, or lack thereof, has occurred in Earth's recent past (1, 2, 3, 4), it seems worthwhile to establish a regular update that provides a comprehensive summary of the key temperature records and their associated trends. Fortunately, WUWT regular Werner Brozek has been compiling just such an update and posting it in comments on WUWT and Roy Spencer's website. We would therefore like to present an expanded version of Werner's analysis for your input and scrutiny before finalizing the content and form of these regular updates. Please review the following and let us know: whether it appears to be factually accurate, what you think of the layout and content, whether certain links should be images or images should instead be links, and any additional improvements that can be made. There are a few additional specific questions included in Werner's analysis below. Thank you for your input. JTF

Temperature Trend Analysis

By Werner Brozek

This analysis has three sections covering six data sets: GISS, Hadcrut3, Hadsst2, Hadcrut4, RSS and UAH.

Section 1 provides the furthest date in the past from which the slope is at least slightly negative.

Section 2 provides the longest time for which the warming is NOT significant at the 95% level.

Section 3 provides rankings of 2012 on the various data sets, assuming the year-to-date average anomaly holds for the rest of the year.

Section 1

This analysis uses the latest date for which data are available on WoodForTrees.com (WFT), and finds the furthest date in the past from which the slope is at least slightly negative. So if the slope from September is +4 x 10^-4 but -4 x 10^-4 from October, I give the time from October, so no one can accuse me of being less than honest when I say the slope is flat from a certain month.
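For readers who want to check this at home, here is a minimal sketch of that search, assuming the monthly anomalies for one data set are already loaded oldest-to-newest, and assuming WFT's trend is an ordinary least-squares fit (the function name and inputs are illustrative, not WFT's own code):

```python
import numpy as np

def earliest_flat_start(anomalies):
    """Index of the earliest start month from which the least-squares
    slope through the latest month is negative, or None if there is none."""
    y = np.asarray(anomalies, dtype=float)
    t = np.arange(len(y)) / 12.0  # time in years
    for start in range(len(y) - 2):  # need at least 3 points for a trend
        slope = np.polyfit(t[start:], y[start:], 1)[0]
        if slope < 0:
            return start  # scanning oldest-first, so the first hit is furthest back
    return None
```

Dividing the returned index difference by 12 gives the "X years, Y months" figures listed below.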

Across all data sets, the time for which the slope is at least very slightly negative ranges from 8 years and 3 months to an even 16 years.

1. UAH Troposphere Temperature: since October 2004 or 8 years, 3 months (goes to December)

2. NASA GISS Surface Temperature: since May 2001 or 11 years, 7 months (goes to November)

3. Wood For Trees Temperature Index: since December 2000 or 11 years, 9 months (goes to August)

4. Hadley Center (HadCrut3) Surface Temperature: since May 1997 or 15 years, 7 months (goes to November)

5. Hadley Center (HADSST2) Sea Surface Temperatures: since March 1997 or 15 years, 8 months (goes to October)

6. RSS Troposphere Temperature: since January 1997 or 16 years (goes to December) RSS is 192/204 or 94% of the way to Ben Santer’s 17 years.

7. Hadley Center (Hadcrut4) Surface Temperature: since December 2000 or an even 12 years (goes to November)

Here they are illustrated graphically:

[Image: WB2]

You can recreate the graph directly here.

Here is an alternate graphical illustration:

[Image: WB4]

You can recreate the graph directly here.

(Which of these illustrations do you prefer? Are they too cluttered to include in one graph? If so, how can we make this more user-friendly?)

Section 2

For this analysis, data were retrieved from SkepticalScience.com. This analysis indicates for how long the warming has not been significant at the 95% level on the various data sets.

For RSS the warming is NOT significant for 23 years.

For RSS: +0.130 +/- 0.136 C/decade at the two sigma level from 1990

For UAH, the warming is NOT significant for 19 years.

For UAH: 0.143 +/- 0.173 C/decade at the two sigma level from 1994

For Hadcrut3, the warming is NOT significant for 19 years.

For Hadcrut3: 0.098 +/- 0.113 C/decade at the two sigma level from 1994

For Hadcrut4, the warming is NOT significant for 18 years.

For Hadcrut4: 0.098 +/- 0.111 C/decade at the two sigma level from 1995

For GISS, the warming is NOT significant for 17 years.

For GISS: 0.113 +/- 0.122 C/decade at the two sigma level from 1996

(Note that we have concerns with using data from SkepticalScience.com, however we have not identified another source for this data. Does anyone know of a reliable alternative source where these data points can be readily accessed?)
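While alternative sources are being sought, reviewers can also compute such figures directly from any published anomaly series. Below is a minimal sketch assuming a plain least-squares fit with a white-noise error model; the SkS calculator additionally inflates the error for autocorrelation (see SRJ's comment in the thread below), so its intervals will be somewhat wider than this sketch's:

```python
# Minimal sketch: OLS decadal trend with a naive two-sigma error bar.
# Assumes `anomalies` is a list of monthly values, oldest to newest.
import numpy as np

def trend_with_2sigma(anomalies):
    y = np.asarray(anomalies, dtype=float)
    t = np.arange(len(y)) / 120.0  # months converted to decades
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    s2 = np.sum(resid**2) / (len(y) - 2)          # residual variance
    se = np.sqrt(s2 / np.sum((t - t.mean())**2))  # std. error of the slope
    return slope, 2.0 * se  # C/decade and two-sigma, white-noise assumption
```

The warming is "not significant" in the sense used above when the two-sigma interval returned here straddles zero.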

Section 3

This section provides the latest monthly anomalies for each data set, in order from January on; the bolded one is the highest for the year so far. I treat all months equally, adding all anomalies and dividing by the number of months. This should not make a difference to the relative ranking at the end of the year unless there is a virtual tie between two years. After giving the average anomaly so far, I say where the year would rank if the anomaly stayed that way for the rest of the year. I also show the warmest year on each data set, along with the warmest month ever recorded on each data set. Then I show the previous year's anomaly and rank.
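The bookkeeping is simple enough to sketch in a few lines; `annual_means`, a hypothetical input here, would map each past year to its full-year average anomaly for the data set in question:

```python
def ytd_rank(ytd_anomalies, annual_means):
    avg = sum(ytd_anomalies) / len(ytd_anomalies)  # all months weighted equally
    # rank = 1 + number of past years warmer than this year's running average
    rank = 1 + sum(1 for v in annual_means.values() if v > avg)
    return avg, rank

# Spot-check against the UAH paragraph below:
uah_2012 = [-0.134, -0.135, 0.051, 0.232, 0.179, 0.235,
            0.130, 0.208, 0.339, 0.333, 0.282, 0.202]
print(round(sum(uah_2012) / len(uah_2012), 2))  # 0.16
```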

The 2011 rankings for GISS, Hadcrut3, Hadsst2, and Hadcrut4 can be deduced through each linked source.

The latest rankings for UAH can be found here.

The rankings for RSS to the end of 2011 can be found here.  (Others may also be found here)

With the UAH anomaly for December at 0.202, the average for the twelve months of the year is (-0.134 -0.135 + 0.051 + 0.232 + 0.179 + 0.235 + 0.130 + 0.208 + 0.339 + 0.333 + 0.282 + 0.202)/12 = 0.16. This would rank 9th. 1998 was the warmest at 0.419. The highest ever monthly anomaly was in April of 1998 when it reached 0.66. The anomaly in 2011 was 0.130 and it will come in 10th.

With the GISS anomaly for November at 0.68, the average for the first eleven months of the year is (0.32 + 0.37 + 0.45 + 0.54 + 0.67 + 0.56 + 0.46 + 0.58 + 0.62 + 0.68 + 0.68)/11 = 0.54. This would rank 9th if it stayed this way. 2010 was the warmest at 0.63. The highest ever monthly anomalies were in March of 2002 and January of 2007, when each reached 0.89. The anomaly in 2011 was 0.514 and it will come in 10th assuming 2012 comes in 9th or warmer.

With the Hadcrut3 anomaly for November at 0.480, the average for the first eleven months of the year is (0.217 + 0.194 + 0.305 + 0.481 + 0.473 + 0.477 + 0.445 + 0.512 + 0.514 + 0.491 + 0.480)/11 = 0.417. This would rank 9th if it stayed this way. 1998 was the warmest at 0.548. The highest ever monthly anomaly was in February of 1998 when it reached 0.756. One has to go back to the 1940s to find the previous time that a Hadcrut3 record was not beaten in 10 years or less. The anomaly in 2011 was 0.340 and it will come in 13th.

With the Hadsst2 anomaly for October at 0.428, the average for the first ten months of the year is (0.203 + 0.230 + 0.241 + 0.292 + 0.339 + 0.351 + 0.385 + 0.440 + 0.449 + 0.428)/10 = 0.336. This would rank 9th if it stayed this way. 1998 was the warmest at 0.451. The highest ever monthly anomaly was in August of 1998 when it reached 0.555. The anomaly in 2011 was 0.273 and it will come in 13th.

With the RSS anomaly for November at 0.195, the average for the first eleven months of the year is (-0.060 -0.123 + 0.071 + 0.330 + 0.231 + 0.337 + 0.290 + 0.255 + 0.383 + 0.294 + 0.195)/11 = 0.200. This would rank 11th if it stayed this way. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. The anomaly in 2011 was 0.147 and it will come in 13th.

With the Hadcrut4 anomaly for November at 0.512, the average for the first eleven months of the year is (0.288 + 0.208 + 0.339 + 0.525 + 0.531 + 0.506 + 0.470 + 0.532 + 0.515 + 0.524 + 0.512)/11 = 0.45. This would rank 9th if it stayed this way. 2010 was the warmest at 0.54. The highest ever monthly anomaly was in January of 2007 when it reached 0.818. The anomaly in 2011 was 0.399 and it will come in 13th.

Here are the above month-to-month changes illustrated graphically:

[Image: WB1]

You can recreate the graph directly here.

Appendix

In addition to the layout above, we also considered providing a summary for each temperature record, as illustrated below for RSS. Please let us know if you find this format preferable to the category breakout above, and also please let us know if there are any additional analyses that might be valuable to incorporate.

RSS

1. With the RSS anomaly for November at 0.195, the average for the first eleven months of the year is (-0.060 -0.123 + 0.071 + 0.330 + 0.231 + 0.337 + 0.290 + 0.255 + 0.383 + 0.294 + 0.195)/11 = 0.200. This would rank 11th if it stayed this way. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. The anomaly in 2011 was 0.147 and it will come in 13th.

The rankings for RSS to the end of 2011 can be found here.

2. RSS has a flat slope since January 1997 or 16 years (goes to December). See:

[Image: WB3]

Recreate graph here.

3. For RSS the warming is NOT significant for 23 years.

For RSS: +0.130 +/- 0.136 C/decade at the two sigma level from 1990

See here.

Put in 1990 for the start date; put in 2013 for the end date; click the RSS button; then calculate.

About the Author: Werner Brozek was working on his metallurgical engineering degree using a slide rule when the first men landed on the moon. Now he enjoys playing with new toys such as the WFT graphs. Werner retired in 2011 after teaching high school physics and chemistry for 39 years.

Please let us know your thoughts and recommendations in comments below. Thanks Werner & Just The Facts

Werner Brozek

Thank you for all your work “justthefacts”!
A minor omission occurred with the UAH information. It should read:
With the UAH anomaly for December at 0.202, the average for the twelve months of the year is (-0.134 -0.135 + 0.051 + 0.232 + 0.179 + 0.235 + 0.130 + 0.208 + 0.339 + 0.333 + 0.282 + 0.202)/12 = 0.16. This would rank 9th. 1998 was the warmest at 0.419. The highest ever monthly anomaly was in April of 1998 when it reached 0.66. The anomaly in 2011 was 0.130 and it will come in 10th.

john in cheshire

Anthony, Happy New Year.
I am just a layman who reads your blog quite frequently. I don’t even profess to understand a lot of what is posted. But I think I understand the basics. If I purchased a weather station, here in Cheshire in the UK, would the data I collected be of use to you?

This is a good initiative, but in the compilation showing all data sets (woodfortrees) I would use the equivalent of one normal solar cycle of past data, i.e. is that 11 or 12 years?

Green Sand

Big subject, large area, lots of data. I have broken down my involvement to those that have responsibility within the UK, the Met Office (MO). So I produce the following:
http://i49.tinypic.com/b3oifn.jpg
I use the 30 year WMO "Standard" to be compliant, the 15 year as a logical split, and 10 years as a here-and-now indication.
The trend is your friend! The 30 year trend will continue in its reducing direction until the 10 and 15 year trends break up and through their longer friend.
I have to credit Henry (?) for the original chart idea.

Question: Why is CRN not included? I have long been interested in trends shown by the Climate Reference Network and how that network (which requires no adjustments) matches up with the other networks that require various adjustments.

Henry@crosspatch
can you give a link for CRN?

Werner Brozek

crosspatch says:
January 6, 2013 at 1:58 pm
Question: Why is CRN not included?
I have worked with what WFT has provided, and neither CRN nor NOAA is on there. Perhaps this is something that the creator of WFT and justthefacts can discuss.

Your help is needed in building a regular temperature trend analysis for WUWT.
The simple truth is that all measured 'global' temperature data sources exhibit the same temperature function over time, and because there is only one Earth it seems that the differences in the zero lines come from different defined time ranges.
The question of what the best temperature trend analysis is is not a question of science, but of politics, to tell the crowd up and down nonsense.
Scientific analysis of the temperature data is possible and can discriminate the terrestrial oscillations like ONI from the solar tide oscillations because of the different frequency bands. And this leads to the low frequency temperature components known as periods of little ice ages or warm ages as now.
http://volker-doormann.org/images/rss_uah_december_2012.gif
One can clean the global temperature from the ONI function to make the solar tide oscillations visible.
http://volker-doormann.org/images/oni_cleaned_rrs_temps.gif
This holds also for longer time ranges and can show the character of the natural oscillation frequencies without ‘trends’.
http://volker-doormann.org/images/ghi6_plus_temps.gif
V.

HenryP says:
January 6, 2013 at 2:12 pm
Henry@crosspatch
can you give a link for CRN?

Try this old article here:
http://wattsupwiththat.com/2012/08/08/an-incovenient-result-july-2012-not-a-record-breaker-according-to-the-new-noaancdc-national-climate-reference-network/

Having all the data in one place is good, but we still have the problem of everyone using different averaging periods.
Your chart (the "alternate graphical illustration") shows just how much variation there is – seven different sources, seven different trends. If they all have the same "zero", we're currently anywhere between 0.2 and 0.6 above "zero" (which, again, is based on locations, averaging period, type of measurement, addition of "extrapolations", etc.).
It's charts like that which the "climate scientists" are having a hard time trying to explain – which of the seven sources do they consider the most accurate, and why?
Naturally, they'll defend the one with the highest CURRENT anomaly – it makes things look much worse.

John West

I don’t know why, but I never thought of putting all the data sets on one graph. Seeing it above “Does Anybody Really Know What Time It Is?” popped in my head except temperature replaced time. LOL, I recommend keeping both graphics. Wonderful IMHO.

Gras Albert

JTF/WB
Excellent, WFT is a marvelous, marvelous resource but graphic presentation is not its strong point!
I prepared this graph from data extracted from WFT. It presents decadal trends in temperature anomaly increases/decreases since 1987, along with that of CO2. I thought the graph made the relationship between CO2 forcing and temperature change starkly obvious…
I’d be prepared to assist in coding automated graphs should you feel it would help. Should he feel it appropriate, JTF could perhaps integrate the graph in the comment rather than leave it as a link.

Bruce of Newcastle

One suggestion: add the ability to include a non-linear trend, such as a sinusoidal curve.
The reason I suggest this is the apparent 60-65 year oscillation in many climate datasets. This is only rarely put online, not least because it requires a commercial stats package (Excel can't do it), and also because it drives CAGW people nuts, for the good reason that it appears to explain about a third of the temperature 'rise' last century due to endpoint selection.
Roy Spencer for some time playfully added a polynomial fit to the UAH data, but unfortunately this opened a door to strawman criticism, as polynomials can only simulate oscillations for so long before they go off scale.


SRJ

There is no need to feel uncomfortable about the data from Skeptical Science. The trend calculation is just normal least squares, and the standard error is corrected as in the appendix of F&R. I have checked the results from the trend calculator on several occasions, and my results agree. E.g. for GISTEMP since 1996 the SkS trend calculator gives:
Trend: 0.113 ± 0.122 °C/decade (2σ)
My result is:
Trend: 0.116 ± 0.119 °C/decade (2σ)
The difference is most likely caused by a difference in the year range used for the autocorrelation calculation or by slightly different versions of the GISTEMP dataset.
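For anyone wanting to reproduce that check, here is a sketch of a common first-order version of the correction SRJ describes; note that the F&R appendix actually uses a fuller ARMA(1,1) treatment, so results will differ slightly from both SkS's numbers and this simplification's:

```python
import numpy as np

def ar1_corrected_se(residuals, naive_se):
    """Inflate a naive OLS slope standard error for lag-1 autocorrelation.
    Simple AR(1) effective-sample-size version, not F&R's full ARMA(1,1)."""
    r = np.corrcoef(residuals[:-1], residuals[1:])[0, 1]  # lag-1 autocorrelation
    # effective n shrinks by (1-r)/(1+r), so the error grows by the inverse root
    return naive_se * np.sqrt((1.0 + r) / (1.0 - r))
```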

pkatt

I challenge that any temp series is accurate to a tenth of a degree; I would also point out that bad data in makes for bad data out.

pkatt

I should add that I don't mean to be harsh, but if you put the same data on a chart with +5/-5 degrees off of the zero line, what you see is a little wiggle that doesn't even hit a degree. These charts look impressive, but quite a few folks do not realize the scale is just a little over 1 degree. Even with the "massive warming era", if you put it on a chart scaled so that most people can understand and relate to the values, it becomes a non-issue, because honestly, can you tell it's 0.2 degrees warmer or colder? Neither can they.

Streetcred

Interesting ensemble … that supports the view that the CO2 increase follows temperature increase.

climatebeagle

Maybe section 2 would be clearer with a table. Having many lists of these:
> For RSS the warming is NOT significant for 23 years.
> For RSS: +0.130 +/-0.136 C/decade at the two sigma level from 1990
makes it somewhat hard to read because of the repeated information.
Something like (but with better formatting):
Dataset   Years With No Significant Warming   Rate                      Start Year
RSS       23                                  +0.130 ±0.136 °C/decade   1990
UAH       19                                  +0.143 ±0.173 °C/decade   1994

David L. Hagen

Werner Brozek
Thanks Werner for your helpful work.
1) Show highest and lowest +/- 95% significant trend limits.
You already have the +/- trend limits at the +/- 95% extreme points at the beginning and end of the period.
I recommend adding dashed lines at the +/- significant trend limits.
2) Red noise adjusted trends.
See Lucia’s trend adjustments for red noise, then ARIMA.
See the analyses by Lucia Liljegren at The Blackboard under Data Comparisons.

Lance Wallace

@Crosspatch, Henry C., etc.
Re the CRN network, a link is provided in my guest post of August 30:
http://wattsupwiththat.com/2012/08/30/errors-in-estimating-temperatures-using-the-average-of-tmax-and-tmin-analysis-of-the-uscrn-temperature-stations/
The CRN is excellent for providing data from well-administered sites meeting all NOAA/WMO criteria. It will be useful in coming decades for establishing trends. However, at present the full network of 114 or so stations has only been operating for four years, so in my opinion it will not be useful for trend analysis until a number of additional years have gone by.

HR

Werner,
I had a go at summarizing your sections 1 and 2 into a graph. I only did it for HADCRUT4 as an example. It essentially shows what you show in those sections. The trend in HADCRUT4 goes negative in 2001 (blue line crosses zero) and stops being statistically different from zero in 1995 (red line crosses zero). But I think it also adds a bit more info: it shows how, as the time interval for the data series gets shorter, the confidence levels get much larger. I didn't plot the data after 2006 because they were so large they were making things look wonky. But they do highlight the caution required in trying to make any sort of call based on short time series.
I also had a go at plotting the expected temperature trends from the CMIP5 model mean on the same graph. That produced an interesting result, in that the trend in the model mean is outside the confidence range for the HADCRUT4 dataset for most of the time period (i.e. the black dotted line is above the green line). I only calculated the trends in the model mean from linear trends in the annual data; maybe there is a better way of doing it, but this might be another way of showing the models are running hot compared with the observations at the moment.
Here's the example graph I produced.
http://i49.tinypic.com/1t50gh.png

davidmhoffer

This is a great idea, looking forward to seeing what comes out of it. Some thoughts:
o perhaps a bar graph for each index showing the number of months/years of no statistical warming? I'd prefer that to the spaghetti graphs
o would it be possible to depict the model performance vs actual? Tough to do, because you have to pick an ensemble mean, otherwise the spaghetti gets even worse
o would be nice to see an easy way to compare to CO2 concentration over the same time period as well, maybe even something depicting theoretical forcing based on IPCC guesstimates, showing it rising commensurate with actual values while the temps wander off to nowhere

herkimer

JTF/WB and Werner Brozek
Great ideas and graphs. I personally found the decadal or 11 year moving average that Climate4You uses in their SUN section very informative. You should also track the decadal sunspot number [derived version as Leif proposes]. Showing HADSST2, HADCRUT3 and the derived solar sunspot number on the same plot going back to 1830 tells an interesting story. We currently have the global SST, global surface temperature and solar sunspot number graphs all going down like they did 1870-1910. The threshold for cooling appears to be a derived decadal solar sunspot number of about 48-50. Sometimes the global SST cycle and the sunspot number cycle are not in sync, like the 1950s when the solar cycle was peaking but the ocean cycle was cooling, and again in the 1980s-1990s, when the oceans were still warming but the sunspot number curve was declining. The sunspot cycle seems to have started to decline after the 1980s.

john robertson

Would it be possible to note the zero (assumed mean temperature) in degrees C, with error range, on these anomaly graphs, for an ignoramus such as myself?
I have difficulty discerning which mean is used on which anomaly graph, and hence remain unclear as to the magnitude of the change of temperature.

Ockham

Nice job Werner,
IMO, Section 3 would be more easily compared if in the form of a table. Each data set could be represented by a row. Column headings could be Rank, Warmest Year, Warmest Year Anomaly Value, Highest Monthly Anomaly month/year, HMA Value, 2011 Anomaly, 2011 Rank.

Why not use data from KNMI? See here: http://climexp.knmi.nl/start.cgi?someone@somewhere. Please note that KNMI is the official repository of daily data for all European countries.
Where the data is raw, i.e. European at least, they do not manipulate it like others do.
Certainly nothing from SkepticalScience is acceptable (it is a CAGW-biased blog and it is known to select data and manipulate it).

RACookPE1978

Lance Wallace says:
January 6, 2013 at 5:20 pm
@Crosspatch, Henry C., etc.

Re the CRN network, a link is provided in my guest post of August 30:
http://wattsupwiththat.com/2012/08/30/errors-in-estimating-temperatures-using-the-average-of-tmax-and-tmin-analysis-of-the-uscrn-temperature-stations/

Hmmmn.
No answers allowed on that post anymore, but I strongly recommend you run the analysis again looking for the min-max relationships not against latitude – which was present but only moderately strong – but against humidity or elevation. West TX and NM and AZ and mid California and the other "red dots" are over 3000 feet elevation. Humidity, night radiation, air clarity, etc. … all could affect the night and day readings differently than the low-level east, northeast, and Midwest. Chicago is what – only 250 feet elevation? Upstate and lakeside MN, WI, IL, etc. are not much higher. Upstate NY and PA and OH are all about the elevation of Lake Erie – 220 feet.
Mountains sure – PA is NOT flat – but where are the sensors?

michael hart

cementafriend,
KNMI may indeed be a reliable source of data, but I am not (yet) familiar enough with its background to trust it. I have found both sceptics and warmists to be OK with the lack of obvious bias on the WoodForTrees site. Applause.
(As a slight political aside, which I generally try to avoid: I was not sure whether KNMI was dependent on EU funding. I am pained to admit that my view of the EU has taken quite a reverse since the start of this millennium.)
Back on topic, and probably more important though, is the simple format of WFT. I go there for data and simple graphs, nothing else. My understanding is that Anthony Watts was receiving funding to set up a similar resource at the time of the Gleick/Heartland events, using data sourced and funded by the US. I still hope to see that (I can’t see why a reasonable person would object to it).

Werner Brozek

I would like to thank you for your comments so far. Justthefacts and I will go through them and decide what to do. There are many good ideas. I just wish I had the expertise to implement some of them, such as those from HR and others! We may well need to get another person involved to do things like graphs and tables via computer. I would just like to make some general responses from my perspective so far, in no particular order. Justthefacts may also decide which other correlations that have been mentioned could be looked at beyond what my focus has been.
I agree that sine waves are more accurate; however the statements by NOAA and Santer seem to imply that things are wrong if the linear trend is 0 for 15 or 17 years. If these are their goal posts, we have to use linear trends to show that Earth has scored a goal or is close to it. How could one even define a goal in terms of a sine wave?
As far as bar graphs are concerned, at least the graph with the 7 lines can be viewed as a sideways bar graph to see for how long various data sets show no trend. Mind you, the bars are as thin as lines. But for other bar graphs, I would need someone with good computer expertise.
As far as the "0" is concerned, that is a big problem. Just to illustrate, suppose a data set said the anomaly is 0.2. What would this mean? For GISS, it is lower than 25th; for UAH, it is in 7th place; for Hadcrut3, it is in 19th place, etc. So to put things into perspective for the present year, I give the ranking if a certain anomaly were to continue for the rest of the year. If we just put all the numbers into a bar graph, some bars would be very high but would mean little due to the baseline that was used.
I can do what WFT allows me to do, although there may be some tricks I have not figured out yet. I agree that the graph with 14 different things is not useful. Would it work if I made 7 graphs, one for each data set? For each data set, I would start from the point where the warming is not significant and draw a trend line from that point. Then on the same graph, I would draw a line for the longest period where the slope is 0. For RSS, it would look as follows:
http://www.woodfortrees.org/plot/rss/from:1990/plot/rss/from:1990/trend/plot/rss/from:1997/trend
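Building those seven links could even be automated. Here is a small sketch following the URL pattern in the link above; only the "rss" entry comes from that link, so other series names and years would need checking against WFT's menus:

```python
BASE = "http://www.woodfortrees.org/plot"

def wft_url(series, sig_start, flat_start):
    parts = [
        f"{series}/from:{sig_start}",         # raw monthly data
        f"{series}/from:{sig_start}/trend",   # trend from the "not significant" year
        f"{series}/from:{flat_start}/trend",  # trend from the zero-slope year
    ]
    return BASE + "/" + "/plot/".join(parts)

print(wft_url("rss", 1990, 1997))  # reproduces the link above
```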

I’ve been running something similar at home, but I didn’t realize there was such a big demand for it. I download the Hadcrut3, GISS, UAH, RSS, and NOAA monthly anomalies and graph them in a big spreadsheet chart on my 1920×1080 24″ LCD monitor. I run on linux, and use the “gnumeric” spreadsheet, saving in native gnumeric XML format. I can save to various Excel formats if anybody’s interested.
UAH seems to be a bit of an outlier in how short the zero/negative-trend period is. I also plot the 12-month running mean. The UAH 12-month running mean is always lower than the RSS 12-month running mean. Last December, that rule was broken for the first time in the history of those 2 data sets. An interim corrected version of UAH was released. The UAH 12-month running mean is now slightly below that of RSS, but there does seem to be a change approximately June 2011. Anything significant happen then?

davidmhoffer

Werner Brozek;
As far as bar graphs are concerned, at least the graph with the 7 lines can be viewed as a sideways bar graph to see for how long various data sets show no trend. Mind you, the bars are as thin as lines. But for other bar graphs. I would need someone with good computer expertise.
>>>>>>>>>>>>>>>
Werner, in section 1 you've already got the values calculated; all you need to do is type them into a spreadsheet and draw the graph. About 5 minutes if you are already familiar with Excel or similar. Also, if you look under the WFT graph, there's a clickable link to raw data. You can copy and paste that into a text document and then import it into Excel. A bit more complicated to learn, but once you know how it is done, pretty straightforward.
I can send you a couple of examples you can use as templates to get you going, if you'd like. Just contact me by email; the mods have my email address (I'd rather not post it).
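That pasted raw data can also be read programmatically. Here is a minimal sketch, assuming WFT's raw-data format of '#' comment lines followed by "decimal-year value" rows (worth verifying against an actual download):

```python
def parse_wft(text):
    """Parse pasted WFT raw-data text into (decimal_year, value) pairs."""
    data = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and WFT's comment/credit lines
        year, value = line.split()[:2]
        data.append((float(year), float(value)))
    return data
```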

davidmhoffer

Walter Dnes says:
January 6, 2013 at 7:51 pm
>>>>>>>>
Werner, I’m OK with Excel, better than most, but this guy sounds like a heavy hitter!

Werner Brozek

I just thought of an addition to my earlier comment. I added two lines to the graphic that I put up before to show the +/- parts of the 95% uncertainty. Does this look better?
http://www.woodfortrees.org/plot/rss/from:1990/plot/rss/from:1990/trend/plot/rss/from:1990/detrend:0.3128/trend/plot/rss/from:1990/detrend:-0.3128/trend/plot/rss/from:1997/trend
P.S. Thank you David and Walter. Let us see what JTF says. Please check back in 24 hours as neither of us may be available for a little while.
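For anyone wondering where the 0.3128 in those detrend terms comes from: it appears to be the two-sigma trend uncertainty from Section 2 converted from a per-decade rate into a total change over the plotted period, since WFT's detrend option tilts a series by a fixed total amount. On that reading, 0.136 C/decade x 2.3 decades (January 1990 through December 2012) = 0.3128 C, which is why the two added trend lines fan out to the upper and lower 95% limits.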

Henry@Werner
remember that I predict from my own data set that temps. will drop by 0.3 K in the next 8 years or so.
It appears that the average length of one solar cycle is 10.66 years.
So I would plot my graph accordingly, i.e. 11 years (from 2002), and every month you dump one month at the beginning and you add one month at the end. This will show the increase in cooling that we can expect.

Baa Humbug

John West says:
January 6, 2013 at 3:01 pm
I don’t know why, but I never thought of putting all the data sets on one graph. Seeing it above “Does Anybody Really Know What Time It Is?” popped in my head except temperature replaced time.

You need to decide which data sets to reject. But be careful, it’s the data sets that John West rejects, that make John West the best. (sorry, had to do it)
But seriously, this is good work. my appreciation to werner and ‘facts’.
This may have the unintended consequence of alarming the alarmists as the months go by.

Gras Albert

JTF
Sure. It is an interesting graphic, but the time-frame seems arbitrary. What does it look like with data through present?
Hmmmm, perhaps the graph is not as clear as I thought, 🙂
It's a plot of 'ten year' trends, i.e. the oldest decadal trend starts in 1987 and finishes in 1996, and the most recent starts in 2003 and finishes in 2012, so it is current through 2012. The graph shows that the rate of warming per decade peaked in 1992 and has decreased every decade since, with all but UAH now showing decadal cooling…

Streetcred says:
January 6, 2013 at 4:41 pm
Interesting ensemble … that supports the view that the CO2 increase follows temperature increase.

There is an alternative view that supports the view that the CO2 decrease follows temperature:
http://www.volker-doormann.org/images/down3.gif
V.

Philip Shehan

Bruce of Newcastle says:
January 6, 2013 at 3:22 pm
One suggestion: add the ability to include a non-linear trend, such as a sinusoidal curve…
A great idea. Other simple nonlinear functions, such as polynomial or exponential functions, would also be very useful.
There is currently a discussion in another section ("AGW Bombshell? A new paper shows statistical tests for global warming fails to find statistically significant anthropogenic forcing", posted on January 3, 2013 by Anthony Watts) centering on the claim by the authors of the paper that, on "informal" inspection, the temperature data from 1880 to 2007 shown in panel c of figure 1 is more stable than the curves for the greenhouse gas emissions.
I contend that this is merely a matter of difference in the y-scaling of the data sets. The temperature data is taken from NASA GISS global temperature (meteorological stations), presented here with a less compressed y scaling, where informal examination shows an increase in the rate of warming over the period.
http://data.giss.nasa.gov/gistemp/graphs_v3/
The impression is confirmed by a simple nonlinear fit to a data set comprised of an average of 10 data sets, including the GISS data:
http://www.skepticalscience.com/pics/AMTI.png
Although the data sets are effectively the same, some have objected to the "provenance" of the nonlinear data fit, preferring individual Wood for Trees data sets with less well fitting linear plots. The capacity to fit plots with different functions, giving R2 correlation coefficients, would be most useful.

Gail Combs

Gras Albert says: @ January 6, 2013 at 3:08 pm
…..I prepared this graph from data extracted from WFT, it presents decadal trends in temperature anomaly increases/decreases since 1987 along with that of CO2, I thought the graph made the relationship between CO2 forcing and temperature change starkly obvious….
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>…
My husband looked at your graph and said you forgot to apply the 'Mann' Tiljander flip to turn the temperature chart upside down.

John West

Baa Humbug says:
“it’s the data sets that John West rejects, that make John West the best. “
LOL!
— just in case people aren’t following, it’s a John West Salmon slogan except with fish instead of data, of course.

jlurtz

I know it is necessary to do this! But all of the models are broken, since none of them incorporates the effects of the Sun. Some might use TSI data, but what about the known effects caused by the "unknown mechanism {stated by Leif}"?
We could use this information as a base line of “previously implemented Theories”. How do we move on to get “better models”?

Folks,
OK, I consider myself chivvied 🙂
First up, WTI has now been updated to use UAH 5.5 (that’s why it was stuck in August, because 5.4 stops then).
For adding CRN / NOAA and anything else, I need:
– URL to data source file in moderately sane text format (I’m used to reading most format horrors now, but basically it needs monthly anomaly data)
– Credit information as in the current credits page
If people can find these details and either post here or mail me it makes it much easier to add rather than a vague wishlist.
Best wishes
Paul

As for new fit functions, trend significance etc. – yes, it would all be nice, but above both my stats skills and available time! If anyone would like to help, the core code is in the ‘analyse’ source package on the site; everything else is just URL munging and gnuplot plumbing. To add a new function you simply need to add a new command line option to ‘analyse’ – hopefully the code is trivial enough to make this obvious.
… and therein lies the rub. I want to keep ‘analyse’ easy to build, understand and verify, which means using only primitive ISO C++ maths, not fancy external stats libraries that I don’t understand and would make it hard for others to build it. But if anyone wants to try just download the source, play with it and send me a patch (or complete update). It’s all in ‘git’ this end so be brave… Indeed if multiple people want to play I’ll put it up on github as well.
Cheers
Paul

If Werner Brozek would like to contact me via WUWT I should very much like to discuss this project with him. It is a first-class idea, and badly needed. But it needs to be well executed. It is particularly important to provide a single, bold, accurate, visually-clear image that will reproduce well on TV and in the news media. That image should include the following elements:
A short headline stating the main point discernible from the graph (e.g. “No global warming for 18 years 7 months”).
Temperature on the y axis, years on the x axis. The period should be the longest period for which the combined mean of the RSS and UAH monthly global mean lower-troposphere temperature anomalies exhibits a trend statistically indistinguishable from zero at 95% confidence (i.e., the warming, if any, should not exceed 0.05 K over the period).
A substrate showing the predicted global warming interval and central estimate out to 2015 according to as many of the IPCC reports 1-5 as we can accommodate without confusion.
The actual monthly anomalies, taken as the mean of the anomalies on the UAH and RSS satellite-temperature datasets.
The least-squares linear-regression trend line on the anomalies.
A short description of how the graph was compiled, the data sources, and a brief description of the statistical technique used.
This single graph will do more to wake up the world to the failure of climate extremism than anything else. I can provide camera-ready artwork automatically generated by PowerBasic, which has a powerful graphics interface and can easily process the data to make the graph look good.
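For discussion purposes, here is a rough sketch of the kind of single graph described above, in Python rather than PowerBasic. It assumes `uah` and `rss` are equal-length monthly anomaly lists starting at `start_year`; their different baselines are glossed over here (a real version would re-baseline one series), the headline text is only a placeholder, and the IPCC projection band the comment calls for is omitted:

```python
import numpy as np
import matplotlib.pyplot as plt

def headline_graph(uah, rss, start_year):
    mean = (np.asarray(uah, dtype=float) + np.asarray(rss, dtype=float)) / 2.0
    t = start_year + np.arange(len(mean)) / 12.0   # decimal years
    slope, intercept = np.polyfit(t, mean, 1)      # least-squares trend
    plt.plot(t, mean, lw=0.8, label="Mean of UAH and RSS monthly anomalies")
    plt.plot(t, slope * t + intercept, lw=2,
             label=f"Least-squares trend: {slope * 10:+.3f} C/decade")
    plt.title("Example headline goes here")  # e.g. "No global warming for X years Y months"
    plt.xlabel("Year")
    plt.ylabel("Anomaly (C)")
    plt.legend()
    plt.show()
```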