Has Global Warming Stalled?

Guest Post By Werner Brozek, Edited By Just The Facts

In order to answer the question in the title, we need to know what time period is a reasonable period to take into consideration. As well, we need to know exactly what we mean by “stalled”. For example, do we mean that the slope of the temperature-time graph must be 0 in order to be able to claim that global warming has stalled? Or do we mean that we have to be at least 95% certain that there indeed has been warming over a given period?

With regards to what a suitable time period is, NOAA says the following:

”The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”

To verify this for yourself, see page 23 of this NOAA Climate Assessment.

Below we present you with just the facts and then you can assess whether or not global warming has stalled in a significant manner. The information will be presented in three sections and an appendix. The first section will show for how long there has been no warming on several data sets. The second section will show for how long there has been no significant warming on several data sets. The third section will show how 2012 ended up in comparison to other years. The appendix will illustrate sections 1 and 2 in a different way. Graphs and tables will be used to illustrate the data.

Section 1

This analysis uses the latest month for which data is available on WoodForTrees.org (WFT). (If any data is updated after this report is sent off, I will note the updates in the comments for this post.) All of the data on WFT is also available at the specific sources as outlined below. We start with the present date and go to the furthest month in the past where the slope is at least slightly negative. So if the slope from September is +4 x 10^-4 but is -4 x 10^-4 from October, we give the time from October, so no one can accuse us of being less than honest when we say the slope is flat from a certain month.
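To make the procedure concrete, here is a minimal Python sketch of that rule, assuming you have already downloaded one of the monthly anomaly series as a plain array (oldest value first); the function name and the 24-month minimum window are illustrative choices, not part of the WFT tool:

```python
import numpy as np

def flat_since(anomalies, min_months=24):
    """Index of the furthest-back month from which the least-squares
    slope to the present is zero or at least slightly negative.

    `anomalies` is a 1-D array of monthly anomalies, oldest value first.
    Scanning forward from the oldest possible start month, the first
    start with a non-positive slope is returned, matching the rule above.
    """
    n = len(anomalies)
    for start in range(n - min_months + 1):
        y = anomalies[start:]
        x = np.arange(len(y))
        slope = np.polyfit(x, y, 1)[0]      # trend in degrees C per month
        if slope <= 0.0:
            return start
    return None                             # no flat period of min_months or more
```

The returned index can then be converted into the "years and months" figures quoted below.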

On all data sets below, the different times for a slope that is at least very slightly negative range from 8 years and 3 months to 16 years and 1 month:

1. For GISS, the slope is flat since May 2001 or 11 years, 7 months. (goes to November)

2. For Hadcrut3, the slope is flat since May 1997 or 15 years, 7 months. (goes to November)

3. For a combination of GISS, Hadcrut3, UAH and RSS, the slope is flat since December 2000 or an even 12 years. (goes to November)

4. For Hadcrut4, the slope is flat since November 2000 or 12 years, 2 months. (goes to December.)

5. For Hadsst2, the slope is flat since March 1997 or 15 years, 10 months. (goes to December)

6. For UAH, the slope is flat since October 2004 or 8 years, 3 months. (goes to December)

7. For RSS, the slope is flat since January 1997 or 16 years and 1 month. (goes to January) RSS is 193/204 or 94.6% of the way to Ben Santer’s 17 years.

The following graph, also used as the header for this article, shows just the lines to illustrate the above. Think of it as a sideways bar graph where the lengths of the lines indicate the relative times where the slope is 0. In addition, the sloped wiggly line shows how CO2 has increased over this period:

The next graph shows the above, but this time, the actual plotted points are shown along with the slope lines and the CO2 is omitted:

WoodForTrees.org – Paul Clark – Click the pic to view at source

Section 2

For this analysis, data was retrieved from WoodForTrees.org and the ironically named SkepticalScience.com. This analysis indicates how long there has not been significant warming at the 95% level on various data sets. The first number in each case was sourced from WFT; however, the second +/- number was taken from SkepticalScience.com.

For RSS the warming is not significant for over 23 years.

For RSS: +0.127 +/-0.136 C/decade at the two sigma level from 1990

For UAH, the warming is not significant for over 19 years.

For UAH: 0.143 +/- 0.173 C/decade at the two sigma level from 1994

For Hadcrut3, the warming is not significant for over 19 years.

For Hadcrut3: 0.098 +/- 0.113 C/decade at the two sigma level from 1994

For Hadcrut4, the warming is not significant for over 18 years.

For Hadcrut4: 0.095 +/- 0.111 C/decade at the two sigma level from 1995

For GISS, the warming is not significant for over 17 years.

For GISS: 0.116 +/- 0.122 C/decade at the two sigma level from 1996

If you want to know the times to the nearest month that the warming is not significant for each set, they are as follows: RSS since September 1989; UAH since April 1993; Hadcrut3 since September 1993; Hadcrut4 since August 1994; GISS since October 1995 and NOAA since June 1994.
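If you would rather check the +/- figures yourself than read them off the SkepticalScience calculator, the following is a rough sketch of an ordinary least-squares trend in C/decade with a plain two-sigma interval. It is only an approximation: the SkepticalScience calculator also corrects the uncertainty for autocorrelation (see the comments on this post), so its intervals are somewhat wider than plain OLS errors.

```python
import numpy as np
from scipy import stats

def trend_two_sigma(anomalies):
    """OLS trend of a monthly series in C/decade with a plain 2-sigma interval.

    `anomalies` is a 1-D array of monthly anomalies, oldest first.
    No autocorrelation correction is applied here.
    """
    x = np.arange(len(anomalies))
    fit = stats.linregress(x, anomalies)
    months_per_decade = 120.0
    return fit.slope * months_per_decade, 2.0 * fit.stderr * months_per_decade

# "Not significant at the 95% level" in the sense used above means
# trend - ci < 0, i.e. the two-sigma interval still includes zero.
```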

Section 3

This section shows data about 2012 in the form of tables. Each table shows the six data sources along the left, namely UAH, RSS, Hadcrut4, Hadcrut3, Hadsst2, and GISS. Along the top are the following:

1. 2012. Below this, I indicate the present rank for 2012 on each data set.

2. Anom 1. Here I give the average anomaly for 2012.

3. Warm. This indicates the warmest year on record so far for that particular data set. Note that two of the data sets have 2010 as the warmest year and four have 1998 as the warmest year.

4. Anom 2. This is the average anomaly of the warmest year just to its left.

5. Month. This is the month where that particular data set showed the highest anomaly. The months are identified by the first two letters of the month and the last two numbers of the year.

6. Anom 3. This is the anomaly of the month immediately to the left.

7. 11ano. This is the average anomaly for the year 2011. (GISS and UAH were 10th warmest in 2011. All others were 13th warmest for 2011.)

Anomalies for different years:

Source 2012 Anom1 Warm Anom2 Month Anom3 11ano
UAH 9th 0.161 1998 0.419 Ap98 0.66 0.130
RSS 11th 0.192 1998 0.55 Ap98 0.857 0.147
Had4 10th 0.436 2010 0.54 Ja07 0.818 0.399
Had3 10th 0.403 1998 0.548 Fe98 0.756 0.340
sst2 8th 0.342 1998 0.451 Au98 0.555 0.273
GISS 9th 0.56 2010 0.66 Ja07 0.93 0.54

If you wish to verify all rankings, go to the following:

For UAH, see here, for RSS see here and for Hadcrut4, see here. Note the number opposite the 2012 at the bottom. Then going up to 1998, you will find that there are 9 numbers above this number. That confirms that 2012 is in 10th place. (By the way, 2001 came in at 0.433 or only 0.001 less than 0.434 for 2012, so statistically, you could say these two years are tied.)

For Hadcrut3, see here. You have to do something similar to Hadcrut4, but look at the numbers at the far right. One has to go back to the 1940s to find the previous time that a Hadcrut3 record was not beaten in 10 years or less.

For Hadsst2, see here. View as for Hadcrut3. It came in 8th place with an average anomaly of 0.342, narrowly beating 2006 by 2/1000 of a degree as that came in at 0.340. In my ranking, I did not consider error bars, however 2006 and 2012 would statistically be a tie for all intents and purposes.

For GISS, see here. Check the J-D (January to December) average and then check to see how often that number is exceeded back to 1998.
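The same ranking check can be scripted once the annual (J-D) means are extracted from the pages linked above; a minimal sketch, with the function and the short dictionary of values below being illustrative only:

```python
def rank_of_year(annual_means, year):
    """Rank `year` by its annual (J-D) mean anomaly, 1 = warmest.

    `annual_means` maps year -> annual mean anomaly. Error bars are
    ignored, as in the rankings above, so near-ties such as 2012 and
    2001 on Hadcrut4 still count as distinct places.
    """
    ordered = sorted(annual_means, key=annual_means.get, reverse=True)
    return ordered.index(year) + 1

# Illustrative call using values quoted in this post (a real check needs every year):
# rank_of_year({2010: 0.540, 2012: 0.436, 2001: 0.433, 2011: 0.399}, 2012)
```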

For the next two tables, we again have the same six data sets, but this time the anomaly for each month is shown. [The table is split in half to fit; if you know how to compress it to fit the year, please let us know in the comments.] The last column has the average of all points to the left.

Source Jan Feb Mar Apr May Jun
UAH -0.134 -0.135 0.051 0.232 0.179 0.235
RSS -0.060 -0.123 0.071 0.330 0.231 0.337
Had4 0.288 0.208 0.339 0.525 0.531 0.506
Had3 0.206 0.186 0.290 0.499 0.483 0.482
sst2 0.203 0.230 0.241 0.292 0.339 0.352
GISS 0.36 0.39 0.49 0.60 0.70 0.59
Source Jul Aug Sep Oct Nov Dec Avg
UAH 0.130 0.208 0.339 0.333 0.282 0.202 0.161
RSS 0.290 0.254 0.383 0.294 0.195 0.101 0.192
Had4 0.470 0.532 0.515 0.527 0.518 0.269 0.434
Had3 0.445 0.513 0.514 0.499 0.482 0.233 0.403
sst2 0.385 0.440 0.449 0.432 0.399 0.342 0.342
GISS 0.51 0.57 0.66 0.70 0.68 0.44 0.56

To see the above in the form of a graph, see the WFT graph below:

Appendix

In this part, we are summarizing data for each set separately.

RSS

The slope is flat since January 1997 or 16 years and 1 month. (goes to January) RSS is 193/204 or 94.6% of the way to Ben Santer’s 17 years.

For RSS the warming is not significant for over 23 years.

For RSS: +0.127 +/-0.136 C/decade at the two sigma level from 1990.

For RSS, the average anomaly for 2012 is 0.192. This would rank 11th. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. The anomaly in 2011 was 0.147 and it will come in 13th.

Following are two graphs via WFT. Both show all plotted points for RSS since 1990. Then two lines are shown on the first graph. The first upward sloping line is the line from where warming is not significant at the 95% confidence level. The second straight line shows the point from where the slope is flat.

The second graph shows the above, but in addition, there are two extra lines. These show the upper and lower lines for the 95% confidence limits. Note that the lower line is almost horizontal but slopes slightly downward. This indicates that there is a slightly greater than 5% chance that cooling has occurred since 1990 according to RSS, per graph 1 and graph 2.

UAH

The slope is flat since October 2004 or 8 years, 3 months. (goes to December)

For UAH, the warming is not significant for over 19 years.

For UAH: 0.143 +/- 0.173 C/decade at the two sigma level from 1994

For UAH the average anomaly for 2012 is 0.161. This would rank 9th. 1998 was the warmest at 0.419. The highest ever monthly anomaly was in April of 1998 when it reached 0.66. The anomaly in 2011 was 0.130 and it will come in 10th.

Following are two graphs via WFT. Everything is as described for RSS except that the lines apply to UAH. Graph 1 and graph 2.

Hadcrut4

The slope is flat since November 2000 or 12 years, 2 months. (goes to December.)

For Hadcrut4, the warming is not significant for over 18 years.

For Hadcrut4: 0.095 +/- 0.111 C/decade at the two sigma level from 1995

With Hadcrut4, the anomaly for 2012 is 0.436. This would rank 10th. 2010 was the warmest at 0.54. The highest ever monthly anomaly was in January of 2007 when it reached 0.818. The anomaly in 2011 was 0.399 and it will come in 13th.

Following are two graphs via WFT. Everything is as described for RSS except that the lines apply to Hadcrut4. Graph 1 and graph 2.

Hadcrut3

The slope is flat since May 1997 or 15 years, 7 months (goes to November)

For Hadcrut3, the warming is not significant for over 19 years.

For Hadcrut3: 0.098 +/- 0.113 C/decade at the two sigma level from 1994

With Hadcrut3, the anomaly for 2012 is 0.403. This would rank 10th. 1998 was the warmest at 0.548. The highest ever monthly anomaly was in February of 1998 when it reached 0.756. One has to go back to the 1940s to find the previous time that a Hadcrut3 record was not beaten in 10 years or less. The anomaly in 2011 was 0.340 and it will come in 13th.

Following are two graphs via WFT. Everything is as described for RSS except that the lines apply to Hadcrut3. Graph 1 and graph 2.

Hadsst2

The slope is flat since March 1997 or 15 years, 10 months. (goes to December)

The Hadsst2 anomaly for 2012 is 0.342. This would rank 8th. 1998 was the warmest at 0.451. The highest ever monthly anomaly was in August of 1998 when it reached 0.555. The anomaly in 2011 was 0.273 and it will come in 13th.

Sorry! The only graph available for Hadsst2 is this.

GISS

The slope is flat since May 2001 or 11 years, 7 months. (goes to November)

For GISS, the warming is not significant for over 17 years.

For GISS: 0.116 +/- 0.122 C/decade at the two sigma level from 1996

The GISS anomaly for 2012 is 0.56. This would rank 9th. 2010 was the warmest at 0.66. The highest ever monthly anomaly was in January of 2007 when it reached 0.93. The anomaly in 2011 was 0.54 and it will come in 10th.

Following are two graphs via WFT. Everything is as described for RSS except that the lines apply to GISS. Graph 1 and graph 2.

Conclusion

Above, various facts have been presented along with the sources from which they were obtained. Keep in mind that no one is entitled to their own facts; it is only in the interpretation of the facts that legitimate discussion can take place. After looking at the above facts, do you think that we should spend billions to prevent catastrophic warming? Or do you think that we should take a “wait and see” attitude for a few years to be sure that future warming will be as catastrophic as some claim it will be? Keep in mind that even the Met Office felt the need to revise its forecasts, and that it believes the 1998 mark will be beaten by 2017. Look at the following graph. Do you agree?

WoodForTrees.org – Paul Clark – Click the pic to view at source
Comments
Werner Brozek
February 10, 2013 2:06 pm

An error seems to have been made with all three tables. The new ones should have been put in with all December numbers up to date and with Hadcrut3 and 4 in 10th place.

February 10, 2013 2:16 pm

Fitting a straight line to time series data assumes that the data generating process is linear.
We are quite sure that the underlying process is not linear. The imposition of a linear model is an analyst choice. This choice has an associated uncertainty such that you must either demonstrate that the physical process is linear or add uncertainty due to your model selection.
next, you need to look at autocorrelation before you make confidence intervals.
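One common way of doing what this comment asks for is to inflate the ordinary least-squares error by an effective-sample-size factor based on the lag-1 autocorrelation of the residuals (the kind of AR(1) correction used, for example, by the SkepticalScience trend calculator). A rough Python sketch, not the exact recipe of any particular tool:

```python
import numpy as np
from scipy import stats

def ar1_corrected_trend(y):
    """OLS trend with a 2-sigma interval widened for lag-1 autocorrelation.

    The standard error is inflated by sqrt((1 + r1) / (1 - r1)), where r1
    is the lag-1 autocorrelation of the detrended residuals (an effective
    sample-size argument).
    """
    y = np.asarray(y, dtype=float)
    x = np.arange(len(y))
    fit = stats.linregress(x, y)
    resid = y - (fit.intercept + fit.slope * x)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    inflation = np.sqrt((1.0 + r1) / (1.0 - r1))
    return fit.slope, 2.0 * fit.stderr * inflation
```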

February 10, 2013 2:19 pm

Well, I’m going to start with the premise that maybe global warming hasn’t stalled. It has stalled, but let’s just say that it hasn’t.
So what? The point is that we are recovering from the Little Ice Age, and the warming has been the product of nature, not of a theorized CO2 GHE. All they have actually is a theoretical model to back their conception of CO2’s GHE. No evidence. And other seemingly equally valid theoretical models do not lead CO2 to raise climate temps; for example, one model maintains that the GHE of CO2 is effectively nil after 200ppm.
The IPCC actually maintained that they had evidence of a causal correlation between CO2 & temperature until around 2004, when they were forced to acknowledge that there was no evidence. None. Nevertheless, Al Gore went ahead in his 2005 movie deceptively suggesting there was evidence. Because by and large the people don’t actually know about Al Gore & the IPCC’s deception on CO2, I ask that if possible you share and help spread the word about this key M4GW produced 3 minute video that shows algor’s blatant deception on CO2: http://www.youtube.com/watch?v=WK_WyvfcJyg&info=GGWarmingSwindle_CO2Lag

February 10, 2013 2:22 pm

Werner/JTF, you also need to place a breakpoint somewhere near the start of the article. This takes up a lot of the WUWT front page.

CoRev
February 10, 2013 2:27 pm

Werner and JTFW, thank you.

Werner Brozek
February 10, 2013 2:29 pm

Just above the Appendix should be this URL:
To see the above in the form of a graph, see the WFT graph below.
http://www.woodfortrees.org/plot/wti/from:2012/plot/gistemp/from:2012/plot/uah/from:2012/plot/rss/from:2012/plot/hadsst2gl/from:2012/plot/hadcrut4gl/from:2012/plot/hadcrut3gl/from:2012
Appendix
As well, the very last one at the end of the conclusion should have been:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1990/mean:12/offset:-0.16

February 10, 2013 2:33 pm

Climate is always warming or cooling. It does tend to vary within some range for a time, though, unless something happens where the entire variation jumps to a new regime. The temperatures of the late 1990’s and early 2000’s for example topped out at about where they were in the 1930’s and 1940’s. There was really no “warming” in the second half of the 20th century. Temperatures cooled until the late 1970’s and then warmed back to where they were in the 1930’s.
Since the Dalton Minimum we have had the longest stretch of strong solar cycles since the Medieval Warm Period without a grand minimum. We also see the warmest temperatures since the Medieval Warm Period. If we are currently experiencing a period of several weak cycles, we might be in for a regime change as far as climate goes but on top of that regime we may still see a cycle of variation both warmer and cooler as we did in the LIA. Not all periods during the LIA were as cold as other periods.
What bothers me most about all of this “global warming” hype is that a brief portion of the temperature record was compared to CO2 change and causation was declared while at the same time ignoring previous recorded temperature changes where CO2 was not a factor (say 1912 to 1935).

Climate Control Central Command ('Data' and 'Observations' Secretariat)
February 10, 2013 2:39 pm

Dear Supporter
It seems that the temperature graphs are capable of misinterpretation by non-believers. This has led to unfortunate and counter-productive questions being raised by those who wish harm to The Cause. The tentacles of the Big Oil Funded Denier Conspiracy run deep.
We would like to assure all our supporters that we will leave no stone unturned in our quest to eliminate such disbelief.
As a first step, we have appointed a high-powered Adjustment Task Force. Their remit will be to understand in depth the root causes of such misinterpretation and make such data adjustments as are necessary to prevent their recurrence. They are taking guidance in this matter from the Modelling Secretariat to ensure that correct interpretations will then be made.
Secondly they will issue new guidelines on acceptable data for publication. ‘Observers’ please note…data falling outside of the expected ranges (as determined by the Modelling Secretariat) will be rejected.
Please delete this e-mail on receipt. Our colleagues in Norwich England had some difficulties by not doing so, so we urge you to double check.

Jeff Alberts
February 10, 2013 2:55 pm

What’s the slope since Jan 1 1000 CE? 1 CE?

Werner Brozek
February 10, 2013 2:59 pm

OOPs!
Appendix
As well, the very last one at the end of the conclusion should have been:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1990/mean:12/offset:-0.16
Corrected.
The very last one is the above going from 1990.

DesertYote
February 10, 2013 3:04 pm

Climate Control Central Command (‘Data’ and ‘Observations’ Secretariat)
February 10, 2013 at 2:39 pm
###
Do you write for a living? If not, you should. Your post was brilliant.

Werner Brozek
February 10, 2013 3:09 pm

Steven Mosher says:
February 10, 2013 at 2:16 pm
We are quite sure that the underlying process is not linear.
I agree with you. It should be a sine wave. But how would NOAA even make a statement based on a sine wave? As well, WFT works best with straight lines. So while this is not perfect, we have to use the tools we have.

Jimbo
February 10, 2013 3:39 pm

With regards to what a suitable time period is, NOAA says the following:
”The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”

Then we have some more for next year if global mean temps fail to rise.

“A single decade of observational TLT data is therefore inadequate for identifying a slowly evolving anthropogenic warming signal. Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature. ”
http://www.agu.org/pubs/crossref/2011/2011JD016263.shtml

After which the non-deniers have a choice. Deny or accept observations? No?

February 10, 2013 3:51 pm

Steven Mosher says:
February 10, 2013 at 2:16 pm
next, you need to look at autocorrelation before you make confidence intervals.

True, but he’s just established that there is no trend over a given section. How could there be autocorrelation?

February 10, 2013 4:05 pm

It is easy to say that the temperatures are still increasing. Take zero at 1996, say that ’98 and a little bit after was the “warm” part of a natural cycle, and the current bit, the “cool” part of the same cycle. Split the difference, in other words: a clear warming trend from 1996 to today.
But …
The assumption here is that we had a warm-cool cycle from 1998 to today. In order to get back to “business-as-usual” CO2 warming, we must start warming fairly soon, and get a double digit warming as the “cooling” trend stops (moreso if another natural warming cycle starts). So we need to have the temps going up by (in my view) 2015, or two years away.
Now the caveat and CAGW resolution: NOAA said a 15-year stall was not happening at a 95% level. Okay, it is a 25-year event, predicted only at a 75% level. Remember, a thing before it happens can have a 1% certainty of happening, but once it happens, it had a 100% chance of occurring. On a 1000-year level, a 25 year stall is unexpected, but does happen every so often.
That what is happening right now: or so the argument could go.
We only give up our fight for the future when 1) the future is upon us, and 2) we don’t care any more. If 1) happens but 2) doesn’t, we’ll just change the date.

Gail Combs
February 10, 2013 4:19 pm

Jimbo says: @ February 10, 2013 at 3:39 pm
…..After which the non-deniers have a choice. Deny or accept observations? No?
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Nah, they just change the goal posts. The monster in the closet has already morphed from “An Ice Age is Coming” to “Global Warming” to “Climate Change” to “Weather Weirding”
The only thing that remains the same is the need for a crisis.
“The whole aim of practical politics is to keep the populace alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, all of them imaginary…. Civilization, at bottom, is nothing but a colossal swindle.” ~ H.L. Mencken
Mencken really hit the nail on the head with this one:
“As democracy is perfected, the office of president represents, more and more closely, the inner soul of the people. On some great and glorious day the plain folks of the land will reach their heart’s desire at last, and the White House will be adorned by a downright moron.”

Tom Curtis
February 10, 2013 4:24 pm

As usual, the purported skeptics here fail to look at the interesting question: what is the longest period for each data set such that all trends shorter than that period differ from a trend of 0.21 C per decade (the IPCC predicted value) by a statistically significant amount?
If you only look at the period in which the data does not differ statistically from zero, you are only using the dominance of noise in short term trends to evade the data, not to analyze it.

John Finn
February 10, 2013 4:43 pm

I’ve got a bit of a problem with the representation of CO2 (compared to linear trends) in the first graph. If you’d plotted the expected temperature response to the CO2 increase that might be more acceptable. Between 1998 and 2012 the annual average CO2 concentration has risen from ~367 ppm to ~394 ppm, which according to the Myhre et al formula should produce a forcing of about 0.4 watts/m2. If you’re a CAGW advocate this equates to a warming of 0.25-0.3 deg C. For a lukewarmer it’s more like 0.1 deg C. I doubt there is a statistically significant difference between the ‘lukewarmer’ projections and (any of) the actual data. This is possibly not the case for the CAGW projections but this ignores other factors such as reduced solar activity over the past decade (~0.1 deg C ??).
In a nutshell: the lukewarmer projections look well on track while the lower end CAGW projections are still possible.
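For anyone who wants to reproduce the arithmetic in this comment: the Myhre et al. simplified expression is ΔF = 5.35 ln(C/C0) W/m², and the temperature response is ΔT = λΔF for an assumed sensitivity parameter λ. The λ values in the sketch below are only illustrative choices that roughly reproduce the commenter's figures; they are not stated in the comment.

```python
import math

def co2_forcing(c_new_ppm, c_old_ppm):
    """Myhre et al. simplified expression for CO2 forcing, in W/m^2."""
    return 5.35 * math.log(c_new_ppm / c_old_ppm)

dF = co2_forcing(394.0, 367.0)               # roughly 0.38 W/m^2 for 1998 -> 2012

# Illustrative sensitivity parameters in C per (W/m^2); these are assumptions
# chosen to roughly reproduce the figures in the comment, not values it states.
for label, lam in [("high-sensitivity", 0.75), ("lukewarmer", 0.25)]:
    print(label, round(lam * dF, 2), "C")    # about 0.29 C and 0.10 C
```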

catweazle666
February 10, 2013 5:24 pm

My take – for what it’s worth.
Here is a WfT graph showing HadCrut4 temperature from 1850 – present day.
It is instructive to compare the periods ~1910-1940 (previous to the commencement of anthropogenic influence on climate) and the period ~1970-2000; the trends for these periods are effectively identical in duration and gradient.
Clearly visible is the ~0.5 deg per century background warming – probably a section of the positive phase of the ~1000 year cycle responsible for the well-documented Minoan, Roman and Little Ice Ages and the respective cold periods separating them, and thus likely to change sign at some time in the future.
This is overlaid by a ~60 year harmonic that correlates quite well with the product of the Atlantic and Pacific ocean oscillations, correlating also with the cooling observed since ~2000.
Particularly noticeable is the statistically practically identical positive phases ~1910 – 1940 and ~1970 – 2000, for which Warmists for some unfathomable reason insist on different causes, presumably being unfamiliar with Occam’s Razor.
It appears from this graph that the cooling that has occurred since ~2000 will reverse ~2030.
This would suggest that – unless the solar effects overcome the current influences – the climate will continue to cool for the remainder of the current negative phase, until ~2030, when warming will recommence for a further ~30 year positive cycle.
http://www.woodfortrees.org/plot/hadcrut4gl/from:1850/trend/plot/hadcrut4gl/from:1910/to:1940/trend/plot/hadcrut4gl/from:2001/trend/offset:-0.1/plot/hadcrut4gl/from:1860/to:1880/trend/plot/hadcrut4gl/from:1970/to:2000/trend/plot/hadcrut4gl/from:1880/to:1910/trend/plot/hadcrut4gl/from:1940/to:1970/trend/plot/hadcrut4gl/from:1850/to:1860/trend/plot/hadcrut4gl/mean:120

King of Cool
February 10, 2013 5:31 pm

Further to Climate Control Central Command (‘Data’ and ‘Observations’ Secretariat) says:
February 10, 2013 at 2:39 pm.

Also for our comrades in East Anglia – if questioned by children as to why more white fluffy stuff is coming out of the sky they should be reminded that this is just another very rare and exciting event and in no way should be compared with the last event in January which cancelled trains to Norwich and closed over 2000 schools – which was extremely rare and exciting.
(Anglia News of about 2 hours ago:
More snow is falling in many parts of the Anglia region and it’s settling in some places. Up to 10 cm (4 inches) could accumulate in places. All the region’s councils have gritters out as temperatures are expected to fall close to freezing)

bw
February 10, 2013 5:38 pm

Your concise analysis is appreciated. Woodfortrees is a handy tool. Here is another graph with a longer baseline using only BEST, UAH and RSS.
http://www.woodfortrees.org/plot/best/to:1980/plot/rss-land/plot/uah-land
The real issue is the underlying data. There are people who say that the historical LAND surface temps no longer represent reality due to uncheckable computer modifications. The US/NOAA/GISS data are based on data from weather stations that were never intended to maintain scientific integrity. Hadcru has similar problems. In fact, very few surface temperature stations were ever established with any intent of long term scientific value. The surfacestations project shows the major problem, i.e. “urban” contamination. Another major problem is the Time of Observation Bias (TOBS), which is essentially insoluble, but GHCN, NOAA and Hadcru have big computers, so they must be ok.
Here is an example of a selective analysis of good rural stations.
http://hidethedecline.eu/pages/ruti/europe/western-europe-rural-temperature-trend.php
Rural weather stations show zero warming.
Another example is Antarctica. Note these four stations – Amundsen-Scott, Vostok, Halley and Davis – established in 1957.
All are maintained with the intent of long term scientific integrity, and all are located where the warmers claim the first signs of CO2 induced global warming will appear. All four stations show zero warming since 1957. These data alone refute the entire IPCC argument.
In reality, no global network of surface temperature stations exists even today. And certainly not over the global ocean which is 70 percent of the planet.
What DO we have??
Well, 30 years of satellites that don’t really measure surface temps. The USCRN is not global and it’s only 10 years old. Proxy temps have huge error bars, and they may not be global proxies. Ocean thermometers are pretty much non-existent.
As for IPCC being an “authority” on the issue, I’ll never base my understanding on the claims of any “authority”. For this issue, see Delingpole or a few other good sources such as this older summary — http://www.john-daly.com/forcing/moderr.htm

February 10, 2013 5:48 pm

Doug Proctor:
At February 10, 2013 at 4:05 pm you say

Now the caveat and CAGW resolution: NOAA said a 15-year stall was not happening at a 95% level. Okay, it is a 25-year event, predicted only at a 75% level. Remember, a thing before it happens can have a 1% certainty of happening, but once it happens, it had a 100% chance of occurring. On a 1000-year level, a 25 year stall is unexpected, but does happen every so often.

Sorry, but that is NOT what NOAA said.
The NOAA falsification criterion is on page S23 of its 2008 report titled ‘The State Of The Climate’ and can be read at
http://www1.ncdc.noaa.gov/pub/data/cmb/bams-sotc/climate-assessment-2008-lo-rez.pdf
It says

ENSO-adjusted warming in the three surface temperature datasets over the last 2–25 yr continually lies within the 90% range of all similar-length ENSO-adjusted temperature changes in these simulations (Fig. 2.8b). Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.

So, the climate models show “Near-zero and even negative trends are common for intervals of a decade or less in the simulations”.
But, the climate models RULE OUT “(at the 95% level) zero trends for intervals of 15 yr or more”.
We now see that reality has had (at the 95% level) zero trends for more than 17 years whether or not one interpolates across or extrapolates back across the 1998 ENSO peak.
The facts are clear.
According to the falsification criterion set by NOAA in 2008, the climate models are falsified by the recent period of 16+ years of (at 95% confidence) zero global temperature trend. This is because NOAA says the climate models simulations often show periods of 10 years when global temperature trends are zero or negative but the simulations rule out near zero trends in global temperature for periods of 15 years. What the models “rule out” nature has done.
We need to clearly and repeatedly remind people of what NOAA said in 2008. Otherwise the goal posts will be moved and moved again until they are off the planet.

Richard

Tom Jones
February 10, 2013 6:39 pm

On Feb 10 at 2:16 pm, Steven Mosher says:
“We are quite sure that the underlying process is not linear. The imposition of a linear model is an analyst choice.”
I am quite sure that nobody has any idea what the underlying process is. It has been implied frequently that the underlying process is linear, but I have never seen anybody step up to the plate and say that the underlying process is one thing or another. If someone were willing to do that, one could at least devise an experiment to confirm or falsify that hypothesis. It seems pretty intuitive that the process is not linear, but the notion that it is up to the experimentalist to prove that boggles my mind.
The business of taking the average of an ensemble of simulations is not even vaguely convincing. The best case is that one is correct and the others are not, but we don’t know which one is correct, do we? Another possibility is that none is correct. Given that all of the GCM’s use the basic notion that GHG concentration is the prime driver, and that CO2 is very, very important, they could all have the same fallacy built in. Maybe that’s not true, you know.

SkepticGoneWild
February 10, 2013 6:42 pm

For us (or maybe just me) statistics dummies, could you briefly explain what “2 sigma” means, and how it relates to the term “statistically significant”. I went to the SkS website and played with their trend calculator, and read some of the definitions. For example if a linear trend is 0.003 +/- 0.003, then the slope of the trend is between 0.000 and 0.006. But what does the 2 sigma mean, and how do you determine statistical significance? And what does that +/- range mean?
Thanks
[Reply: Sigma is shorthand for ‘standard deviation’ (that probably doesn’t explain much if you are not familiar with statistics already). The more standard deviations you have, the further you are from average. The -/+ means how much the value is likely to vary. So saying I have an 8 ounce cup of tea +/- 1 means I might have anywhere from a 7 ounce to a 9 ounce cup of tea. Statistical significance takes more than a note to explain, but mostly says how much you can trust a conclusion. If I have a sample of 6 cups of tea and they are average 8 ounces, it might be that the next one is 9 or 10. If I have a sample of 1200 cups, and they are 8 ounces +/- 0.1 ounce, it is highly likely (very statistically significant) that my next cup will be between 7.9 and 8.1 ounces. I can’t say that about my first sample as it is too small to be ‘statistically significant’. Back on that trend and +/-: When a trend has a value the same as the error range, it means you don’t know much about the trend. Having $2 +/- $2 says I might have money, or not, in equal probability… -ModE]

Ian H
February 10, 2013 6:51 pm

Not too happy about the way you put CO2 levels on the combined graph. What is the vertical axis in this picture? A temperature anomaly? Where is the axis for the CO2 levels? It clearly doesn’t start at zero. A wiggly line going up? I guess CO2 increased. But without numbers attached this is pretty meaningless. With no axis to pin this down you could scale it all over the page. So what does that wiggly line actually mean. Without an axis to give it context, not much.

February 10, 2013 7:36 pm

Tom Jones says:
February 10, 2013 at 6:39 pm
The business of taking the average of an ensemble of simulations is not even vaguely convincing.
========
It is a nonsense. The earth has a near infinite number of possible futures. This is clearly shown by quantum mechanics. Some of those futures are more probable than others, but it is beyond our ability to calculate the probabilities.
Taking the average of 10 wrong answers does not improve the quality of the answer. If you ask 10 idiots the square root of a 100, will the average be closer to the correct answer than any one of the answers? No, it is 10 times more likely that one of the idiots will be closer to the true answer than is the average.
The reason for this is simple. The idiots have 10 chances to get closer to the correct answer. The average only has 1. Thus you would expect that if you repeated the exercise 11 times, only once would the average be closer. The other 10 times one of the idiots would beat the average

pottereaton
February 10, 2013 7:49 pm

Can you imagine the panic and pandemonium if temperatures start trending downward?
For a variety of reasons, nothing would please me more.

February 10, 2013 7:59 pm

SkepticGoneWild says:
February 10, 2013 at 6:42 pm
[Reply: Sigma is shorthand for ‘standard deviation’
=========
To complicate things further, most calculations of ‘standard deviation’ rely on the assumption that the underlying data is a “normal distribution”. Most of our statistical theory relies on this assumption.
However, it has been found that most time-series data (such as stock markets and climate) does not follow a normal distribution. This means that our confidence levels are often incorrect about how “likely” something is to happen. This leads people to bet the farm on a “sure thing” in the stock market, and is very similar to the current situation where politicians, scientists and countries are betting their economic future on climate.
We assume that the future is fairly certain because we expect the future to behave like the past. However, this is an entirely naive belief because we already know the past while we do not know the future. This leads us to significantly underestimate the size of the unknown when we try and consider all the things that might happen going forward.
Thus, the plans of mice and men aft go awry.

Werner Brozek
February 10, 2013 8:33 pm

John Finn says:
February 10, 2013 at 4:43 pm
I’ve got a bit of a problem with the representation of CO2 (compared to linear trends) in the first graph.
Ian H says:
February 10, 2013 at 6:51 pm
What is the vertical axis in this picture?
I would like to thank you both for raising an excellent point. The way WFT works, you can plot several different things at the same time. Naturally the y axis is different for each case. If I were to just plot CO2 alone, it would look as follows and the y axis would go from about 360 ppm to about 397 ppm CO2. See:
http://www.woodfortrees.org/plot/esrl-co2/from:1997/to:2013
When two things are plotted as I have done, the left only shows a temperature anomaly. It happens to go from -0.2 to +0.8 C. I did not plan it this way, but as it turns out, a change of 1.0 C over 16 years is about 6.0 C over 100 years. And 6.0 C is what some say may be the temperature increase by 2100. See:
http://www.independent.co.uk/environment/climate-change/temperatures-may-rise-6c-by-2100-says-study-8281272.html
“The world is destined for dangerous climate change this century – with global temperatures possibly rising by as much as 6C – because of the failure of governments to find alternatives to fossil fuels, a report by a group of economists has concluded.“
So for this to be the case, the slope for three of the data sets would have to be as steep as the CO2 slope. Hopefully the graphs show that this is totally untenable.

Chad Wozniak
February 10, 2013 8:34 pm

Interesting that even with all the obvious fudging of numbers still going on, the alarmies can’t do any better than to say the temperature curve is “flat.” Any statement by them that it is flat infallibly means it is trending downward, and that cooling, not merely a pause in warming, has been occurring for the last 15 years.
A correction to one poster’s comment: There have never been any years anywhere near as warm as the 1934-1938 period since that time, NOAA and IPCC lies to the contrary notwithstanding – not even 1953, a relatively hot one, and certainly not 1997-1998. There has in fact been an overall cooling trend, as demonstrated by the negative slope of the regression line for the past 80 years. Cyclical variations in the meantime have not affected the overall downward trend over the longer term. And interestingly again, this long-term cooling has taken place while CO2 in the atmosphere increased by nearly 40 percent. Therefore, no CO2-caused, hence no man-caused, global warming. Q.E.D.
Second proof: Inter alia, animal respiration alone pours at least 30, maybe as much as 75 times as much CO2 into the air as human activity. And then, there is ordinarily 30 to 140 times as much water vapor – a substantially more effective heat-trapping substance than CO2 – in the atmosphere at any given time. And these are only a couple of many examples that can be cited on both sides of the CO2 equation. Man is an infinitesimal of an infinitesimal – man’s role in climate change is therefore, mathematically speaking, one over infinity squared. Q.E.D. again.
The hubris and effrontery of people who think they can control climate – especially by controlling only one tiny fraction of one tiny factor in climate change – is truly breathtaking. Back off, God/Mother Nature (take your pick of which), we’re in charge now, they’re saying.

SkepticGoneWild
February 10, 2013 8:34 pm

Ok. Thanks moderator and people. Things are a little bit clearer. Say for example the RSS data you posted:
“For RSS the warming is not significant for over 23 years.
For RSS: +0.127 +/-0.136 C/decade at the two sigma level from 1990″.
Could you run through how you determined it was 23 years for the above? Is it because the minus value of 0.136 gives one a negative trend of -0.009 C/decade (0.127 – 0.136)? So you are going back as far as you can to obtain a zero or slightly negative slope?

Richard of NZ
February 10, 2013 8:52 pm

ferd berple says:
February 10, 2013 at 7:59 pm
“Thus, the plans of mice and men aft go awry.”
I think I prefer the original
“The best laid schemes of mice and men gang aft agley
An’ lea’e us nought but grief an’ pain,
For promised joy.”
It encapsulates the entire CAGW proposition from “scheme” with its connotation of dishonesty, to the obfuscation of “gang aft agley” which needs an expert to interpret, to the promise of an idealised future for a little inconvenience now.
Perhaps Mr. Burns was the true visionary.

Phil.
February 10, 2013 8:57 pm

Mod E, you misunderstand statistical significance. When you say you have $2 +/- $2 it does not mean what you claim; if you’re using 1 sigma bands then you’re saying there’s a 68% chance that you have between 0 and $4 and a 16% chance that there’s less than 0. If you’re using the more likely 2 sigma limits then you’re saying there’s a 95% chance of 0 to $4 and a 2.5% chance of less than 0.
Reply: OK, I mis-spoke. I was picturing a $2 bill that I either have, or do not. A discrete event. Ought to have said “Have $4 or No $ in equal probability”. Not so much a nonunderstanding of statistics as trying to make a simple example for a newbie to have a vague idea what’s going on. So I said “have money” when I ought to have said “have $4”. Please, feel free to actually answer the posters question instead of just nit-picking. Oh, and remember to keep it at a level that requires no prior understanding of statistics. Oh, and make NO simplifying statements for illustration that might in any way be at variance with hard core statistical truth… But make sure it is absolutely clear. And do it in less than 3 sentences. -ModE]
Reply 2: No need to be snippy ModE. Phil. was making a valid point. ~uberMod

sceptical
February 10, 2013 9:04 pm

Wow, this may be the final nail. The U.N. conspiracy has stalled the temperature rise to really throw off those who don’t see the conspiracy.

Werner Brozek
February 10, 2013 9:12 pm

SkepticGoneWild says:
February 10, 2013 at 6:42 pm
could you briefly explain what “2 sigma” means
To add to what the moderator said, I would like to apply it to the numbers for RSS since 1990, namely: For RSS: +0.127 +/-0.136 C/decade at the two sigma level from 1990. See this illustrated on this graphic from the post:
http://www.woodfortrees.org/plot/rss/from:1990/plot/rss/from:1990/trend/plot/rss/from:1997/plot/rss/from:1990/plot/rss/from:1990/trend/plot/rss/from:1990/detrend:0.3128/trend/plot/rss/from:1990/detrend:-0.3128/trend/plot/rss/from:1997/trend
Now 0.127 – 0.136 is -0.009; and 0.127 + 0.136 is 0.263. What this is saying is that since 1990, on RSS, we can be 95% certain that the rate of change of temperature is between -0.009 and +0.263/decade. So while we may be 93% certain that it has warmed since 1990, we cannot be 95% certain. For some reason, in climate circles, 95% certainty is required in order for something to be considered statistically significant.
Now having said all this, RSS shows no change for 16 years. This means there is a 50% chance that it cooled over the last 16 years and a 50% chance that it warmed over the last 16 years. Since this exceeds NOAA’s 15 years, their climate models are in big trouble.
So you are going back as far as you can to obtain a zero or slightly negative slope?
Yes, but that was only to the nearest whole year. If you go to:
http://www.skepticalscience.com/trend.php
And if you then start from 1989.67, you would find:
“0.131 ±0.132 °C/decade (2σ)”
In other words, the warming is not significant at 95% since September 1989. However this is just barely true and it only goes until October. If it went to January with the huge jump, it could change.

E.M.Smith
Editor
February 10, 2013 9:25 pm

@Ferd Berple:
I touch on that here:
http://chiefio.wordpress.com/2012/12/10/do-temperatures-have-a-mean/
Oh, and as temperature is an intrinsic property, it is not legitimate to average temperatures anyway (you need to convert to heat using mass and specific heat / enthalpy terms)
http://chiefio.wordpress.com/2011/07/01/intrinsic-extrinsic-intensive-extensive/
So the whole “Global Average Temperature” idea is broken in two very different ways, either one of which makes the whole idea bogus… But nobody wants to hear that, warmers or skeptics, as then they have to admit all their arguments over average temperature are vacuous…

SkepticGoneWild
February 10, 2013 10:07 pm

Werner Brozek says:
February 10, 2013 at 9:12 pm
Now 0.127 – 0.136 is -0.009; and 0.127 + 0.136 is 0.263. What this is saying is that since 1990, on RSS, we can be 95% certain that the rate of change of temperature is between -0.009 and +0.263/decade. So while we may be 93% certain that it has warmed since 1990, we cannot be 95% certain.
Werner, thanks for your patient explanation. I understood most of what you stated, but I am somewhat confused by your statement in bold above. So according to the above data, the warming is not significant at 95%. But say the RSS data happened to be 0.127 +/- 0.127 C/decade at 2 sigma; you could then state that you are 95% certain that it warmed? But since the above real data had a negative value (-0.009), you could not be 95% certain it warmed?

Climate Control Central Command (Modelling Secretariat)
February 10, 2013 10:34 pm

Dear Supporter
The General Secretary of the CCCC has asked me to write directly to you to inform you of an important change in the way in which our operations will henceforth be conducted.
Following on from the poor publicity and consequent Denier opportunities presented by the mishandling of the recent temperature data and trends by our ‘colleagues’ in the D&O Secretariat, they have all accepted compulsory early retirement.
We have taken this opportunity to remove D&O from our strategic portfolio of work and so the Modelling Secretariat will now have complete control of all ‘data’ released for any form of publication. To boost our capabilities in this area we hope soon to acquire the services (on secondment) of a valued team of experienced climatalarmists currently located in Norwich UK.
Please welcome them to our happy bunch of crusaders.
I urge you all to redouble your efforts to ensure that only Modelling Secretariat approved data is ever released. There is only one True Message and we must ensure that there are no further misunderstandings because of a lack of zeal and enthusiasm for the cause.
Though regrettable, we also need to introduce severe disciplinary sanctions for any deviations from the path. To emphasise this point the previous leader of D&O will be making a public confession of his errors live on webcam tomorrow at noon. The memorial service will begin at 14:00.
The General Secretary also wishes to convey his best wishes to you all. His well-earned retirement begins forthwith – at an undisclosed location. Rest assured GS, that we will be able to find you so that your past ‘achievements’ can be put in their proper context and the judgement of history applied!
Of course, our honoured Emeritus Professor – who has recently added a further honour to his distinguished record – will continue to use Twitter to remind you of the scientific facts you need to use in your day-to-day work
Best Wishes to You All
LA
[/sarcasm ? Mod]

Climate Control Central Command (Modelling Secretariat)
February 10, 2013 10:55 pm

@mod
See the earlier message from the Climate Control Central Command (‘Data’ and ‘Observations’ Secretariat)
http://wattsupwiththat.com/2013/02/10/has-global-warming-stalled/#comment-1221832
All should become clear.
LA

Sandor
February 10, 2013 11:07 pm

Did anyone calculate the correlation between CO2 levels and temperature? It would be interesting to see.

Werner Brozek
February 10, 2013 11:43 pm

SkepticGoneWild says:
February 10, 2013 at 10:07 pm
But say the RSS data happened to be o.127+/- 0.127 C/decade at 2 sigma, you could then state that you are 95% certain that it warmed?
You are 95% certain the real slope is between 0.000 and 0.254. So I guess there would be a 2.5% chance the slope is above 0.254 and a 2.5% chance the slope is less than 0. So you would be 97.5% certain there was warming in this case.
But since the above real data had a negative value ( -0.009), you could not be 95% certain it warmed?
I was tempted to say yes that “you could not be 95% certain it warmed”, but I think I may have to ask for help here. In view of the 97.5% that I mentioned above, I am starting to wonder if there is a very small negative value that would allow you to say that you can be sure it warmed at a 95% confidence level. Can Phil. or someone else help me out? Thanks!
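For what it is worth, the question can be answered by treating the trend estimate as approximately Gaussian: with an estimated trend b and a two-sigma half-width w, the one-sided probability that the true trend is positive is Φ(2b/w). A small sketch, using the figures already quoted in this thread; this is only the textbook Gaussian treatment, not a claim about what any particular trend calculator reports:

```python
from scipy.stats import norm

def prob_of_warming(trend, two_sigma):
    """One-sided probability that the true trend is positive, given the
    estimated trend and its two-sigma half-width (Gaussian approximation)."""
    return norm.cdf(2.0 * trend / two_sigma)

print(prob_of_warming(0.127, 0.136))   # RSS from 1990: about 0.97
print(prob_of_warming(0.127, 0.127))   # the hypothetical case above: about 0.977
```

On that reading, the one-sided confidence only drops below 95% once 2·trend/two_sigma falls below about 1.645, so a lower two-sigma bound that is slightly negative can still leave better than 95% one-sided confidence of warming.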

EternalOptimist
February 11, 2013 1:12 am

It’s enough to make a grown man cry. Here we have a set of facts that no one is disputing, and the catastrophists are spinning it as a success,
the lukewarmers are claiming it proves them right,
and the sceptics (including me) are agog that anyone can doubt they were right all along.

Philip Shehan
February 11, 2013 1:27 am

More cherry picking.
Let’s just concentrate on one of the data sets shown – the claim that Hadcrut4 temperature data is flat since November 2000.
Look also at the data from 1999. Or compare the entire Mauna Loa data set from 1958 with temperature.
And remember, “statistical significance” here is at the 95% level. That is, even a 94% probability that the data is not a chance result fails at this level.
http://tinyurl.com/atsx4os

Nigel Harris
February 11, 2013 1:51 am

The final chart with the question “the MET office believes that the 1998 mark will be beaten by 2017. Do you agree?” looks rather different if you put a simple linear trend line through the data presented.
http://www.woodfortrees.org/plot/hadcrut4gl/from:1990/mean:12/offset:-0.16/plot/hadcrut4gl/from:1990/mean:12/offset:-0.16/trend

Greg Goodman
February 11, 2013 1:58 am

Steven Mosher says:
Fitting a straight line to time series data assumes that the data generating process is linear.
We are quite sure that the underlying process is not linear. The imposition of a linear model is an analyst choice. This choice has an associated uncertainty such that you must either demonstrate that the physical process is linear or add uncertainty due to your model selection.
next, you need to look at autocorrelation before you make confidence intervals.
http://www.woodfortrees.org/plot/hadcrut4gl/from:1990/mean:12/mean:9/mean:7/derivative/plot/sine:10/scale:0,00001/from:1990/to:2015
=====
This is a very good point. Sadly this is what most of climate science seems to have been feeding us for the last 30 years. Especially the IPCC version of climate history. Show a trend over, say, the last 50 years (innocently chosen round number NOT). Then say “if present trends continue”, with the unstated yet implied assumption that this is obviously what will happen if we don’t change our ways.
Now we are all agreed that this is bad and totally unscientific. So can a layman do any better messing around with the limited facilities offered by WFT.org? [sic]
Firstly, if we are interested in whether temps are rising faster or slower, why the hell aren’t we looking at the rate of change rather than trying to guess it by looking at the time series and squinting a bit. Take the DERIVATIVE.
If rate of change is above zero it’s warming, below zero it’s cooling. You don’t need a PhD to view that.
dT/dt, or the time derivative, is available on WTF.org, so let’s use it.
While we are there, let’s stop distorting the data with these bloody running means that even top climate scientists of all persuasions can’t seem to get beyond. Running means let through a lot of the stuff we imagine we have filtered out and, worse still, actually INVERT some frequencies, i.e. turn a negative cycle into a positive one. That is why you can often see the running mean shows a trough when the data is showing a peak.
Again, we can do better even with the limited tools WTF offers.
Without going into detail on the maths, if you do a running mean of 12, then 9, then 7 months, this makes a half-decent filter that does not invert stuff and gives you a way smoother result because it really does filter out the stuff you intended.
So I’ll just take one data series used above at random (without suggesting it is more or less relevant than the others).
http://www.woodfortrees.org/plot/hadcrut4gl/from:1990/mean:12/mean:9/mean:7/derivative/plot/sine:10/scale:0,00001/from:1990/to:2015
BTW if anyone knows how to get a grid or at least the x-axis: I just hacked a sine wave with minute amplitude but it won’t go all the way across. WFT.org is a bit of a mess but easy to use for those who don’t know how to get and plot data themselves.
Just The Facts may like to add this sort of graph for all the datasets reported here.
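A rough Python equivalent of what is described here, for anyone working outside WFT: a 12-, 9- and 7-month cascade of running means followed by a first difference. The window lengths come from the comment; everything else is an assumption.

```python
import numpy as np

def running_mean(y, window):
    """Simple running mean; shortens the series by window - 1 points."""
    return np.convolve(y, np.ones(window) / window, mode="valid")

def smoothed_rate(anomalies):
    """Cascade of 12-, 9- and 7-month running means, then the month-to-month
    rate of change: above zero = warming, below zero = cooling."""
    smooth = running_mean(running_mean(running_mean(anomalies, 12), 9), 7)
    return np.diff(smooth)
```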

Greg Goodman
February 11, 2013 2:07 am

Here’s another view of this, upping the filter to remove 2 years and shorter.
http://www.woodfortrees.org/plot/hadcrut4gl/from:1970/mean:24/mean:18/mean:14/derivative/plot/sine:10/scale:0,00001/from:1990/to:2015
Here we see the 80s and 90s each had two warming periods and one cooling period. Since 1997 it has been swinging about evenly on each side of zero.
I’d call that ‘stalled’ if we want to draw a simplistic conclusion.

Max™
February 11, 2013 2:18 am

Switch the columns for rows on the last table.
___Data|Sets|Across|Here
M
O
N
T
H
S
H
E
R
E

Greg Goodman
February 11, 2013 2:19 am
MikeB
February 11, 2013 2:20 am

Last Wednesday, David Attenborough said on the BBC’s much acclaimed nature programme ‘Africa’ that the world had warmed by 3.5 deg. C over the last 2 decades. So much for temperatures being flat!
But it seems that the BBC has now backed down on this claim.
http://www.telegraph.co.uk/earth/environment/climatechange/9861908/BBC-backs-down-on-David-Attenboroughs-climate-change-statistics.html
The comment, first broadcast in the final episode of the Africa series last Wednesday, was removed from Sunday night’s repeat of the show.

cRR Kampen
February 11, 2013 2:35 am

So the extra heat must have gone into the oceans, where it fed Irene, Sandy and Nemo and is melting sea- and shelf ice everywhere.

February 11, 2013 2:50 am

E.M.Smith:
At February 10, 2013 at 9:25 pm you say

So the whole “Global Average Temperature” idea is broken in two very different ways, either one of which makes the whole idea bogus… But nobody wants to hear that, warmers or skeptics, as then they have to admit all their arguments over average temperature are vacuous

Not true.
For many years several of us have been pointing out those and other fundamental problems with the global temperature data sets. Please read Appendix B and the list of signatories to that Appendix in the item at
http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/memo/climatedata/uc0102.htm
Richard

Kelvin Vaughan
February 11, 2013 3:06 am

MikeB says:
February 11, 2013 at 2:20 am
Last Wednesday, David Attenborough said on the BBC’s much acclaimed nature programme ‘Africa’ that the world had warmed by 3.5 deg. C over the last 2 decades. So much for temperatures being flat!
He probably misread the script and missed the point in front. Don’t forget he is getting old and his eyesight is probably not good.
The Central England Temperature maximum trend for March has gone up 0.5 degrees C since 1994 whereas the minimum has fallen 1 degree C. If you look at the maximums and minimums for each month they are all doing different things. Some months are warming and some months are cooling.
If you plot all the 1sts of January since 1878 you can see an 88 year (approx) sine wave.

February 11, 2013 3:24 am

Philip Shehan:
Your entire post at February 11, 2013 at 1:27 am says

More cherry picking.
Let’s just concentrate on one of the data sets shown – the claim that Hadcrut4 temperature data is flat since November 2000.
Look also at the data from 1999. Or compare the entire Mauna Loa data set from 1958 with temperature.
And remember, “statistical significance” here is at the 95% level. That is, even a 94% probability that the data is not a chance result fails at this level.
http://tinyurl.com/atsx4os

I know it is a big ask for you, but please try not to be idiotic.
The question being addressed is “Has global warming stalled?”.
That question is about the here and now: it is not about some other time.
There is only one period to choose when considering what is happening NOW: i.e. back in time from the present (we can’t go forward because we don’t know the future). Any other period is a ‘cherry pick’: indeed, it is inappropriate.
So, the real issue is determination of how far back in time is needed to discern global warming.
And an important consideration in this determination is whether or not (at the 95% level) a zero trend has existed for 15 or more years. This is because in 2008 NOAA reported that the climate models show “Near-zero and even negative trends are common for intervals of a decade or less in the simulations”.
But, the climate models RULE OUT “(at the 95% level) zero trends for intervals of 15 yr or more”.
I explain this with reference, quote, page number and link to the NOAA 2008 report in my post at February 10, 2013 at 5:48 pm.
Furthermore, the appropriate comparison to MLO CO2 records is over the determined period(s) of (at the 95% level) a zero trend(s). It is not “since 1958” because the global temperature has had (at the 95% level) a positive trend since then and, therefore, it would be an ‘apples to umbrellas’ comparison.
I understand your difficulty. What the models “rule out” nature has done, and this falsifies your cherished models. You need to come to terms with it.
Richard

wayne Job
February 11, 2013 3:40 am

I have but one question: given the rather dubious corrections to the older temperatures (people in the old days supposedly did not read the thermometers correctly), might the trend be down rather than zero?
This is more of a worry than warming; the last long cold spell some time ago was not conducive to comfort nor to the growing of food.
It bothers me that I have young grandchildren and fools are squandering our future capacity to adapt quickly to a fate worse than a degree or two of warming.

MikeB
February 11, 2013 3:48 am

So the extra heat must have gone into the oceans, where it fed Irene, Sandy and Nemo and is melting sea- and shelf ice everywhere.

Yes, it must have done, mustn’t it? We can’t find it there but that’s no problem, where else can it have gone? (This begs the question of why it should suddenly decide to store itself in the ocean instead of warming the air as it did before.)
On the other hand, if the extra heat is not hiding somewhere then there is something wrong with the theory – and we can’t have that.
Sherlock Holmes said to Watson…

It is a capital mistake to theorise before one has data. Insensibly one begins to twist facts to suit theories, instead of making a theory to fit the facts

garymount
February 11, 2013 4:48 am
Nigel Harris
February 11, 2013 5:03 am

None of the analysis presented here actually answers the question asked at the top of the page “Has global warming stalled?”. To answer that question, you need to start with the null hypothesis (to be disproved) that global warming has not stalled, but continues unabated.
Take Hadcrut4 for example. You give two facts: the trend from Nov 2000 is flat, and the trend since 1995 does not show significant warming (i.e. it fails to reject a null hypothesis of no warming at the 95% level). However, if you also look at what was happening before these dates, you can compare before and after.
For Hadcrut4, after Nov 2000, the trend is -0.008 +/- 0.171 C/decade. Between 1900 and Nov 2000 the trend was +0.064 +/- 0.010. The rising trend of 0.064 is well within the 2-sigma bounds of the trend since Nov 2000. It is also possible that the trend has doubled to 0.128 per decade, as that is also comfortably within the 95% limits of the trend since Nov 2000. So there is no statistical evidence presented here that suggests that temperature is not continuing to rise at the rate that it was rising prior to Nov 2000, or even at a significantly faster rate.
It is indeed within the bounds of 95% confidence limits that global warming has stalled. It is also possible that it has reversed and is now heading downwards. But it is also statistically quite possible that it has continued or even increased in trend.
As for the fact that trend since 1995 does not show significant warming; at 0.095 +/- 0.111 it may not be statistically distinguishable from a zero trend, but the trend estimated from this data is actually HIGHER than the long-run trend since 1900.
The same failure to answer the basic question is true of every fact presented here. Just because a cherry-picked period (and the analysis in this post is the very definition of cherry-picking!) has a trend that is statistically indistinguishable from zero doesn’t mean that global warming has stalled. You need to show that the trend is statistically distinguishable from continued (or even accelerated) warming.
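For readers who want to check Nigel’s comparison, a minimal sketch of the arithmetic using only the figures quoted in his comment (the 2-sigma convention is assumed):

    # All numbers below are the ones quoted in the comment above, in C/decade.
    recent_trend, recent_2sigma = -0.008, 0.171    # HadCRUT4 since Nov 2000
    longrun_trend = 0.064                          # HadCRUT4, 1900 to Nov 2000

    lower, upper = recent_trend - recent_2sigma, recent_trend + recent_2sigma
    print(f"95% range of the recent trend: {lower:+.3f} to {upper:+.3f}")
    print("long-run trend (0.064) inside the range:", lower <= longrun_trend <= upper)
    print("doubled trend (0.128) inside the range:", lower <= 0.128 <= upper)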

Phil.
February 11, 2013 5:09 am

Werner, when we apply significance tests to this data as to whether the trend is warming, we ought to be applying a ‘one-tailed’ test, not a ‘two-tailed’ test. The latter says that there’s a 95% chance of being in the range, whereas the former says there’s a 2.5% chance of being below the lower bound. In the case of the RSS data there’s an ~3% chance of being below 0, so you’d say that warming was significant at the 95% level. A better way of testing would be to use a test like the Pearson test and work with the correlation coefficient, but you’d still use the ‘one-tailed’ version. The 95% cut-off is commonly used in science and engineering, often thought of as 20:1 odds of being right.

Jim Ryan
February 11, 2013 5:21 am

…the extra heat….
Begging the question.

Graham W
February 11, 2013 5:28 am

Tom Curtis says:
February 10, 2013 at 4:24 pm
“As usual, the purported skeptics here fail to look at the interesting question, what is the longest period for each data set such that all trends shorter than that period are differ[ent] from a trend of 0.21 C per decade (the IPCC predicted value) by a statistically significant amount.”
A nonsensical question really: if you pick any specific period within any temperature data set and then look at shorter trends within that period, all of those shorter trends will automatically carry greater uncertainty (the shorter the time period, the greater the uncertainty in the data), so the chance of their being statistically distinguishable, at the 95% level, from a trend of 0.21 C per decade will of course fall the shorter the trends get.
However, there’s plenty of long periods of time where the trend is different from a trend of 0.21 C per decade by a statistically significant amount, for instance, using this tool:
http://www.skepticalscience.com/trend.php
And looking at the HADCRUT 4 data from 1863 – 2013, you get a trend of 0.051 +/- 0.007 C/decade. So that’s a 150 year period where the trend only has a 5% chance of being higher than 0.058 C/decade, nowhere near as high as 0.21 C per decade.
It’s the same story with every other trend data set you can look at with the trend calculator – apart from the two satellite data sets since of course the data doesn’t go back as far. Can’t get a 150 year period with that. Interestingly though, with those satellite datasets, the trend since satellite records began, to present, is still statistically distinguishable from a trend of 0.21 C per decade, with 95% confidence in one case. The two results are:
RSS: 0.133 +/- 0.073 C/decade (so a maximum trend – at the 95% confidence level – of 0.2060 C per decade…still not as high as the IPCC predicted value of 0.21 C per decade unless you’re going to round up to 2 decimal places…and it could also be as low as 0.06 C/decade with the same level of confidence).
UAH: 0.138 +/- 0.074 C/decade (so a maximum trend – at the 95% confidence level – of 0.2120 C…so JUST within the bounds of their prediction. Could also be as low as 0.064 C/decade).
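A minimal sketch of the arithmetic behind those two satellite-era figures, using only the trend and 2-sigma values quoted above from the SkS calculator:

    # Figures quoted above; only the comparison against 0.21 C/decade is shown.
    ipcc_trend = 0.21   # C/decade, the predicted value being tested against

    for name, trend, two_sigma in [("RSS", 0.133, 0.073), ("UAH", 0.138, 0.074)]:
        upper, lower = trend + two_sigma, trend - two_sigma
        verdict = "consistent" if upper >= ipcc_trend else "inconsistent"
        print(f"{name}: {lower:.3f} to {upper:.3f} C/decade -> {verdict} with {ipcc_trend}")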

DirkH
February 11, 2013 5:48 am

Philip Shehan says:
February 11, 2013 at 1:27 am
“More cherry picking.
Let’s just concentrate on one of the data sets shown – the claim that Hadcrut4 temperature data is flat since November 2000.
Look also at the data from 1999. Or compare the entire Mauna Loa data set from 1958 with temperature.”
See Richard Courtney’s comment above. It is NOAA’s falsification criterion. Complain to NOAA. Stop confusing the Null Hypothesis with the weird CO2AGW theory’s predictions. CO2AGW is falsified NOW; this is enough. Make a new theory. (But I hope you fund yourselves next time)
cRR Kampen says:
February 11, 2013 at 2:35 am
“So the extra heat must have gone into the oceans, where it fed Irene, Sandy and Nemo and is melting sea- and shelf ice everywhere.”
Do you have any evidence for a radiative imbalance that has not been fudged with a junkyard climate model?

Espen
February 11, 2013 6:17 am

cRR Kampen says:
February 11, 2013 at 2:35 am
So the extra heat must have gone into the oceans, where it fed Irene, Sandy and Nemo and is melting sea- and shelf ice everywhere.
Antidotes for your ignorance are just a click away: http://wattsupwiththat.com/2013/02/08/bob-tisdale-shows-how-forecast-the-facts-brad-johnson-is-fecklessly-factless-about-ocean-warming/

Hot under the collar
February 11, 2013 6:29 am

Kelvin Vaughan says: ……”he probably misread the script and missed the point in front. Don’t forget he is getting old and his eyesight is probably not good.”
Oh no, he didn’t misread the script. Don’t expect the BBC to miss a trick and not slip a bit of global warming propaganda in if they think they can get away with it. The ‘source’ of the warming figures was ‘unbiased’, ‘reliable’, ‘scientific’ (/sarc) green campaign organisations such as Greenpeace and the WWF.
http://blogs.telegraph.co.uk/news/jamesdelingpole/100202234/no-david-attenborough-africa-hasnt-warmed-by-3-5-degrees-c-in-two-decades/

Boblo
February 11, 2013 6:46 am

And yet a February 7 post on RealClimate, “2012 Updates on Model-Observation Results” concludes that all is well with the models. Would appreciate comments on their main tricks.

February 11, 2013 7:59 am

It’s chaotic. Why bother?

Hot under the collar
February 11, 2013 8:06 am

Climate Control Central Command (Modelling Secretariat) says: …….
Re: [/sarcasm ? Mod]
Mod, I think it was more parody / satire personally.

Doug Danhoff
February 11, 2013 8:40 am

Boblo, did you really believe the good folks at RealClimate would say anything different? Truly they are the “deniers”. They are denying the proof of the failure of their climate models… a standard of proof they themselves established.

Werner Brozek
February 11, 2013 9:00 am

Nigel Harris says:
February 11, 2013 at 5:03 am
None of the analysis presented here actually answers the question asked at the top of the page “Has global warming stalled?”.
Thank you for your excellent points! I know there is a difference of opinion as to exactly what the 95% refers to in NOAA’s statement. I am not going to get into semantics here. For me, the most important facts are that RSS, Hadsst2 and Hadcrut3 show a slope of 0 for over 15 years. I realize there are error bars, but they have a 50% chance of going up or down from the point of 0 slope. So at the minimum, these three sets would be proof that global warming has stalled for a significant period of time. As for the other three sets, there could be room for debate here.
P.S. Thank you for a good idea!
Max™ says:
February 11, 2013 at 2:18 am

Werner Brozek
February 11, 2013 9:13 am

Phil. says:
February 11, 2013 at 5:09 am
Thank you! So I want to be sure I said the correct thing. Earlier, I said the following:
“If you go to:
http://www.skepticalscience.com/trend.php
And if you then start from 1989.67, you would find:
“0.131 ±0.132 °C/decade (2σ)”
In other words, the warming is not significant at 95% since September 1989.”
Is this last statement completely correct or should it be modified in some way? If so, how? Thanks!

Gail Combs
February 11, 2013 9:24 am

sceptical says:
February 10, 2013 at 9:04 pm
Wow, this may be the final nail. The U.N. conspiracy has stalled temperature rise to really throw off those who don’t see the conspiracy.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
No it is not.
As long as the Mass Media papers over the cracks and no reporter will touch it, the Propaganda Machine will keep deflecting attention to ‘Weather Weirding’.
The banks, financiers, and large corporations (advertisers) have much too much riding on this scam. (See my comment here on the MSM – banker connections)
For those like _Jim who think people are in control of corporation stock: Start with A Brief History Of The Mutual Fund “…Shady dealings at major fund companies demonstrated that mutual funds aren’t always benign investments managed by folks who have their shareholders’ best interests in mind….” Mutual funds separate the ‘owner’ of the stock from the ‘voter of the stock’ and therefore shift control of the company into the hands of the mutual funds.

World’s Stocks Controlled by Select Few
A recent analysis of the 2007 financial markets of 48 countries has revealed that the world’s finances are in the hands of just a few mutual funds, banks, and corporations. This is the first clear picture of the global concentration of financial power……
A pair of physicists at the Swiss Federal Institute of Technology in Zurich did a physics-based analysis of the world economy … revealing what they called the “backbone” of each country’s financial market. These backbones represented the owners of 80 percent of a country’s market capital, yet consisted of remarkably few shareholders….
…. Glattfelder said. “If you then look at who is at the end of these links, you find that it’s the same guys

Then there are our ‘benign’ bankers: Top Senate Democrat: bankers “own” the U.S. Congress and the last fleecing of the sheep: How the AIG Bailout Is Driving More Foreclosures and this article “How Wall Street Insiders Are Using the Bailout to Stage a Revolution” and this interview Heist of the century
Finally this article:

Ignoring Elites, Historians Are Missing a Major Factor in Politics and History
Steve Fraser, Gary Gerstel (2005)
Over the last quarter-century, historians have by and large ceased writing about the role of ruling elites in the country’s evolution. Or if they have taken up the subject, they have done so to argue against its salience for grasping the essentials of American political history. Yet there is something peculiar about this recent intellectual aversion, even if we accept as true the beliefs that democracy, social mobility, and economic dynamism have long inhibited the congealing of a ruling stratum. This aversion has coincided, after all, with one of the largest and fastest-growing disparities in the division of income and wealth in American history….Neglecting the powerful had not been characteristic of historical work before World War II.

On Robberies committed with paper:
“Of all the contrivances for cheating the laboring classes of mankind, none has been more effectual than that which deludes them with paper money. This is one of the most effectual of inventions to fertilize the rich man’s field by the sweat of the poor man’s brow. Ordinary tyranny, oppression, excessive taxation: These bear lightly on the happiness of the mass of the community, compared with fraudulent currencies and robberies committed with depreciated paper.” ~ Daniel Webster 1832

The newest ‘Paper’
World Bank Carbon Finance Report for 2007
The carbon economy is the fastest growing industry globally with US$84 billion of carbon trading conducted in 2007, doubling to $116 billion in 2008, and expected to reach over $200 billion by 2012 and over $2,000 billion by 2020.

The ‘Carbon Economy’ is just the newest twist on the old game of “cheating the laboring classes of mankind”, because every single one of those dollars comes from the pockets of the laboring classes and finds its way into the pockets of the financiers, with nothing given in return except the false promise that we are ‘Controlling the Climate’.
Waking Activists up to these facts is where the true fight is. People may not understand Physics and Chemistry and Statistics, but they do understand scams and frauds. In our favor, trust in bankers/financiers is at an all-time low. It does not matter if you are far right, far left, or in the middle; we should all be on the same side when preventing a massive rip-off and the ‘collateral damage’ of the deaths of thousands if not millions.
The people behind the fraud know this is where the fight is. They have been very proactive against us in several different ways. These include:
1. Funding activists through foundations. They even muddy the funding trail by setting up go-betweens such as the Tides Foundation
2. Controlling the activists. This activity started long ago with the ‘Innocents’ Clubs’ and has moved to the modern NGOs

“Very few of even the larger international NGOs are operationally democratic, in the sense that members elect officers or direct policy on particular issues,” notes Peter Spiro. “Arguably it is more often money than membership that determines influence, and money more often represents the support of centralized elites, such as major foundations, than of the grass roots.” The CGG [Commission on Global Governance] has benefited substantially from the largesse of the MacArthur, Carnegie, and Ford Foundations…. http://www.afn.org/~govern/strong.html

3. Repeating ad nauseam their claim that ‘deniers’ are funded by ‘Big Oil’.
4. Declaring that ‘deniers’ are mentally deficient. To aid this latest tactic Dr. Lewandowsky is trying to publish two peer-reviewed papers designed to make skeptics look like flat-earth mouth breathers unfit for polite society.

Werner Brozek
February 11, 2013 9:28 am

Nigel Harris says:
February 11, 2013 at 1:51 am
looks rather different if you put a simple linear trend line through the data presented.
http://www.woodfortrees.org/plot/hadcrut4gl/from:1990/mean:12/offset:-0.16/plot/hadcrut4gl/from:1990/mean:12/offset:-0.16/trend

Take a closer look at this. 2012 ended at 0.274. The slope of the line is 0.015. So if we multiply this by 5 to get us to 2017, we get 0.075, and adding to 0.274 gives 0.35 which is less than 0.40. So it would have to rise at a faster slope than is shown to reach the 1998 mark.

February 11, 2013 9:39 am

Sherlock Holmes says
It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of making a theory to fit the facts
Werner Brozek starts
In order to answer the question in the title, we need to know what time period is a reasonable period to take into consideration.
Henry says
A reasonable time period is one normal solar cycle i.e. at least 11 years.
http://www.woodfortrees.org/plot/hadcrut4gl/from:2002/to:2014/plot/hadcrut4gl/from:2002/to:2014/trend/plot/hadcrut3vgl/from:2002/to:2014/plot/hadcrut3vgl/from:2002/to:2014/trend/plot/rss/from:2002/to:2014/plot/rss/from:2002/to:2014/trend/plot/gistemp/from:2002/to:2014/plot/gistemp/from:2002/to:2014/trend/plot/hadsst2gl/from:2002/to:2014/plot/hadsst2gl/from:2002/to:2014/trend/plot/uah/from:2002/to:2014/trend
It is clear the odd one out here is UAH because it must have some calibration errors.
Obviously I did what Sherlock did, making my own dataset ensuring strict requirements for each weather station to be included.
My own data set for means shows a downward slope of -0.02 degree C per annum since 2000.
For the maxima I was able to do a best fit sine wave. We will be cooling until about 2038.
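Henry’s fitting method and data are not shown, but a generic best-fit sine wave of this kind can be sketched as follows (synthetic annual maxima only; every number here is a placeholder):

    # Illustrative sine fit with scipy; not Henry's data or his actual fit.
    import numpy as np
    from scipy.optimize import curve_fit

    def sine(t, amp, period, phase, offset):
        return amp * np.sin(2 * np.pi * (t - phase) / period) + offset

    years = np.arange(1950, 2013)
    maxima = sine(years, 0.3, 88.0, 1995.0, 14.0)            # toy "true" cycle
    maxima += np.random.default_rng(1).normal(0, 0.1, years.size)

    p0 = (0.3, 90.0, 1990.0, maxima.mean())                  # rough first guesses
    params, _ = curve_fit(sine, years, maxima, p0=p0)
    print("fitted period:", round(params[1], 1), "years")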

February 11, 2013 10:00 am

richardscourtney says:
February 11, 2013 at 3:24 am
And an important consideration in this determination is whether or not (at the 95% level) a zero trend has existed for 15 or more years. This is because in 2008 NOAA reported that the climate models show “Near-zero and even negative trends are common for intervals of a decade or less in the simulations”.
But, the climate models RULE OUT “(at the 95% level) zero trends for intervals of 15 yr or more”.
I explain this with reference, quote, page number and link to the NOAA 2008 report in my post at February 10, 2013 at 5:48 pm.

But you neglect to mention that those models did not include ENSO (clearly stated in the report).
I understand your difficulty. What the models “rule out” nature has done, and this falsifies your cherished models. You need to come to terms with it.
Since nature has not eschewed ENSO, nature has not in fact ‘done it’, in fact when the data is corrected for the presence of ENSO no such 15 year period is observed. You need to come to terms with that. Pick a period starting with an El Niño and ending with a La Niña and you’d expect flattening.

Greg Goodman
February 11, 2013 10:27 am

Philip Shehan says:
More cherry picking.
Let’s just concentrate on one of the data sets shown – the claim that Hadcrut4 temperature data is flat since November 2000.
Look also at the data from 1999. Or compare the entire Muana Loa data set from 1958 with temperature.
And remember, “statistical significance” here is at the 95% level. That is, even a 94% probability that the data is not a chance result fails at this level.
http://tinyurl.com/atsx4os
==========
Yes, I would agree with your introductory phrase: what you are doing is “more cherry picking”. The usual IPCC deception. Choose a period where both are going in the same direction and pretend this shows causation.
Why did you choose 1968? Let’s see. What does 1928 look like?
http://www.woodfortrees.org/plot/hadcrut4gl/from:1928/mean:12/plot/esrl-co2/from:1928/normalise/scale:0.75/offset:0.2/plot/hadcrut4gl/from:1999/to:2013/trend/plot/hadcrut4gl/from:2000.9/to:2013/trend
Now the Mauna Loa record does not go that far back, but no one pretends CO2 was dropping from 1930-1960. So what YOU were doing is cherry picking.
What the author was doing was testing the data against a specific claim that was intended to be a falsifiable statement … which turned out to be falsified. There is no “cherry-picking” involved in testing whether a hypothesis fits the claims of its authors.
That is simple, honest application of scientific method.
What part of “simple” and “honest” is causing you problems Professor Cherry-picker?

February 11, 2013 10:29 am

Werner Brozek says:
February 11, 2013 at 9:13 am
Phil. says:
February 11, 2013 at 5:09 am
Thank you! So I want to be sure I said the correct thing. Earlier, I said the following:
“If you go to:
http://www.skepticalscience.com/trend.php
And if you then start from 1989.67, you would find:
“0.131 ±0.132 °C/decade (2σ)”
In other words, the warming is not significant at 95% since September 1989.”
Is this last statement completely correct or should it be modified in some way? If so, how? Thanks!

You’re welcome. In your last statement I’d say that the null hypothesis is that no warming took place, i.e. that the real trend is less than or equal to zero. My alternate hypothesis would be that warming did take place; I therefore need to reject the null hypothesis on statistical grounds, and I choose to do so at the 95% level. Using the data we have and doing a ‘one-tailed’ test, I would need to show that there is a less than 5% chance of the trend actually being zero or less. In the case you presented, the data shows an ~2.5% chance of zero or below, so the null hypothesis is rejected and warming is significant at the 95% level. I’d prefer to work with the original data and do a Pearson-type test, but I’d expect a similar result.
To fail to reject the null at the 95% level on this basis, the threshold would be at 1.65 sigma, so roughly 0.131 ± 0.216 °C/decade based on my back-of-the-envelope calc.
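A minimal sketch of the two conventions being contrasted here, using the quoted SkS output of 0.131 ± 0.132 °C/decade (2 sigma) and a normal approximation, with autocorrelation assumed to be already folded into the quoted uncertainty:

    # Figures quoted from the SkS calculator; only the decision rules are shown.
    trend, two_sigma = 0.131, 0.132
    sigma = two_sigma / 2.0

    # Two-tailed, +/- 2 sigma convention: zero (just) falls inside the interval,
    # so the warming is "not significant" in the sense used in the post.
    print("zero inside the 2-sigma interval:",
          trend - two_sigma <= 0.0 <= trend + two_sigma)

    # One-tailed test of "no warming" at 95%: reject if the trend exceeds ~1.645 sigma.
    print("one-tailed significant at 95%:", trend > 1.645 * sigma)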

Greg Goodman
February 11, 2013 10:41 am

Phil says: Since nature has not eschewed ENSO, nature has not in fact ‘done it’, in fact when the data is corrected for the presence of ENSO no such 15 year period is observed. You need to come to terms with that.
===
Oh, you gotta love it.
When the data is “corrected”….. Hey bud, the DATA is correct, that’s why it’s called the data (singular datum: point of reference). That’s where science starts. Now how about correcting the frigging models?
No one was “correcting” for the lack of ENSO when they were winding up the world at the end of the century. Then the “uncorrected” warming was a “wake up call” etc. etc. Now we have to take it into account.
How about we “correct” for the 60y cycle and the 10y and the 9y and then come back and look at what climate is really doing?
When all the cycles peaked at once it was all fine and dandy to claim the world was about to turn into Venus. Now the wind is blowing in the other direction, it suddenly has to be corrected…. until the next time it starts going up again and we can forget the “corrections”.
The sad thing is you guys are probably serious and honestly believe this garbage.

Werner Brozek
February 11, 2013 11:28 am

Phil. says:
February 11, 2013 at 10:00 am
Since nature has not eschewed ENSO, nature has not in fact ‘done it’, in fact when the data is corrected for the presence of ENSO no such 15 year period is observed. You need to come to terms with that. Pick a period starting with an El Niño and ending with a La Niña and you’d expect flattening.
I would be more inclined to agree with you if there had been a strong El Nino and then neutral conditions afterwards. However as it turns out, the La Ninas that followed the 1998 El Nino effectively cancelled it out. So the slope since 1997, or 16 years and 1 month is 0, however the slope from March, 2000, or 12 years and 11 months, is also 0. So in my opinion, nature has corrected for ENSO. However even if it didn’t, there comes a point in time where one has to stop blaming an ENSO from 14 years back for a lack of catastrophic warming. It would be like a new president blaming the old president for things 7 years after taking office. Note that the graph below actually starts in the La Nina region.
http://www.woodfortrees.org/plot/rss/from:1997/plot/rss/from:1997/trend/plot/rss/from:2000.16/trend
P.S. Thank you for
Phil. says:
February 11, 2013 at 10:29 am

JazzyT
February 11, 2013 11:49 am

richardscourtney says:
February 10, 2013 at 5:48 pm

The NOAA falsification criterion is on page S23 of its 2008 report titled ‘The State Of The Climate’ and can be read at
http://www1.ncdc.noaa.gov/pub/data/cmb/bams-sotc/climate-assessment-2008-lo-rez.pdf

DirkH says:
February 11, 2013 at 5:48 am

See Richard Courtney’s comment above. It is NOAA’s falsification criterion.

The term “falsification criterion” is highly misleading. There is no “falsification criterion” published by NOAA or anyone else. Here again is the relevant quote:

“Near-zero and even negative trends are common for intervals of a decade or less, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.

This defines a discrepancy, not falsification. There’s no reason to believe NOAA intends these words to be synonyms.
To start with, when an event takes place that was only 5% probable, the only thing we can say for sure is that we are surprised. (Or, in this case, if you’re Phil Jones, you can say that you’re worried.) Sometimes we have tests that will accept or reject a hypothesis (really, rejecting or accepting a null hypothesis) at the 95% level, if we define it that way and we’re willing to live with the mistakes. We do this in the context of, say, drug testing. We can’t do statistical tests without error, and getting down to a 1% error is very expensive. And, for the final tests, the statistics is all we have. So, some beneficial drugs are rejected, and some useless ones accepted, because that simply cannot be helped.
But in science, when the question is, “what’s going on? How does nature work in this system?” we look at the data, perhaps noting that some of it is surprising, and then view it in context with the rest of the data. Some data may show a problem for the hypothesis (or the model) but there’s no universal, automatic rejection criterion.
Even if you decide that the hypothesis (or computer model) is not working, it may be possible to salvage it. It may be that a modified version will work much better, and the discrepancies between data and prediction may even point the way to refining the hypotheses (or model). This is quite common in science. A great many scientists could give advice on how to do this, and, importantly, on when to give up. But, there aren’t any hard-and-fast rules on it, certainly none so simplistic as a 95% criterion for one test.
So: There is no specific “falsification criterion” for models based on flat temperatures, only an informal definition of “discrepancy.” A discrepancy does not falsify the models; it is interesting enough to merit a closer look.
BUT: there is an even more serious problem with this 95% discrepancy definition: not all of the processes contributing to temperature are explicitly predicted in the models. Specifically, you cannot predict ENSO, solar radiation, or even volcanoes many years into the future. So, instead, the models include future El Nino and La Nina events at random (although with frequency and strength consistent with what we know), and do not consider solar and volcanic forcings at all for the future. (Anybody who understands these things can at least guess how solar brightening or dimming, or volcanic eruptions, would affect the model projections.) These all come into play for “hindcasting,” wherein they run the models using the observed ENSO, solar, and volcanic events. 1998 saw a record-breaking El Nino, followed by La Nina events soon afterwards and also recently. The sun has been quiet lately, too, which should be of great interest to anyone interested in climate issues.
People keep ignoring this. Really, this is not that difficult. Suppose a soccer (football) commentator, doing TV broadcasts of the games, acquired an uncanny knack for predicting the winners and scores of individual games. But, one week, some of the best players on the stronger team are injured in an auto accident and cannot play. They lose. The following week, there’s a surprise announcement; a weaker team acquires a star player and wins handily. The commentator is wrong on both counts. Does everyone say, “he’s lost his touch, he’s no good at predicting!” Or do they say, “He could not have foreseen that”? Reasonable people would say the latter.
So, in looking at recent temperatures, how to deal with ENSO, modeled only statistically in forecasts, and with solar (and volcanic) influences? Rahmstorf and Foster did so with a multivariate regression analysis: http://iopscience.iop.org/1748-9326/7/4/044035/article
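For readers unfamiliar with that approach, a minimal sketch of the multivariate-regression idea (synthetic placeholder indices, not the actual Foster and Rahmstorf inputs; only the method is illustrated):

    # Temperature regressed on time plus stand-in ENSO and solar indices.
    # Everything here is synthetic; the coefficients are arbitrary placeholders.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 400                                      # months
    t = np.arange(n) / 120.0                     # time in decades
    enso = rng.normal(0, 1, n)                   # stand-in ENSO index
    solar = rng.normal(0, 1, n)                  # stand-in solar forcing
    temp = 0.17 * t + 0.08 * enso + 0.03 * solar + rng.normal(0, 0.1, n)

    X = np.column_stack([np.ones(n), t, enso, solar])
    coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
    adjusted = temp - X[:, 2:] @ coef[2:]        # series with ENSO/solar removed
    print("recovered underlying trend:", round(coef[1], 3), "C/decade")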
This is from 2011, but since then, the issues have been more of the same.
Bob Tisdale has taken issue with the ENSO part, at least:
http://wattsupwiththat.com/2012/11/28/mythbusting-rahmstorf-and-foster/
and
http://wattsupwiththat.com/2012/01/14/tisdale-on-foster-and-rahmstorf-take-2/
Although I think Rahmstorf and Foster have shown that it’s plausible that there is no discrepancy for temperatures over the last two decades and longer – or to put it another way, there is not necessarily any discrepancy – I would like to see some hindcasts run over that time period, showing how ENSO, solar, and volcanoes fit with known physical parameters, not just fitting them to one another.
Science is complicated. We’re used to getting simple explanations that summarize a lot of this complexity into a more digestible form. But some explanations, like “flat temperatures–models falsified” are much too simple. Even a little digging is enough to falsify a misleading explanation such as that one.

Nigel Harris
February 11, 2013 12:40 pm

Werner,
Thanks for your kind responses.
Although I don’t think you answered your headline question, you’re quite right to point out that the global temperature anomaly series are all showing long periods of time during which temperatures have shown no significant upward trend. It’s a striking departure from the sharp rise that was clearly playing out between 1970 and the late 1990s, and it is well worth drawing attention to this.
I’ve been playing around with LOESS curve fitting to the temperature series, using a 15-year smoothing period (which to me seems to be a way to ask the question about changes in trends without cherry picking too badly), and they’re all showing similar behaviour, with the WoodForTrees index dead flat since 2007, Hadcrut4 showing a downward trend in the last 5 years, but UAH (including the January 2013 figure) still rising slowly.
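A minimal sketch of that kind of LOESS smoothing, using statsmodels on a synthetic monthly series; the 15-year span is expressed as a fraction of the record length, and nothing here is the actual WFT data:

    # Illustrative LOESS smoothing; the series and the 15-year window are toys.
    import numpy as np
    from statsmodels.nonparametric.smoothers_lowess import lowess

    rng = np.random.default_rng(3)
    dates = np.arange(1979, 2013, 1 / 12)
    anoms = 0.015 * (dates - 1979) + rng.normal(0, 0.15, dates.size)

    frac = 15.0 / (dates[-1] - dates[0])               # ~15-year smoothing span
    smoothed = lowess(anoms, dates, frac=frac, return_sorted=False)

    # crude local trend over the last five years of the smoothed curve, per decade
    slope = (smoothed[-1] - smoothed[-61]) / 5.0 * 10.0
    print(f"smoothed trend over the final 5 years: {slope:+.3f} C/decade")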
I’m not sure that it has yet reached the point where anyone can really say that the expectation of continued rise in temperature is disproved or falsified. If you fit a straight line to the first half of the satellite era (when everyone agrees temperatures were rising) and extrapolate it forward to the second half, it really doesn’t look that stupid. OK, most series are currently below the extrapolated trend line, but not by an amount that is exceptional.
Lucia Liljegren (The Blackboard) periodically attempts to analyse the question of whether current temperature observations are compatible (at the 95% confidence level) with the predictions of climate models, or with an assumed 0.2C/decade trend. Her understanding of the impact of autocorrelation on linear trend confidence intervals is way beyond mine. She seems to be of the view that recent observations have been skirting very much along the bottom of, and (depending on choice of model) possibly outside, the 95% envelope, but at this stage it wouldn’t take much of an upward move to bring them back into line.
I fear you may find that your analysis is rather sensitive to what happens next. If temperatures drop, then you are going to be able to report longer and more significant non-warming periods as time goes on (and paradigms will probably have to change!), but if temperatures show much of a rise, the opposite will happen.
The January 2013 figure for UAH of +0.506, which you haven’t included in this analysis, has already invalidated the 8 year flat-or-negative trend that you report above. With that figure included, you’re down to 2008 (4 years) as the longest flat-or-negative trend.
The very impressive over-16-years RSS flat trend includes the January 2013 figure, but if RSS stays at exactly the same level for the rest of 2013, I’m afraid you won’t reach Ben Santer’s 17 full years of no warming, because by Dec 2013, you’ll still have a month to go and the trend will have turned positive!

Nigel Harris
February 11, 2013 1:00 pm

Werner,
Apologies for the second post, but I wanted also to respond specifically to your comment (February 11, 2013 at 9:28 am) about my suggestion:

Your final chart looks rather different if you put a simple linear trend line through the data presented.
http://www.woodfortrees.org/plot/hadcrut4gl/from:1990/mean:12/offset:-0.16/plot/hadcrut4gl/from:1990/mean:12/offset:-0.16/trend

You said:

Take a closer look at this. 2012 ended at 0.274. The slope of the line is 0.015. So if we multiply this by 5 to get us to 2017, we get 0.075, and adding to 0.274 gives 0.35 which is less than 0.40. So it would have to rise at a faster slope than is shown to reach the 1998 mark.

Werner, if you think that the slope of 0.015 has any validity as a predictor of what might happen next, then you have to embrace the entire linear regression model. The linear regression model is telling us both that the slope is 0.015 and that the best estimate of the real value (after eliminating noise) at the end of 2012 is 0.381. You can’t pick out the slope figure and apply it using the Dec 2012 figure of 0.274 as the starting point, because the model tells us we’re already at 0.381 at Dec 2012.
The latest observation, 0.274, is just a single observation. It happens to be 0.108 (roughly 2 standard deviations according to Skeptical Science’s calculator) below the best estimate of the central trend value at Dec 2012. If the linear model is right, that 0.108 deviation is noise. By 2017, we’re as likely to end up 0.108 above the trend as 0.108 below it. So a figure of 0.35 is possible, but so is a figure of 0.57! And the most likely outcome (if the linear model is right) is going to be bang on the current trend line, or round about 0.45, which is comfortably above the 1998 figure.
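A minimal sketch of the two extrapolations being contrasted, using only the figures quoted in this exchange (the 0.381 trend-line value is Nigel’s stated estimate):

    # Numbers as quoted above; only the arithmetic of the two projections differs.
    slope = 0.015           # C/year, fitted trend since 1990
    last_obs = 0.274        # Dec 2012 observation on the offset baseline
    trend_value = 0.381     # fitted trend-line value at Dec 2012
    years_ahead = 5

    print("from the last observation:", round(last_obs + slope * years_ahead, 3))
    print("from the fitted trend line:", round(trend_value + slope * years_ahead, 3))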

Werner Brozek
February 11, 2013 1:40 pm

Nigel Harris says:
February 11, 2013 at 12:40 pm
Nigel,
Thanks for your kind responses.
As for my not including January for UAH, that is because UAH has not put the January numbers in the data set that WFT uses. But it gets worse than that. I could only go to November with GISS, Hadcrut3, and WTI since WFT does not have the December numbers on its site yet. If I see new numbers tonight, I will give an update. Should all numbers appear, then UAH will get shorter and Hadcrut3 and WTI will get longer. I do not know about GISS since, while there was a huge drop in December, they adjusted many numbers upwards last month.
As far as the jump in January was concerned, I did find it odd since ENSO has been dropping for the last 5 months. However sudden rises and drops are not unheard of. See:
Hadcrut4 data is shown from October 2006 to March 2007. From November to January, the anomaly jumped by 0.293. Then from January to February, it dropped by 0.28.
http://www.woodfortrees.org/plot/hadcrut4gl/from:2006.75/to:2007.25

Werner Brozek
February 11, 2013 2:12 pm

Nigel Harris says:
February 11, 2013 at 1:00 pm
Hello Nigel,
The Hadcrut people did something really strange with regards to their prediction of 0.43. They used a different base line where the 1998 value became 0.40 instead of 0.56. That is why I had to offset it by -0.16. Here is what they really should have used:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1997/mean:12
However this makes no difference to my final conclusion.
Note that the final point on the above graph is close to the 2012 average of 0.434. I do not think the slope will go up at 0.015/year. I am just saying that even if it did, you would not reach the 1998 height. That would rival the fastest trends ever recorded. See:
http://news.bbc.co.uk/2/hi/science/nature/8511670.stm

February 11, 2013 2:43 pm

Phil.:
At February 11, 2013 at 10:00 am you perhaps inadvertently misled by quoting only the synopsis in my post of February 11, 2013 at 3:24 am.
Please read my earlier post at February 10, 2013 at 5:48 pm.
In that post I cite, reference, link to and quote the entire NOAA falsification criterion. To save you finding it I copy the pertinent section from it here.

The NOAA falsification criterion is on page S23 of its 2008 report titled ‘The State Of The Climate’ and can be read at
http://www1.ncdc.noaa.gov/pub/data/cmb/bams-sotc/climate-assessment-2008-lo-rez.pdf
It says

ENSO-adjusted warming in the three surface temperature datasets over the last 2–25 yr continually lies within the 90% range of all similar-length ENSO-adjusted temperature changes in these simulations (Fig. 2.8b). Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.

So, the climate models show “Near-zero and even negative trends are common for intervals of a decade or less in the simulations”.
But, the climate models RULE OUT “(at the 95% level) zero trends for intervals of 15 yr or more”.
We now see that reality has had (at the 95% level) zero trends for more than 17 years whether or not one interpolates across or extrapolates back across the 1998 ENSO peak.
The facts are clear.
According to the falsification criterion set by NOAA in 2008, the climate models are falsified by the recent period of 16+ years of (at 95% confidence) zero global temperature trend. This is because NOAA says the climate models simulations often show periods of 10 years when global temperature trends are zero or negative but the simulations rule out near zero trends in global temperature for periods of 15 years. What the models “rule out” nature has done.
We need to clearly and repeatedly remind people of what NOAA said in 2008. Otherwise the goal posts will be moved and moved again until they are off the planet.

Please note that I specifically stated
“We now see that reality has had (at the 95% level) zero trends for more than 17 years whether or not one interpolates across or extrapolates back across the 1998 ENSO peak.”
So, a claim that I misstated the criterion and did not consider ENSO is a falsehood.
Richard

February 11, 2013 2:52 pm

Phil.:
This is a deliberate addendum to my reply to your post at February 11, 2013 at 10:00 am.
This is separate so it is clear and not hidden among other points.
You wrongly assert

Since nature has not eschewed ENSO, nature has not in fact ‘done it’, in fact when the data is corrected for the presence of ENSO no such 15 year period is observed. You need to come to terms with that. Pick a period starting with an El Niño and ending with a La Niña and you’d expect flattening.

ENSO is not understood so no method for removing ENSO can be rationally asserted as being better than any other.
I wrote

“We now see that reality has had (at the 95% level) zero trends for more than 17 years whether or not one interpolates across or extrapolates back across the 1998 ENSO peak.”

You do not have a point.
Richard

February 11, 2013 3:00 pm

JazzyT:
I read your sophistry at February 11, 2013 at 11:49 am.
If you insist then I am willing to agree that the criterion be called a ‘discrepancy criterion’ and not the usual ‘falsification criterion’.
So, according to your wording
the models are not falsified when they are “discrepant” with reality.
Perhaps you would explain how “discrepant” they have to be for them to be falsified?
Richard

February 11, 2013 3:05 pm

Werner Brozek says:
February 11, 2013 at 11:28 am
Phil. says:
February 11, 2013 at 10:00 am
“Since nature has not eschewed ENSO, nature has not in fact ‘done it’, in fact when the data is corrected for the presence of ENSO no such 15 year period is observed. You need to come to terms with that. Pick a period starting with an El Niño and ending with a La Niña and you’d expect flattening.”
I would be more inclined to agree with you if there had been a strong El Nino and then neutral conditions afterwards. However as it turns out, the La Ninas that followed the 1998 El Nino effectively cancelled it out. So the slope since 1997, or 16 years and 1 month is 0, however the slope from March, 2000, or 12 years and 11 months, is also 0. So in my opinion, nature has corrected for ENSO. However even if it didn’t, there comes a point in time where one has to stop blaming an ENSO from 14 years back for a lack of catastrophic warming.

Certainly there comes a time however the statistical treatment using ENSO Index suggests that that time is not yet. However the main point is that you can’t use a criterion based on models which don’t include an ENSO model and apply it to a time series which does include ENSO.

February 11, 2013 3:12 pm

Greg Goodman says:
February 11, 2013 at 10:41 am
Phil says: Since nature has not eschewed ENSO, nature has not in fact ‘done it’, in fact when the data is corrected for the presence of ENSO no such 15 year period is observed. You need to come to terms with that.
===
Oh, you gotta love it.
When the data is “corrected” ….. Hey bud, the DATA is correct, that’s why it’s called the data (singular datum: point of reference) . That’s where science starts. Now how about correcting the frigging models?

The data is correct, but it cannot be compared with the results of models which don’t contain a model for the ENSO phenomenon. Yes, to incorporate a model for ENSO would be nice, but since it’s a phenomenon which doesn’t occur on a regular basis, that’s difficult to do! What has been done is to account for the known events and then compare, which shows no such extended flat period.

February 11, 2013 3:20 pm

richardscourtney says:
February 11, 2013 at 2:52 pm
Phil.:
This is a deliberate addendum to my reply to your post at February 11, 2013 at 10:00 am.
This is separate so it is clear and not hidden among other points.
You wrongly assert
Since nature has not eschewed ENSO, nature has not in fact ‘done it’, in fact when the data is corrected for the presence of ENSO no such 15 year period is observed. You need to come to terms with that. Pick a period starting with an El Niño and ending with a La Niña and you’d expect flattening.
ENSO is not understood so no method for removing ENSO can be rationally asserted as being better than any other.
I wrote
“We now see that reality has had (at the 95% level) zero trends for more than 17 years whether or not one interpolates across or extrapolates back across the 1998 ENSO peak.”
You do not have a point.

Actually I do: you are attempting to compare apples with oranges. You also assert that no method can be rationally used to correct for ENSO, in which case even the comparisons made using statistical adjustments for ENSO aren’t valid. So your continued quoting of the NOAA criterion isn’t applicable to real-world data. The ENSO phenomenon over the last 17 years is much more than a single event in 1998. So I reiterate, nature has not done it.

Philip Shehan
February 11, 2013 3:27 pm

Greg Goodman says:
February 11, 2013 at 10:27 am…
You miss my point. I cherry picked 1999 to show that it gave a quite different result to November 2000. The point being that you can cherry pick a short term period to show any trend you like. (My choice of 1958 was not cherry picking. That is when the Mauna Loa data starts.)
Furthermore, the concentration on “statistical significance” actually shows precisely why short term data trends are misleading. In the short term the noise swamps the signal, and the shorter the time frame the greater the uncertainty. Take the Hadcrut4 data above, and proceed backwards decade by decade. The trends and 95% confidence limits, in °C/decade, are as follows:
11/2000: -0.008 ± 0.171
1999: 0.079 ± 0.149
1995: 0.098 ± 0.111
1990: 0.144 ± 0.080
1980: 0.158 ± 0.045
1970: 0.164 ± 0.031
1960: 0.132 ± 0.025
And so on the further back you go.
http://www.skepticalscience.com/trend.php
So those who pick short periods of time and declare them “not statistically significant” are actually explaining why their data should be ignored.
They are setting the data up to fail. Only multidecadal trends are meaningful.
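A minimal synthetic illustration of why those ± figures shrink as the start date moves back (white noise is assumed here; the SkS calculator additionally corrects for autocorrelation, which widens every interval):

    # Toy series: a fixed linear trend plus noise; OLS slope and 2-sigma error
    # are recomputed for progressively longer windows ending at the present.
    import numpy as np

    rng = np.random.default_rng(4)
    years = np.arange(1960, 2013, 1 / 12)
    series = 0.015 * (years - 1960) + rng.normal(0, 0.15, years.size)

    for start in (2000, 1995, 1990, 1980, 1970, 1960):
        t, y = years[years >= start], series[years >= start]
        slope, intercept = np.polyfit(t, y, 1)
        resid = y - (slope * t + intercept)
        se = resid.std(ddof=2) / (t.std() * np.sqrt(t.size))   # OLS slope std. error
        print(f"from {start}: {slope*10:+.3f} +/- {2*se*10:.3f} C/decade (2 sigma)")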

D.B. Stealey
February 11, 2013 4:18 pm

“Has Global Warming Stalled?”
Of course it has. Only the ignorant and the dishonest would claim otherwise.

February 11, 2013 4:33 pm

Phil.:
At February 11, 2013 at 3:12 pm you say

The data is correct, but it can not be compared with the results of models which don’t contain a model for the ENSO phenomenon. Yes to incorporate a model for ENSO would be nice, but since it’s a phenomenon which doesn’t occur on a regular basis that’s difficult to do! What has been done is to account for the known events and then compare, which shows no such extended flat period.

I feed back what I read that to say because, if this is how I understand your comment, then others will, too.
You are saying that when the empirical data don’t agree with the model then the empirical data must be adjusted to agree.
That contravenes every principle of scientific modelling.
Of course, one may want to exclude the effect of ENSO from the data because the model does not emulate ENSO. But nobody understands ENSO and, therefore, parsimony dictates the exclusion needs to be interpolation across – or extrapolation across – an ENSO event. Any other ‘adjustment’ for ENSO is a fudge.
The global temperature time series each shows (at the 95% level) zero trends for more than 17 years whether or not one interpolates across or extrapolates back across the 1998 ENSO peak.
I am sure you will want to respond to this and I apologise that I will not see any reply for about a week because I am about to go on one of my frequent but irregular trips which exclude me from communications.
Richard

Werner Brozek
February 11, 2013 5:27 pm

Philip Shehan says:
February 11, 2013 at 3:27 pm
In the short term the noise swamps the signal, and the shorter the time frame the greater the uncertainty.  …..Only multidecadal trends are meaningful.
But on the other hand, if the warming rate is high enough, then 16 years is sufficient. Check out the following for Hadcrut4:
Start of 1995 to end 2009: 0.133 +/- 0.144. Warming for 15 years is NOT significant.
Start of 1995 to end 2010: 0.137 +/- 0.129. Warming for 16 years IS significant.
Start of 1995 to end 2011: 0.109 +/- 0.119. Warming for 17 years is NOT significant.
Start of 1995 to October 2012: 0.098 +/- 0.111. Warming for 18 years is NOT significant.
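A minimal sketch of the checks behind those four lines, using the quoted 2-sigma figures and the two-tailed (interval excludes zero) convention:

    # Figures quoted above from the SkS calculator; +/- values are 2-sigma.
    cases = [("1995 to end 2009", 0.133, 0.144), ("1995 to end 2010", 0.137, 0.129),
             ("1995 to end 2011", 0.109, 0.119), ("1995 to Oct 2012", 0.098, 0.111)]

    for label, trend, two_sigma in cases:
        significant = trend - two_sigma > 0        # does the interval exclude zero?
        print(f"{label}: {'' if significant else 'NOT '}significant")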

Phil.
February 11, 2013 6:57 pm

Werner based on the ‘one tailed’ test all of those examples show significant warming at the 95% level.

D.B. Stealey
February 11, 2013 7:06 pm

Phil.,
You sound positively jealous that Werner Brozek has posted such a credible argument.
There’s nothing stopping you from posting your own article, you know. Then you would see all the similar nitpicking comments about whether a test has one tail or two, etc.
Face facts, Werner has made a good case. Global warming has stalled.

Werner Brozek
February 11, 2013 7:25 pm

UPDATE
UAH has now been updated for January and it has had a huge effect on the length of time that the slope is at least slightly negative. The time dropped from 8 years and 3 months to 4 years and 7 months. So the slope is now negative from June 2008 to January 2013. See the green line at:
http://www.woodfortrees.org/plot/uah/from:2000/plot/uah/from:2008.5/trend

Philip Shehan
February 11, 2013 7:58 pm

Werner, you are not only cherry picking years, you are picking years and months within a very short time frame and finding one set that is only just significant. You are rather making my point for me.

Werner Brozek
February 11, 2013 7:59 pm

Phil. says:
February 11, 2013 at 6:57 pm
Werner based on the ‘one tailed’ test all of those examples show significant warming at the 95% level.
Phil, I appreciate your help, and from earlier comments, I understand where you are coming from. I realize your expertise in statistics is greater than mine and I see no reason to dispute the above statement. I checked the site at: http://www.skepticalscience.com/trend.php
I found this statement with regards to using this site:
“What can you do with it?
That’s up to you, but here are some possibilities:
Examine how long a period is required to identify a recent trend significantly different from zero.”
You may recall Phil Jones’ comments in 2010 that the warming was not significant for 15 years from 1995 to 2009, but he later said it was significant for 16 years from 1995 to 2010. The only thing that makes sense to me is that he used the ‘two-tailed’ test that skeptical science uses. Would you agree with that? I appreciate you informing me that I cannot mention the 95%. I will therefore play it safe and say that according to the skeptical science site, such and such is where the trend is “significantly different from zero”. Does the skeptical science site do things wrongly? Perhaps, although I am not in a position to judge that.
Regards

Werner Brozek
February 11, 2013 8:19 pm

Philip Shehan says:
February 11, 2013 at 7:58 pm
Werner, you are not only cherry picking years, you are picking years and months within a very short time frame and finding one set that is only just significant.
At this point in time, three of the data sets that I discussed show no warming for over 15 years, namely RSS, Hadcrut3 and Hadsst2. And three do not show this, namely GISS, UAH and Hadcrut4. It will be interesting to see what this year brings.

Philip Shehan
February 11, 2013 9:49 pm

justthefactswuwt says:
February 11, 2013 at 9:17 pm…
It’s not really whether or not a particular period shows no warming. It’s whether or not a particular data set says anything meaningful with regard to warming, cooling or stasis.
I included the period 1995 to the present because there is a concurrent discussion of this on Andrew Bolt’s website. Professor Sinclair Davidson correctly identifies that period as showing no statistically significant warming at the 95% confidence level. (Actually he was looking at hadcrut 3 data rather than hadcrut 4 as I have done):
Trend: 0.098 ±0.111 °C/decade (2σ)
What this means is that there is a 95% probability that the actual trend is between 0.209 and -0.013 °C/decade. In other words, you can’t be sure (at least at the 95% level) whether the temperature is actually warming, cooling or static over this period, not just whether or not it is warming.
Werner points out that if you look at a shorter period from 1995 to the end of 2010 rather than the beginning of 2013 you can identify a trend (to 95% confidence) for that period and it is warming, but perhaps by as little as 0.008 °C/decade.
So for any period you choose, if you want to call the period warming, cooling or static, you need to look at the entire 95% probability range and see if the trend is positive or negative. The range will generally be narrower the longer the period you look at. (But not always – again because the signal to noise is lower for short periods, and the assumption here is that the trend is linear, which may not hold over many decades.)

David Cage
February 11, 2013 11:42 pm

Global warming has not stalled. It never was. At least one group of climate scientists were told right at the start of the theory in the early seventies that pattern analysis showed that most of the change was entirely cyclic and that the rest was well within the level of noise that has occurred at least ten times in recorded climate information. The recent noise levels on the temperature graph make the cyclic nature clearer than at most times, as well as showing clearly that the warming phase is over and we are going into the cooler phase.
When will climate scientists start to look at the data not as climate but as patterns to be analysed using the best analysis methods available, rather than their archaic filtering methods, so that they can start to understand which baseline they should be using? Without this they cannot hope to get a correct figure.
Also, when will people stop nit picking about whether the curve is flat plus or minus a gnat’s whisker and having to cherry pick in both directions to decide? All we need to decide is whether or not the temperature is on a wildly escalating, out-of-control feedback curve with forty-eight months to doomsday. Anyone who believes we are should have to sign up to the idea of full compensation by climate scientists if they are wrong, since they refuse to have their work scrutinised by external examiners.

Nigel Harris
February 12, 2013 1:28 am

justthefactswuwt:
How about if I look for another unique attribute of each data set, such as the maximum regression slope for periods ending with the latest value (excluding periods of less than 10 years which give extreme but clearly non-significant values) and say that UAH has been rising by 2.05 C/decade since October 1991
http://www.woodfortrees.org/plot/uah/from:1991.75/plot/uah/from:1991.75/trend
and Hadcrut4 has been rising by 1.74 C/decade since December 1973
http://www.woodfortrees.org/plot/hadcrut4gl/from:1973.9/plot/hadcrut4gl/from:1973.9/trend
(and so on for other data sets). Is that not cherry picking?

February 12, 2013 1:44 am

David Cage says
http://wattsupwiththat.com/2013/02/10/has-global-warming-stalled/#comment-1222903
henry says
there are not too many of us who actually seem to have figured out that natural cycle.
I did, but only found it by looking at maxima…
which NOBODY has done, yet….and is still not doing.
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/
Henry@Werner
Nice job.
Still, I would consider throwing out UAH as it does not fit in with any of the other data sets, including my own.
All data sets including my own show a negative trend over the past 11-12 years (which is the equivalent of at least one complete solar cycle period) – meaning that we have entered a cooling off period.
http://www.woodfortrees.org/plot/hadcrut4gl/from:2002/to:2013/plot/hadcrut4gl/from:2002/to:2013/trend/plot/hadcrut3vgl/from:2002/to:2013/plot/hadcrut3vgl/from:2002/to:2013/trend/plot/rss/from:2002/to:2013/plot/rss/from:2002/to:2013/trend/plot/gistemp/from:2002/to:2013/plot/gistemp/from:2002/to:2013/trend/plot/hadsst2gl/from:2002/to:2013/plot/hadsst2gl/from:2002/to:2013/trend

Philip Shehan
February 12, 2013 2:22 am

“David Cage says:
February 11, 2013 at 11:42 pm
…Anyone who believes we are should have to sign up to the idea of full compensation by climate scientists if they are wrong, since they refuse to have their work scrutinised by external examiners.”
The idea that scientists, climate or otherwise, refuse to have their work scrutinised by “external examiners” or anyone else is complete nonsense.
The entire scientific project rests on the publication, discussion and challenging of results and theories, which is precisely what is going on here.
Who are these “external examiners” to be? Presumably they would have to be people who are most qualified to understand the science. Scientists in fact. And who is going to appoint them? Governments? “Independent” think tanks? Industry? Conservation groups?
And as for compensation, are skeptics to be made to pay for losses incurred if the majority of climate scientists are right?
In Italy we have had the ludicrous situation where scientists have been convicted of a criminal offense because they were not “alarmist” enough about the possibility of an earthquake occurring.

MikeB
February 12, 2013 4:31 am

There is an update on the BBC story about parts of Africa becoming 3.5 deg.C warmer in just 20 years.
The Guardian correspondent, Leo Hickman, has got the bit between his teeth on this one and is intent on tracking down where this erroneous statement came from (yes, I said Guardian).
After following the ‘evidence’ from one green lobby group to another it all comes down to a single weather station in Africa, at Kericho.

It confirms what the original Christian Aid report had claimed about the Kericho weather station recording a 3.5C (3.6C, actually) rise in “maximum temperatures” over a 20-year period.

But then, thanks to Dr Menno Bouma at the London School of Hygiene and Tropical Medicine, he discovered that there were known problems with the Kericho temperature record

In researching the temperature records of Kericho, we discovered that the meteorological station of Kericho, in operation at least since 1957, had been moved to a new location on a lower altitude in 1986. In view of the local “lapse rate”, the decline in temperature with increasing altitude, this change from 2,184 meter to 1,977 meter accounts for an additional change in temperature of between 1.1C and 1.3C. This move of the observations to a lower altitude does not appear to have been taken into account in temperature records released by the Kenyan meteorological institute, and this has wrong-footed researchers and publications based on these data.

http://www.guardian.co.uk/environment/blog/2013/feb/08/bbc-global-warming-attenborough-africa

Graham W
February 12, 2013 5:11 am

Nigel Harris says:
February 12, 2013 at 1:28 am
justthefactswuwt:
“How about if I look for another unique attribute of each data set, such as the maximum regression slope for periods ending with the latest value (excluding periods of less than 10 years which give extreme but clearly non-significant values) and say that UAH has been rising by 2.05 C/decade since October 1991
http://www.woodfortrees.org/plot/uah/from:1991.75/plot/uah/from:1991.75/trend
and Hadcrut4 has been rising by 1.74 C/decade since December 1973
http://www.woodfortrees.org/plot/hadcrut4gl/from:1973.9/plot/hadcrut4gl/from:1973.9/trend
(and so on for other data sets). Would that not be cherry picking?”
Not cherry picking, just wrong by a factor of 10. The trends you are referring to are 0.204 +/- 0.150 C/decade for UAH, and 0.175 +/- 0.035 C/decade for Hadcrut4. These figures are from the Skeptical Science trend calculator.
2.05 C/decade would give a rise of 20.5 degrees C over 100 years.
And the answer to your question is no, it’s not cherry picking. These two trends exist. What of them? What’s your point?

Philip Shehan
February 12, 2013 5:50 am

justthefactswuwt says:
February 12, 2013 at 5:31 am…
Perhaps “cherry picking” was the wrong term, but the point is that going back to find when global warming has “stalled” is not likely to be useful unless you are limited to time periods that give a statistically significant result. So unless Werner has done such an analysis on his flat data sets, they are not telling us much.
Forgive me for not doing this using the program I have given above but it is 12:50 am here in Melbourne so I think I will retire for the night.

Graham W
February 12, 2013 5:58 am

Nigel Harris says:
February 12, 2013 at 1:28 am
justthefactswuwt:
P.S: As I’ve said already, the trend in the HADCRUT 4 data from 1863 – present, is 0.051 +/- 0.007 C/decade. So there is only a 5% chance that the trend over this entire time period is greater than 0.058 C/decade. Nevertheless, if you look at shorter time periods within that 150 year period, you will find higher trends, like the ones you have. So clearly temperatures haven’t and don’t just go up in a straight line. You can see that just from looking at a graph of the data over the 150 years.
If there are short term trends higher than this 0.051 C/decade, with 95% confidence (and there are, for instance 1975 – 1991 in the HADCRUT 4 data) then there MUST be lower short-term trends to balance it all out. It’s just that you can’t discern these lower or zero or slightly negative short trends from the noise because the computed trends are inevitably going to be lower than their 95% confidence intervals. I mean how long a period with a “trend” of very near to zero would you need to have to be 95% sure it was a trend, in this noisy data!? 50 years? 100 years? How likely is that to happen!? This doesn’t mean that these near zero trends can’t exist [and in fact, they MUST do as I have demonstrated], and that it’s just “cherry picking” to point out trends that are statistically-indistinguishable from zero.

D.B. Stealey
February 12, 2013 6:26 am

Shehan says:
“So unless Werner has done such of an analysis on his flat data sets they are not telling us much.”
Given that Shehan has done NO analysis on anything, we can dismiss his pseudo-scientific conjectures out of hand, as rank amateur nonsense.
Of course global warming has stalled. Only blinkered idiots believe otherwise. Empirical data tells the truth — unlike climate alarmists, who feel the need to lie because the planet is not cooperating with their false narrative.

Phil.
February 12, 2013 6:28 am

Werner Brozek says:
February 11, 2013 at 7:59 pm
Phil. says:
February 11, 2013 at 6:57 pm
Werner based on the ‘one tailed’ test all of those examples show significant warming at the 95% level.
Phil, I appreciate your help, and from earlier comments, I understand where you are coming from. I realize your expertise in statistics is greater than mine and I see no reason to dispute the above statement. I checked the site at: http://www.skepticalscience.com/trend.php
I found this statement with regards to using this site:
“What can you do with it?
That’s up to you, but here are some possibilities:
Examine how long a period is required to identify a recent trend significantly different from zero.”
You may recall Phil Jones’ comments in 2010 that the warming was not significant for 15 years from 1995 to 2009, but he later said it was significant for 16 years from 1995 to 2010.

Actually, as part of a BBC interview, Jones was asked the question, which had been submitted by Lindzen:
“Do you agree that from 1995 to the present there has been no statistically-significant global warming?”
To which Jones answered that it was only just below the 95% significance level because of the shortness of the period and would likely become significant when more data was added. A point he confirmed when the next year’s data was added.
The only thing that makes sense to me is that he used the ‘two tailed’ test that skeptical science uses. Would you agree with that? I appreciate you informing me that I cannot mention the 95%. I will therefore play it safe and say that according to the skeptical science site, such and such is where the trend is “significantly different from zero”. Does the skeptical science site do things wrongly? Perhaps, although I am not in a position to judge that.
It probably was a ‘two tailed’ test, however as stated above that makes no sense to me. In all the data you’ve presented above the probability of the true trend being zero or below is around 3% or less, so the null hypothesis of ‘not warming’ is rejected at the 95% level. To cobble that together with the finding that there is a 3% probability that the trend exceeds 0.25 (say), and to conclude that the chance of ‘not warming’ now falls below 95%, is nonsense. If you have a normally distributed variable, the answer to the question ‘what is the chance that the value is below 2 sigma below the mean?’ is ~2.5%, while the answer to the question ‘what is the chance that the value is outside the range of 2 sigma from the mean?’ is ~5%. Those are not the same question and therefore don’t have the same answer! To those who analyse the data in the way you have and then use the result to say that ‘there is no warming’, I could say with equal accuracy that the data means that ‘there is warming in excess of 0.264’ (for the Hadcrut data, I think). I think that illustrates the error in making such a claim. Based on the actual data analysis that you have done we can say that there is more than a 20:1 chance that warming continued over the period.
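A quick numerical sketch of that one-tailed versus two-tailed distinction, assuming the trend estimate is normally distributed; the 0.126 ±0.135 °C/decade figures are illustrative placeholders of the kind quoted elsewhere in this thread:

# Sketch of the one-tailed vs two-tailed distinction, assuming a normally
# distributed trend estimate. Numbers are illustrative, not a specific dataset.
from scipy import stats

trend = 0.126                # central trend estimate, C/decade (illustrative)
half_width_2sigma = 0.135    # quoted "+/-" value at 2 sigma, C/decade
sigma = half_width_2sigma / 2.0

# Two-tailed question: is the trend "significantly different from zero"?
z = trend / sigma
p_two_tailed = 2.0 * stats.norm.sf(abs(z))        # ~0.06 here: not significant

# One-tailed question: what is the chance the true trend is zero or negative?
p_not_warming = stats.norm.cdf(0.0, loc=trend, scale=sigma)   # ~0.03 here

print(f"two-tailed p = {p_two_tailed:.3f}")
print(f"P(trend <= 0) = {p_not_warming:.3f}")
# A trend can fail the two-tailed 95% test while the one-tailed probability of
# 'no warming' is still only ~3%, which is the point made above.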

Werner Brozek
February 12, 2013 9:31 am

HenryP says:
February 12, 2013 at 1:44 am
I would consider throwing out UAH as it does not fit in with any of the other data sets, including my own.
The latest straight line of 4 years and 7 months certainly looks extremely out of place, but it would not take too many cool months to get that to around 11 years. Let us see what version 6 shows.
UAH is using version 5.5; a more accurate version 6 has been in the works for a while, but it is not yet completed. Hopefully it will narrow the gap when it is done.
From Dr. Spencer on January 3, 2012:
“I’m making very good progress on the Version 6 of the global temperature dataset, and it looks like the new diurnal drift correction method is working for AMSU. Next is to apply the new AMSU-based corrections to the older (pre-August 1998) MSU data.”

Werner Brozek
February 12, 2013 9:38 am

Philip Shehan says:
February 12, 2013 at 5:50 am
So unless Werner has done such of an analysis on his flat data sets they are not telling us much.
I have done no analysis. I just let NOAA tell me how big the goal post was, namely 15 years of 0 slope and I have just confirmed that Earth scored a goal on three of the data sets. For all I know, NOAA could be wrong. You would need to take that up with them.

Werner Brozek
February 12, 2013 10:03 am

Phil. says:
February 12, 2013 at 6:28 am
To which Jones answered that it was only just below the 95% significance level because of the shortness of the period and would likely become significant when more data was added. A point he confirmed when the next year’s data was added.
We also cannot ignore the fact that 2010 was extremely warm. In fact, two of the data sets show 2010 as the warmest on record. However the warming was no longer significant at the end of 2011 and 2012 since they cooled off. So the extra years did not help here.
To those who analyse the data in the way you have and then use the result to say that ‘there is no warming’ I could say with equal accuracy that the data means that ‘there is warming in excess of 0.264 
I agree with you. I should have said that according to skeptical science, this is the period that is not “significantly different from zero”. It seems that we are dealing with two different definitions, each of which has its own validity, depending on the purposes. If you feel skeptical science is doing things wrongly, by all means, take it up with them. In the meantime, I am not a statistician, and I am forced to use tools at my disposal.
Regards

Werner Brozek
February 12, 2013 1:14 pm

UPDATES
We do not have any update from WFT for GISS and Hadcrut3 for December, however this site has now been updated to the end of December: http://www.skepticalscience.com/trend.php
In the report, I said:
1. For GISS, the slope is flat since May 2001 or 11 years, 7 months. (goes to November)
2. For Hadcrut3, the slope is flat since May 1997 or 15 years, 7 months. (goes to November)
With the update, that is now changed to:
1. For GISS, the slope is flat since March 2001 or 11 years, 10 months. (goes to December)
2. For Hadcrut3, the slope is flat since March 1997 or 15 years, 10 months. (goes to December) (The slope is 0.000/decade here)
In addition, all parts of section 2 are changed slightly, however I do not think it is worth commenting on a change in the third decimal place or a change of a single month. However GISS changed from October 1995 to June 1995 for the period where there has not been significant warming according to their criteria.

Philip Shehan
February 12, 2013 2:03 pm

Oh heck. He who thinks that abuse is a substitute for scientific argument has tracked me down again.
“Given that Shehan has done NO analysis on anything, we can dismiss his pseudo-scientific conjectures out of hand, as rank amateur nonsense… Empirical data tells the truth — unlike climate alarmists, who feel the need to lie because the planet is not cooperating with their false narrative.”
Actually anyone who has taken even a cursory glance at my posts here can see I have done a great deal more analysis of the empirical data than the Great Abuser. But he never lets the facts get in the way of his manure shoveling.
Anyway, let’s just take a different perspective on his graph. Excuse me for substituting Hadcrut 4 data for RSS, but the latter does not go back to 1958 when Mauna Loa data began.
http://tinyurl.com/a7pv4th
As it happens, the Hadcrut 4 data shows a positive trend for the period GA chooses:
Trend: 0.038 ±0.137 °C/decade (2σ)
So GA can accept, on the basis of this empirical data, that warming has not stalled, or that what the data is really showing is that there is a 95% probability that warming of as much as 0.175 or cooling of -0.099 °C/decade has occurred.

Werner Brozek
February 12, 2013 2:38 pm

Philip Shehan says:
February 12, 2013 at 2:03 pm
Excuse me for substituting Hadcrut 4 data for RSS but the latter does not go back to 1958
Trend: 0.038 ±0.137 °C/decade (2σ)

Is there any reason you did not take Hadcrut3? For the same period, it gives
Trend: -0.010 ±0.146 °C/decade (2σ)
However even going with 0.38/century, that is way less than 2/century, so it is obviously nothing to worry about, right?

D.B. Stealey
February 12, 2013 3:12 pm

Werner Brozek,
Thank you for your excellent article. No one knows if, or when, global warming will resume, or if global cooling will follow. What we do know is that currently, global warming has stalled.
Next, you should not waste your time trying to educate Philip Shehan; it is an impossible task. He still insists that global warming is accelerating [he climbed down from that false assertion when he was called on it — then he went right back to posting his fabricated SkS chart purportedly showing a bogus ‘acceleration’ of global warming]. Shehan can post all the nonsense he wants, but if he still insists that global warming is accelerating, he will get challenged with verifiable facts. The fact is that global warming has remained on the same trend line since the LIA. It has never “accelerated”. The long term natural warming trend is ≈0.3°C/century.
Finally, HadCRUT3 is not used because it began showing uncomfortable facts, so version 4 was substituted.

Philip Shehan
February 12, 2013 3:19 pm

Werner:
No reason at all not to use Hadcrut3. It’s just that I had been using Hadcrut4 in my earlier analysis and I assume 4 is the new improved version, but all the data sets are close enough. Preferring one data set over another is another example of cherry picking. But this is another way of making my point here. Over multidecadal time periods, these data sets are so similar that cherry picking is rather pointless.
And yes, an increase of 0.38 over the next century would not be cause for concern, but the point I am making is the 95% range for that fit is 1.75 to -0.99 C per century (on the dubious assumption that the trend is linear over a century)
For the Hadcrut3 data the trend is a drop of 0.1 C per century within a range of 1.36 to -1.56 C.
The point here is that although one data set predicts a small rise in trend and the other a small drop, the uncertainties in short term trends make this distinction next to meaningless.

Rachel
February 12, 2013 7:58 pm

Global warming has not stalled, and global warming is not a question to wait out; it is a fact, and it is happening. Strings of warm temperatures are unlikely to be due to chance. Small sets of data are meaningless in this case, and a trend in data must be examined over millions of years, not recent peaks or troughs. By doing this you will see that the trend proves that the annual temperature is increasing. Further proof of global warming is in the melting of Arctic snow and ice. In fact, the current extent of summer ice is so low near the poles that a Northwest Passage has been opened. More proof lies in the biosphere. The biosphere is not waiting to be told that it is warmer, it is already responding. For instance, the growing season between 45 and 70 degrees N latitude has lengthened by about 10 days (warmer temperatures affect plant growth positively). British birds have extended their migratory ranges northward by about 19 km over the last 20 years. And hibernating yellow-bellied marmots emerge 38 days earlier in the Rocky Mountains. Global warming is not a question, it is a fact. There are many examples of the biosphere changing and responding to warmer temperatures. It seems every creature is acting and adapting to the warming climate except for humans, who, ironically, are the number one cause of warming.

D.B. Stealey
February 12, 2013 8:29 pm

Earth to Rachel:
Global warming has stalled. That is a fact.
Sorry about your belief system. Reality intrudes.

D.B. Stealey
February 12, 2013 9:31 pm

Justthefactswuwt,
Excellent rebuttal. Anyone can see that we are currently in a cold phase — not a full blown Ice Age, but a temporary cool interglacial that could end at any time. It was not that many thousands of years ago that the northern U.S. was covered in mile-thick glacier ice. That will certainly happen again.
If Rachel replies, I expect her to back up her belief system with facts similar to those that you have posted above. But of course, she probably will either not reply, or she will go off on an emotional rant devoid of any scientific evidence. Because lacking empirical evidence, emotion is the driving force behind the belief system of climate alarmists.

Philip Shehan
February 12, 2013 9:40 pm

For the record.
I never stated that temperature is accelerating, nor did I “climb down” from that position. I stated, in reference to a paper which presented data from 1880 to 2007 that the temperature had accelerated over that period.
http://www.skepticalscience.com/pics/AMTI.png
The Great Abuser has never been able to grasp the distinction between present and past, no matter how many times I explain it.
As my posts here demonstrate, I am of the opinion that short term trends are too prone to uncertainty to be taken as a guide to long term trends.

Philip Shehan
February 12, 2013 9:51 pm

justthefactswuwt says:
February 12, 2013 at 8:52 pm…
Yes Rachel is mistaken. Global temperatures have varied wildly over geologic timescales. As your plots indicate, a primary driver is changes in the Earth’s axis tilt and orbit.
The thing is that these factors have had an insignificant effect on temperature change in the last 150 years – a blink in geologic timescales. For the first time in the Earth’s history, humans have been affecting climate by burning huge amounts of carbon locked up in the earth’s crust and pumping large amounts of greenhouse gases into the atmosphere as a waste product.
Attempts to account for the increase in temperature since the onset of the industrial revolution using entirely “natural” forcings without including this anthropogenic factor have not matched the temperature data.

D.B. Stealey
February 12, 2013 9:53 pm

Glad to see that Shehan is once again climbing down from his formerly repeated claim that global warming has been ‘accelerating’. If necessary, I will post his bogus SkS “accelerating” graph — covering quite a long time frame [not just “short term trends”] — which is the basis for Shehan’s alarmist argument.
The fact is that global warming since the LIA is entirely natural. The planet has been warming along the same long term trend line for hundreds of years — whether CO2 was low, or high. Thus, CO2 makes no measurable difference to global warming. None at all.
Therefore, the demonization of harmless, beneficial “carbon” is completely falsified by verifiable, testable scientific evidence. The CO2=CAGW conjecture fails. It was always a stupid assumption anyway.

HenryP
February 12, 2013 10:00 pm

I had a look at my own data set again (47 weather stations, with complete or nearly complete records, balanced by latitude and balanced by sea/inland 70/30 – longitude does not matter),
specifically looking at the speed of warming :
The speed of warming/cooling for means is 0.014K/annum calculated from 1974 (38 yrs), 0.013K/annum from 1980 (32 yrs), 0.014 from 1990 (22 years) and -0.017 from 2000 (12 years).
The speed of warming/cooling for maxima is 0.036K/annum from 1974 (38 yrs), 0.029K/annum from 1980 (32 yrs), 0.014 from 1990 (22 years) and -0.016 from 2000 (12 years).
So basically, we changed sign from warming to cooling, at least before 2000….
I can therefore confirm that I would expect “a stalling” for an even longer period than 12 years.
From your graphs, it seems to me Hadcrut3 comes nearest to my own observations.
Does anyone know: What exactly is the difference between Hadcrut3 and HADCRUT4?

Philip Shehan
February 12, 2013 10:17 pm

Huh?
I already posted the “bogus” SkS “accelerating graph”.
And yes it covers “quite a long time frame” (from 1850 to 2010) compared to the short time periods of less than a couple of decades. But not so long that factors that operate over geologic timescales must be taken into account.
Which is the point I have been making here. Only multi decadal timeframes can give reliable trends.
I told you all that this person has comprehension difficulties.

Philip Shehan
February 12, 2013 10:23 pm

HenryP says:
February 12, 2013 at 10:00 pm …
http://www.thegwpf.org/an-updated-hadcrut4-and-some-surprises/
A complaint levelled at earlier Hadcrut versions was that they had limited sampling from higher latitudes in the Northern hemisphere, where greater warming has been observed, and thus underestimated the global temperature.

Werner Brozek
February 12, 2013 10:23 pm

Philip Shehan says:
February 12, 2013 at 9:51 pm
Attempts to account for the increase in temperature since the onset of the industrial revolution using entirely “natural” forcings without including this anthropogenic factor have not matched the temperature data.
Then can you explain why the last 30 years is no different from a 30 year period about 70 years ago? See
 
http://www.woodfortrees.org/plot/hadcrut3gl/from:1900/plot/hadcrut3gl/from:1912/to:1942/trend/plot/hadcrut3gl/from:1982.58/to:2012.58/trend
 
“#Selected data from 1912
#Selected data up to 1942
#Least squares trend line; slope = 0.0154488 per year”
 
“#Selected data from 1982.58
#Selected data up to 2012.58
#Least squares trend line; slope = 0.0151816 per year”
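For anyone who wants to reproduce this kind of two-window comparison outside WFT, here is a rough sketch; the series it builds is a synthetic stand-in, so substitute the real HadCRUT3 monthly anomalies to recover the quoted ~0.015 per year slopes:

# Sketch: compare least-squares slopes over two 30-year windows of a monthly
# anomaly series, as in the WFT plot above. The series generated here is a
# synthetic placeholder, not actual HadCRUT3 data.
import numpy as np

def window_slope(dec_years, anoms, start, end):
    """Least-squares slope (per year) between decimal years start and end."""
    m = (dec_years >= start) & (dec_years <= end)
    slope, _ = np.polyfit(dec_years[m], anoms[m], 1)
    return slope

# Synthetic placeholder series covering 1900-2013 at monthly resolution.
rng = np.random.default_rng(1)
t = np.arange(1900.0, 2013.0, 1.0 / 12.0)
series = 0.005 * (t - 1900.0) + rng.normal(0.0, 0.15, t.size)

print("1912-1942 slope:", window_slope(t, series, 1912.0, 1942.0))
print("1982.58-2012.58 slope:", window_slope(t, series, 1982.58, 2012.58))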

Werner Brozek
February 12, 2013 10:32 pm

HenryP says:
February 12, 2013 at 10:00 pm
Does anyone know: What exactly is the difference between Hadcrut3 and HADCRUT4?
Do you mean other than making the hottest year 2010 instead of 1998? Presumably Hadcrut4 is more accurate since it does things like taking more northern stations into account properly, making it more like GISS. Perhaps others have a more complete answer.

JazzyT
February 12, 2013 10:43 pm

richardscourtney says:
February 11, 2013 at 3:00 pm

JazzyT:
I read your sophistry at February 11, 2013 at 11:49 am.
If you insist then I am willing to agree that the criterion be called a ;’discrepancy criterion’ and not the usual ‘falsification criterion’.
So, according to your wording
the models are not falsified when they are “discrepant” with reality.
Perhaps you would explain how “discrepant” they have to be for them to be falsified?

At this point, it’s possible to resolve the recent temperature records with the models by correcting for the effects of ENSO, solar forcing, and volcanoes, since these are not (and cannot be) explicitly modeled many years into the future. So, I don’t see the models as being “discrepant” with reality. Still, as I’ve mentioned, I would rather see a full hindcast run using the known ENSO, solar, and volcanic data rather than just a statistical fit, as Foster and Rahmstorf presented.
I would say that the models were having a problem if the discrepancy between them and the temperature data could not be resolved with such corrections. In that case, it would be necessary to see what could be done to fix the models, with newer data, better understanding of basic processes, different parameter selection for processes that were not known precisely, etc. With these, you could then see whether the fit was better, or whether you had to go back to the drawing board, or whatever.
It remains to be seen whether some of this is happening now, with James Annan’s criticism of some sensitivity estimates. Even so, that might just move sensitivity down to a lower end of the range, the kind of thing that should be happening as more data comes in. But for real improvements, I’m waiting for better understanding and modeling of cloud formation, and computer hardware capable of modeling it on a sufficiently fine scale.

Philip Shehan
February 12, 2013 10:53 pm

Werner.
The evidence I had in mind was this (and other references which show much the same thing but I frankly cannot be bothered searching for at the moment):
http://www.grida.no/publications/other/ipcc_tar/?src=/climate/ipcc_tar/wg1/figspm-4.htm
Have not looked to see if the detail of this figure explains your question about the two thirty year periods.

Philip Shehan
February 12, 2013 11:34 pm

Werner: Some more recent model comparisons are given here (Figure 2 and discussion above)
http://www.csiro.au/en/Outcomes/Climate/Reliability-Climate-Models/In-detail.aspx

HenryP
February 13, 2013 4:47 am

Philip says
A complaint levelled at earlier Hadcrut versions was that they had limited sampling from higher latitudes in the Northern hemisphere, where greater warming has been observed, and thus underestimated the global temperature.
Henry says
Interesting.
When I considered the sample I was going to take, I thought that
1) longitude would not matter, because I would be looking at average yearly temps. at that specific spot on earth and record how that spot changes over time. So, the earth’s seasonal up and down shift during the year would be included and cancelled out and since earth rotates every 24 hours I would be looking continuously at exactly the correct amount of energy beamed down, so to speak.
2) I thought it would be very important to balance my sample by latitude! In fact when you add all my plus and negative latitudes you come to just about zero. Otherwise you would/could get a bit of a warped picture of the actual global temperature trend.
3) just as a further precaution I also balanced my sample by sea/inland 70/30 because I considered that inland rises and falls usually are much more dramatic than those at or near sea.
Nevertheless, at first glance my results for means do look meaningless. However, by looking at the development of the maxima,
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/
you do begin to see that my means also make sense. We are on a parabolic looking curve and earth is trying to balance and straighten things out. There is some lag between what (change) we get from the sun and what earth is putting out (average temp.)
Note that as shown in the second graph of the quoted blog, it looks like each particular place on earth is on its own sine wave of energy (change) coming from the sun.
Having said that, and knowing what I know now, I would say that if (all?) the data sets being quoted here did not take care to balance the sample by latitude, nor by 70/30 sea/land, I would expect to get and to see lopsided results.

Graham W
February 13, 2013 5:43 am

I’m really confused here as to what the argument actually is, from many people here. Not targeting any one person in particular, but let’s look at this trend:
Trend: 0.038 ±0.137 °C/decade (2σ)
So it could be as high as 0.175 C/decade or as low as -0.099 C/decade with 95% confidence. So one side of the argument simply says “there you go, warming is still continuing, we’ve still got 0.175 C/decade worth of warming potentially going on, how can you say it’s stalled?” even though the 0.175 C/decade is the HIGHEST end of the estimate…there’s no-one saying “the temperature is falling at a rate of -0.099 C/decade, what are we going to do!?” This “let’s look at the high side of each estimate” argument is clearly biased, whereas what people have ludicrously called the “denier” side is simply saying that the actual trend, i.e. the computed trend which lies exactly between the two extreme estimates, is lower than before (whatever the error bars). Some here say “stalled” (and that would apply to those data sets where the trend is actually zero), but personally I don’t even see the need to go so far – it seems pretty clear the rate of temperature rise has recently slowed in ALL data sets and that is entirely contradictory to what was expected. It really is just a question of semantics. Basically whatever way you look at it, what was happening previously (i.e. 70’s through to end of 90’s) is no longer happening to the same extent, and this runs contrary to projections. So the projections need adjusting? Certainly – and the Met Office, for instance, have already started doing just that as their recent announcements showed.
Before anyone starts ranting about the noise in the data, short time periods, blah blah blah. I know all that, and have demonstrated my understanding of that in previous posts, and what I’m saying above is not contradicted by anything you might have to say about that. Whatever way you look at it, something has changed, something has happened/is happening, and that needs investigating. End of story.

Reply to  Graham W
February 13, 2013 6:55 am

Graham, you are right. It has been cooling. My research suggests this cooling is natural.

Werner Brozek
February 13, 2013 7:55 am

Philip Shehan says:
February 12, 2013 at 11:34 pm
Thank you for the input. They talk about different models, but this post seems to suggest that the models are in big trouble. Suppose we were living in 1945 and had the data until then. You would be well justified in saying that warming was accelerating at that point in time. However, looking back, we see the opposite happened after 1945. And at the present time, there have been a number of posts that show we are at or below the very lowest projections of the models.

Werner Brozek
February 13, 2013 8:19 am

Graham W says:
February 13, 2013 at 5:43 am
Thank you, and I agree with you. About three years ago I had a talk with a person who believed in CAGW and I believe he was a former university professor. That was just after Phil Jones said the warming was not 95% significant over the last 15 years and he thought the headlines: “No warming for 15 years” were extremely untruthful. He also said that what happens over 8 years means nothing, but what happens over 15 years cannot be easily ignored. In my opinion, policy makers should look at just two things:
1. What is the most recent rate of warming without the error bars?
2. Is the time period long enough that it cannot be ignored?

D.B. Stealey
February 13, 2013 9:02 am

Werner Brozek says:
“…can you explain why the last 30 years is no different from a 30 year period about 70 years ago?”
That is the crux of the argument. Since there is no measurable difference in global warming between times when CO2 was low, and times when CO2 is high, then CO2 does not matter. QED
And we know why it does not matter: the warming effect of adding more CO2 is inconsequential at current concentrations. That is why global warming is currently stalled.
It is amusing to watch the consternation of CAGW true believers, as they are confronted with the plain fact that Planet Earth herself is falsifying their cherished beliefs. It turns out that “carbon” does not matter at all regarding natural global warming. They have been barking up the wrong tree the whole time.
Will they admit they were wrong? No. Their incurable cognitive dissonance will not allow it.

Philip Shehan
February 13, 2013 2:51 pm

Werner: (Excuse me if this post does not contain references which I do not have readily to hand or detailed number crunching. I have a day job I must attend to. Will try to post refs later.)
The temperature behaviour before and after 1945 is another demonstration of what I have been saying. The temperatures seem to drop off a cliff for the 16 year period after 1940 but recover and continue an upward trend thereafter, with a similar slope as for the period 1910 to 1940.
http://www.woodfortrees.org/plot/hadcrut3vgl/compress:12/offset/plot/hadcrut3vgl/from:1850/to:2010/trend/offset/plot/hadcrut3vgl/from:1940/to:1956/trend
If you look at the models I gave in the above posts none of them seem to adequately account for the local peak around 1940 with or without the AGW contribution. I submit that the models are pretty good at hindcasting otherwise but obviously there are some things they cannot account for yet.
Anyway, with actual data or models, you need to look at the long term trend, not what happens for a couple of decades or less.
Graham writes: “Whatever way you look at it, something has changed, something has happened/is happening, and that needs investigating.”
Quite so. The apparent change recently has been explained by the contribution of natural forcings (solar cycles, el nino/la nina etc) which are giving a cooling contribution. Factoring those in gives a trend more in line with past decades. (Sorry, been looking for a particular published paper but can’t find it at the moment)
The Foster and Rahmstorf adjustments to the temperature trend calculator I have been using also attempt to account for solar cycles etc which generally give more positive slopes recently and a lower uncertainty range.
http://www.skepticalscience.com/temperature_trend_calculator.html
I have not used that in my analyses because one has to be a bit wary of such adjustments in case they are merely post hoc fudge factors designed to give the desired result rather than an objective adjustment based on sound theory. Not that I am accusing those authors or the ones of the paper(s) I can’t locate at the moment, but you have to take a close look at such adjustments and I did not want to buy into that argument.
You see I am skeptical in the true sense when it comes to not wishing to overinterpret data.

D.B. Stealey
February 13, 2013 3:24 pm

Philip Shehan says:
“You see I am skeptical in the true sense when it comes to not wishing to overinterpret data.”
LOL! As If. Shehan is desperate to find something — anything — that supports his alarmist belief system. But unfortunately for him, the planet itself is ridiculing his beliefs. And he has been forced to climb down from his preposterous assertions that global warming is ‘accelerating’. It isn’t.
In fact, global warming is currently stalled, thus dealing a severe blow to the climate alarmists’ wrongheaded belief in their debunked CO2=CAGW conjecture. Empirical evidence shows that CO2 is harmless, and that it is beneficial to the biosphere — which is starved of CO2. Those are verifiable, testable facts. Contrast those scientific facts with the fact-free and false demonization of harmless, beneficial “carbon” [by which the confused alarmist contingent means CO2 — a tiny trace gas].

Werner Brozek
February 13, 2013 4:46 pm

Philip Shehan says:
February 13, 2013 at 2:51 pm
Anyway, with actual data or models, you need to look at the long term trend, not what happens for a couple of decades or less.
That is very true. However the models are way off in their projections. As far as I know, the steepest rise is about 0.17/decade for any period of 15 years or more. This only gives 1.7/century. This is clearly not alarming. If you know of a slope larger than 0.17/decade sustained for at least 15 years, please let me know. Then if we go to longer times such as 40 or 50 or more years, this slope of 0.17/decade is never maintained. So why is anyone concerned about 3 or 4 or 6 degrees of warming by the year 2100? The required rates have never been seen, and all who believe that CO2 causes some warming agree that the law of diminishing returns applies to CO2.

Werner Brozek
February 13, 2013 5:29 pm

D.B. Stealey says:
February 13, 2013 at 3:24 pm
In fact, global warming is currently stalled, thus
Thank you for your contributions here! However I need to point out that the above needs to be updated for UAH. As you know, UAH saw a huge jump in January and due to the way the numbers worked out, the straight line changed from 8 years and 3 months to 4 years and 7 months, so the present line slopes up very noticeably. It should now start from July 2008 or 2008.5.
I could give you the new set of lines, but even that would not be completely correct since WFT has not updated WTI, GISS, and Hadcrut3 since November. I was able to get the GISS and Hadcrut3 slope to December using SkS. That pushed things back to March for these two sets, but if I gave you the March date, the line would slope up on WFT since it does not have the very low December values. I wrote to Paul Clark but had no success. If you can use your influence to get GISS and Hadcrut3 and WTI updated to the end of December, I would really appreciate it.
Here is the latest. The ones with a * are where changes have occurred from what is stated in this report.
*1. UAH: since July 2008 or 4 years, 7 months (goes to January)
*2. GISS: since March 2001 or 11 years, 10 months (goes to December) (Confirmed by SkS)
3. Combination of 4 global temperatures: since December 2000 or 12 years (goes to November) * (I know this is different, but I do not know what it is, perhaps 2 or 3 months more.)
*4. HadCrut3: since March 1997 or 15 years, 10 months (goes to December) (Confirmed by SkS)
5. Sea surface temperatures: since March 1997 or 15 years, 10 months (goes to December)
6. RSS: since January 1997 or 16 years, 1 month (goes to January)
RSS is 193/204 or 94.6% of the way to Santer’s 17 years.
7. Hadcrut4: since November 2000 or 12 years, 2 months (goes to December.)

D.B. Stealey
February 13, 2013 5:47 pm

Werner,
The time frame has changed due to the current anomaly, but that does not change the fact that not only is there no accelerating trend in global warming, but as of now, global warming is still stalled.
We need more global warming. We are now at the cool end of the Holocene. And with more global warming, we will get the benefit of more CO2. It’s a win-win!

Graham W
February 13, 2013 6:03 pm

Philip Shehan says:
“Quite so. The apparent change recently has been explained by the contribution of natural forcings (solar cycles, el nino/la nina etc) which are giving a cooling contribution. Factoring those in gives a trend more in line with past decades. (Sorry, been looking for a particular published paper but can’t find it at the moment)”
So I have heard. There are two things about this that puzzle me though:
1) This seems to be mentioned (though I’m not accusing you of this necessarily) at a certain point in the dialogue when a reduction in the rate of warming has been accepted…and only then. I don’t understand why there are these two dual “modes of defence”, if you like, which ought to be mutually exclusive, yet are often used in tandem. You are first told that warming is all “business as usual”. Everything is happening still within the expected boundaries. Then, if successfully challenged, the argument morphs into “natural forcings have now temporarily cancelled the warming effect to an extent”…but if that’s true, then warming HAS slowed. So mode of defence number one is invalidated. It’s no longer ludicrous to claim that warming has slowed/stalled even though before it apparently was!
2) Isn’t the idea of natural forcings not having such a cooling effect over the 70s to late 90s warming period and then suddenly having a cooling effect like they supposedly have now less logical than the idea of CO2 having little effect and climate change being dominated by natural forcings? Because if you argue that the natural forcings changed from being positive over that period to being negative (or lower) recently to try to defend the former position, then that explanation would also apply to the latter position…only the latter position also makes more sense being as how CO2 levels have risen at an increasing rate?

John Brookes
February 14, 2013 12:15 am

So I downloaded the HADCRUT4 data from 1958, and got the 95% confidence interval of a linear fit from 1958 to Nov 2012. It gave warming of between 0.115 and 0.129 °C/decade at the 95% confidence interval.
Then I ran it for every start date until 5 years ago, and each time took the end point to be Nov 2012. The earliest start date for which the 95% confidence interval included zero was April 1997. After a while the 95% confidence interval no longer included zero, until October 1998. After that, the 95% confidence interval of the warming trend has always included zero. Except, in 3 months since then both the upper and lower values were below zero (but I really shouldn’t have told you guys that).
So I can’t understand how you could get a date in 1995 where it was possible at the 95% certainty level that there was no warming.
Is it because I’m not taking into account autocorrelation? Or some other error?
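For what it is worth, here is a sketch of the kind of scan described above: fit a trend from every candidate start month to a fixed end month and flag when the naive 95% interval first includes zero. It uses plain OLS errors with no autocorrelation correction (one likely reason for differences from the Skeptical Science calculator), and the series is synthetic rather than the actual HADCRUT4 download:

# Sketch of the start-date scan described above: regress from every candidate
# start month to a fixed end month and note when the naive 95% interval
# includes zero. Plain OLS errors only (no autocorrelation correction), and
# the data below are synthetic, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
months = np.arange(1958.0, 2012.92, 1.0 / 12.0)     # decimal years to ~Nov 2012
anoms = 0.012 * (months - 1958.0) + rng.normal(0.0, 0.15, months.size)

def scan_start_dates(dec_years, series, min_len_years=5.0):
    """Yield (start, trend_per_decade, includes_zero) for each start month."""
    end = dec_years[-1]
    for i, start in enumerate(dec_years):
        if end - start < min_len_years:
            break
        fit = stats.linregress(dec_years[i:], series[i:])
        trend = fit.slope * 10.0
        half = 2.0 * fit.stderr * 10.0
        yield start, trend, (trend - half) <= 0.0 <= (trend + half)

earliest_flat = next(
    (s for s, _, includes_zero in scan_start_dates(months, anoms) if includes_zero),
    None)
print("earliest start whose 95% interval includes zero:", earliest_flat)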

Philip Shehan
February 14, 2013 12:32 am

Graham: First, apologies – still have not found the particular reference I mentioned above. 7 pm here in Melbourne as I am typing this and been busy today.
Generally, climate theory has always acknowledged that climate depends on anthropogenic and natural factors. When people first began to postulate that increasing CO2 and other greenhouse gases would cause warming they recognised that at that time the anthropogenic signal was not detectable from the natural forcings.
The Melbourne University climate scientist David Karoly first became interested in the climate change question believing he could account for changes using natural forcings only. He found he could not and thus became one of those who accept AGW.
In 1981 Hansen thought a clear anthropogenic signal may emerge in the 80’s but possibly not until the end of the century.
http://pubs.giss.nasa.gov/docs/1981/1981_Hansen_etal.pdf
“Summary. The global temperature rose by 0.2°C between the middle 1960’s and
1980, yielding a warming of 0.4°C in the past century. This temperature increase is
consistent with the calculated greenhouse effect due to measured increases of
atmospheric carbon dioxide. Variations of volcanic aerosols and possibly solar
luminosity appear to be primary causes of observed fluctuations about the mean trend
of increasing temperature. It is shown that the anthropogenic carbon dioxide warming
should emerge from the noise level of natural climate variability by the end of the
century, and there is a high probability of warming in the 1980’s…”
Of course climate models which evaluated the various contributions were rudimentary then, and this 1981 paper considers only CO2, volcanoes and the sun, but it gave a pretty good fit for the temperature data up till then (Fig 5).
In the last 3 decades work has been continually done to refine the theory and the models. My post above gave examples of these refinements from 2001 and 2006. This is not something that was only taken up recently in order to account for a slowdown of warming in the last decade.
No one with any knowledge of the science has ever thought that global temperatures can be explained by increases in greenhouse gases alone.

Philip Shehan
February 14, 2013 3:55 am

John Brookes:
Where are you getting your numbers from? When I run this program
http://www.skepticalscience.com/trend.php
with Hadcrut4 between 1958 and November 2012, I get this result:
Trend: 0.123 ±0.024 °C/decade
That is, warming between 0.147 and 0.099
I am not sure what period you are referring to here:
“Except, in 3 months since then both the upper and lower values were below zero”
And where does this come from:
“So I can’t understand how you could get a date in 1995 where it was possible at the 95% certainty level that there was no warming.”
For Hadcrut4 from 1995 to the present I get
1995: 0.098 ± 0.111
Anyway, once again I submit you are overinterpreting the data. As I noted at 3:27 pm on Feb 11, the shorter the time period, the greater the 95% range because the signal to noise is becoming very low. As you come in from 1995 results become increasingly meaningless.
The Hadcrut4 result from 5 years ago is obvious nonsense in any practical sense.
Trend: 0.080 ±0.657 °C/decade
Warming or cooling of 6.5 degrees over the next century?!?
I would be very surprised if any data set since 1995 did not include zero.
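The blow-up of the ± range for short windows is easy to demonstrate: for evenly spaced data the standard error of an OLS slope scales roughly as N^(-3/2), so halving the window length nearly triples the uncertainty. A sketch with pure white noise (no autocorrelation, so real temperature series are even worse):

# Sketch: how the 2-sigma trend uncertainty grows as the window shrinks.
# Pure white noise (no real trend, no autocorrelation); monthly spacing.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def two_sigma_trend_uncertainty(years_long, noise_sd=0.1, reps=200):
    """Average 2-sigma half-width (C/decade) of the OLS trend over a window."""
    n = int(years_long * 12)
    t = np.arange(n) / 12.0
    halves = []
    for _ in range(reps):
        y = rng.normal(0.0, noise_sd, n)
        fit = stats.linregress(t, y)
        halves.append(2.0 * fit.stderr * 10.0)
    return float(np.mean(halves))

for window in (5, 10, 15, 30, 55):
    print(f"{window:>2} yr window: +/- {two_sigma_trend_uncertainty(window):.3f} C/decade")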

Graham W
February 14, 2013 5:36 am

Thanks for the response Philip. I would say though that the crux of your post is this:
“No one with any knowledge of the science has ever thought that global temperatures can be explained by increases in greenhouse gases alone.”
And I’d just like to remind you that this is not what I was suggesting either. I said:
“Isn’t the idea of natural forcings not having such a cooling effect over the 70s to late 90s warming period and then suddenly having a cooling effect like they supposedly have now less logical than the idea of CO2 having little effect and climate change being dominated by natural forcings? Because if you argue that the natural forcings changed from being positive over that period to being negative (or lower) recently to try to defend the former position, then that explanation would also apply to the latter position…only the latter position also makes more sense being as how CO2 levels have risen at an increasing rate?”
So the “former position” I am referring to is this idea that natural forcings have recently over-shadowed, if that’s the right word (probably not) the effect of CO2 whereas over the 70s to late 90s warming period this was not the case. This is implied by your statement “The apparent change recently have been explained by the contribution of natural forcings (solar cycles, el nino/la nina etc) which are giving a cooling contribution”, since if the forcings are having this cooling effect now, why didn’t they previously? Then, anticipating a response, I continued “if you argue that the natural forcings changed from being positive over that period to being negative (or lower) recently to try to defend the former position, then that explanation would also apply to the latter position…only the latter position also makes more sense being as how CO2 levels have risen at an increasing rate?”
The “latter position” referring to the idea that CO2 levels have little effect and climate change is dominated by natural forcings.
Nowhere in my comments have I argued that anyone suggests “global temperatures can be explained by increases in greenhouse gases alone”.
As for the points you have made at the end of your post of February 14th at 3:55am, specifically:
“I would be very surprised if any data set since 1995 did not include zero”.
It would surprise me an enormous amount if a trend since 1995 DID include zero, if the computed trend was higher than the typical 95% level usually associated with whatever time frame you’re looking at! What I’m trying to say is, take for example the HADCRUT 4 data, a typical 15 year period (more recently) will tend to have error bars in the area of +/- 0.125 or something close to that. So if the true trend in the data was any greater than this, the computed trend for the data would be discernible from zero with 95% confidence (the result would be a positive trend greater than its associated error margin). If the true trend was still the 0.17 C/decade that people claim applies to the last 30 years, then this trend would still be “visible” above the noise even over 15 years…but it isn’t. This shows us that the rate of warming has certainly decreased, more recently. There is then a need to investigate and explain why this is happening…and then you’re back to the points I’ve already made in my previous comment of February 13th, 6:03pm, and clarified further at the beginning of this one.

Graham W
February 14, 2013 5:52 am

John Brookes says:
February 14, 2013 at 12:15 am
“Is it because I’m not taking into account autocorrelation? Or some other error?”
I believe the skeptical science trend calculator does take autocorrelation into account so yes, I think that might be the reason for your results differing to ones obtained from the trend calculator.
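For readers wondering what “taking autocorrelation into account” does mechanically, one common textbook approach (a sketch only; not necessarily the exact method the Skeptical Science calculator uses, which follows Foster and Rahmstorf) is to inflate the naive OLS standard error using the lag-1 autocorrelation of the residuals:

# Sketch: inflate a naive OLS trend uncertainty for lag-1 autocorrelation using
# the AR(1) effective-sample-size factor sqrt((1+r1)/(1-r1)). A generic
# textbook correction, not necessarily the calculator's exact method.
# Synthetic AR(1) data for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Build a synthetic AR(1) monthly series with a small trend.
n = 240                               # 20 years of monthly data
phi = 0.6                             # AR(1) coefficient
noise = rng.normal(0.0, 0.08, n)
resid = np.zeros(n)
for i in range(1, n):
    resid[i] = phi * resid[i - 1] + noise[i]
t = np.arange(n) / 12.0
y = 0.01 * t + resid

fit = stats.linregress(t, y)
residuals = y - (fit.intercept + fit.slope * t)
r1 = np.corrcoef(residuals[:-1], residuals[1:])[0, 1]   # lag-1 autocorrelation

naive_2sigma = 2.0 * fit.stderr * 10.0                   # C/decade
inflated_2sigma = naive_2sigma * np.sqrt((1.0 + r1) / (1.0 - r1))
print(f"naive +/- {naive_2sigma:.3f}, AR(1)-corrected +/- {inflated_2sigma:.3f} C/decade")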

Graham W
February 14, 2013 6:01 am

P.S (to post to Philip Shehan):
The trend in HADCRUT 4 data from 1975 – 1990 is 0.184 +/-0.141. The trend from 1975 – 1991 is 0.207 +/- 0.129. The trend from 1995 – 2013 is 0.095 +/- 0.110. The trend from 1975 – 2013 is 0.169 +/- 0.037. I hope, considering the points I have made, this makes sense as to what I’m suggesting with these trends…it’s clear the rate of warming has recently slowed.

Werner Brozek
February 14, 2013 8:08 am

John Brookes says:
February 14, 2013 at 12:15 am
Then I ran it for every start date until 5 years ago, and each time took the end point to be Nov 2012. The earliest start date for which the 95% confidence interval included zero was April 1997.
In addition to
Philip Shehan says:
February 14, 2013 at 3:55 am
I would like to make the following comments. When I made the post, SkS was only up to October, 2012, but since then, they have updated to the end of December so there is no reason to not just put in 2013 as the end date. I also do not know where your numbers are from. Perhaps you have a different program with different criteria? We had a discussion about this point earlier. But for what it is worth, here is my latest with the update to the end of December.
Section 2
For this analysis, data was retrieved from SkepticalScience.com. This analysis indicates for how long there has not been significant warming according to their criteria. The numbers below start from January of the year indicated. Things have now been updated to the end of December 2012. In every case, note that the magnitude of the second number is larger than the first number.
(I am well aware of the fact that RSS and UAH have really jumped in January 2013, but SkS does not show January yet.)
For RSS the warming is not significant for over 23 years.
For RSS: +0.126 +/-0.135 C/decade at the two sigma level from 1990
For UAH, the warming is not significant for over 19 years.
For UAH: 0.143 +/- 0.170 C/decade at the two sigma level from 1994
For Hadcrut3, the warming is not significant for over 19 years.
For Hadcrut3: 0.095 +/- 0.115 C/decade at the two sigma level from 1994
For Hadcrut4, the warming is not significant for over 18 years.
For Hadcrut4: 0.095 +/- 0.110 C/decade at the two sigma level from 1995
For GISS, the warming is not significant for over 17 years.
For GISS: 0.111 +/- 0.124 C/decade at the two sigma level from 1996
If you want to know the times to the nearest month that the warming is not significant for each set, they are as follows: RSS since August 1989; UAH since May 1993; Hadcrut3 since August 1993; Hadcrut4 since July 1994; GISS since June 1995 and NOAA since May 1994.
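Checking significance here is just a matter of asking whether the quoted ± value exceeds the trend, that is, whether the 2σ interval straddles zero; a tiny sketch using the figures listed above:

# Sketch: the "is the +/- bigger than the trend?" test is equivalent to asking
# whether the 2-sigma interval includes zero. Figures copied from the list above.
trends = {
    "RSS (from 1990)":      (0.126, 0.135),
    "UAH (from 1994)":      (0.143, 0.170),
    "Hadcrut3 (from 1994)": (0.095, 0.115),
    "Hadcrut4 (from 1995)": (0.095, 0.110),
    "GISS (from 1996)":     (0.111, 0.124),
}

for name, (trend, half_width) in trends.items():
    significant = trend - half_width > 0.0      # lower bound above zero?
    print(f"{name}: {trend:+.3f} +/- {half_width:.3f} C/decade -> "
          f"{'significant' if significant else 'not significant at 2 sigma'}")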

Werner Brozek
February 14, 2013 8:19 am

Graham W says:
February 14, 2013 at 5:36 am
It would surprise me an enormous amount if a trend since 1995 DID include zero,
The trend for Hadcrut3 from March 1997 (1997.17) to 2013 is:
Trend: 0.000 ±0.135 °C/decade (2σ)
This means there has been no warming on Hadcrut3 for 15 years and 10 months. Right?

D.B. Stealey
February 14, 2013 9:36 am

Graham W,
Thank you for pointing out in your 6:03 pm comment above the rank dissembling of the climate alarmists’ story. They are not honest. When the facts contradict their narrative, they pull the “O, Look over there! A squirrel!” routine, change their strawmen, and try to argue from a different direction by moving the goal posts that they previously set. They are desperate to keep their false alarm alive, despite a lack of any testable evidence showing that CO2=CAGW.
The facts show clearly that warming is moderating [and it is certainly not accelerating, as some falsely claim].
Not that there is anything wrong with continued global warming, if it should resume. The planet has been much warmer earlier in the Holocene, and during those warm periods the biosphere teemed with life.
The day the alarmist contingent begins acknowledging the true scientific facts of the matter is the day that their false alarm begins to collapse. They can only keep it alive with provable lies, such as the false claim of ‘accelerating’ global warming.

Graham W
February 14, 2013 10:36 am

Werner, thanks for this article! Yes, it’s most likely that there’s no warming with the trend you’ve posted in your response to me. However, you’ve taken my quote slightly out of context there; what I’ve said only makes sense if you add in the rest of the sentence (well, it makes sense as you’ve written it, but it’s not what I’m saying).
What I was trying to say to Philip Shehan is that you won’t get a trend where the error bars include zero if the computed trend is greater than the error bars. So in other words if you have a trend of say 0.143 +/- 0.142, then zero is not an option at the 95% confidence level. Hence the trend is statistically discernible from zero with 95% confidence and you know there will be some degree of warming.
And the point I was making by saying that was, if the true trend in the data was 0.17 C/decade, then you would get a statistically significant (i.e. statistically discernible from zero) result even over a short length of time such as 15 years…because the typical error bars for such a length of time are around 0.12 or 0.13 C/decade above and beneath the computed trend (in HADCRUT 4 data for 15 year periods). The fact that this is not happening in recent periods but DID happen in prior periods suggests (in fact proves for certain) that the rate of warming has slowed.
Slowed/stalled, whatever you like. Basically the opposite of what was projected to happen with an increasing rate of CO2 level increase.

Phil.
February 14, 2013 12:12 pm

richardscourtney says:
February 11, 2013 at 4:33 pm
Phil.:
At February 11, 2013 at 3:12 pm you say
The data is correct, but it can not be compared with the results of models which don’t contain a model for the ENSO phenomenon. Yes to incorporate a model for ENSO would be nice, but since it’s a phenomenon which doesn’t occur on a regular basis that’s difficult to do! What has been done is to account for the known events and then compare, which shows no such extended flat period.
I feedback what I read that to say because if this is how I understand your comment then other will, too.
You are saying that when the empirical data don’t agree with the model then the empirical data must be adjusted to agree.

That’s a very strange interpretation! What I said was that if you wish to compare the results of a model which does not include a model for an important phenomenon with the empirical data in which that phenomenon does occur then you must make an a posteriori correction. Whether the adjustment is made to the modelled result or the empirical data doesn’t matter as long as it’s done correctly. What you and others have been doing is using a result obtained from models which don’t include ENSO and applying it to real world data which does include ENSO, that’s apples to oranges!
That contravenes every principle of scientific modelling.
But as stated that isn’t what’s being done.
Of course, one may want to exclude the effect of ENSO from the data because the model does not emulate ENSO. But nobody understands ENSO and, therefore, parsimony dictates the exclusion needs to be interpolation across – or extrapolation across – an ENSO event. Any other ‘adjustment’ for ENSO is a fudge.
ENSO cannot be predicted ahead of time because it’s not a regular event, that doesn’t mean it’s not understood, just like volcanoes, we have a fairly good idea what eruptions will do but we can’t predict when they will occur or what their magnitude will be but after the fact the effect can be estimated. Your idea of interpolation across an ENSO event isn’t clear since in the time frame considered there are multiple events of both positive and negative sign.

D.B. Stealey
February 14, 2013 12:25 pm

Graham W says:
“Slowed/stalled, whatever you like. Basically the opposite of what was projected to happen with an increasing rate of CO2 level increase.”
That is the central point in the entire debate. All the predictions of runaway global warming due to rising CO2 have been falsified.
Empirical evidence shows conclusively that CO2 does not have the effect predicted by the alarmist crowd.
It turns out that the putative ‘correlation’ between T and CO2 was nothing but a short-term coincidence. Thus, the entire basis for the AGW scare has been deconstructed. The only empirical measurements of CO2 and T show that ∆T causes ∆CO2, not vice-versa.
The alarmist crowd started off with the incorrect premise that ∆CO2 was the cause of ∆T; therefore, the conclusions based on that false premise are necessarily wrong. That is precisely what the planet is telling us. But alarmists have so much of their ego and their belief system tied up in the incorrect assumption that CO2 causes temperature changes that they cannot gracefully withdraw from the argument. But most WUWT readers know the truth.

Graham W
February 14, 2013 12:54 pm

D.B. Stealey, I think you may well be right! It is hard for people to accept that “the consensus” has been overturned. However, this has been shown to happen before in the history of science! So there’s no reason it can’t be the case this time.

Werner Brozek
February 14, 2013 12:57 pm

I would like to express my thanks to JTF and Anthony Watts along with all who have added their thoughts so far.
JTF, if there was ever any doubt about the attention-grabbing ability of the graphic at the start of the article, I believe it has been settled: I was very pleased to see that even Andrew Bolt from Australia used it (along with many other facts from the article).
See:
http://blogs.news.com.au/heraldsun/andrewbolt/index.php/heraldsun/comments/craig_emerson_was_wrong_on_global_warming_and_must_retract/

CoRev
February 14, 2013 2:16 pm

Werner and JTF, well done and thanks

Philip Shehan
February 14, 2013 7:26 pm

Graham, I was not suggesting that you needed reminding that climatologists have always known and said that climate is multifactorial, but I often think others do.
With regard to your comment (and forgive me if this is not the gist of what you are saying), I think it is incorrect to say that it is only latterly that particular emphasis has been placed on how “natural” forcings affect temperature. The extent to which natural forcings combine to override an increase due to rising CO2 concentration varies from time to time.
If you have a “perfect storm” of cooling events occurring at the same time, such as volcanic eruption, a low point in the solar cycle and an el nino event you will get a dramatic total cooling effect. At other times the natural forcings partially cancel or give a positive contribution (eg the extreme el nino southern summer of 1997/1998).
And dramatic cooling periods have occurred in the past. To quote from one of my posts above:
“The temperature behaviour before and after 1945 is another demonstration of what I have been saying. The temperatures seem to drop off a cliff for the 16 year period after 1940 but recover and continue an upward trend thereafter, with a similar slope as for the period 1910 to 1940.
http://www.woodfortrees.org/plot/hadcrut3vgl/compress:12/offset/plot/hadcrut3vgl/from:1850/to:2010/trend/offset/plot/hadcrut3vgl/from:1940/to:1956/trend
If you look at the models I gave in the above posts none of them seem to adequately account for the local peak around 1940 with or without the AGW contribution. I submit that the models are pretty good at hindcasting otherwise but obviously there are some things they cannot account for yet.”
Since I wrote that, I have come across a possible explanation for the failure of the models to fully account for the local peak:
“The biggest disagreement is just prior to and during world war II, when the method of measuring sea surface temperature changed, which may have caused a discrepancy in the observed temperature data.”
http://web.archive.org/web/20100322194954/http://tamino.wordpress.com/2010/01/13/models-2/
Of course, as CO2 concentrations rise with time and the positive contribution from that forcing becomes larger, periods when a combination of negative forcings can override that effect will become rarer.

Philip Shehan
February 14, 2013 7:28 pm

Sorry, clearly the perfect storm would require a la nina event not el nino as I typed above.

D.B. Stealey
February 14, 2013 7:36 pm

Thank you, Graham. I have no doubt that scientific skeptics are right. None at all.
But Shehan — contrary to all scientific evidence — remains a True Believer:
“Of course, as CO2 concentrations rise with time and the positive contribution from that forcing becomes larger, periods when a combination of negative forcings can override that effect will become rarer.”
Translation: ‘Global warming causes the observed global cooling. Pay no attention to the fact that the planet is falsifying our alarmist case. Somehow, catastrophic AGW must still be true!’
Cognitive dissonance in action.

Philip Shehan
February 14, 2013 9:41 pm

I try to ignore this person but his interpretation of my post as saying ‘Global warming causes the observed global cooling” yet again shows his total failure of comprehension or reasoning ability.

Graham W
February 15, 2013 4:03 am

Philip Shehan: My point was that a cooling or warming effect of “natural forcings” (here defined as “all climate forcings bar anthropogenic CO2”) will have always affected the climate regardless of anthropogenic CO2.
Now, there is a period of relatively rapid temperature increase from the 70s to the 90s, and subsequent to that (90s to present) a period we are identifying as a time of relatively reduced temperature increase, possibly “stalled” temperatures, and also possibly, at least for part of the most recent time, cooling temperatures!
The difference between the rate of rise in temperatures over these periods could be explained in different ways. Let’s say at one extreme end of these “different ways”, climate sensitivity to CO2 is very high, and at the other end of the spectrum, climate sensitivity to CO2 is very low (even non-existent).
With both extremes, and with everything in between, the difference in the rate of temperature rise can be accounted for by variation in the warming or cooling effect of natural forcings, to different extents. However, and this is the crux of my entire point really: it is surely more logical to opt for the low end of the “sensitivity to CO2” spectrum, since if the sensitivity to CO2 were higher, the rapidly rising CO2 levels over the entire time period examined (1970-present) would outweigh the effect of natural forcings (since that’s what a high climate sensitivity to CO2 means in the first place)…but this has not been shown to be what has happened/is happening as far as the temperature response is concerned.
You are then arguing about what happened in the past – back in the 40s-to-70s era – but the levels of CO2 in the atmosphere were far lower then than they have been over the period from the 70s to the present. Hence, back then, natural climate forcings should have dominated climate change more than they do now (assuming a higher climate sensitivity to CO2). So if anything, that adds to the skeptic argument rather than detracting from it. If climate sensitivity to CO2 is high, we should be even less likely to be seeing a reduction in the rate of temperature rise now than we saw back then.
So climate sensitivity to CO2 must be at the lower end of the spectrum. Exactly how low is hard to say, but it is more logical for it to be lower and not higher for these reasons.

Werner Brozek
February 15, 2013 9:07 am

Hello JTF,
I will do the three tables in a vertical format for next time. Of course this would reduce it to two tables. Naturally the one table will be much shorter at the start of the year since there are far fewer months to show. But with the vertical format, extra months can easily be added without running out of room at the side. It has the further advantage that I can write “2012 anomaly” instead of writing “anom1” to save space and then explaining what it means.
I will of course make all edits. But as near as I can tell, SkS updates every two months, so if you want an update every month, some things such as the longest time there is no significant warming will not change every month.
Then there is the problem with WFT. GISS, Hadcrut3 and WTI are still not updated past November. As I alluded to above, I can use SkS which I did for the December slope for GISS and Hadcrut3. WTI can be done, but it is extremely awkward without WFT and I doubt it is worth the effort.
As for the month/date/time, sometimes the Hadcrut3 and 4 values come out sooner and sometimes later. I think the best thing to do is wait for these to show up and then I could send you an email and have the new post show up the following Sunday at the usual time. I have no problem doing it every month, but I just wonder if things change enough to warrant it. For example, if the period of significance changes in the third decimal place or if the time for a straight line changes from 11 years and 7 months to 11 years and 9 months, is it worth a new post to mention this? We could certainly try it monthly and see how the interest is. How about if we have a post after all January data is in and then play it by ear? Perhaps every two or three months may be more appropriate?
You had mentioned your own article at one time where you wanted to incorporate some of my stuff. Just send me an email when you want the very latest from me for anything in this post.

February 15, 2013 2:24 pm

Werner, I also believe this should be a monthly article, at least for a while. It has gained much attention at many blogs, some just noting the obvious and others trying to refute it. Those trying the refutation approach are hilariously deflecting from the data.
So, let me repeat: well done!

Philip Shehan
February 15, 2013 4:53 pm

Corey. I am not defelecting from the data, I am subjecting it to statistical analysis. As a biomedical research sceintist, I have some understanding of these matters.
With all due respect to Werner and his efforts, there is a falure to recognise that the concept of statistical significance is largely dependent on sample size. If the sample size is too small you are wasting time and effort and establishing nothing of importance.
One extreme example of that here is the person who examined the last five years, which shows that over the next century, temperatutes might rise by about 6 C. Or fall by about 6 C. You have to look at 30 decades to come up with a sufficiaently small error range to decide whether temperatures are rising or falling with atrend that can be considered as having some accuracy.
I have recently been involved in a clinical trial for a medical treatment, double blind, with hundreds of subjects.
No one in their right mind would declare on the basis of 16 subjects that the medication was or was not effective.
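The sample-size point above can be illustrated with synthetic data: for the same underlying trend and the same monthly noise, the 95% half-width on a fitted trend shrinks as the window lengthens. This is only a sketch; the 0.17 C/decade trend and the 0.1 C monthly noise level are assumptions chosen to show the scaling, and the calculation ignores autocorrelation, so real-world error bars would be wider.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_trend = 0.017 / 12                        # 0.17 C/decade, expressed per month

for years in (5, 10, 15, 30):
    n = years * 12
    t = np.arange(n)                           # time in months
    y = true_trend * t + rng.normal(0.0, 0.1, n)   # synthetic anomaly series
    res = stats.linregress(t, y)
    half = 1.96 * res.stderr * 120             # naive 95% half-width in C/decade
    print(f"{years:>2}-year window: +/- {half:.3f} C/decade")
```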

Philip Shehan
February 15, 2013 4:55 pm

Just read my post. Sorry for the typos. Sunday morning here and have not had my coffee.

Philip Shehan
February 15, 2013 5:06 pm

PS. I think I can state without violating any confidentiality agreements that preliminary results indicate the treatment to be highly effective (one might even hazard using the term “cure”) for a very serious and widespread disease.

Philip Shehan
February 15, 2013 5:07 pm

PPS Very peripherally involved. Not taking any credit.

Philip Shehan
February 15, 2013 5:24 pm

Gawd it was a heavy night. It’s only Saturday. On to my second coffee.

D.B. Stealey
February 15, 2013 6:30 pm

Graham W says:
“So climate sensitivity to CO2 must be at the lower end of the spectrum. Exactly how low is hard to say…”
Sensitivity to CO2 is very low — really, it is non-existent for all practical purposes at current CO2 concentrations.
The reason for this is clear: At current concentrations, adding more CO2 makes no measurable difference. That is why the UN/IPCC’s predictions are so wildly off-base.

February 15, 2013 11:13 pm

Henry
Depending on what you want to do research on, your sampling technique is very important. First and foremost, it has to be random. In the case of global temperature, I have explained things here:
http://wattsupwiththat.com/2013/02/10/has-global-warming-stalled/#comment-1223942
Each place on earth is on its own sine wave of temperature change, with a wavelength of ca. 88 years, but the magnitude of the temperature change depends on the ozone and other factors above.
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/
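For what it is worth, a fixed-period sine fit of the kind linked above can be sketched in a few lines of Python. The 88-year period is taken from the comment; the input file layout, the starting guesses, and the use of curve_fit are my own assumptions, not a description of the linked procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical input: two columns, year and annual maximum temperature (deg C).
data = np.loadtxt("annual_max_temps.txt")
year, temp = data[:, 0], data[:, 1]

def model(t, amp, phase, offset, period=88.0):
    # Sine wave with the period fixed at 88 years; only amp, phase, offset are fitted.
    return amp * np.sin(2 * np.pi * (t - phase) / period) + offset

p0 = [0.5, 1900.0, float(temp.mean())]         # rough starting guesses (assumed)
params, _ = curve_fit(model, year, temp, p0=p0)
amp, phase, offset = params
print(f"amplitude {amp:.2f} C, phase year {phase:.1f}, offset {offset:.2f} C")
```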

CoRev
February 16, 2013 6:58 am

Philip Shehan, what a mess you wrote. 😉 You start off with: “Corey. I am not defelecting from the data, I am subjecting it to statistical analysis.” and follow with an appeal to your own authority on the subject; then you go completely off the rails. This was, I hope, a typo? “You have to look at 30 decades to come up with a sufficiaently small error range to decide whether temperatures are rising or falling with atrend that can be considered as having some accuracy.” If not, if you actually believe this, then your whole position re: GW validity is undermined.
Anyway, I hope your day is going better now.

D.B. Stealey
February 16, 2013 2:56 pm

Yes, global warming has stalled. The facts are clear. It may resume, or not, or global cooling may follow. Whatever happens, most of us will follow the empirical evidence, rather than engaging in wishful thinking.
Dr Irving Langmuir explained scientific wishful thinking in a series of lectures titled Pathological Science. Dr Langmuir would use the term today to describe the belief in AGW. Langmuir explained the symptoms:

Characteristic Symptoms of Pathological Science
The characteristics of this Davis-Barnes experiment and the N-rays and the mitogenetic rays have things in common. These are cases where there is no dishonesty involved but where people are tricked into false results by a lack of understanding about what human beings can do to themselves in the way of being led astray by subjective effects, wishful thinking or threshold interactions. These are examples of pathological science. These are things that attracted a great deal of attention. Usually hundreds of papers have been published upon them. Sometimes they have lasted for fifteen or twenty years and then they gradually die away.
Now, the characteristic rules are these (see Table I):
TABLE I
Symptoms of Pathological Science:
The maximum effect that is observed is produced by a causative agent of barely detectable intensity [think: AGW], and the magnitude of the effect is substantially independent of the intensity of the cause.
The effect is of a magnitude that remains close to the limit of detectability; or, many measurements are necessary because of the very low statistical significance of the results.
Claims of great accuracy.
Fantastic theories contrary to experience.
Criticisms are met by ad hoc excuses thought up on the spur of the moment.
Ratio of supporters to critics rises up to somewhere near 50% and then falls gradually to oblivion.
The maximum effect that is observed is produced by a causative agent of barely detectable intensity. For example, you might think that if one onion root would affect another due to ultraviolet light, you’d think that by putting on an ultraviolet source of light you could get it to work better. Oh no! OH NO! It had to be just the amount of intensity that’s given off by an onion root. Ten onion roots wouldn’t do any better than one and it doesn’t make any difference about the distance of the source. It doesn’t follow any inverse square law or anything as simple as that, and so on. In other words, the effect is independent of the intensity of the cause. That was true in the mitogenetic rays, and it was true in the N-rays. Ten bricks didn’t have any more effect than one. It had to be of low intensity. We know why it had to be of low intensity: so that you could fool yourself so easily. Otherwise, it wouldn’t work. Davis-Barnes worked just as well when the filament was turned off. They counted scintillations.
Another characteristic thing about them all is that, these observations are near the threshold of visibility of the eyes. Any other sense, I suppose, would work as well. Or many measurements are necessary, many measurements because of very low statistical significance of the results.
In the mitogenetic rays particularly it started out by seeing something that was bent. Later on, they would take a hundred onion roots and expose them to something and they would get the average position of all of them to see whether the average had been affected a little bit by an appreciable amount. Or statistical measurements of a very small effect which by taking large numbers were thought to be significant. Now the trouble with that is this. There is a habit with most people, that when measurements of low significance are taken they find means of rejecting data. They are right at the threshold value and there are many reasons why you can discard data. Davis and Barnes were doing that right along. If things were doubtful at all why they would discard them or not discard them depending on whether or not they fit the theory. They didn’t know that, but that’s the way it worked out.
There are claims of great accuracy. Barnes was going to get the Rydberg constant more accurately than the spectroscopists could. Great sensitivity or great specificity, we’ll come across that particularly in the Allison effect.
Fantastic theories contrary to experience. In the Bohr theory, the whole idea of an electron being captured by an alpha particle when the alpha particles aren’t there just because the waves are there doesn’t make a very sensible theory.
Criticisms are met by ad hoc excuses thought up on the spur of the moment. They always had an answer — always.
The ratio of the supporters to the critics rises up somewhere near 50% and then falls gradually to oblivion. The critics can’t reproduce the effects. Only the supporters could do that. In the end, nothing was salvaged. Why should there be? There isn’t anything there. There never was. That’s characteristic of the effect. Well, I’ll go quickly on to some of the other things…[source]

AGW fits this template exactly. There is no measurable, empirical evidence of AGW. It is a conjecture. As atmospheric CO2 continues to rise steadily, the Null Hypothesis remains un-falsified: there are still no measurable effects that can be directly attributed to AGW. That may well be simply because there isn’t anything there.

Werner Brozek
February 16, 2013 8:11 pm

I will do the three tables in a vertical format for next time.
The last article had all the statistics from 2012 so it necessitated 3 tables. However next time, I will just have the January, 2013 data to deal with. As well, with the new vertical format, I will not run out of room at the side, so a single table works very well for now. (I will send it to you as an email with only UAH and RSS updated for January.) I see no reason not to use the single table all year. It will just get longer vertically.
I’ve copied this article in full back to our staging area, and played with the title, which you’ll want to do to avoid the titles getting mixed from an indexing perspective and to keep it fresh.
If we just have the following for next time:
“Has Global Warming Stalled? January Update” and so on, would that be O.K.? I know it is not very creative, but on the other hand, people know what the article is about and every title will be different.
It is your call in terms of timing, but given the positive response that this article has received, I’d be inclined to keep beating the drum until the message is clearly heard. Given that WUWT readership tends to have a reasonable split between weekdays and weekends, you might want to publish the next one midweek so that audience will also get exposed. Also to prevent the article from getting stale, you might want to tweak the intro and conclusion, tie in new findings and occurrences, introduce/test new components/graphs, etc.
I was not aware of the split in readership, but with that being the case, it would be a good idea to change the times from one month to the next so the weekenders get an update every two months, just like the others. Then we can see how the responses are. As far as staleness is concerned, there have been many good responses and comments and questions this time so I will end up tweaking quite a bit for next time automatically.
Let me know a week before you want to publish the article again and I will try to reach out to Paul to see if we can get the data as current as possible.
That would be nice. But if it is not updated, I will use SkS to make the best guess that I can.
Will do, I envisage several of your graphs and summaries being added to the next Big Picture update and, if you are open to it, I’d appreciate your help in reviewing and editing that article when I get around to drafting it.
No problem!

D.B. Stealey
February 16, 2013 8:22 pm

Werner and JTFWUWT,
In line with other WUWT series [e.g. ‘Sea Ice #4’, etc.], may I suggest the simple and unambiguous: “Has Global Warming Stalled #2”? [Then #3, #4, etc.]
It would make an archive search very easy for our readers.