The other big story today: BBC forced to admit global warming 'static'

Forecast for warming revised downward.


The UK Met Office has revised one of its forecasts for how much the world may warm in the next few years.

It says that the average temperature is likely to rise by 0.43C by 2017 – as opposed to an earlier forecast that suggested a warming of 0.54C.

The explanation is that a new kind of computer model using different parameters has been used.

The Met Office stresses that the work is experimental and that it still stands by its longer-term projections.

These forecast significant warming over the course of this century.

The forecasts are all based on a comparison with the average global temperature over the period 1971-2000.

The earlier model had projected that the period 2012-16 would be 0.54C above that long-term average – within a range of uncertainty from 0.36-0.72C.

By contrast the new model, known as HadGEM3, gives a rise of 0.43C – about one-fifth lower – within a range of 0.28-0.59C.

This would be only slightly higher than the record year of 1998 – in which the Pacific Ocean’s El Nino effect was thought to have added extra warming.

If the forecast is accurate, the result would be that the global average temperature would have remained relatively static for about two decades.

Blog suspicions

An apparent standstill in global temperatures is used by critics of efforts to tackle climate change as evidence that the threat has been exaggerated.

Climate scientists at the Met Office and other centres are involved in intense research to try to understand what is happening over the most recent period.

The most obvious explanation is natural variability – the cycles of changes in solar activity and the movements and temperatures of the oceans.

[Infographic (Met Office): The forecasts are based on a comparison with the average global temperature over the period 1971-2000]

A Met Office spokesman said “this definitely doesn’t mean any cooling – there’s still a long-term trend of warming compared to the 50s, 60s or 70s.

“Our forecast is still for temperatures that will be close to the record levels of the past few years.

“And because the natural variability is based on cycles, those factors are bound to change the other way at some point.”

The fact that the revised projection was posted on the Met Office website without any notice on December 24 last year has fuelled suspicions among bloggers.

However the Met Office says the data had been published in a spirit of transparency as soon as it became available from the computer that produced it.

 

Future forcings

It describes the decadal projections as part of an experimental effort launched in 2004 to fill the gap between daily weather forecasts and century-long estimates for climate change.

But this is an emerging and highly complex area of science because of the interplay of natural factors and manmade greenhouse gases at a time when a key set of temperatures – in the deep ocean – is still relatively unknown.

One aim of attempting to project the climate on this timescale is to be able to rapidly check the accuracy of the models being used.

A paper published last month in the journal Climate Dynamics, authored by scientists from the Met Office and 12 other international research centres, combined different models to produce a forecast for the next decade.

It said: “Decadal climate prediction is immature, and uncertainties in future forcings, model responses to forcings, or initialisation shocks could easily cause large errors in forecasts.”

However the paper concluded that, “in the absence of volcanic eruptions, global temperature is predicted to continue to rise, with each year from 2013 onwards having a 50 % chance of exceeding the current observed record”.

Scrutiny of Met Office forecasts and climate science generally is set to increase in the build-up to the publication of the next assessment by the UN’s Intergovernmental Panel on Climate Change (IPCC) in September.

Source:

http://www.bbc.co.uk/news/science-environment-20947224

=========================================================

Re: that last paragraph, with the release of the IPCC AR5 leak #2 today, ya think?

Skiphil
January 11, 2013 11:15 am

This comment is about the media and Met Office handling of the controversy so far. If I may cross-post with what I said at Bishop Hill, some reactions from the Met Office so far are quite interesting. BBC’s Roger Harrabin has a tweet which says some anonymous ‘@metoffice insider’ termed the Dec. 24 release ‘naive’ and ‘bloody stupid.’ [yes, inconvenient to the party line, but what happened to the vaunted ‘public right to know’]
I must call attention to Messenger’s recall of (head of Met Office) Julia Slingo’s comments to the Feedback show at 4:30 (over on the BH “Rumbling On” thread). This does confirm that she is responding directly to at least some media inquiry and that she sounds “very cross” about the matter …. so it’s at least plausible to think that Harrabin may be channeling her or someone close to her. Speculation, yes, but why does that aspect matter? Because we should want to know why it is regarded as such a mistake to have published the (half) decadal forecast on Dec. 24, experimental or not. Why was this act described as so ‘innocent’** and ‘naive’ and ‘bloody stupid’??
Why should the public NOT be allowed to know about this research? Whose call is it whether such research is provided to the public? Is it a state secret when there are uncertainties or new projections or model outputs that differ from previous public pronouncements? Is this a scientific organization or a PR organization? etc. So many questions, not enough answers.

“So was the headline correct?” says Roger B. “We’ll ask Julia Slingo.”
Julia Slingo sounds VERY cross. It is the sceptic blogs’ fault and we have cast nasturtiums and misrepresented the integrity of science. … the earth will continue to warm to record levels and in future records may well be broken. In no way has the Met Office made changes in the long term projections and we still will have serious problems.
Or words to that effect.
Jan 11, 2013 at 4:54 PM | Messenger
==================================================================
** actually the word ‘innocent’ in context (although it’s only a brief tweet, admittedly) seems not to be redundant with ‘naive’ but more the opposite of ‘guilty’ as in did someone in the Met Office intend to create this fuss. This will be a revealing period for assessing the amount of political and institutional pressure brought to bear on individual scientists, both explicitly and implicitly. How does one need to speak and behave to be regarded as ‘innocent’ and not ‘guilty’ by the @metoffice ‘insider’ type speaking to Harrabin? Is it Team Science or Team Politics or Team Slingo here….??

Graham W
January 11, 2013 11:56 am

My opinion of the forecast is that, since their 90% confidence range includes all values between 0.28 C and 0.59 C, and since according to their graph (Figure 1 in the link I posted previously) this covers the possibility of temperatures declining, remaining stable, and rising over the next four years – that this is not really a forecast at all, certainly not a very useful one.
I think if you look at the graph, at the thick blue line representing their forecast, and the thin blue lines representing the 90% confidence range – the BBC’s statement regarding temperatures remaining static over 20 years is entirely dependent on what happens next (sorry if this is stating the obvious but bear with me). If temperatures end up in the lower part of the range (or outside of it) then actually you could conclude that temperatures had dropped over the 20 years rather than remaining static. If temperatures end up in the higher part of the range (or outside of it) then you could conclude that temperatures had risen over the 20 years rather than remaining static. If temperatures follow the thick blue line then…do you conclude that the temperatures have remained static? Hard to say, because of the variation.
Essentially you then need more data to decide, and then more data…and so on. This is the problem really – you can’t use statistics to “prove” to a degree of confidence the existence of a zero trend. You can only say that results are “statistically indistinguishable from zero” at which point you will be told by some that this analysis is not valid because of the noise and uncertainty in the data, and you in fact need x number of years before you can claim that this is correct (the value of x seemingly increasing as the years go by!). The problem seems to be at what point can you accept the data suggests the trend is ‘flat’…is it in fact possible for temperatures to be flat? Surely they are always rising or falling to different degrees.
Either way the next few years are certainly going to be interesting. It’s certainly also an interesting change to see the mainstream media even contemplating the idea of a pause in global warming. It’s not been so widely reported or discussed in these channels before I think it’s fair to say. I think we can conclude that whatever your opinion on the issues at hand, there is a general growing acceptance even in the MSM that the rate of global warming has slowed over this millennium. Many will not go so far as to say “paused” of course, but even The Guardian would agree with “slowed”, it seems, from the other link I posted further back.

Werner Brozek
January 11, 2013 2:18 pm

Thank you Clive!
– The new forecast uses a baseline of 1971 – 2000.
– The published Hadcrut3/4 datasets all use a baseline of 1961-1990.
This would seemingly all be designed to confuse us!

Then how are we supposed to easily see if they are on track with their forecasts?
Presently, Hadcrut4 says 1998 was at 0.523. Yet they do not seem to be even using this value since the following is from:
Graham W says:
January 11, 2013 at 5:20 am
The Met Office told us that the 0.40 degrees figure is “based on a 12-month period which isn’t synchronised with the calendar year – in which case 1998 is the warmest on record”.
Here is what I calculated quite a while ago which agrees with the above.
As we are aware, Hadcrut4 has replaced 1998 as the hottest year with 2005 and 2010 being warmer. The average anomalies for these three years are as follows according to the woodfortrees numbers: 0.523, 0.535 and 0.5375 respectively. However when one digs a bit deeper, an interesting fact emerges. The hottest consecutive 12 month period is still from the previous century. The hottest 12 month period around 1998 is from September 1, 1997 to August 31, 1998. Here, the anomaly according to Hadcrut4 is 0.5675. 2005 is not changed by adding or subtracting months. However for the period around 2010, the hottest 12 month period is from August 1, 2009 to July 31, 2010. And for this period, the average anomaly is 0.565, which is 0.0025 below the 1998 value. Of course I am not going to suggest any significance to this, just like there is no significance to 2010 being 0.0145 warmer than 1998 with the error bar being about 0.1. But it is something to keep in mind in case someone comments that 2010 was the warmest year due to the “fluke” of how our calendar is constructed.
You can also see it here that 1998 was warmer by a line width:
http://www.woodfortrees.org/plot/hadcrut4gl/from:1980/mean:12
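The “hottest consecutive 12-month period” comparison Werner describes is just a rolling-window maximum over the monthly anomalies. A minimal Python sketch (the series below is a synthetic illustration, not the actual Hadcrut4 data):

```python
import numpy as np

def hottest_12_month_window(monthly):
    """Return (start_index, mean) of the warmest consecutive
    12-month stretch - which, as Werner notes, need not line
    up with calendar years."""
    # rolling 12-month means of each window starting at index i
    means = np.convolve(monthly, np.ones(12) / 12.0, mode="valid")
    i = int(np.argmax(means))
    return i, float(means[i])

# toy series: one warm spell straddling a calendar-year boundary
anoms = np.zeros(48)
anoms[8:20] = 0.5               # months 8..19 warm (crosses index 12)
start, mean = hottest_12_month_window(anoms)
print(start, round(mean, 3))    # 8 0.5
```

Run on the real Hadcrut4 monthly anomalies, this is the calculation that picks out Sep 1997 – Aug 1998 at 0.5675 rather than any single calendar year.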
Now 0.5675 – 0.43 = 0.1375. So to answer my own question above, if I want to know how each month compares to their prediction, I have to subtract 0.1375 from their latest anomaly and see if this is below 0.43. Is that correct?
So assuming this is correct, the average anomaly through the first 11 months of 2012 on Hadcrut4 is 0.448. Subtracting 0.1375 gives 0.3105. Since this is way below 0.43, 2012 is way below their prediction for future years. On what basis, other than their models, do they expect a sudden turnaround?
herkimer says:
January 11, 2013 at 7:39 am
In any case i think regardless of which index or base period they use , the trend of their forecast seems to be wrong still as they use faulty models with faulty assumptions leading to too high temperature forecasts.
That’s for sure!
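Werner’s baseline conversion can be put in a few lines of Python. This is only a sketch reproducing his arithmetic as stated – the 0.5675 and 0.43 figures are taken from his comment, not recomputed from the datasets:

```python
# Hadcrut4 anomalies use a 1961-1990 baseline; the Met Office
# forecast (0.43C) uses 1971-2000. Werner's offset between the
# two baselines: the Sep 1997 - Aug 1998 peak of 0.5675 on the
# Hadcrut4 baseline, minus the 0.43C forecast value.
OFFSET = 0.5675 - 0.43          # 0.1375, per Werner's arithmetic

def compare_to_forecast(hadcrut4_anomaly, forecast=0.43):
    """Shift a Hadcrut4 (1961-1990 base) anomaly by Werner's
    offset and return how far it sits below or above the 0.43C
    central forecast."""
    shifted = hadcrut4_anomaly - OFFSET
    return round(shifted - forecast, 4)

# 2012 (first 11 months) averaged 0.448 on Hadcrut4:
print(compare_to_forecast(0.448))   # -0.1195, i.e. below 0.43
```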

Skiphil
January 11, 2013 2:54 pm

Update (h/t Alex Cull at Bishop Hill)
Julia Slingo, head honcho at the UK’s Met Office, regrets that the model forecast slipped out lacking the ‘appropriate messaging’ around it…. This transcript on the BBC is well worth a read:
https://sites.google.com/site/mytranscriptbox/home/20130111_fb

richardscourtney
January 11, 2013 3:05 pm

Graham W:
In your thoughtful post at January 11, 2013 at 11:56 am you rightly say

The problem seems to be at what point can you accept the data suggests the trend is ‘flat’…is it in fact possible for temperatures to be flat? Surely they are always rising or falling to different degrees.

Yes, and that is why one discusses whether the trend is discernibly positive or negative at a stated confidence.
In principle, an observation can have any confidence. But a science defines the confidence which an observation is required to possess for that observation to be accepted as valid.
Climate science uses 95% confidence (i.e. 2-sigma confidence). More rigorous disciplines use 99% confidence. But climate science has decided by ‘custom and practice’ that climate trends need to be observed with 95% confidence for them to be valid.
So, in climate science, if a trend is different from zero at 95% confidence then it is a valid observation that the trend is either positive or negative. But if a trend is NOT different from zero at 95% confidence then it is a valid observation that the trend is indistinguishable from zero.
In other words, if a global temperature trend cannot be discerned to be different from zero at 95% confidence then it is a valid observation that no global warming or global cooling can be discerned over the time period of the analysed trend.
There has been no global warming or global cooling discernible at 95% confidence for ~16 years.
This is important because in 2008 NOAA reported that climate models “rule out” a period of no global warming discernible at 95% confidence for ~16 years. Hence, the recent global temperature data falsify the climate models.
Richard
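Richard’s 2-sigma criterion can be sketched numerically. The example below uses a synthetic 16-year monthly series with a deterministic “wiggle” standing in for noise (so it only illustrates the test, not real Hadcrut data), and the classic OLS slope error, which ignores the autocorrelation present in real temperature series and therefore understates the true uncertainty:

```python
import numpy as np

def trend_is_significant(y):
    """OLS slope of y against time, with its 2-sigma (~95%)
    uncertainty. Returns (slope, two_sigma, significant), where
    significant means the slope differs from zero at 95%."""
    n = len(y)
    t = np.arange(n, dtype=float)
    A = np.vstack([t, np.ones(n)]).T
    (slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - (slope * t + intercept)
    # classic OLS slope standard error (independent residuals)
    se = np.sqrt((resid**2).sum() / (n - 2)
                 / ((t - t.mean())**2).sum())
    return slope, 2.0 * se, abs(slope) > 2.0 * se

t = np.arange(192)                 # 16 years of monthly steps
wiggle = 0.05 * np.sin(t)          # stand-in for natural noise
warming = 0.002 * t + wiggle       # known underlying trend
slope, two_sigma, sig = trend_is_significant(warming)
print(sig)    # True: trend distinguishable from zero at 2-sigma
```

Feed the same function a series with no underlying trend and `sig` comes back False: the trend is then “statistically indistinguishable from zero” in exactly Richard’s sense.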

Philip Shehan
January 11, 2013 3:32 pm

D Böehm Stealey says:
January 11, 2013 at 9:39 am
Shehan says:
“…contrary to D. Boehm Stealy assertion an increase of 0.43 rather than 0.53 is still an increase.”
My apologies. That should have read “…contrary to D. Boehm Stealy’s assertion, an increase of 0.43 rather than 0.53 is still an increase.”
My misrepresentation of your position was entirely unintentional and clearly counter to the thrust of my argument. I apologise to all readers for my frequent sloppiness in failing to correct typos and edits before hitting the submit button. Will try to do better.
That said, Stealy’s complete inability to understand that commenting on results implies neither acceptance nor rejection of the results is… I was going to type “astonishing” but using that word to describe his arguments is becoming a habit.
The full (corrected) sentence reads:
“Again I am only discussing the numbers given by the met taken at face value, so contrary to D. Boehm Stealy’s assertion, an increase of 0.43 rather than 0.53 is still an increase.”
Then there are these comments of mine:
“D.Boehm Stealy, I am commenting on the BBC’s interpretation of the data. That is not an endorsement, [of the data itself]…”
And this:
“On the other thread you failed to understand the meaning “for the sake of argument” and then proclaim that I finally agree with your viewpoint. I don’t, and you clearly do not understand what that caveat means. Your argument there was about a post of mine that has not yet appeared. (You should really explain how you know these things. People have suggested you are also a moderator here.)”
(To clivebest) “I was not actually commenting on whether I though[t] the figures themselves correct or incorrect, only their interpretation of the figures, accepted at face value.”
Astonishing.

Werner Brozek
January 11, 2013 4:11 pm

richardscourtney says:
January 11, 2013 at 3:05 pm
There has been no global warming or global cooling discernible at 95% confidence for ~16 years.
For this analysis, data was retrieved from SkepticalScience.com.
For RSS the warming is NOT significant for 23 years.
For RSS: +0.130 +/- 0.136 C/decade at the two sigma level from 1990
For UAH, the warming is NOT significant for 19 years.
For UAH: 0.143 +/- 0.173 C/decade at the two sigma level from 1994
For Hadcrut3, the warming is NOT significant for 19 years.
For Hadcrut3: 0.098 +/- 0.113 C/decade at the two sigma level from 1994
For Hadcrut4, the warming is NOT significant for 18 years.
For Hadcrut4: 0.098 +/- 0.111 C/decade at the two sigma level from 1995
For GISS, the warming is NOT significant for 17 years.
For GISS: 0.113 +/- 0.122 C/decade at the two sigma level from 1996

Philip Shehan
January 11, 2013 4:35 pm

Werner Brozek says:
January 10, 2013 at 8:59 pm…
Sorry Werner. Meant to get back to you but got tied up discussing other matters.
I agree that the term “about two decades” is too flexible. But presenting data sets defined by tenths of a degree is way too specific. Within the two decades since 1993 you can produce varying trends.
http://www.woodfortrees.org/plot/hadcrut3gl/from:1993/plot/hadcrut3gl/from:2000.3/trend/plot/hadcrut3gl/from:1993/trend/plot/hadcrut3gl/from:1998/trend/plot/hadcrut3gl/from:1996/trend/plot/hadcrut3gl/from:1999/trend
Trends with start points differing by as little as 3 years can produce distinguishable results.
The trend from your start point of 2000.3, (the green line) is almost indistinguishable from the trend from 1998 (purple line). Yet the line beginning between those values (1999) is quite different, but similar to the line beginning 3 years earlier (1996).
In other words a period of two decades or less is too short for a selected subset within that period to be taken as representative even of that period, let alone the long term record, say from 1880.
D.Boehm has berated me incessantly for not accepting his proposition that the last 16 years of that long term record (and numerous other subsets he selects) is representative of the entire trend since 1880.
When I ask him (this is the fourth or fifth time now) why the 16 year trend from 1940 to 1956, or indeed any other subset with a negative slope, cannot be used to claim that the global temperature has been falling, he says that I am not smart enough to trap him into answering.
http://www.woodfortrees.org/plot/hadcrut3vgl/compress:12/offset/plot/hadcrut3vgl/from:1850/to:2010/trend/offset/plot/hadcrut3vgl/from:1940/to:1956/trend
Astonishing.

Graham W
January 11, 2013 4:43 pm

@richardscourtney: Thanks for the clear explanation. That’s similar to how I understood it but it’s helped to clear up a few things. The trouble I have in communicating these ideas to some people is that they refuse to accept the idea that the x number of years of flat trend, or rather the x number of years of trend that is statistically indistinguishable from zero, means anything. More specifically, they will argue that because the global warming trend we’re trying to detect is so small (typically we’re talking about 0.1-0.2 of a degree C rise over a decade, or less more recently) and because the noise in the data is so great, x years cannot be enough time to “see” the trend emerge. In fact, they argue, you need at least y number of years for the trend to be statistically valid.
This is one of the points I was making in my last comment. They will always argue, if I said for instance x=16 years, that for the result to be meaningful I would instead need y=20, or 24, or 30 years to be able to say that the trend is truly “at zero”. This seems to me to be a poor argument on their part, because the requirement of y number of years is surely only conditional to the statistical validation of a positive or negative trend and NOT for the statistical “rejection” of a trend. In other words you surely don’t need 20 or 24 years, or indeed any specific number of years, to say that a trend is statistically indistinguishable from zero. This could be true of a trend that is over one year or over a hundred years, if no trend can be shown at 95% confidence over that time period.
Why then, do they refuse to accept the significance of the trend being statistically no different to zero? They will simply say, for example “you must look at periods of 30 years”…and you can argue all you like that over the last 30 years, for the first half of that time temperatures were rising comparatively rapidly, then in the second half of the 30 years this trend seems to have slowed or stalled…and they will simply say “statistically speaking, over the 30 years as a whole, which is a more valid amount of time to use, the trend is positive and is this amount, ergo the world steadily warms”.
This is what I find frustrating. Clearly there exists some kind of disconnect in their mind between actually looking at the data, and seeing what it is doing, to relying solely on statistical analysis which ignores short term changes in the trend. Those changes are surely still important and worthy of investigation regardless, but they say they are not significant.

D Böehm Stealey
January 11, 2013 4:46 pm

Shehan, quit being such a crybaby. My central point is, and always has been, that you are flat wrong to claim that global warming is accelerating. It is not accelerating. In fact, global warming has stalled. Correcting your ‘acceleration’ misinformation is my only concern.

herkimer
January 11, 2013 5:54 pm

Skiphil
Thanks for posting the transcript of the Julia Slingo comments with respect to their new forecast. She maintains that their forecast is “experimental and work in progress”. To me, experimental means PROBATIONARY, TRIAL-AND-ERROR or PRELIMINARY. If that is the case, why is all this AGW science and its associated decadal and long-term forecasts being sold to the public as SOLID SCIENCE with a 90% confidence level? There is a real disconnect here with the public. Billions of dollars worldwide are being urged to be spent on science that is barely exploratory, and which is proving so wrong year after year that it should be reclassified as CONCEPTUAL only; all public funding for implementation work should be halted until the science is truly proven in actual field tests and is properly worthy of refunding. That point may be a decade or two away still, in my opinion.

Philip Shehan
January 11, 2013 6:08 pm

From the AGW bombshell thread:
Philip Shehan says:
Your comment is awaiting moderation.
January 11, 2013 at 6:00 pm
D. Boehm is commenting on anything but the content of my 2.30 post, and it is easy to understand why. He is a consummate ducker and diver…

Philip Shehan
January 11, 2013 6:12 pm

To quote further from that post:
I agree that the correlation, or model if you prefer, explains nothing about the last 15 years or anything prior, in the sense D. Boehm enunciates: “Layman is correct when he says that we can’t really tell anything from the past 15 years from that model. As I have repeatedly pointed out, the only way to see if global temperatures are accelerating is by using a long term trend chart, based on verifiable data.” Examination of the last fifteen years of the data set, or prior to 1880, cannot substitute for an examination of the entire data set from 1880-2007.
[This is a ‘Gotcha’, as Boehm completely reverses himself, having interminably berated me for making exactly this point. He insisted over and over again that the last 15 years could substitute for that entire set. Astonishing.]

clivebest
Reply to  Philip Shehan
January 12, 2013 2:13 am

Philip writes:

Examination of the last fifteen years of the data set, or prior to 1880, cannot substitute for an examination of the entire data set from 1880-2007.

Let’s look at the entire data set.
Let’s also assume an AGW forcing term S = ln(C/C0), giving approximately DT = 2*ln(C/C0) – i.e. today C = 400ppm, C0 = 280ppm.
If you now fit the HadCrut3(4) global temperature data from 1850 to 2011 to a logarithmic dependence on CO2 levels, you observe a clear 60-year oscillation and a smaller 11-year oscillation present in the data (see analysis here). The 60-year cycle was responsible for two cooling periods, 1880-1910 and 1940-1970. This indeed seems to be superimposed on a gradual warming due to increasing CO2, DT = 2.5*ln(C/C0). So AGW is “true”, but it is much more benign than the IPCC would have you believe.
The rapid warming from 1970-2000 which generated the IPCC hype was mostly caused by the upturn in this oscillation. We have now entered a downturn (natural cooling) period which will last until 2030. The net result will be that global temperatures will remain static until 2030. Assuming that CO2 levels continue rising until the end of the century, we can expect a further rise of ~0.5C between 2030 and 2060, followed by another stalling of temperatures until 2100. So in total a rise of about 1.5C – no big deal.
What is the cause of this oscillation? One theory is that a resonance of Jupiter’s and Saturn’s orbits around the sun induces both tidal effects and a shift in the solar barycentre (Scafetta). Another proposal is that it is caused by changes to the thermohaline circulation (the AMO) in the Atlantic.
The fact is that all IPCC GCM models were tuned to the observed rapid rise between 1950 and 2000 on the assumption that the only driver of climate was CO2 enhanced by positive water feedback. Temperatures have now stalled for about 17 years, and as a result those predictions are proving to have been exaggerated. Hence the Met Office has downgraded its prediction.
Conclusion: it is not worth dismantling western civilization in order to avoid a ~0.5 degree rise in temperatures. There are about 80 years to develop nuclear fusion – the only realistic alternative to fossil fuels. Unfortunately the UK spends £25 million on nuclear fusion research, £200 million on the Met Office and £8 billion subsidizing daft wind farms. Biofuel is even more daft. The UK hit its “peak wood” crisis in the 16th century!
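The kind of fit described above – temperature as a*ln(C/C0) plus a 60-year oscillation – can be sketched with synthetic data. Everything below (the toy CO2 ramp and the 2.5 and 0.15 coefficients) is illustrative, not the actual HadCrut fit; the point is only that the model becomes linear in its parameters once the oscillation is split into sine and cosine terms, so plain least squares recovers them:

```python
import numpy as np

years = np.arange(1850, 2012, dtype=float)
C0 = 280.0
co2 = C0 * np.exp((years - 1850) / 450.0)   # toy CO2: ~280 -> ~400ppm
w = 2 * np.pi / 60.0                        # 60-year cycle
# synthetic "observations" built from the claimed model form
temp = 2.5 * np.log(co2 / C0) + 0.15 * np.sin(w * (years - 1880))

# amp*sin(w*(t - phase)) = b*sin(w*t) + c*cos(w*t), so the model
# is linear in (a, b, c) and ordinary least squares suffices:
X = np.column_stack([np.log(co2 / C0),
                     np.sin(w * years),
                     np.cos(w * years)])
(a, b, c), *_ = np.linalg.lstsq(X, temp, rcond=None)
amp = np.hypot(b, c)                        # oscillation amplitude
print(round(a, 3), round(amp, 3))           # 2.5 0.15
```

On real data the two components are far harder to separate, since only about 2.7 cycles of a 60-year oscillation fit inside the 1850-2011 record.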

D Böehm Stealey
January 11, 2013 6:23 pm

I see that Shehan is still being a thin-skinned crybaby. Good. I like the amusement. Someone please hand him a hanky.☺
My one and only point is that Shehan is wrong when he falsely asserts that global warming is accelerating. It is not. Shehan cannot credibly refute the temperature record, so he cries and rants. That’s OK. We know the truth about global warming.
Global warming is not only NOT accelerating, it has stalled — as everyone on both sides of the debate except Shehan now acknowledges.

Werner Brozek
January 11, 2013 8:02 pm

Philip Shehan says:
January 11, 2013 at 4:35 pm
Temperatures go up and down like a yo-yo, whether from 1940 to 1956 or many other times. The sun also goes up and down like a yo-yo with the sunspot cycles. However CO2 went up slowly at the start of the century and faster after about 1945, yet temperatures were all over the place, going up and down and nowhere. Perhaps it is the sun that is the major driver and not CO2?
The trend from your start point of 2000.3, (the green line) is almost indistinguishable from the trend from 1998 (purple line).
It sounds like the La Nina in 1999 balanced out the 1998 El Nino. Then what is wrong with starting a slope in 1997?

richardscourtney
January 12, 2013 2:17 am

Graham W:
In your post at January 11, 2013 at 4:43 pm you say to me

This is one of the points I was making in my last comment. They will always argue, if I said for instance x=16 years, that for the result to be meaningful I would instead need y=20, or 24, or 30 years to be able to say that the trend is truly “at zero”. This seems to me to be a poor argument on their part, because the requirement of y number of years is surely only conditional to the statistical validation of a positive or negative trend and NOT for the statistical “rejection” of a trend. In other words you surely don’t need 20 or 24 years, or indeed any specific number of years, to say that a trend is statistically indistinguishable from zero. This could be true of a trend that is over one year or over a hundred years, if no trend can be shown at 95% confidence over that time period.

YES! “This could be true of a trend that is over one year or over a hundred years, if no trend can be shown at 95% confidence over that time period.”
That is the crucial and important point.
If something is too small for it to be detected then it is not a cause for concern because it has no effects (observation of its effects would be its detection). And discernment from ‘noise’ is defined as needing 95% confidence for an observation to be valid.
1.
An inability to discern warming or cooling over one year is of no concern.
2.
Observed global warming of 0.8deg.C over one century is of no concern.
3.
Possible future global warming of more than 2.0 deg.C over the next century may be of concern.
4.
Global warming of 0.2 deg. C over a decade can be observed if it happens.
5.
An inability to discern warming or cooling over the most recent (at least) 16 years indicates that possible global warming of more than 2.0 deg.C over the next century is very improbable so is of no concern.
Idiots pretend that something which is too small to be detected is a real and present danger (e.g. see the posts by Philip Shehan in this thread).
Richard

Graham W
January 12, 2013 5:40 am

Thanks Richard. Your responses and particularly points 1 – 5 have made everything crystal clear for me now. This has been a very useful conversation for me anyway, I’m sure everybody else already “got” everything in the first place, but for me it’s been very illuminating.
Thanks all, a very useful and informative website and a pleasant place to debate.

Philip Shehan
January 12, 2013 7:04 am

clivebest:
I do not disagree with much of what you write. But as I pointed out in my post to Werner it is possible to overinterpret the data by going into too fine a detail.
The central issue I am interested in (as discussed in the “AGW Bombshell?” thread) is the APPEARANCE (don’t mean to shout – how do you do italics or bold on this site anyway?) of the temperature record from 1880 to 2007: specifically, whether a linear or nonlinear curve best fits the graphical presentation of the entire temperature data set.
This does not require a detailed examination of all the factors – solar cycles, aerosols, particulates, greenhouse gas concentrations, El Nino and La Nina events, etc. – contributing to the appearance of the final temperature graph. Nor does it require a theoretical or cause-and-effect explanation for the curve function, linear or otherwise, chosen to fit the data. (There is no reason to assume a priori that a linear fit, any more than a nonlinear function, describes the underlying physical reality of the temperature data. Linear fits are easy to do and, given the noise levels of data sets of a century or a few decades or less, they give an acceptable quick-and-dirty visual summary of the trend, so we all use them.)
In my post to Werner I argue, and present links to graphs which show, that short-term linear fits to 15- or 10-year sections of the long-term temperature data can go every which way (like a yo-yo, as he says in his reply) and are therefore not a good guide to the appearance of a fit covering the whole data set.
And Werner, in response to your question:
It sounds like the La Nina in 1999 balanced out the 1998 El Nino. Then what is wrong with starting a slope in 1997?
Absolutely nothing. Your explanation about the la nina 1999 event balancing out the el nino of 1998 is probably correct, and is precisely why you need to look at the long term where such events balance out as much as possible. Starting with 1997 would just be another short term section indicating nothing about the whole data set. I can add it to my earlier plot. As 1997 also includes the southern hemisphere el nino summer of 1997-98, the linear fits are unsurprisingly very similar:
http://www.woodfortrees.org/plot/hadcrut3gl/from:1993/plot/hadcrut3gl/from:2000.3/trend/plot/hadcrut3gl/from:1993/trend/plot/hadcrut3gl/from:1998/trend/plot/hadcrut3gl/from:1996/trend/plot/hadcrut3gl/from:1999/trend/plot/hadcrut3gl/from:1997/trend
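The point that short-window trends swing every which way while the full record does not can be illustrated with a toy series in Python. This is a made-up example (an assumed 0.5C/century underlying trend plus a 60-year oscillation), not the actual HadCRUT data:

```python
import numpy as np

years = np.arange(1880, 2008)  # annual points, 1880-2007
# toy series: 0.5 C/century underlying trend plus a 60-year oscillation
temps = 0.005 * (years - 1880) + 0.25 * np.sin(2 * np.pi * (years - 1880) / 60)

def trend(y0, y1):
    """Least-squares slope (deg C / year) over the inclusive window [y0, y1]."""
    m = (years >= y0) & (years <= y1)
    return np.polyfit(years[m], temps[m], 1)[0]

full_slope = trend(1880, 2007)  # recovers roughly the underlying 0.005 C/yr
short_slopes = [trend(y, y + 14) for y in range(1880, 1993, 5)]  # 15-yr windows
# the 15-year slopes swing positive and negative with the oscillation phase,
# while the full-record slope stays positive
```

On data like this the 15-year windows give slopes of both signs depending on where they land in the oscillation, which is the yo-yo behaviour described above.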

D Böehm Stealey
January 12, 2013 7:27 am

Werner Brozek,
Isn’t it satisfying seeing Shehan post a chart that refutes his ridiculous claim that global warming is accelerating?

herkimer
January 12, 2013 7:32 am

clive best
I notice that there are no typical historical significant temperature dips during the two cool cycles in your projected curve to 2100, just basically flat temperature periods. Hence you show a continuous temperature rise to 2060. The HadCRUT historical curve shows significant temperature dips of about 0.4C on a smoothed-overlay basis, or about 0.2 to 0.3C on an average decadal basis, for both past cool phases. Why do you think this kind of temperature drop will not happen again? Our latest decadal global temperature trend already shows that we are starting to drop temperatures like we did in the past. I think the impact of CO2 is being overestimated by most forecasters. That is why they all show temperatures rising when in fact they are dropping.
Also, the sun seems to have a 90-100 year cycle where the output drops, as we have seen at the start of each of the last 3 centuries. This can further disrupt the typical 60 year cycle, which I think is caused primarily by the global SST ocean cycles. If you plot global SST and global surface air temperature on a decadal basis, this becomes clear. I think it is the sun that gives the steady rise in global temperatures of about 0.7C per century [involving the sunspot cycles by a yet-to-be-explained mechanism]. The sun also gives the energy to the oceans via the normal irradiance mode, which give it back in a lagged 60 year cycle. Any comments?

Reply to  herkimer
January 12, 2013 12:24 pm

herkimer,
I take a sort of middle road here. There is a sound physics basis for an enhanced CO2 greenhouse effect. You can calculate how much radiation escapes to space in the main CO2 absorption band ~15 microns and how it reduces slightly as more CO2 is added to the atmosphere. The net result is that (if nothing else changes) the surface temperature would increase (in deg. C) approximately as ~1.6 ln(C/C0). This is roughly +1C if CO2 reaches 600 ppm, or +2C if (impossibly?) CO2 reached 1200 ppm.
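The logarithmic relation is easy to check numerically. A minimal sketch in Python; note the baseline C0 is not stated in the comment, so C0 = 300 ppm is an assumption chosen so that the quoted round numbers come out:

```python
import math

def delta_T(C, C0=300.0):
    """Equilibrium warming (deg C) from the simple logarithmic relation
    dT ~ 1.6 * ln(C / C0).  The baseline C0 = 300 ppm is an assumption
    (the comment doesn't state one); it makes the quoted figures come out."""
    return 1.6 * math.log(C / C0)

# roughly +1.1 C at 600 ppm and +2.2 C at 1200 ppm, matching the
# "+1 C" and "+2 C" round numbers above
```

Each doubling of CO2 adds the same 1.6 ln(2) ≈ 1.1C under this relation, which is why going from 600 to 1200 ppm only adds another ~1.1C.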
Of course there are other factors apart from CO2 affecting climate, such as solar cycles etc. There is also evidence from the Central England Temperature series (HADCET) of a slow recovery from the Little Ice Age from 1650 of ~0.026 deg. C/decade.
So I simply looked at the data and “assumed” that there is ONLY a logarithmic dependence on CO2. As others have already noticed, you then need a 60 year oscillation to explain past temperature data. Fitting HADCRUT3 data to measured Mauna Loa CO2 data and a 60 year oscillation gives a rather good fit. Then extrapolating to the future results in the curves – and assuming emissions scenario B1 results in about 1.5 C warming from 1850 -> 2100.
This is simply curve fitting assuming that AGW is real. If we accept the LIA recovery hypothesis, then about 0.6C of this rise is natural. So the oscillation superimposed on this logarithmic increase results in flat temperatures rather than decreasing temperatures from 2000 to 2030.
Now it may be the case that clouds, solar cycles etc. are also important effects in the long term. In fact I strongly suspect that the overall effect of a 70% ocean surface on Earth acts to stabilize temperatures against any external “forcing”, whether that be meteorite impacts, super volcanoes, solar variation or whatever. Otherwise the oceans would have boiled away billions of years ago.
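The curve-fitting procedure described above can be sketched as follows. This is a toy reconstruction: the CO2 curve is made up, and the "temperatures" are generated from the model itself rather than being the actual HADCRUT3/Mauna Loa data. Since the sinusoid can be split into sine and cosine terms, the whole fit is linear least squares:

```python
import numpy as np

years = np.arange(1850, 2011, dtype=float)

def co2(t):
    # crude stand-in for the ice-core + Mauna Loa CO2 record (ppm) --
    # an assumption for illustration, not the measured data
    return 280.0 + 90.0 * np.exp((t - 2000.0) / 50.0)

# design matrix: log-CO2 term, a 60-year sine/cosine pair, and a constant
w = 2 * np.pi / 60.0
X = np.column_stack([np.log(co2(years) / 280.0),
                     np.sin(w * years), np.cos(w * years),
                     np.ones_like(years)])

# synthetic "temperatures" generated from the model itself
# (k = 1.6, oscillation amplitude 0.1, phase 0.5, offset -0.3) plus noise
true = np.array([1.6, 0.1 * np.cos(0.5), 0.1 * np.sin(0.5), -0.3])
rng = np.random.default_rng(0)
temps = X @ true + rng.normal(0.0, 0.02, years.size)

coef, *_ = np.linalg.lstsq(X, temps, rcond=None)
k_fit = coef[0]                       # CO2 sensitivity, comes back ~1.6
A_fit = np.hypot(coef[1], coef[2])    # 60-year oscillation amplitude, ~0.1
```

Extrapolating the fitted model with an assumed future CO2 path then produces the kind of projected curve to 2100 discussed above; the projection is only as good as the assumption that nothing but log-CO2 and the fixed oscillation matters.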

Philip Shehan
January 12, 2013 11:42 am

From another thread:
Quoting from the paper:
“3.1 Time series properties of the data
Informal inspection of Fig. 1 suggests that the time series properties of greenhouse gas forcings (panels a and b) are visibly different to those for temperature and solar irradiance (panel c). In panels a and b there is evidence of acceleration, whereas in panel c the two time series appear more stable.”
Informal inspection of the temperature data of panel c does show acceleration, matching that of the greenhouse gas forcing plots in a and b. The temperature rise appears less dramatic due to different scaling factors used in the 3 plots…
In support of my eyeballing of the accelerating nature of the data from 1880 to 2007 (based on over 3 decades’ experience in examining such graphs):
http://www1.picturepush.com/photo/a/11901124/img/Anonymous/hadsst2-with-3rd-order-polynomial-fit.jpeg
Compare this to the data set Stealey presents and the linear fit:
http://www.woodfortrees.org/plot/hadcrut3vgl/compress:12/offset/plot/hadcrut3vgl/from:1850/to:2010/trend/offset
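Whether a line or a 3rd-order polynomial “best fits” is really a model-selection question, and it is worth being explicit that a higher-order polynomial always fits at least as well as a line. A toy Python illustration on a synthetic accelerating series (not the actual HadSST data):

```python
import numpy as np

t = np.arange(0.0, 128.0)  # years since 1880, through 2007
# toy accelerating series (quadratic growth plus a 60-year wiggle),
# standing in for the record being argued about -- NOT real HadSST data
temps = 4e-5 * t**2 + 0.1 * np.sin(2 * np.pi * t / 60)

def rss(deg):
    """Residual sum of squares of a degree-`deg` polynomial fit."""
    resid = temps - np.polyval(np.polyfit(t, temps, deg), t)
    return float(resid @ resid)

# a cubic always fits at least as well as a line, because the line is a
# special case of the cubic; the real statistical question is whether the
# improvement justifies the extra parameters (e.g. an F-test or AIC)
linear_rss, cubic_rss = rss(1), rss(3)
```

So a smaller residual for the 3rd-order fit, by itself, settles nothing; the argument has to be about whether the improvement is larger than what the extra degrees of freedom would buy on trendless noise.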

Reply to  Philip Shehan
January 12, 2013 1:30 pm

Philip,
Using IPCC’s new interpretation of calculus – temperatures are now decelerating !!
see here….

January 12, 2013 1:51 pm

clivebest says:
“…temperatures are now decelerating !!”
You know that. I know that. Most everyone here knows that. The IPCC knows that. The Met Office knows that. But Shehan just won’t listen. ☺

Philip Shehan
January 12, 2013 3:56 pm

clivebest, Again I don’t see any point of difference between us on the data, or the interpretation over the short time period you mention.
As I wrote earlier:
“Philip Shehan says:
January 12, 2013 at 7:04 am
clivebest:
I do not disagree with much of what you write. But as I pointed out in my post to Werner it is possible to overinterpret the data by going into too fine a detail.
The central issue I am interested in (as discussed in the “AGW Bombshell?” thread) is the APPEARANCE (Don’t mean to shout. How do you do italics or bold on this site anyway?) of the temperature record from 1880 to 2007. Specifically, whether a linear or nonlinear curve best fits the graphical presentation of the entire temperature data set…
Boehm in his hyperventilation is fixated on short term periods of the 1880-2007 data set. I am not.
Thoughtful of him to post his portrait though.

Werner Brozek
January 12, 2013 4:02 pm

Philip Shehan says:
January 12, 2013 at 11:42 am
In support of my eyeballing of the accelerating nature of the data
Please explain the following. Over the entire life of RSS, there is an increase in average temperature. But the derivative, or rate of change, is 0 since 1979 (actually -0.000134889 per year). And since December 1996, the temperature slope is flat and the derivative is -0.000805452 per year. I know this is very small, but certainly no acceleration.
http://www.woodfortrees.org/plot/rss/from:1979/plot/rss/from:1979/trend/plot/rss/from:1979/derivative/plot/rss/from:1979/derivative/trend/plot/rss/from:1996.9/trend/plot/rss/from:1996.9/derivative/trend
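The woodfortrees “derivative” here is the month-to-month difference of the series, and for a record that really is linear-plus-noise the trend of that derivative comes out near zero. A toy check in Python on synthetic data (an assumed 0.013C/yr linear rise plus noise, not the real RSS series):

```python
import numpy as np

# 34 years of monthly data from 1979, mimicking the span of RSS
months = 1979.0 + np.arange(12 * 34) / 12.0
rng = np.random.default_rng(1)
# toy series: a purely linear 0.013 C/yr rise plus noise -- NOT real RSS data
temps = 0.013 * (months - 1979.0) + rng.normal(0.0, 0.1, months.size)

# woodfortrees-style "trend": least-squares slope of the series itself
slope = np.polyfit(months, temps, 1)[0]          # recovers ~0.013 C/yr

# woodfortrees-style "derivative": month-to-month differences (per-year units);
# for a linear series its trend is ~0 -- a clearly nonzero value would
# indicate acceleration or deceleration
deriv = np.diff(temps) * 12.0
deriv_trend = np.polyfit(months[1:], deriv, 1)[0]
```

That is, a rising series is perfectly compatible with a near-zero derivative trend: the positive slope shows the warming, and the flat derivative shows only that the warming is not accelerating.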

herkimer
January 12, 2013 4:17 pm

clive best
Thanks for your reply. I understand where you are coming from. In my judgement CO2 is not a major climate forcer, as many have commented and argued on the various other WUWT blogs. I can see us all blogging on this topic again in a year or two as global temperature anomalies continue a slow and steady decline to around 0.0C by about 2030 and the Met Office and IPCC are forced to further lower their decadal projections. When solar cycles and ocean SST cycles are both declining and in sync, as they are doing now, it can only lead to one result, as it has countless times before: lower global temperatures as the major sources of heat are turned down. Rising CO2 levels will do very little when confronted with these two prime climate factors. Even the temporary impact of El Niños may not be enough, as we saw during the period 1880-1910 when, despite 4 El Niños, the global temperature anomaly continued to decline after a short blip until 1910 and even into the early 1920s. Good to have your input into this debate, Clive