Benchmarking IPCC's warming predictions

By Christopher Monckton of Brenchley

The IPCC’s forthcoming Fifth Assessment Report continues to suggest that the Earth will warm rapidly in the 21st century. How far do its projections outrun observed reality?

A monthly benchmark graph, circulated widely to the news media, will help to dispel the costly notion that the world continues to warm at a rapid and dangerous rate.

The objective is to compare the IPCC’s projections with observed temperature changes at a glance.

The IPCC’s interval of temperature projections from 2005 is taken from the spaghetti-graph in AR5, which was based on 34 models running four anthropogenic-forcing scenarios.

[Figure: AR5 spaghetti-graph of the 34 models’ temperature projections under four anthropogenic-forcing scenarios, with the observational record in black]

Curiously, the back-projections for the training period from 2005-2013 are not centered on the observational record (shown in black): they lie substantially above the outturn. Nevertheless, I have followed the IPCC, adopting the approximate upper and lower bounds of its spaghetti-graph.

The 34 models’ central projection (in yellow below) is that warming from 2005-2050 should occur at a rate equivalent to approximately 2.3 Cº/century. This is below the IPCC’s long-established 3 Cº centennial prediction because the models expect warming to accelerate after 2050. The IPCC’s lower-bound and upper-bound projections are equivalent to 1.1 and 3.6 Cº/century respectively.

[Figure: the models’ central projection (yellow) and upper- and lower-bound projections to 2050, zeroed to the observed anomaly for January 2005]

The temperature scale at left is zeroed to the observed temperature anomaly for January 2005. Offsets from this point determine the slopes of the models’ projections.
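As a rough check of those equivalent rates, here is a minimal sketch (not taken from the IPCC or from my own workings; the anomaly figures are illustrative placeholders) showing how an anomaly offset between January 2005 and 2050 converts to a Cº/century rate:

```python
# Sketch only: converting a projected anomaly offset over 2005-2050 into a
# centennial-equivalent warming rate. The anomaly values are illustrative
# placeholders, not figures taken from the IPCC's spaghetti-graph.

def centennial_rate(anom_2005, anom_2050, years=45):
    """Warming rate in C/century implied by two anomalies 'years' apart."""
    return (anom_2050 - anom_2005) / years * 100.0

# A central projection of roughly 1.04 C of warming above the January 2005
# anomaly by 2050 corresponds to about 2.3 C/century:
print(round(centennial_rate(0.0, 1.04), 1))   # -> 2.3
```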

Here is the outturn graph. The IPCC’s projections are shown in pale blue.

[Figure: the monthly benchmark graph: UAH lower-troposphere anomalies since January 2001 and their linear trend, the IPCC’s projection interval in pale blue, and the zone of insignificance in pink]

The monthly global mean UAH observed lower-troposphere temperature anomalies (vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt) are plotted from the beginning of the millennium in January 2001 to the latest available month (currently April 2013).
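For readers who want to reproduce the plot, a minimal sketch of fetching and parsing the UAH file follows. The column layout assumed here (year, month, global anomaly as the first three fields) is my assumption about the file format, not something stated above, and should be checked against the file’s own header.

```python
# Minimal sketch (layout assumptions flagged above): fetch the UAH
# lower-troposphere file and keep the global monthly anomalies from
# January 2001 onwards.
import urllib.request

URL = "http://vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt"

def load_uah(url=URL, start_year=2001):
    rows = []
    with urllib.request.urlopen(url) as f:
        for line in f.read().decode("ascii", "ignore").splitlines():
            parts = line.split()
            # keep only data rows: year, month, global anomaly
            if len(parts) >= 3 and parts[0].isdigit() and parts[1].isdigit():
                year, month = int(parts[0]), int(parts[1])
                if year >= start_year:
                    rows.append((year, month, float(parts[2])))
    return rows

monthly = load_uah()
print(len(monthly), "monthly anomalies since January 2001")
```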

The satellite record is preferred because lower-troposphere measurements are somewhat less sensitive to urban heat-island effects than terrestrial measurements, and are very much less likely to have been tampered with.

January 2001 was chosen as a starting-point because it is sufficiently far from the Great El Niño of 1998 to prevent any distortion of the trend-line arising from the remarkable spike in global temperatures that year.

Since the 0.05 Cº measurement uncertainty even in the satellite temperature anomalies is substantial relative to the small trends at issue, a simple least-squares linear regression trend is preferred to a higher-order polynomial fit.
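A least-squares trend of that kind takes only a few lines; the sketch below (mine, not the article’s actual workings) expresses the slope as a Cº/century equivalent, with the anomalies taken in month order.

```python
# Sketch: least-squares linear trend through monthly anomalies, expressed as a
# C/century equivalent (12 months per year, 100 years per century).
import numpy as np

def trend_per_century(anomalies):
    x = np.arange(len(anomalies))                     # month index
    slope, _intercept = np.polyfit(x, anomalies, 1)   # C per month
    return slope * 12 * 100                           # C per century

# e.g. trend_per_century([a for _, _, a in monthly]) should give roughly 0.5
# for the UAH record described above.
```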

The simplest test for statistical significance in the trend is adopted. Is the warming or cooling trend over the period of record greater than the measurement error in the dataset? On this basis, the zone of insignificance is shown in pink. At present the trend is at the upper bound of that zone and is thus barely significant.
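That criterion reduces to a one-line comparison. The sketch below simply restates the test as described; it is not a conventional significance test (it ignores autocorrelation and regression uncertainty).

```python
# The article's simple criterion: a trend is 'significant' only if the total
# change it implies over the record exceeds the 0.05 C measurement uncertainty.
def trend_is_significant(rate_per_century, n_months, measurement_error=0.05):
    implied_change = abs(rate_per_century) * (n_months / 12.0) / 100.0
    return implied_change > measurement_error

# 0.5 C/century over 148 months implies ~0.06 C of change: barely significant.
print(trend_is_significant(0.5, 148))   # -> True
```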

The entire trend-line is beneath the interval of IPCC projections. Though this outcome is partly an artefact of the IPCC’s unorthodox training period, the slope of the linear trend, at just 0.5 Cº/century over the past 148 months, is below half the slope of the IPCC’s lower-bound estimate of 1.1 Cº/century to 2050.

The principal result, shown in the panel at top left on the graph, is that the 0.5 Cº/century equivalent observed rate of warming over the past 12 years and 4 months is below a quarter of the 2.3 Cº/century rate that is the IPCC models’ current central projection of warming to 2050.
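The two comparisons above can be checked in a couple of lines:

```python
# Quick check of the ratios quoted above (all rates in C/century).
observed = 0.5
print(observed / 1.1)   # ~0.45: below half the IPCC's lower-bound rate
print(observed / 2.3)   # ~0.22: below a quarter of the central projection
```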

The only moment when the temperature anomaly reached the IPCC’s central estimate was at the peak of the substantial El Niño of 2010.

The RSS dataset, for which the April anomaly is not yet available, shows statistically significant cooling since January 2001 at a rate equivalent to 0.6 Cº/century.

Combining the two satellite temperature datasets by taking their arithmetic mean is legitimate, since their spatial coverage is similar. Net outturn is a statistically insignificant cooling at a rate equivalent to 0.1 Cº/century this millennium.
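Combining the records is a simple element-wise mean, assuming the two series have already been aligned on the same months; a sketch:

```python
# Sketch: arithmetic mean of the UAH and RSS monthly series (assumed already
# aligned month-for-month), followed by the same C/century trend as before.
import numpy as np

def combined_trend(uah_values, rss_values):
    mean_series = (np.asarray(uah_values) + np.asarray(rss_values)) / 2.0
    x = np.arange(len(mean_series))
    slope, _intercept = np.polyfit(x, mean_series, 1)
    return slope * 12 * 100   # C/century equivalent of the combined record
```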

The discrepancy between the models’ projections and the observed outturn is startling. As the long period without statistically-significant warming (at least 17 years on all datasets; 23 years on the RSS data) continues, even another great El Niño will do little to bring the multi-decadal warming rate up to the IPCC’s least projection, which is equivalent to 1.1 Cº/century to 2050.

Indeed, the maximum global warming rate sustained for more than a decade in the entire global instrumental record – equivalent to 1.7 Cº/century – is well below the IPCC’s mean projected warming rate of 2.3 Cº/century to 2050.

This discrepancy raises serious questions about the reliability of the models’ projections. Since theory would lead us to expect some anthropogenic warming, its absence suggests the models are undervaluing natural influences such as the Sun, whose activity is now rapidly declining following the near-Grand Maximum of 1925-1995 that peaked in 1960.

The models are also unable to predict the naturally-occurring changes in cloud cover which, according to one recent paper (echoing a paper of mine published three years ago), may have accounted for four and a half times as much warming from 1976-2001 as all other influences, including the influence of Man.

Nor can the models – or anyone else – predict El Niños more than a few months in advance. There is evidence to suggest that the ratio of El Niño to La Niña oscillations, which has declined recently, is a significant driver of medium-term temperature variation.

[Figure: the draft Fifth Assessment Report’s comparison of projections from the four previous Assessment Reports with the observed outturn]

It is also possible that the models are inherently too sensitive to changes in radiative forcing and are taking insufficient account of the cooling effect of non-radiative transports.

Furthermore, the models, in multiplying direct forcings by 3 to allow for allegedly net-positive temperature feedbacks, are relying upon an equation which, while applicable to the process engineering of electronic amplifiers for which it was designed, has no physical meaning in the real climate.
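For reference, the closed-loop gain expression alluded to here is usually written as below. This is the standard textbook form with conventional symbols (λ₀ for the no-feedback Planck response, ΔF for the forcing, cᵢ for the individual feedback coefficients), not a formula quoted from the IPCC; a tripling of the direct response corresponds to a loop gain λ₀Σcᵢ of about two-thirds, since 1/(1 − 2/3) = 3.

```latex
% Standard closed-loop ("Bode") feedback form, with conventional symbols:
% equilibrium warming = no-feedback response divided by (1 - loop gain).
\Delta T_{\mathrm{eq}} \;=\; \frac{\lambda_{0}\,\Delta F}{1 - \lambda_{0}\sum_{i} c_{i}}
```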

Without the Bode equation, net feedbacks may well differ only vanishingly from zero, in which event the warming in response to a CO2 doubling, which is about the same as the centennial warming, will match the IPCC’s currently-predicted minimum warming rate of 1.1 Cº/century.

Be that as it may, as the above graph from the draft Fifth Assessment Report shows, in each of the four previous IPCC Assessment Reports the models have wildly over-projected the warming rate compared with the observed outturn, and, as the new outturn graph shows, the Fifth Assessment Report does the same.

I should be interested in readers’ reactions to the method and output. Would you like any changes to the monthly graph? And would it be worthwhile to circulate the monthly-updated graph widely to the news media as an answer to their dim question, “Why don’t you believe in global warming?”

Because there hasn’t been any to speak of this millennium, that’s why. The trouble that many of the media have taken to conceal this fact is shameful. This single, simple monthly graph, if widely circulated, will make it very much harder for them to pretend that the rate of global warming is accelerating and we are to blame, or that the “consensus” they have lazily accepted is trustworthy.

The climate scare has only lasted as long as it has because the truth that the models have failed and the world has scarcely warmed has been artfully hidden. Let it be hidden no longer.


105 Comments
High Treason
May 5, 2013 12:48 am

All the insane carbon mitigation measures are based on the worst case scenarios. Strange how they are routinely WRONG. You would think that Tim Flannery made all these predictions.

Margaret Hardman
May 5, 2013 12:49 am

The final paragraph is interesting, especially compared to the graphs you adduce as your evidence. I suggest a closer look as the two assertions do not match.

May 5, 2013 12:55 am

I do not like your final “Warming was over-predicted” graph. To the average layman, if they saw that on a TV screen, it looks like there is fairly reasonable agreement between the observations and the models. Might be better to extend the x-axis (and projections) out to 2100 and the Y axis to +4 or +6 C, and do away with the gray colored area. Need to keep the graph simple and clear if you expect the everyday, average person with little-to-no scientific training to get the point of the graph in 10 seconds or less.

May 5, 2013 1:09 am

By extending it to 2100 you could then also block-shade the graph at a temperature increase of between 2 and 3 degrees in light yellow (indicating a possible cause for concern) and the area above 3 degrees in light red (probably a real cause for concern). The area of the graph under 2C can be shaded in light green, indicating “little cause for concern”. I think changes like this would get the point across to the masses much more readily than the original “as is”.

Moe
May 5, 2013 1:11 am

Oh dear, the WMO has categorised 2012 as one of the hottest years on record.

May 5, 2013 1:14 am

Here is another version:
http://www.vukcevic.talktalk.net/GR1.htm

The Ghost Of Big Jim Cooley
May 5, 2013 1:33 am

I have to agree with alcheson (above).

Tonyb
Editor
May 5, 2013 1:34 am

Vuk
I think your graph is clearer than the one used in the article. If it’s for the media, any graphic needs to be very obvious in putting over its point.
Tonyb

David L.
May 5, 2013 2:20 am

As someone whose job is to conduct stability studies on new Big Pharma drugs and predict shelf life out to 2 years, I’d say at best we’re on track for 0.5C warming by 2050.
I love how their models have year-to-year granularity. They can’t predict next year’s data but they know there will be a little uptick around 2033 (model with highest projections), for example.

knr
May 5, 2013 2:25 am

The key idea here is ‘going to’, which usefully comes with an ever-extendable timeline and can therefore never be wrong. It’s not a scientific approach, of course, but then this is not science in the first place.

ancientmariner
May 5, 2013 2:48 am

Of what relevance is it to show modelled temperature forecasts from emissions scenarios that did not come to pass? Should we not compare apples with CGI apples? How far out are the models then?

johnmarshall
May 5, 2013 2:51 am

The whole “carbon” problem was dreamt up to cover a supposed shortfall of insolation due to a total misunderstanding of reality. The warmist reality is a flat earth collecting energy 24/7, i.e., no day/night intervals, just total daylight/energy input. Real reality is a rotating sphere collecting energy for 12 of the 24 hours in a day. Insolation is more than enough to account for the average temperature of +15C. There is no need of the failed GHE theory, and therefore no “carbon” problem.

Dusty
May 5, 2013 3:06 am

By inspection, linear regression is the wrong model for the observations; as presented, the graph will not persuade the layman that the IPCC is wrong.

Parthlan
May 5, 2013 3:12 am

Personally I think the chart suggested by vukcevic at May 5th 2013, 1.14 am, makes a clearer statement, eliminating the IPCC back-casting. But publication of either, or another, would be a good idea.

honestyoz
May 5, 2013 3:39 am

On the right-hand side of the dotted line, opposite “historical”, I would have “hysterical”.
Why not? It’s true, it doesn’t detract from the science, it will give the editors a byline, and the punter will get it.

Greg Goodman
May 5, 2013 3:43 am

Christopher Monckton says “Furthermore, the models, in multiplying direct forcings by 3 to allow for allegedly net-positive temperature feedbacks, are relying upon an equation which, while applicable to the process engineering of electronic amplifiers for which it was designed, has no physical meaning in the real climate.
Without the Bode equation ….
I should be interested in readers’ reactions…”
Two vague mentions of two papers, one by yourself and one by another author: it would be good to reference both properly. If it’s not verifiable, it’s not science, etc.
“Allegedly net-positive temperature feedbacks”: my understanding was that this happened through “parameterisations” (aka guesses) of cloud cover; I was unaware of the use of the Bode equation in all this. Where can information on how this is used be found? It is more likely that it is being misapplied or twisted to give a desired result. I think Roy Spencer showed that it only needs a 2% error in cloud change to equal CO2 forcing. No way can anyone claim these “parameterisations” are anywhere near that accurate.
“… while applicable to the process engineering of electronic amplifiers for which it was designed, has no physical meaning in the real climate.”
It is not confined to electronics; there are a lot of reasons why we should be taking a systems-engineering approach to analysing climate and climate data rather than econometrics/statistics. At least both should be applied to see whether either is of use.
Here is one example of an engineering look at just what kind of “accelerated melting” is happening in the Arctic. There are strong and obvious oscillatory patterns.
http://climategrog.wordpress.com/wp-admin/post.php?post=216&action=edit
Some of the frequencies can be tied back to periods also found in SST; others are present in SSN. There is certainly more to this than CO2 plus random ‘red’ noise.
So if you have a reference for your point about the Bode equation, it would be better to cite it. Maybe it needs an engineer to look and see how it is being misapplied.

May 5, 2013 3:45 am

ancientmariner:
Your post at May 5, 2013 at 2:48 am asks

Of what relevance is it to show modelled temperature forecasts from emissions scenarios that did not come to pass? Should we not compare apples with CGI apples? How far out are the models then?

The models are completely off then.
This is because the “committed warming” has not happened.
The explanation for this is in IPCC AR4 (2007) Chapter 10.7 which can be read at
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch10s10-7.html
It says there

The multi-model average warming for all radiative forcing agents held constant at year 2000 (reported earlier for several of the models by Meehl et al., 2005c), is about 0.6°C for the period 2090 to 2099 relative to the 1980 to 1999 reference period. This is roughly the magnitude of warming simulated in the 20th century. Applying the same uncertainty assessment as for the SRES scenarios in Fig. 10.29 (–40 to +60%), the likely uncertainty range is 0.3°C to 0.9°C. Hansen et al. (2005a) calculate the current energy imbalance of the Earth to be 0.85 W m–2, implying that the unrealised global warming is about 0.6°C without any further increase in radiative forcing. The committed warming trend values show a rate of warming averaged over the first two decades of the 21st century of about 0.1°C per decade, due mainly to the slow response of the oceans. About twice as much warming (0.2°C per decade) would be expected if emissions are within the range of the SRES scenarios.

In other words, it was expected that global temperature would rise at an average rate of “0.2°C per decade” over the first two decades of this century with half of this rise being due to atmospheric GHG emissions which were already in the system.
This assertion of “committed warming” should have had large uncertainty because the Report was published in 2007 and there was then no indication of any global temperature rise over the previous 7 years. There has still not been any rise and we are now way past the half-way mark of the “first two decades of the 21st century”.
So, if this “committed warming” is to occur such as to provide a rise of 0.2°C per decade by 2020 then global temperature would need to rise over the next 7 years by about 0.4°C. And this assumes the “average” rise over the two decades is the difference between the temperatures at 2000 and 2020. If the average rise of each of the two decades is assumed to be the “average” (i.e. linear trend) over those two decades then global temperature now needs to rise before 2020 by more than it rose over the entire twentieth century. It only rose ~0.8°C over the entire twentieth century.
Simply, the “committed warming” has disappeared (perhaps it has eloped with Trenberth’s ‘missing heat’?).
This disappearance of the “committed warming” is – of itself – sufficient to falsify the AGW hypothesis as emulated by climate models. If we reach 2020 without any detection of the “committed warming” then it will be 100% certain that all projections of global warming are complete bunkum.
Richard

Greg Goodman
May 5, 2013 3:46 am

Oops, wrong link for the Arctic plot:
http://climategrog.wordpress.com/?attachment_id=216

Jim Cripwell
May 5, 2013 3:47 am

Lord Monckton writes, “A monthly benchmark graph, circulated widely to the news media, will help to dispel the costly notion that the world continues to warm at a rapid and dangerous rate.”
I am afraid you have the wrong target. The problem is not the media. The problem is the learned scientific societies, headed by the Royal Society, the American Physical Society, and the World Meteorological Organization. As long as these learned bodies continue their overwhelming support for the hoax of CAGW, the media can, and will, quite legitimately, continue to print pro-CAGW nonsense.
I, for one, would appreciate it if you would use your considerable influence to attack the right targets. And in the case of the UK, your prime target should be the venerable Royal Society.

GabrielHBay
May 5, 2013 4:16 am

I also strongly believe that we should compare apples with apples: in other words, real-life developments with the appropriate model scenario. Even to include model projections based on emissions scenarios totally different from those that actually came to pass is a major confusing factor for the uninitiated, apart from being unscientific. The message is not clear enough, especially considering the intended audience. I would not present this graph to anyone whom I wanted to persuade of anything.
