Benchmarking IPCC's warming predictions

By Christopher Monckton of Brenchley

The IPCC’s forthcoming Fifth Assessment Report continues to suggest that the Earth will warm rapidly in the 21st century. How far does observed reality fall short of its projections?

A monthly benchmark graph, circulated widely to the news media, will help to dispel the costly notion that the world continues to warm at a rapid and dangerous rate.

The objective is to compare the IPCC’s projections with observed temperature changes at a glance.

The IPCC’s interval of temperature projections from 2005 is taken from the spaghetti-graph in AR5, which was based on 34 models running four anthropogenic-forcing scenarios.


Curiously, the back-projections for the training period from 2005-2013 are not centered on the observational record (shown in black): they lie substantially above the outturn. Nevertheless, I have followed the IPCC, adopting the approximate upper and lower bounds of its spaghetti-graph.

The 34 models’ central projection (in yellow below) is that warming from 2005-2050 should occur at a rate equivalent to approximately 2.3 Cº/century. This is below the IPCC’s long-established 3 Cº centennial prediction because the models expect warming to accelerate after 2050. The IPCC’s lower-bound and upper-bound projections are equivalent to 1.1 and 3.6 Cº/century respectively.


The temperature scale at left is zeroed to the observed temperature anomaly for January 2005. Offsets from this point determine the slopes of the models’ projections.

Here is the outturn graph. The IPCC’s projections are shown in pale blue.


The monthly global mean UAH observed lower-troposphere temperature anomalies are plotted from the beginning of the millennium in January 2001 to the latest available month (currently April 2013).

The satellite record is preferred because lower-troposphere measurements are somewhat less sensitive to urban heat-island effects than terrestrial measurements, and are very much less likely to have been tampered with.

January 2001 was chosen as a starting-point because it is sufficiently far from the Great El Niño of 1998 to prevent any distortion of the trend-line arising from the remarkable spike in global temperatures that year.

Since the 0.05 Cº measurement uncertainty even in satellite temperature anomalies is substantial, a simple least-squares linear regression trend is preferred to a higher-order polynomial fit.

The simplest test for statistical significance in the trend is adopted. Is the warming or cooling trend over the period of record greater than the measurement error in the dataset? On this basis, the zone of insignificance is shown in pink. At present the trend is at the upper bound of that zone and is thus barely significant.
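The regression and the significance test described above can be sketched in a few lines. This is an illustrative reconstruction, not the author's actual calculation; the series below is synthetic, and the ±0.05 Cº figure is the measurement uncertainty quoted in the text:

```python
import numpy as np

def trend_c_per_century(anomalies):
    """Least-squares linear trend of a monthly anomaly series (in C),
    expressed as the equivalent rate in C per century."""
    months = np.arange(len(anomalies))
    slope_per_month, _intercept = np.polyfit(months, anomalies, 1)
    return slope_per_month * 12 * 100

def trend_is_significant(anomalies, measurement_error=0.05):
    """The simple test described in the text: is the total warming or
    cooling implied by the trend over the period of record greater
    than the measurement error in the dataset?"""
    years = len(anomalies) / 12
    total_change = trend_c_per_century(anomalies) * years / 100
    return abs(total_change) > measurement_error

# Synthetic 148-month series warming at 0.5 C/century, plus noise
rng = np.random.default_rng(42)
series = 0.5 / 1200 * np.arange(148) + rng.normal(0, 0.08, 148)
print(trend_c_per_century(series), trend_is_significant(series))
```

With the actual UAH series in place of the synthetic one, the 0.5 Cº/century figure in the text would come out of `trend_c_per_century`; the pink "zone of insignificance" on the graph is then just the band of trends whose implied total change stays within the measurement error.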

The entire trend-line lies beneath the interval of IPCC projections. Though this outcome is partly an artefact of the IPCC’s unorthodox training period, the slope of the linear trend, at just 0.5 Cº/century over the past 148 months, is less than half the slope of the IPCC’s lower-bound estimate of 1.1 Cº/century to 2050.

The principal result, shown in the panel at top left on the graph, is that the 0.5 Cº/century equivalent observed rate of warming over the past 12 years and 4 months is below a quarter of the 2.3 Cº/century rate that is the IPCC models’ current central projection of warming to 2050.

The only moment when the temperature anomaly reached the IPCC’s central estimate was at the peak of the substantial el Niño of 2010.

The RSS dataset, for which the April anomaly is not yet available, shows statistically significant cooling since January 2001 at a rate equivalent to 0.6 Cº/century.

Combining the two satellite temperature datasets by taking their arithmetic mean is legitimate, since their spatial coverage is similar. Net outturn is a statistically insignificant cooling at a rate equivalent to 0.1 Cº/century this millennium.
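Since the combination described is simply a month-by-month arithmetic mean, a minimal sketch (with illustrative numbers, not the actual UAH and RSS anomalies) would be:

```python
import numpy as np

def combine_anomalies(uah, rss):
    """Month-by-month arithmetic mean of two anomaly series.
    Legitimate here only because the two satellite datasets have
    similar spatial coverage; series must cover the same months."""
    uah, rss = np.asarray(uah, float), np.asarray(rss, float)
    if uah.shape != rss.shape:
        raise ValueError("the two series must cover the same months")
    return (uah + rss) / 2.0

# Illustrative only: a warming series and a cooling series largely
# cancel in the mean, as with the near-zero net figure in the text.
print(combine_anomalies([0.10, 0.20], [0.00, -0.10]))  # -> [0.05 0.05]
```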

The discrepancy between the models’ projections and the observed outturn is startling. As the long period without statistically significant warming (at least 17 years on all datasets; 23 years on the RSS data) continues, even another great el Niño will do little to bring the multi-decadal warming rate up to the IPCC’s lowest projection, which is equivalent to 1.1 Cº/century to 2050.

Indeed, the maximum global warming rate sustained for more than a decade in the entire global instrumental record – equivalent to 1.7 Cº/century – is well below the IPCC’s mean projected warming rate of 2.3 Cº/century to 2050.

This discrepancy raises serious questions about the reliability of the models’ projections. Since theory would lead us to expect some anthropogenic warming, its absence suggests the models are undervaluing natural influences such as the Sun, whose activity is now rapidly declining following the near-Grand Maximum of 1925-1995 that peaked in 1960.

The models are also unable to predict the naturally-occurring changes in cloud cover which, according to one recent paper echoing a paper by me that was published three years ago, may have accounted for four and a half times as much warming from 1976-2001 as all other influences, including the influence of Man.

Nor can the models – or anyone else – predict el Niños more than a few months in advance. There is evidence to suggest that the ratio of el Niño to la Niña oscillations, which has declined recently, is a significant driver of medium-term temperature variation.


It is also possible that the models are inherently too sensitive to changes in radiative forcing and are taking insufficient account of the cooling effect of non-radiative transports.

Furthermore, the models, in multiplying direct forcings by 3 to allow for allegedly net-positive temperature feedbacks, are relying upon an equation which, while applicable to the process engineering of electronic amplifiers for which it was designed, has no physical meaning in the real climate.
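For reference, the feedback relation the passage has in mind is usually written as follows (this is the standard textbook presentation of the Bode-style closed-loop gain, not a formula drawn from any particular IPCC document):

```latex
\Delta T = G\,\lambda_0\,\Delta F, \qquad G = \frac{1}{1-f},
```

where \(\lambda_0\) is the no-feedback (Planck) sensitivity parameter, \(\Delta F\) the direct radiative forcing, \(f\) the closed-loop feedback fraction, and \(G\) the system gain. The "multiplying by 3" corresponds to \(G \approx 3\), i.e. \(f \approx 2/3\); with net feedbacks near zero, \(f \to 0\), \(G \to 1\), and only the direct warming remains.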

Without the Bode equation, net feedbacks may well be vanishingly different from zero, in which event the warming in response to a CO2 doubling, which is about the same as the centennial warming, will be equivalent to the IPCC’s currently-predicted minimum warming rate, equivalent to 1.1 Cº/century.

Be that as it may, as the above graph from the draft Fifth Assessment Report shows, in each of the four previous IPCC Assessment Reports the models have wildly over-projected the warming rate compared with the observed outturn, and, as the new outturn graph shows, the Fifth Assessment Report does the same.

I should be interested in readers’ reactions to the method and output. Would you like any changes to the monthly graph? And would it be worthwhile to circulate the monthly-updated graph widely to the news media as an answer to their dim question, “Why don’t you believe in global warming?”

Because there hasn’t been any to speak of this millennium, that’s why. The trouble that many of the media have taken to conceal this fact is shameful. This single, simple monthly graph, if widely circulated, will make it very much harder for them to pretend that the rate of global warming is accelerating and we are to blame, or that the “consensus” they have lazily accepted is trustworthy.

The climate scare has only lasted as long as it has because the truth that the models have failed and the world has scarcely warmed has been artfully hidden. Let it be hidden no longer.


High Treason

All the insane carbon mitigation measures are based on the worst case scenarios. Strange how they are routinely WRONG. You would think that Tim Flannery made all these predictions.

Margaret Hardman

The final paragraph is interesting, especially compared to the graphs you adduce as your evidence. I suggest a closer look as the two assertions do not match.

I do not like your final “Warming was over-predicted” graph. To the average layman seeing it on a TV screen, it looks like there is fairly reasonable agreement between the observations and the models. It might be better to extend the x-axis (and projections) out to 2100 and the y-axis to +4 or +6 C, and do away with the gray colored area. You need to keep the graph simple and clear if you expect the everyday, average person with little-to-no scientific training to get the point of the graph in 10 seconds or less.

By extending it to 2100 you could then also block-shade the graph at a temperature increase of between 2 and 3 degrees in light yellow (indicating a possible cause for concern) and the area above 3 degrees in light red (probably a real cause for concern). The area of the graph under 2 C can be shaded in light green, indicating “little cause for concern”. I think changes like this would get the point across to the masses much more readily than the original as is.


Oh dear, the WMO has categorised 2012 as one of the hottest years on record.

The Ghost Of Big Jim Cooley

I have to agree with alcheson (above).


I think your graph is clearer than the one used in the article. If it’s for the media, any graphic needs to be very obvious in putting over its point.

David L.

As someone whose job is to conduct stability studies on new Big Pharma drugs and predict shelf life out to 2 years, I’d say at best we’re on track for 0.5C warming by 2050.
I love how their models have year-to-year granularity. They can’t predict next year’s data but they know there will be a little uptick around 2033 (model with highest projections), for example.


The key idea here is ‘going to’, which usefully comes with an ever-extendable timeline and can therefore never be wrong. It’s not a scientific approach, of course, but then this is not science in the first place.


Of what relevance is it to show modelled temperature forecast from emissions scenarios that did not come to pass? Should we not compare apples with CGI apples? How far out are the models then?


The whole ”carbon” problem was dreamt up to cover a supposed shortfall of insolation due to a total misunderstanding of reality. The warmist reality is a flat earth collecting energy 24/7, i.e., no day/night intervals, just total daylight/energy input. Real reality is a rotating sphere collecting energy for 12 of the 24 hours of the day. Insolation is more than enough for the average temperature of +15C. There is no need of the failed GHE theory, therefore no ”carbon” problem.


By inspection, linear regression is the wrong model for the observations; as presented, the graph will not persuade the layman that the IPCC is wrong.


Personally I think the chart suggested by vukcevic at May 5th 2013, 1.14 am makes a clearer statement, eliminating the IPCC back-casting. But publication of either, or another, would be a good idea.


On the right-hand side of the dotted line, opposite “historical”, I would have “hysterical”.
Why not? It’s true, it doesn’t detract from the science, it will give the editors a byline, and the punter will get it.

Greg Goodman

Christopher Monckton says “Furthermore, the models, in multiplying direct forcings by 3 to allow for allegedly net-positive temperature feedbacks, are relying upon an equation which, while applicable to the process engineering of electronic amplifiers for which it was designed, has no physical meaning in the real climate.
Without the Bode equation ….
I should be interested in readers’ reactions…”
Two vague mentions of two papers, one by yourself and another: it would be good to reference both properly. If it’s not verifiable, it’s not science, etc.
”Allegedly net-positive temperature feedbacks”: my understanding was that this happened via “parameterisations” (aka guesses) of cloud cover; I was unaware of the use of the Bode equation in all this. Where can information on how this is used be found? It is more likely that it is being misapplied or twisted to give a desired result. I think Roy Spencer showed that it only needs a 2% error in cloud change to equal CO2 forcing. No way can anyone claim these “parameterisations” are anywhere near that accurate.
“… while applicable to the process engineering of electronic amplifiers for which it was designed, has no physical meaning in the real climate.”
It is not confined to electronics; there are a lot of reasons why we should be taking a systems-engineering approach to analysing climate and climate data rather than econometrics/statistics. At least both should be applied to see if either is of use.
Here is one example of an engineering look at just what kind of “accelerated melting” is happening in the Arctic. There are strong and obvious oscillatory patterns.
Some of the frequencies can be tied back to periods also found in SST; others are present in SSN. There is certainly more to this than CO2 plus random ‘red’ noise.
So if you have a reference for your point about the Bode equation, it would be better to cite it. Maybe it needs an engineer to look and see how it is being misapplied.

Your post at May 5, 2013 at 2:48 am asks

Of what relevance is it to show modelled temperature forecast from emissions scenarios that did not come to pass? Should we not compare apples with CGI apples? How far out are the models then?

The models are completely off then.
This is because the “committed warming” has not happened.
The explanation for this is in IPCC AR4 (2007) Chapter 10.7 which can be read at
It says there

The multi-model average warming for all radiative forcing agents held constant at year 2000 (reported earlier for several of the models by Meehl et al., 2005c), is about 0.6°C for the period 2090 to 2099 relative to the 1980 to 1999 reference period. This is roughly the magnitude of warming simulated in the 20th century. Applying the same uncertainty assessment as for the SRES scenarios in Fig. 10.29 (–40 to +60%), the likely uncertainty range is 0.3°C to 0.9°C. Hansen et al. (2005a) calculate the current energy imbalance of the Earth to be 0.85 W m–2, implying that the unrealised global warming is about 0.6°C without any further increase in radiative forcing. The committed warming trend values show a rate of warming averaged over the first two decades of the 21st century of about 0.1°C per decade, due mainly to the slow response of the oceans. About twice as much warming (0.2°C per decade) would be expected if emissions are within the range of the SRES scenarios.

In other words, it was expected that global temperature would rise at an average rate of “0.2°C per decade” over the first two decades of this century with half of this rise being due to atmospheric GHG emissions which were already in the system.
This assertion of “committed warming” should have had large uncertainty because the Report was published in 2007 and there was then no indication of any global temperature rise over the previous 7 years. There has still not been any rise and we are now way past the half-way mark of the “first two decades of the 21st century”.
So, if this “committed warming” is to occur such as to provide a rise of 0.2°C per decade by 2020 then global temperature would need to rise over the next 7 years by about 0.4°C. And this assumes the “average” rise over the two decades is the difference between the temperatures at 2000 and 2020. If the average rise of each of the two decades is assumed to be the “average” (i.e. linear trend) over those two decades then global temperature now needs to rise before 2020 by more than it rose over the entire twentieth century. It only rose ~0.8°C over the entire twentieth century.
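The arithmetic in the paragraph above can be checked mechanically. This sketch (mine, not the commenter's) encodes the simpler "endpoint" reading; the 0.2 °C/decade figure is from the quoted AR4 text, and the zero observed rise is the comment's own premise:

```python
def required_rise_by_2020(rate_per_decade=0.2, start=2000, end=2020,
                          observed_so_far=0.0):
    """Endpoint reading: total rise implied by the stated average rate
    between start and end, minus what has actually been observed, is
    what must still occur in the remaining years."""
    implied_total = rate_per_decade * (end - start) / 10.0
    return implied_total - observed_so_far

# 0.2 C/decade over 2000-2020 with no rise observed by 2013 leaves
# about 0.4 C to appear in the remaining 7 years -- half the ~0.8 C
# rise of the entire twentieth century.
print(required_rise_by_2020())  # -> 0.4
```

The stricter "linear trend" reading in the comment requires an even steeper late rise, since the flat 2000-2013 segment drags the fitted trend down.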
Simply, the “committed warming” has disappeared (perhaps it has eloped with Trenberth’s ‘missing heat’?).
This disappearance of the “committed warming” is – of itself – sufficient to falsify the AGW hypothesis as emulated by climate models. If we reach 2020 without any detection of the “committed warming” then it will be 100% certain that all projections of global warming are complete bunkum.

Greg Goodman

Oops, wrong link for Arctic plot:

Jim Cripwell

Lord Monckton writes, “A monthly benchmark graph, circulated widely to the news media, will help to dispel the costly notion that the world continues to warm at a rapid and dangerous rate.”
I am afraid you have the wrong target. The problem is not the media. The problem is the learned scientific societies, headed by the Royal Society, the American Physical Society, and the World Meteorological Organization. As long as these learned bodies continue their overwhelming support for the hoax of CAGW, the media can, and will, quite legitimately, continue to print pro-CAGW nonsense.
I, for one, would appreciate it if you would use your considerable influence to attack the right targets. And in the case of the UK, your prime target should be the venerable Royal Society.


I also strongly believe that we should compare apples with apples. In other words, real life development with the appropriate model scenario. To even have the model projections which are based on totally different emission scenarios than actually came to pass on the presentation is just a major confusing factor to the uninitiated, apart from being unscientific. The message is not clear enough, especially considering the intended audience. I would not present this graph to anyone whom I wanted to persuade of anything.

Kevin Hearle

The graphics are fine if you are mathematically educated, but if you want to get a message across in the press to the 99% of the public, then you need a very good illustrator to take the graphics and convert them to illustrations without loss of integrity of the message. An illustrator will get them camera-ready for publication in the dailies.


As Tonyb said, Vuk’s graphic is much clearer.


For God’s sake, leave the good Lord alone. He’s made an extraordinary effort to explain the REAL data as it is, and all you can do is nitpick. Shame on you lot! LOL

Evert Jesse

Now this is the message I would like to see in the 50 to 1 movie. Just show that the boffins are disconnected from reality. Many people would accept this much more easily than cost-benefit analyses.


vukcevic says, May 5, 2013 at 1:14 am:
“Here is another version:
Indeed very neat and tidy.
To show how global surface temperatures have actually progressed, though, better discard the HadCRUt4 graph and bring the HadCRUt3 back out instead. If you then adjust this down by 0.064 degrees post 1998 (to amend the never corrected or even addressed, but easily documented, artificial jump in mean temperature level that occurred from one month to the next in 1997/98, following a switch by the Hadley Centre from one source of SST data to another, at the seam between the two), you will end up with the following graph (1970-2012) (I took the liberty of using your figure, hope that’s OK):
Incidentally, this ‘pristine’ version of the HadCRUt3 curve correlates the temperature evolution of the surface of the globe to an almost uncanny degree of perfection (the only real difference being the amplitudes) with that of the lower troposphere above it (as per the RSS tlt dataset):

Ben D.

Luv u Vuk…

Jim McCulley

Christopher, thanks. What you wrote about ENSO was great:
“Nor can the models – or anyone else – predict el Niños more than a few months in advance. There is evidence to suggest that the ratio of el Niño to la Niña oscillations, which has declined recently, is a significant driver of medium-term temperature variation.”

michael hart

The gray area is just put in to deceive the unwary. If predictions look like breaching the confidence-level bounds, then draw in the lower confidence-level bounds! But try not to make it obvious what you have done. A standard trick.
Roy Spencer has a nice graph:
The short article he wrote with it is delightfully easy to read and understand:

Bill Illis

One could eliminate some of the RCPs to simplify the graphs.
The 2.6 (3.0) scenario is just not going to happen. We are almost there already and this scenario envisions huge emissions reductions. RCP 8.5 is also very unrealistic and we are not on track to come even close to this scenario (although it does appear in many climate science papers – scare factor I guess).
The comments above that no chart will convince the committed is apt. For the most part, they only believe a data presentation if it has a line going up and they automatically don’t believe anything that shows their belief system might be in error.
It is better to think of these charts as for objective people only. Forget the committed. They will change when they decide to change.


I agree with others to leave out the hindcast portion. The casual reader of the MSM will assume those were forecast values and that there was good predictive value during that period.

Always… always, I am taken to school in such marvelous ways by so many of your posts and those of your guests, in this case the esteemed Viscount Monckton. I rail quite often, if not ad nauseam, about the actual “data” supporting none of the Algorite alarmist claims, but yet, like an angry baby not wanting to eat its mushed peas, the same farcical “the world is going to end last Tuesday” nonsense is cried loudly and repeatedly, and I imagine with a similar runny nose to boot (no doubt from this chilly spring).


May 5, 2013 at 5:03 am:
“Incidentally, this ‘pristine’ version of the HadCRUt3 curve correlates the temperature evolution of the surface of the globe to an almost uncanny degree of perfection (the only real difference being the amplitudes) with that of the lower troposphere above it (as per the RSS tlt dataset):

BTW, the ‘corrected’ HadCRUt3gl also matches very well indeed global SSTs (as represented by the satellite-based Reynolds OI.v2 dataset, 1982-2012):


Speaking as a non-scientist, I do not think vukcevic’s graphs portray the problem as well as Lord Monckton’s graph. People will see Vuk’s temp line is still in the pink (95% certainty). Also, the elimination of projections to the left does not show the extent of error over time. That is an important part of the message. The suggestions by alcheson would make Monckton’s graph communicate clearly to any reader. Tiptop.

Tom J

Ok, I’m privy to some super top secret information. After long, thoughtful consideration that took maybe two seconds I’ve decided to reveal it to you. Ready?
There is other life in the universe!
And, in fact, these highly advanced life forms have visited this planet. Super secretly. You see, space travel exceeding ‘light-speed’ does indeed exist. Believe it or not, they actually do call it ‘warp’ speed. What a coincidence.
Believe it or not, there’s not just one highly intelligent life form out there. There’s several. A long, long time ago they formed an intergalactic space council. And they’ve been debating on whether to introduce themselves to us. And, invite us in to a seat on the intergalactic space council. But they devised a test to see whether our governing structures were intelligent enough to be worthy to join them.
Now they know that time is not a constant. But they also know that a time travel machine cannot exist. Therefore one cannot know the future. So they cleverly created a situation to see if we could realize that. As you can see from those IPCC scenarios above (predicting temperatures from expectations of human activities waaaay into the future) our governing bodies flunked that part of the test. Big time!
Then they wanted to see if our governing structures would take totally chaotic random noise and believe they could actually tease out a signal from it without any possibility of self-serving political or personal bias. Well, from looking at the IPCC graphs, guess what? That was flunked too. Big time!
There were numerous other features of the test. And all I can say is: Big time! Big time! Big time! Big time! Big time! Big time! Big time! Big time! …….


I like the idea of putting these graphs, or something similar, out into the popular media arena. However, I am not sure the general public is the right audience to make a difference. It seems to me that the group of people who are the difference makers are those educated and sophisticated people who believe in CAGW because they believe the scientific authorities who tell them it is true. This group includes academics whose field of study is not science, but who have incorporated a version of science into their instruction, rather indiscriminately, and do not believe or give any credence to the non-academic who might challenge it.
My anecdotal evidence for this is the 4 years I recently spent at the university, which included 3 years in law school. Overwhelmingly, the coursework was based on the assumption that the global warming theory was unarguably true, and that everything else flowed from there. This is true regardless of the subject of the course.
One project I worked on was geared entirely toward figuring out how to convince farmers to change their crop selection so as to best minimize global warming–specifically how to convince them to grow the crops that could be used for cellulosic ethanol. (Never mind that there is no commercially viable cellulosic ethanol plant around.) Farmers who based crop selection on winter soil moisture or commodity markets or both, and those who viewed the weather and climate as cyclical, were belittled in project meetings as uneducated rubes. Those who espoused any sort of religious belief (“It’s been this way since God made the world”) were even more scorned by the project researchers as worse than uneducated. The “team members” were otherwise nice people.
Sorry I am so cynical.

Christopher Monckton – This is a good idea, but I think your graphs are too complex to get across to the general public, whose attention span is now measured in microseconds.
Suggested guideline: If the graph is too complex for a roadside ad, then it is too complex.
I would suggest something dead simple. Graph heading: “Global Warming?”. Two values graphed: (1) the average prediction of the models, labelled “Predicted”; (2) the average satellite measurement, labelled “Measured”. The prediction runs from the un-cherry-picked date of the AR4 prediction to, say, 2030 (measured starts earlier and obviously ends in 2013). Footing: “Global Hogwash!”. No error bars, error ranges, variations, etc., and probably annual data rather than monthly so that there is less noise. But with the graph you could provide the explanation and links to more sophisticated graphs and, importantly, the data, so that the proper scientific perspectives are available to those who wish to investigate them.

kadaka (KD Knoebel)

Dusty said May 5, 2013 at 3:06 am:

By inspection, linear regression is the wrong model for the observations; as presented, the graph will not persuade the layman that the IPCC is wrong.

Strange, I can just look at that “outturn” graph (figure 3) that used the linear regression, see “The IPCC thinks this would have happened”, see the gap between it and “This was the reality”, and know the IPCC was wrong.
Did I become so much smarter than a layman that now such a graph automatically makes sense to me, or are you actually telling me that a layman, by present standards, is abysmally ignorant and unintelligent compared to me?

Peter Shaw

These graphics are generally lucid and balanced to me (a former scientist who has had to present information to business executives).
They avoid “dumbing down”, and are less cluttered than the vukcevic graphic preferred by some (I suspect professional) persons.
You don’t characterise the IPCC range limits. If these are (as I suspect) ensemble limits, they indicate “beyond reasonable doubt”, which the general public understands.
Also, from sport, they have sense of “that ball’s going out of play” (as this one undoubtedly will). Your first chart captures this.
If you have to use confidence limits, the betting public understands odds (20:1) better than p = 0.05.
I hope this helps; the current standard of graphics on world media is generally woefully low, so needs good examples.

Lew Skannen

As David L mentions above, I have also noticed that model estimates that are out of the ballpark still somehow claim year-to-year granularity. There must be some equations churning out this pap. Do we ever get to see these models’ workings?

Once the black line in Vukcevic’s chart falls out of the light red area, here’s how WUWT should headline it:
The “97%” were 95% Certain–but 100% Wrong.


Roy Spencer’s graph is by far the easiest to view and understand. See above. Dim journos and the public should easily understand his graphs shown clearly below the models.


Why I have a hard time believing any climate forecasts …
Today’s weather forecasts are produced from an ensemble of weather models that apparently take in far more variables and data than the climate models, and so far here in the Colorado mountains you can toss a coin and get the same results as our accurate weather forecasts. If they can’t forecast tomorrow’s weather very accurately, how do they expect me to believe climate forecasts 20 or 30 years out?

Thanks, Christopher, Lord Monckton.
I would show your graphic in my pages. I would like it better though, if it did not contain the blue, red and black boldface conclusions; these should be in the translatable text caption for the graph, to be read by Google and other search engines.
With so much text in the graphic itself I would have to be redundant in the English version caption and a lot of English would have to be shown in the Spanish pages, like

I suggest that to circulate the IPCC projections every month would be counterproductive, as it would continue to give publicity and some validity to their warming projections as if they were relevant, serious science. If the current cooling were to continue on the monthly updates, the Alarmist establishment and their supporting MSM propagandists would merely counter with their epicycle-like arguments to suggest that, even though you don’t see it, it is really there in the oceans, or is disguised by China burning coal. They would also be quick to say that blizzards and cold, droughts and floods, or whatever weather actually occurs is due to warming, or climate change as they now prefer.
The realist side should hammer away at the fundamental gross errors of reason and logic which are the core of the models. They are simply structured incorrectly by assuming that CO2 is the main climate driver. CO2 follows temperature and therefore cannot rationally be the climate driver. This absurdity is then compounded by adding the warming effect of the major greenhouse gas, water vapour, as a feedback and counting its contribution in calculating the sensitivity to CO2. Whether this is stupidity and incompetence or deliberate deception is for the modellers to decide.
What should be publicized and checked against the incoming data are different projections made by quite different approaches. I, with of course all due modesty, suggest that the projections made in my post “Global Cooling Methods and Testable Decadal Predictions” on my site at
would be a useful place to start.
Here is a summary
1 Significant temperature drop at about 2016-17
2 Possible unusual cold snap 2021-22
3 Built in cooling trend until at least 2024
4 Temperature Hadsst3 moving average anomaly 2035 – 0.15
5 Temperature Hadsst3 moving average anomaly 2100 – 0.5
6 General Conclusion – by 2100 all the 20th century temperature rise will have been reversed.
7 By 2650 earth could possibly be back to the depths of the little ice age.
8 The effect of increasing CO2 emissions will be minor but beneficial – they may slightly ameliorate the forecast cooling and help maintain crop yields .
9 Warning!! There are some signs in the Livingston and Penn solar data that a sudden drop to Maunder Minimum Little Ice Age temperatures could be imminent – with a much more rapid and economically disruptive cooling than that forecast above, which may turn out to be a best-case scenario.
I also posted on the same blog in 2010 a “Thirty year climate forecast” and in June 2012 a two-year update on that forecast. So far it is looking much, much better than anything the IPCC models have produced – not that that is much of a standard to compete with.
The IPCC models should be ignored as being irrelevant.

Jeff Norman

I agree that Dr. Spencer’s graphical presentation is the clearest, though I think it would benefit from a vertical line separating the forecast from the hindcast. Otherwise people might think they got the early 90s cool spike right.
Tom J is sort of right, it is a test of intelligence. If we pass, we get to continue on much the same as we have been. If we fail our civilization collapses in the next glaciation period.


I suspect we will be living with the IPCC idiocy, and with the bunk-buddy-reviewers at IPCC-endorsed journals like the JGR circus, for a few decades yet.
On a prudent level the US can withhold funding to the UN and restrict travel and publishing funds in national grants, i.e. travel and publishing charges will need approval based on justification by the funding agency with the Office of Inspector General oversight and acknowledgement as a requirement.


Here is an updated summary of various forecasts of the global temperature anomaly [HadCRUT3] for the end of 2017 [after the next 5 years].
The data have been calculated or eyeballed from data or graphs available on the internet.
JAMES HANSEN: 1.4 C for A option; 1.2 C for B option; 0.6 C for C option
44 MODELS: 0.9 C models median [range 0.49 to 1.4 C] – global LT temperature, 44 latest models [R. Spencer]
38 MODELS: 0.75 C models median [range 0.25 to 1.2 C] – global CMIP5 RCP4.5, 38 models
IPCC MODELS: 0.49 C [range 0 to 0.85 C] – per AR5, Monckton article May 5, 2013
MET OFFICE [UK]: 0.43 C [0.28 C to 0.59 C] – was 0.76 C previously
G. ORSSENGO: 0.226 C [0.1 C to 0.55 C lower and upper limits] – statistical model based on GMTA HadCRUT3gl
0.0 C based on 1880-1915 past trend
0.4 C based on 1945-1977 past trend

Ian W

This should be put across much as Viscount Monckton already has – “Good News!! There is no global warming, and satellite measurements show that warming is not in any way a world-ending threat; indeed, the slight warming will be beneficial!!”
Whatever graphic is decided upon, it must be a valid comparison with the CO2 emissions as they have been, which are higher than the ‘business as usual’ model of the IPCC. All the other IPCC models, based on CO2 emissions being reduced or held at 1995 levels etc., should be removed as they are not pertinent and just confuse the issue.
Then the message is: your jobs are being sent to China and your fuel and energy costs are going up based on a failed prediction of global warming.
People should stop talking in AGW newspeak stupidities like ‘climate denier’. The entire ‘climate change’ argument is based on CO2 warming the atmosphere and causing feedbacks that further warm the atmosphere, aka Global Warming – and that entire claim has been proven WRONG: there is no dangerous warming. Yet the politicians are still taxing you, and destroying and exporting jobs, based on this incorrect claim. The subtext could then go on to question the motives and/or mental capacity of the politicians and the various federal agencies, such as the EPA, who must know the good news that the forecasts of catastrophe are all turning out to be hopelessly wrong.