Benchmarking IPCC's warming predictions

By Christopher Monckton of Brenchley

The IPCC’s forthcoming Fifth Assessment Report continues to suggest that the Earth will warm rapidly in the 21st century. How far are its projections short of observed reality?

A monthly benchmark graph, circulated widely to the news media, will help to dispel the costly notion that the world continues to warm at a rapid and dangerous rate.

The objective is to compare the IPCC’s projections with observed temperature changes at a glance.

The IPCC’s interval of temperature projections from 2005 is taken from the spaghetti-graph in AR5, which was based on 34 models running four anthropogenic-forcing scenarios.

[Figure 1: the AR5 spaghetti-graph of the 34 models' temperature projections under four anthropogenic-forcing scenarios]

Curiously, the back-projections for the training period 2005-2013 are not centred on the observational record (shown in black): they lie substantially above outturn. Nevertheless, I have followed the IPCC, adopting the approximate upper and lower bounds of its spaghetti-graph.

The 34 models’ central projection (in yellow below) is that warming from 2005-2050 should occur at a rate equivalent to approximately 2.3 Cº/century. This is below the IPCC’s long-established 3 Cº centennial prediction because the models expect warming to accelerate after 2050. The IPCC’s upper-bound and lower-bound projections are equivalent to 1.1 and 3.6 Cº/century respectively.

[Figure 2: the 34 models' central projection (yellow) and the upper- and lower-bound projections, 2005-2050]

The temperature scale at left is zeroed to the observed temperature anomaly for January 2005. Offsets from this point determine the slopes of the models’ projections.
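
As a minimal sketch of that zeroing step, assuming the anomalies are held in a pandas Series indexed by month (the function name and layout are illustrative, not anything from the article):

```python
import pandas as pd

def rebaseline(anoms: pd.Series, ref: str = "2005-01-01") -> pd.Series:
    """Shift a monthly anomaly series so the reference month
    (January 2005 here) reads exactly zero; every other value then
    becomes an offset from that point."""
    return anoms - anoms.loc[pd.Timestamp(ref)]
```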

Here is the outturn graph. The IPCC’s projections are shown in pale blue.

[Figure 3: the outturn graph: observed UAH anomalies and linear trend against the IPCC's projection interval (pale blue)]

The monthly global mean UAH observed lower-troposphere temperature anomalies (vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc.lt) are plotted from the beginning of the millennium in January 2001 to the latest available month (currently April 2013).
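
A sketch of how such a file might be read, assuming (this is an assumption, not the documented format) whitespace-delimited data rows beginning with year and month followed by the global anomaly; the actual layout should be checked against the link above:

```python
import numpy as np

def load_uah(path: str):
    """Read (decimal year, global anomaly) pairs from a UAH-style
    whitespace-delimited file, skipping any row that does not begin
    with a numeric year."""
    times, anoms = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 3 or not parts[0].isdigit():
                continue  # header, trailer or blank line
            year, month = int(parts[0]), int(parts[1])
            times.append(year + (month - 0.5) / 12.0)  # mid-month decimal year
            anoms.append(float(parts[2]))
    return np.array(times), np.array(anoms)
```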

The satellite record is preferred because lower-troposphere measurements are somewhat less sensitive to urban heat-island effects than terrestrial measurements, and are very much less likely to have been tampered with.

January 2001 was chosen as a starting-point because it is sufficiently far from the Great El Niño of 1998 to prevent any distortion of the trend-line arising from the remarkable spike in global temperatures that year.

Since even the satellite temperature anomalies carry a measurement uncertainty of 0.05 Cº, which is substantial relative to the small trends at issue, a simple least-squares linear regression trend is preferred to a higher-order polynomial fit.
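
The trend itself is then a one-line least-squares fit; a sketch, reusing the hypothetical loader above:

```python
import numpy as np

def centennial_trend(t_years: np.ndarray, anoms: np.ndarray) -> float:
    """Least-squares linear trend of the anomalies, expressed as the
    equivalent warming rate in Cº per century."""
    slope_per_year = np.polyfit(t_years, anoms, 1)[0]  # Cº per year
    return 100.0 * slope_per_year

# e.g. restrict to January 2001 through April 2013 and print the rate:
# t, a = load_uah("uahncdc.lt")
# sel = (t >= 2001.0) & (t <= 2013.3)
# print(f"{centennial_trend(t[sel], a[sel]):+.2f} Cº/century")
```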

The simplest test for statistical significance in the trend is adopted. Is the warming or cooling trend over the period of record greater than the measurement error in the dataset? On this basis, the zone of insignificance is shown in pink. At present the trend is at the upper bound of that zone and is thus barely significant.
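
On my reading of the test described, the total change implied by the trend over the record is compared with the 0.05 Cº measurement error; a hedged sketch (the function name is mine, and the numbers reproduce the "barely significant" verdict above):

```python
def barely_significant(trend_c_per_century: float, months: int,
                       measurement_error: float = 0.05) -> bool:
    """Simple significance test as described in the article: the total
    change implied by the trend over the whole record must exceed the
    dataset's stated measurement error."""
    total_change = trend_c_per_century * months / 1200.0  # 1200 months per century
    return abs(total_change) > measurement_error

# 0.5 Cº/century over 148 months implies about 0.062 Cº of change,
# just above the 0.05 Cº error: significant, but only barely.
print(barely_significant(0.5, 148))  # True
```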

The entire trend-line is beneath the interval of IPCC projections. Though this outcome is partly an artefact of the IPCC’s unorthodox training period, the slope of the linear trend, at just 0.5 Cº/century over the past 148 months, is visibly below half the slope of the IPCC’s lower-bound estimate of 1.1 Cº/century to 2050.

The principal result, shown in the panel at top left on the graph, is that the 0.5 Cº/century equivalent observed rate of warming over the past 12 years and 4 months is below a quarter of the 2.3 Cº/century rate that is the IPCC models’ current central projection of warming to 2050.

The only moment when the temperature anomaly reached the IPCC’s central estimate was at the peak of the substantial el Niño of 2010.

The RSS dataset, for which the April anomaly is not yet available, shows statistically significant cooling since January 2001 at a rate equivalent to 0.6 Cº/century.

Combining the two satellite temperature datasets by taking their arithmetic mean is legitimate, since their spatial coverage is similar. Net outturn is a statistically insignificant cooling at a rate equivalent to 0.1 Cº/century this millennium.
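
A sketch of that combination, again assuming the two series are held as pandas Series on a common monthly index:

```python
import pandas as pd

def combined_satellite(uah: pd.Series, rss: pd.Series) -> pd.Series:
    """Arithmetic mean of the two lower-troposphere series on the
    months both report; no area weighting is attempted, since the
    article notes their spatial coverage is similar."""
    return pd.concat([uah, rss], axis=1).dropna().mean(axis=1)
```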

The discrepancy between the models' projections and the observed outturn is startling. As the long period without statistically-significant warming (at least 17 years on all datasets; 23 years on the RSS data) continues, even another great el Niño will do little to bring the multi-decadal warming rate up to the IPCC's lowest projection, which is equivalent to 1.1 Cº/century to 2050.

Indeed, the maximum global warming rate sustained for more than a decade in the entire global instrumental record – equivalent to 1.7 Cº/century – is well below the IPCC’s mean projected warming rate of 2.3 Cº/century to 2050.

This discrepancy raises serious questions about the reliability of the models’ projections. Since theory would lead us to expect some anthropogenic warming, its absence suggests the models are undervaluing natural influences such as the Sun, whose activity is now rapidly declining following the near-Grand Maximum of 1925-1995 that peaked in 1960.

The models are also unable to predict the naturally-occurring changes in cloud cover which, according to one recent paper (echoing a paper of mine published three years ago), may have accounted for four and a half times as much warming from 1976-2001 as all other influences, including the influence of Man.

Nor can the models – or anyone else – predict el Niños more than a few months in advance. There is evidence to suggest that the ratio of el Niño to la Niña oscillations, which has declined recently, is a significant driver of medium-term temperature variation.

[Figure 4: graph from the draft Fifth Assessment Report comparing the projections of the four previous Assessment Reports with the observed outturn]

It is also possible that the models are inherently too sensitive to changes in radiative forcing and are taking insufficient account of the cooling effect of non-radiative transports.

Furthermore, the models, in multiplying direct forcings by 3 to allow for allegedly net-positive temperature feedbacks, are relying upon an equation which, while applicable to the process engineering of electronic amplifiers for which it was designed, has no physical meaning in the real climate.

Without the Bode equation, net feedbacks may well differ only vanishingly from zero, in which event the warming in response to a CO2 doubling, which is about the same as the centennial warming, will be equivalent to the IPCC's currently-predicted minimum warming rate of 1.1 Cº/century.
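
For readers who have not met it, the Bode-style relation being criticised is conventionally written as follows (the symbols are the standard ones, not taken from the article):

\[ \Delta T \;=\; \frac{\Delta T_0}{1 - f} \]

where \( \Delta T_0 \) is the direct, no-feedback warming and \( f \) is the net feedback factor. A loop gain of \( f \approx 2/3 \) triples the direct response, which is the multiplication by 3 referred to above; as \( f \to 0 \), \( \Delta T \to \Delta T_0 \), recovering the roughly 1.1 Cº no-feedback figure cited in the paragraph above.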

Be that as it may, as the above graph from the draft Fifth Assessment Report shows, in each of the four previous IPCC Assessment Reports the models have wildly over-projected the warming rate compared with the observed outturn, and, as the new outturn graph shows, the Fifth Assessment Report does the same.

I should be interested in readers’ reactions to the method and output. Would you like any changes to the monthly graph? And would it be worthwhile to circulate the monthly-updated graph widely to the news media as an answer to their dim question, “Why don’t you believe in global warming?”

Because there hasn’t been any to speak of this millennium, that’s why. The trouble that many of the media have taken to conceal this fact is shameful. This single, simple monthly graph, if widely circulated, will make it very much harder for them to pretend that the rate of global warming is accelerating and we are to blame, or that the “consensus” they have lazily accepted is trustworthy.

The climate scare has only lasted as long as it has because the truth that the models have failed and the world has scarcely warmed has been artfully hidden. Let it be hidden no longer.

High Treason
May 5, 2013 12:48 am

All the insane carbon mitigation measures are based on the worst case scenarios. Strange how they are routinely WRONG. You would think that Tim Flannery made all these predictions.

Margaret Hardman
May 5, 2013 12:49 am

The final paragraph is interesting, especially compared to the graphs you adduce as your evidence. I suggest a closer look as the two assertions do not match.

May 5, 2013 12:55 am

I do not like your final “Warming was over-predicted” graph. To the average layman, if they saw that on a TV screen, it looks like there is fairly reasonable agreement between the observations and the models. Might be better to extend the x-axis (and projections) out to 2100 and the Y axis to +4 or +6 C, and do away with the gray colored area. Need to keep the graph simple and clear if you expect the everyday, average person with little-to-no scientific training to get the point of the graph in 10 seconds or less.

May 5, 2013 1:09 am

By extending it to 2100 you could then also block-shade the graph at a temperature increase of between 2 and 3 degrees in light yellow (indicating a possible cause for concern) and the area above 3 degrees in light red (probably a real cause for concern). The area of the graph under 2C can be shaded in light green indicating “little cause for concern”. I think changes like this would get the point across to the masses much more readily than the original “as is”.
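
For anyone wanting to try that shading, a minimal matplotlib sketch (the thresholds and colours follow the suggestion above; everything else is illustrative):

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_xlim(2000, 2100)
ax.set_ylim(-1, 6)
# Concern zones as suggested: green below 2 C, yellow 2-3 C, red above 3 C
ax.axhspan(-1, 2, color="palegreen", alpha=0.5, label="little cause for concern")
ax.axhspan(2, 3, color="lightyellow", alpha=0.7, label="possible cause for concern")
ax.axhspan(3, 6, color="mistyrose", alpha=0.7, label="real cause for concern")
ax.set_xlabel("Year")
ax.set_ylabel("Warming since 2000 (Cº)")
ax.legend(loc="upper left")
plt.show()
```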

Moe
May 5, 2013 1:11 am

Oh dear, the WMO has categorised 2012 as one of the hottest years on record.

May 5, 2013 1:14 am

Here is another version:
http://www.vukcevic.talktalk.net/GR1.htm

The Ghost Of Big Jim Cooley
May 5, 2013 1:33 am

I have to agree with alcheson (above).

Tonyb
Editor
May 5, 2013 1:34 am

Vuk
I think your graph is clearer than the one used in the article. If it’s for the media, any graphic needs to be very obvious in putting over its point.
Tonyb

David L.
May 5, 2013 2:20 am

As someone whose job is to conduct stability studies on new Big Pharma drugs and predict shelf life out to 2 years, I’d say at best we’re on track for 0.5C warming by 2050.
I love how their models have year-to-year granularity. They can’t predict next year’s data but they know there will be a little uptick around 2033 (model with highest projections), for example.

knr
May 5, 2013 2:25 am

The key idea here is ‘going to’, which usefully comes with an ever-extendable timeline and can therefore never be wrong. It’s not a scientific approach, of course, but then this is not science in the first place.

ancientmariner
May 5, 2013 2:48 am

Of what relevance is it to show modelled temperature forecasts from emissions scenarios that did not come to pass? Should we not compare apples with CGI apples? How far out are the models then?

johnmarshall
May 5, 2013 2:51 am

The whole “carbon” problem was dreamt up to cover a supposed shortfall of insolation arising from a total misunderstanding of reality. The warmist reality is a flat earth collecting energy 24/7, i.e. no day/night intervals, just continuous daylight/energy input. Real reality is a rotating sphere collecting energy for 12 hours of the 24-hour day. Insolation is more than enough to account for the average temperature of +15C. There is no need of the failed GHE theory, and therefore no “carbon” problem.

Dusty
May 5, 2013 3:06 am

By inspection, linear regression is the wrong model for the observations; as presented, the graph will not persuade the layman that the IPCC is wrong.

Parthlan
May 5, 2013 3:12 am

Personally I think the chart suggested by vukcevic at May 5, 2013, 1:14 am makes a clearer statement, eliminating the IPCC back-casting. But publication of either, or another, would be a good idea.

honestyoz
May 5, 2013 3:39 am

On the right-hand side of the dotted line, opposite “historical”, I would put “hysterical”.
Why not? It’s true, it doesn’t detract from the science, it will give the editors a byline, and the punter will get it.

Greg Goodman
May 5, 2013 3:43 am

Christopher Monckton says “Furthermore, the models, in multiplying direct forcings by 3 to allow for allegedly net-positive temperature feedbacks, are relying upon an equation which, while applicable to the process engineering of electronic amplifiers for which it was designed, has no physical meaning in the real climate.
Without the Bode equation ….
I should be interested in readers’ reactions…”
Two vague mentions of two papers, one by yourself and one other: it would be good to reference both properly. If it’s not verifiable, it’s not science, etc.
“Allegedly net-positive temperature feedbacks”: my understanding was that this happened via “parameterisations” (aka guesses) of cloud cover; I was unaware of the use of the Bode equation in all this. Where can information on how this is used be found? It is more likely that it is being misapplied or twisted to give a desired result. I think Roy Spencer showed that it only needs a 2% error in cloud change to equal CO2 forcing. No way can anyone claim these “parameterisations” are anywhere near that accurate.
“… while applicable to the process engineering of electronic amplifiers for which it was designed, has no physical meaning in the real climate.”
It is not confined to electronics; there are a lot of reasons why we should be taking a systems-engineering approach to analysing climate and climate data, rather than econometrics/statistics. At the least, both should be applied to see if either is of use.
Here is one example of an engineering look at just what kind of “accelerated melting” is happening in the Arctic. There are strong and obvious oscillatory patterns.
http://climategrog.wordpress.com/wp-admin/post.php?post=216&action=edit
Some of the frequencies can be tied back to periods also found in SST; others are present in SSN. There is certainly more to this than CO2 plus random ‘red’ noise.
So if you have a reference for your point about the Bode equation, it would be better to cite it. Maybe it needs an engineer to look and see how it is being misapplied.

richardscourtney
May 5, 2013 3:45 am

ancientmariner:
Your post at May 5, 2013 at 2:48 am asks

Of what relevance is it to show modelled temperature forecasts from emissions scenarios that did not come to pass? Should we not compare apples with CGI apples? How far out are the models then?

The models are completely off then.
This is because the “committed warming” has not happened.
The explanation for this is in IPCC AR4 (2007) Chapter 10.7 which can be read at
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch10s10-7.html
It says there

The multi-model average warming for all radiative forcing agents held constant at year 2000 (reported earlier for several of the models by Meehl et al., 2005c), is about 0.6°C for the period 2090 to 2099 relative to the 1980 to 1999 reference period. This is roughly the magnitude of warming simulated in the 20th century. Applying the same uncertainty assessment as for the SRES scenarios in Fig. 10.29 (–40 to +60%), the likely uncertainty range is 0.3°C to 0.9°C. Hansen et al. (2005a) calculate the current energy imbalance of the Earth to be 0.85 W m–2, implying that the unrealised global warming is about 0.6°C without any further increase in radiative forcing. The committed warming trend values show a rate of warming averaged over the first two decades of the 21st century of about 0.1°C per decade, due mainly to the slow response of the oceans. About twice as much warming (0.2°C per decade) would be expected if emissions are within the range of the SRES scenarios.

In other words, it was expected that global temperature would rise at an average rate of “0.2°C per decade” over the first two decades of this century with half of this rise being due to atmospheric GHG emissions which were already in the system.
This assertion of “committed warming” should have had large uncertainty because the Report was published in 2007 and there was then no indication of any global temperature rise over the previous 7 years. There has still not been any rise and we are now way past the half-way mark of the “first two decades of the 21st century”.
So, if this “committed warming” is to occur such as to provide a rise of 0.2°C per decade by 2020 then global temperature would need to rise over the next 7 years by about 0.4°C. And this assumes the “average” rise over the two decades is the difference between the temperatures at 2000 and 2020. If the average rise of each of the two decades is assumed to be the “average” (i.e. linear trend) over those two decades then global temperature now needs to rise before 2020 by more than it rose over the entire twentieth century. It only rose ~0.8°C over the entire twentieth century.
Simply, the “committed warming” has disappeared (perhaps it has eloped with Trenberth’s ‘missing heat’?).
This disappearance of the “committed warming” is – of itself – sufficient to falsify the AGW hypothesis as emulated by climate models. If we reach 2020 without any detection of the “committed warming” then it will be 100% certain that all projections of global warming are complete bunkum.
Richard
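
richardscourtney’s arithmetic above is easy to check; a sketch taking the comment’s figures at face value:

```python
# Figures as quoted in the comment above (AR4 committed-warming rate,
# and the comment's premise of no rise since 2000).
expected_rate = 0.2          # Cº per decade over 2000-2020
rise_so_far = 0.0            # premise: no warming in the first 13 years
years_left = 7

required = expected_rate * 2.0 - rise_so_far   # 0.4 Cº still to come by 2020
rate_needed = required / (years_left / 10.0)   # per decade
print(f"{required:.1f} Cº needed by 2020, i.e. {rate_needed:.2f} Cº/decade")
# -> 0.4 Cº by 2020, about 0.57 Cº/decade: several times the comment's
#    quoted 20th-century rise of ~0.8 Cº per century (0.08 Cº/decade).
```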

Greg Goodman
May 5, 2013 3:46 am

Oops wrong link for Artic plot :
http://climategrog.wordpress.com/?attachment_id=216

Jim Cripwell
May 5, 2013 3:47 am

Lord Monckton writes: “A monthly benchmark graph, circulated widely to the news media, will help to dispel the costly notion that the world continues to warm at a rapid and dangerous rate.”
I am afraid you have the wrong target. The problem is not the media. The problem is the learned scientific societies, headed by the Royal Society, the American Physical Society, and the World Meteorological Organization. As long as these learned bodies continue their overwhelming support for the hoax of CAGW, the media can, and will, quite legitimately, continue to print pro-CAGW nonsense.
I, for one, would appreciate it if you would use your considerable influence to attack the right targets. And in the case of the UK, your prime target should be the venerable Royal Society.

GabrielHBay
May 5, 2013 4:16 am

I also strongly believe that we should compare apples with apples: in other words, real-life developments with the appropriate model scenario. Even to include model projections based on emission scenarios totally different from those that actually came to pass is a major confusing factor for the uninitiated, apart from being unscientific. The message is not clear enough, especially considering the intended audience. I would not present this graph to anyone whom I wanted to persuade of anything.

Kevin Hearle
May 5, 2013 4:23 am

The graphics are fine if you are mathematically educated, but if you want to get a message across in the press to the 99% of the public then you need a very good illustrator to take the graphics and convert them to illustrations without loss of the integrity of the message. An illustrator will get them camera-ready for publication in the dailies.

tobyglyn
May 5, 2013 4:26 am

As Tonyb said, Vuk’s graphic is much clearer.
http://www.vukcevic.talktalk.net/GR1.htm

Elizabeth
May 5, 2013 5:00 am

For God’s sake leave the good Lord alone; he’s made an extraordinary effort to explain the REAL data as it is, and all you can do is nit-pick. Shame on you lot! LOL

Evert Jesse
May 5, 2013 5:01 am

Now this is the message I would like to see in the 50 to 1 movie. Just show that the boffins are disconnected from reality. Many people would accept this much more easily than cost-benefit analyses.

Kristian
May 5, 2013 5:03 am

vukcevic says, May 5, 2013 at 1:14 am:
“Here is another version: http://www.vukcevic.talktalk.net/GR1.htm
Indeed very neat and tidy.
To show how global surface temperatures have actually progressed, though, better discard the HadCRUt4 graph and bring the HadCRUt3 back out instead. If you then adjust this down by 0.064 degrees post 1998 (to amend the never corrected or even addressed, but easily documented, artificial jump in mean temperature level that occurred from one month to the next in 1997/98, following a switch by the Hadley Centre from one source of SST data to another, at the seam between the two), you will end up with the following graph (1970-2012) (I took the liberty of using your figure, hope that’s OK):
http://i1172.photobucket.com/albums/r565/Keyell/GR1_zpsc8aa34df.png
Incidentally, this ‘pristine’ version of the HadCRUt3 curve correlates the temperature evolution of the surface of the globe to an almost uncanny degree of perfection (the only real difference being the amplitudes) with that of the lower troposphere above it (as per the RSS tlt dataset):
http://i1172.photobucket.com/albums/r565/Keyell/Sfcvstlt3_zpsebcad562.png

Ben D.
May 5, 2013 5:03 am

Luv u Vuk…

Editor
May 5, 2013 5:30 am

Christopher, thanks. What you wrote about ENSO was great:
“Nor can the models – or anyone else – predict el Niños more than a few months in advance. There is evidence to suggest that the ratio of el Niño to la Niña oscillations, which has declined recently, is a significant driver of medium-term temperature variation.”

michael hart
May 5, 2013 5:49 am

The gray area is just put in to deceive the unwary. If predictions look like breaching the confidence-level bounds, then draw in the lower confidence-level bounds! But try not to make it obvious what you have done. A standard trick.
Roy Spencer has a nice graph:
http://www.drroyspencer.com/wp-content/uploads/CMIP5-global-LT-vs-UAH-and-RSS.png
The short article he wrote with it is delightfully easy to read and understand:
http://www.drroyspencer.com/2013/04/global-warming-slowdown-the-view-from-space/

Bill Illis
May 5, 2013 5:58 am

One could eliminate some of the RCP’s to simplify the graphs.
The 2.6 (3.0) scenario is just not going to happen. We are almost there already and this scenario envisions huge emissions reductions. RCP 8.5 is also very unrealistic and we are not on track to come even close to this scenario (although it does appear in many climate science papers – scare factor I guess).
http://www.pik-potsdam.de/~mmalte/rcps/graphics/RadiativeForcingRCPs.jpg
The comments above that no chart will convince the committed are apt. For the most part, they only believe a data presentation if it has a line going up, and they automatically disbelieve anything that shows their belief system might be in error.
It is better to think of these charts as for objective people only. Forget the committed. They will change when they decide to change.

MinB
May 5, 2013 6:02 am

I agree with others to leave out the hindcast portion. The casual reader of the MSM will assume those were forecast values and that there was good predictive value during that period.

May 5, 2013 6:06 am

Always... always, I am taken to school in such marvelous ways by so many of your posts and those of your Guests, in this case The Esteemed Viscount Monckton. I rail quite often, if not ad nauseam, about the actual “data” supporting none of the Algorite alarmist claims, but yet, like an angry baby not wanting to eat its mushed peas, the same farcical the-world-is-going-to-end-last-Tuesday nonsense is cried loudly and repeatedly, and I imagine with a similar runny nose to boot (no doubt from this chilly spring).

Kristian
May 5, 2013 6:07 am

May 5, 2013 at 5:03 am:
“Incidentally, this ‘pristine’ version of the HadCRUt3 curve correlates the temperature evolution of the surface of the globe to an almost uncanny degree of perfection (the only real difference being the amplitudes) with that of the lower troposphere above it (as per the RSS tlt dataset):
http://i1172.photobucket.com/albums/r565/Keyell/Sfcvstlt3_zpsebcad562.png

BTW, the ‘corrected’ HadCRUt3gl also matches very well indeed global SSTs (as represented by the satellite-based Reynolds OI.v2 dataset, 1982-2012):
http://i1172.photobucket.com/albums/r565/Keyell/SfcvstltampSST_zps06535124.png

jbutzi
May 5, 2013 6:10 am

Speaking as a non-scientist, I do not think vukcevic’s graphs portray the problem as well as Lord Monckton’s graph. People will see Vuk’s temperature line is still in the pink (95% certainty). Also, the elimination of projections to the left does not show the extent of the error over time. That is an important part of the message. The suggestions by alcheson would make Monckton’s graph communicate clearly to any reader. Tiptop.

May 5, 2013 6:11 am

Ok, I’m privy to some super top secret information. After long, thoughtful consideration that took maybe two seconds I’ve decided to reveal it to you. Ready?
There is other life in the universe!
And, in fact, these highly advanced life forms have visited this planet. Super secretly. You see, space travel exceeding ‘light-speed’ does indeed exist. Believe it or not, they actually do call it ‘warp’ speed. What a coincidence.
Believe it or not, there’s not just one highly intelligent life form out there. There’s several. A long, long time ago they formed an intergalactic space council. And they’ve been debating on whether to introduce themselves to us. And, invite us in to a seat on the intergalactic space council. But they devised a test to see whether our governing structures were intelligent enough to be worthy to join them.
Now they know that time is not a constant. But they also know that a time travel machine cannot exist. Therefore one cannot know the future. So they cleverly created a situation to see if we could realize that. As you can see from those IPCC scenarios above (predicting temperatures from expectations of human activities waaaay into the future) our governing bodies flunked that part of the test. Big time!
Then they wanted to see if our governing structures would take totally chaotic random noise and believe they could actually tease out a signal from it without any possibility of self-serving political or personal bias. Well, from looking at the IPCC graphs, guess what? That was flunked too. Big time!
There were numerous other features of the test. And all I can say is: Big time! Big time! Big time! Big time! Big time! Big time! Big time! Big time! …….

starzmom
May 5, 2013 6:14 am

I like the idea of putting these graphs, or something similar out into the popular media arena. However, I am not sure the general public is the right audience to make a difference. It seems to me that the group of people who are the difference makers are those educated and sophisticated people who believe in CAGW because they believe the scientific authorities who tell them it is true. This group includes academics whose field of study is not science, but who have incorporated a version of science into their instruction, rather indiscriminately, and do not believe or give any credence to the non-academic who might challenge it.
My anecdotal evidence for this is the 4 years I recently spent at the university, which included 3 years in law school. Overwhelmingly, the coursework was based on the assumption that the global warming theory was unarguably true, and that everything else flowed from there. This is true regardless of the subject of the course.
One project I worked on was geared entirely toward figuring out how to convince farmers to change their crop selection so as to best minimize global warming–specifically how to convince them to grow the crops that could be used for cellulosic ethanol. (Never mind that there is no commercially viable cellulosic ethanol plant around.) Farmers who based crop selection on winter soil moisture or commodity markets or both, and those who viewed the weather and climate as cyclical, were belittled in project meetings as uneducated rubes. Those who espoused any sort of religious belief (“It’s been this way since God made the world”) were even more scorned by the project researchers as worse than uneducated. The “team members” were otherwise nice people.
Sorry I am so cynical.

Editor
May 5, 2013 6:56 am

Christopher Monckton – This is a good idea, but I think your graphs are too complex to get across to the general public, whose attention span is now measured in microseconds.
Suggested guideline: If the graph is too complex for a roadside ad, then it is too complex.
I would suggest something dead simple. Graph heading “Global Warming?”. Two values graphed: (1) the average prediction of the models, labelled “Predicted”; (2) the average satellite measurement, labelled “Measured”. The prediction runs from the un-cherry-picked date of the AR4 prediction to, say, 2030 (measured starts earlier and obviously ends in 2013). Footing: “Global Hogwash!”. No error bars, error ranges, variations, etc., and probably use annual data rather than monthly so that there is less noise. But with the graph, you could provide the explanation and links to more sophisticated graphs and, importantly, the data, so that the proper scientific perspectives are available to those who wish to investigate them.

kadaka (KD Knoebel)
May 5, 2013 7:09 am

Dusty said May 5, 2013 at 3:06 am:

By inspection, linear regression is the wrong model for the observations; as presented, the graph will not persuade the layman that the IPCC is wrong.

Strange, I can just look at that “outturn” graph (figure 3) that used the linear regression, see “The IPCC thinks this would have happened”, see the gap between it and “This was the reality”, and know the IPCC was wrong.
Did I become so much smarter than a layman that now such a graph automatically makes sense to me, or are you actually telling me that a layman, by present standards, is abysmally ignorant and unintelligent compared to me?

Peter Shaw
May 5, 2013 7:16 am

These graphics are generally lucid and balanced to me (a former scientist who has had to present information to business executives).
They avoid “dumbing down”, and are less cluttered than the vukcevic graphic preferred by some (I suspect professional) persons.
You don’t characterise the IPCC range limits. If these are (as I suspect) ensemble limits, they indicate “beyond reasonable doubt”, which the general public understands.
Also, from sport, they have sense of “that ball’s going out of play” (as this one undoubtedly will). Your first chart captures this.
If you have to use confidence limits, the betting public understands odds (20:1) better than p = 0.05.
I hope this helps; the current standard of graphics on world media is generally woefully low, so needs good examples.

Lew Skannen
May 5, 2013 7:20 am

As David L mentions above, I have also noticed that model estimates that are out of the ballpark still somehow claim year-to-year granularity. There must be some equations churning out this pap. Do we ever get to see these models’ workings?

Roger Knights
May 5, 2013 7:49 am

Once the black line in Vukcevic’s chart falls out of the light red area, here’s how WUWT should headline it:
The “97%” were 95% Certain–but 100% Wrong.

Neville.
May 5, 2013 7:55 am

Roy Spencer’s graph is by far the easiest to view and understand. See above. Dim journos and the public should easily understand it, with the observations shown clearly below the models.

bean
May 5, 2013 8:02 am

Why I have a hard time believing any climate forecasts …
Today’s weather forecasts are produced from an ensemble of weather models that apparently take in far more variables and data than the climate models and so far here in the Colorado mountains, you can toss a coin and get the same results as our accurate weather forecasts. If they can’t forecast tomorrow’s weather very accurately how do they expect me to believe climate forecasts 20 or 30 years out?

May 5, 2013 8:23 am

Thanks, Christopher, Lord Monckton.
I would show your graphic in my pages. I would like it better though, if it did not contain the blue, red and black boldface conclusions; these should be in the translatable text caption for the graph, to be read by Google and other search engines.
With so much text in the graphic itself I would have to be redundant in the English version caption and a lot of English would have to be shown in the Spanish pages, like http://www.oarval.org/CambioClimaBW.htm

May 5, 2013 8:29 am

I suggest that to circulate the IPCC projections every month would be counterproductive, as it would continue to give publicity and some validity to their warming projections as if they were relevant, serious science. If the current cooling were to continue on the monthly updates, the Alarmist establishment and their supporting MSM propagandists would merely counter with their epicycle-like arguments to suggest that, even though you don’t see it, it is really there in the oceans, or is disguised by China burning coal. They would also be quick to say that blizzards and cold, droughts and floods, or whatever weather actually occurs, is due to warming, or to climate change as they now prefer.
The realist side should hammer away at the fundamental gross errors of reason and logic which are at the core of the models. They are simply structured incorrectly, by assuming that CO2 is the main climate driver. CO2 follows temperature and therefore cannot rationally be the climate driver. This absurdity is then compounded by adding the warming effect of the major greenhouse gas, water vapour, as a feedback, and counting its contribution in calculating the sensitivity to CO2. Whether this is stupidity and incompetence or deliberate deception is for the modellers to decide.
What should be publicized and checked against the incoming data are different projections made by quite different approaches. I, with of course all due modesty, suggest that the projections made in my post “Global Cooling Methods and Testable Decadal Predictions” on my site at
http://climatesense-norpag.blogspot.com
would be a useful place to start.
Here is a summary
1 Significant temperature drop at about 2016-17
2 Possible unusual cold snap 2021-22
3 Built in cooling trend until at least 2024
4 Temperature Hadsst3 moving average anomaly 2035 – 0.15
5 Temperature Hadsst3 moving average anomaly 2100 – 0.5
6 General Conclusion – by 2100 all the 20th century temperature rise will have been reversed,
7 By 2650 earth could possibly be back to the depths of the little ice age.
8 The effect of increasing CO2 emissions will be minor but beneficial – they may slightly ameliorate the forecast cooling and help maintain crop yields .
9 Warning !! There are some signs in the Livingston and Penn Solar data that a sudden drop to the Maunder Minimum Little Ice Age temperatures could be imminent – with a much more rapid and economically disruptive cooling than that forecast above which may turn out to be a best case scenario.
I also posted on the same blog in 2010 a “Thirty year climate forecast”, and in June 2012 a two-year update on that forecast. So far it is looking much, much better than anything the IPCC models have produced – not that that is much of a standard to compete with.
The IPCC models should be ignored as being irrelevant.

Jeff Norman
May 5, 2013 9:01 am

I agree that Dr. Spencer’s graphical presentation is the clearest, though I think it would benefit from a vertical line separating the forecast from the hindcast. Otherwise people might think they got the early-90s cool spike right.
Tom J is sort of right: it is a test of intelligence. If we pass, we get to continue on much the same as we have been. If we fail, our civilization collapses in the next glaciation period.

Master_Of_Puppets
May 5, 2013 9:02 am

I suspect we will be living with the IPCC idiocy and the bunk-buddy-reviewers at IPCC endorsed journals like the JGR circus idiocy for a few decades yet.
On a prudent level the US can withhold funding to the UN and restrict travel and publishing funds in national grants, i.e. travel and publishing charges will need approval based on justification by the funding agency with the Office of Inspector General oversight and acknowledgement as a requirement.

herkimer
May 5, 2013 9:15 am

Here is an updated summary of various forecasts of the global temperature anomaly [HadCRUT3] for the end of 2017 [after the next 5 years].
The data has been calculated or eyeballed from data or graphs available on the internet.
FORECASTS TO 2017
JAMES HANSEN: 1.4 C (scenario A); 1.2 C (scenario B); 0.6 C (scenario C)
44 MODELS: 0.9 C models median [range 0.49 to 1.4 C] (global LT temperature, 44 latest models, per R. Spencer)
38 MODELS: 0.75 C models median [range 0.25 to 1.2 C] (global CMIP5 RCP4.5, 38 models)
IPCC MODELS: 0.49 C [range 0 to 0.85 C] (per AR5, Monckton article of May 5, 2013)
CLIVE BEST: 0.55 to 0.7 C (adjusted, based on the A1B and B1 scenarios)
MET OFFICE [UK]: 0.43 C [range 0.28 to 0.59 C] (was 0.76 C previously)
N. SCAFETTA: 0.45 C [range 0.3 to 0.55 C] (harmonic model)
P. MICHAELS: 0.4 to 0.5 C (adjusted trend of the IPCC)
TALLBLOKE: 0.4 to 0.5 C (based on sea surface temperature)
FRANK LEMKE: 0.4 C [range 0.3 to 0.5 C] (self-organising predictive model)
G. ORSSENGO: 0.226 C [range 0.1 to 0.55 C, lower and upper limits] (statistical model based on GMTA HadCRUT3gl)
D. EASTERBROOK: -0.1 C (based on 1790-1820 past trend); 0.0 C (based on 1880-1915 past trend); 0.4 C (based on 1945-1977 past trend)
S.-I. AKASOFU: < 0.5 C (based on past temperature pattern)

Ian W
May 5, 2013 9:21 am

This should be put across much as Viscount Monckton already has: “Good News!! – there is no global warming, and satellite measurements show that warming is not in any way a world-ending threat; indeed, the slight warming will be beneficial!!”
Whatever graphic is decided upon, it must be a valid comparison with CO2 emissions as they have actually been, which is higher than the ‘business as usual’ model of the IPCC. All the other IPCC models, based on CO2 emissions being reduced or held at 1995 levels etc., should be removed, as they are not pertinent and just confuse the issue.
Then the message is: your jobs are being sent to China and your fuel and energy costs are going up based on a failed prediction of global warming.
People should stop talking in AGW-newspeak stupidities like ‘climate denier’. The entire ‘climate change’ argument is based on CO2 warming the atmosphere and causing feedbacks that further warm the atmosphere, aka Global Warming, and that entire claim has been proven WRONG: there is no dangerous warming. Yet the politicians are still taxing you, and destroying and exporting jobs, based on this incorrect claim. The subtext could then go on to question the motives and/or mental capacity of the politicians and the various federal agencies such as the EPA, who must know the good news that the forecasts of catastrophe are all turning out to be hopelessly wrong.

Matthew R Marler
May 5, 2013 9:58 am

Monthly updates? No.
Semi-annual updates? Yes

Matthew R Marler
May 5, 2013 10:03 am

The graphs do not need modification. Some changes will appeal to some readers, others to others. The target audience will learn to read these after they have been sent out consistently for a few years. Do one per page, with an explanatory paragraph under each graph.
I hesitate to give rhetorical advice, but confine political commentary to one short paragraph on p. 5.

Berényi Péter
May 5, 2013 10:48 am

Dear Lord Christopher, you are trying to promote this mistaken notion that the IPCC made predictions of any kind. No, they have not. These are projections, which is entirely another beast.
The truth value of projections is not established by benchmarking, that is, by comparing them with reality. One either believes or denies them. Then, in an ideal world, deniers are executed on the spot with no due process whatsoever, in a 10:10-ish way.
As soon as it is accomplished, one does a head count which is supposed to find an overwhelming consensus in support of said projection, otherwise fall back to the time tested “no pressure” policy and repeat 10:10 above.
That’s the proper way, and now, it is your turn to confess.
/sarc off

Jim Strom
May 5, 2013 10:56 am

I agree with alcheson. At first glance–which is all you’ll get from many readers–it appeared to support the warmist analysis. It doesn’t, but it would be helpful to make that immediately apparent.

clipe
May 5, 2013 11:03 am

On a related [note],
Congratulations to Mike (“Scottish Sceptic”) Haseler!
http://bishophill.squarespace.com/blog/2013/5/5/ukip-scotlands-climate-spokesman.html

clipe
May 5, 2013 11:05 am

note, not not

J Martin
May 5, 2013 11:25 am

@ Lord Monckton & @ Bob Tisdale

“Nor can the models – or anyone else – predict el Niños more than a few months in advance.”

Sorry to disagree with two such eminent sceptics, but I feel compelled to point out that Landscheidt did indeed manage to predict such things a goodly amount of time beforehand.
http://www.john-daly.com/sun-enso/revisit.htm
PS. Many thanks to whoever rescued the John Daly site. An undervalued resource which deserves to be better known.

J Martin
May 5, 2013 11:32 am

Roger Knights said
The “97%” were 95% Certain–but 100% Wrong.

WUWT by all means, followed by the front page of every newspaper in the World.

May 5, 2013 12:15 pm

Since the beginning of the instrument record in 1850 world temperatures have trended up about 3/4 of a degree Celsius and CO2 is up 40%. If the models with their climate sensitivity for CO2 of 3.2°C or so were correct, temperatures should have gone up double that. They didn’t; the models are wrong.
This point is spot on:
“Nor can the models…predict el Niños more than a few months in advance.”
Are they even that good? Anyway, why should we believe a forecast 100 years out that can’t predict el Niños and la Niñas?

NZ Willy
May 5, 2013 12:31 pm

Expect a diversion to maintain the fiction. The diversion has been trial-run using sea level. The tactic is to replace the sea-level metric with a total-volume metric. The diversion was to model that the rising seas will compress the continental margins, and that therefore there is much hidden volume increase — so the charts show increased sea volume, all imaginary of course.
This can be extended to global warming by changing the graphs from average temperature to “total heat content”. The advantage of this new metric is not only that heat can be modelled to be increasing in the crust and oceans, but also (and most importantly) in that no such measure was previously taken, so the regime of adjustments can be extended to the present day — in other words, the new measure allows the past to be adjusted downwards up to the present day. Ta daa, warming is renewed! Just like sea level increases, which stopped a few years ago, have magically been renewed. The media will lap it up as usual.

Theo Goodwin
May 5, 2013 1:01 pm

Bob Tisdale says:
May 5, 2013 at 5:30 am
You deserve considerable credit for the rising interest in ENSO and, more generally, in discrete physical processes in climate science.

Jimbo
May 5, 2013 1:05 pm

The following graph is clear and easy to understand by members of the public, politicians and journalists. They may not like it, but there it is. It manages to show clearly 44 climate models, their average, and recent (2012) observations.
http://www.drroyspencer.com/wp-content/uploads/CMIP5-global-LT-vs-UAH-and-RSS.png

bones
May 5, 2013 1:40 pm

Add my vote for Vukcevic’s graph http://www.vukcevic.talktalk.net/GR1.htm The models shouldn’t get any credit for something that they didn’t predict.

Editor
May 5, 2013 2:09 pm

Jimbo – Two problems with the Roy Spencer graph you posted (http://www.drroyspencer.com/wp-content/uploads/CMIP5-global-LT-vs-UAH-and-RSS.png):
1. The pre-AR4 part of the model predictions are not predictions but hindsighted hindcasts. They are irrelevant and misleading and should not be shown. (Misleading because the hindsighted hindcasts make it look like the models have some skill.)
2. One of the 44 models is reasonably close to the measured. It draws the eye, thus distorting the view. That’s why only the average of the models should be shown, not all 44.
It could be argued that it is correct to show all the models, and for a reasonably sophisticated audience that would be correct. But in Roy Spencer’s graph, the eye is automatically drawn to that particular model, thus giving it much more weight in the eye of the unsophisticated. Much like Steve McIntyre’s demonstration a while back of how weighted red noise can deliver a ‘hockey-stick’. In other words, this graph, while technically correct, is likely to be misleading.
Roger Knights – “The “97%” were 95% Certain–but 100% Wrong.” Brilliant. But will the uninitiated twig to the “97%”, or even the 95%? Perhaps better:
97% of scientists – 100% wrong!

Tad
May 5, 2013 2:14 pm

The monthly updates to the media are a great idea! Unless something is repeated over and over again, they simply do not absorb it.

May 5, 2013 2:47 pm

Hi everyone
While I was enjoying one of the rare nice sunny Sundays in SW London, some of you commented in a complimentary way on the graph
http://www.vukcevic.talktalk.net/GR1.htm
Its origins go back to the IPCC, and it arrived at WUWT via the ‘Mail on Sunday’ in this form:
http://i.dailymail.co.uk/i/pix/2013/03/30/article-2301757-1903167F000005DC-59_634x480.jpg
Someone suggested that only the prediction and not the back-casting should be shown, so I took up the suggestion and produced the above version.
Thanks to all posters; my contribution was only minor, and mainly manual, but it made a hell of a difference to the perception of the prediction’s failure.

Eggy
May 5, 2013 3:03 pm

Have a CO2 relaunch party. Charcoal BBQ with chargrilled everything. Kids can play Pop The CO2 Balloons And Feed The World. Everyone can laugh and giggle in the CO2 foam. Super fizzy beer and pop will quench thirsts, and all sorts of CO2 based fun could be had. Oh, and roadside advertising. Such fun.

May 5, 2013 3:04 pm

“The RSS dataset, for which the April anomaly is not yet available, shows statistically significant cooling since January 2001 at a rate equivalent to 0.6 Cº/century.”
Yes. http://www.volker-doormann.org/images/rss_vs_solar.gif
The temperature seems to lag the ONI index by 0.475 years. Subtracting this ONI function from the RSS temperature, the solar tide function remains.
V.

Myrrh
May 5, 2013 3:07 pm

Berényi Péter says:
May 5, 2013 at 10:48 am
Dear Lord Christopher, you are trying to promote this mistaken notion, that the IPCC made predictions of any kind. No, they have not. These are projections, which is entirely another beast.
http://www.ipcc-data.org/ddc_definitions.html
IPCC
Data Distribution Centre
Definition of Terms Used Within the DDC Pages
Location: Definitions
Projection
The term “projection” is used in two senses in the climate change literature. In general usage, a projection can be regarded as any description of the future and the pathway leading to it. However, a more specific interpretation has been attached to the term “climate projection” by the IPCC when referring to model-derived estimates of future climate.
Forecast/Prediction
When a projection is branded “most likely” it becomes a forecast or prediction. A forecast is often obtained using deterministic models, possibly a set of these, outputs of which can enable some level of confidence to be attached to projections.
===========================
“When a projection is branded “most likely” it becomes a forecast or prediction”.
Forecast and prediction synonymous.

AP
May 5, 2013 3:12 pm

Make it simpler. Most journalists only have Bachelor of Arts degrees and in most cases only ever studied humanities subjects, and therefore have no understanding of the scientific method, mathematics, statistics, trendlines, modelling (the mathematical type), feedback loops, etc. Also, put some emotive words in there. They love that stuff, like our supposedly “angry summer”. Maybe call it the “naughty trendline” which is “punishing the naive model”.

May 5, 2013 3:16 pm

Even the very silly Prof. Tim Flannery—in The Weather Makers: How Man Is Changing the Climate and What It Means for Life on Earth (Melbourne, 2008)—notes that:

the pronouncements of the IPCC do not represent mainstream science, nor even good science, but lowest-common-denominator-science—and of course even that is delivered at glacial speed. [p. 246]

Immediately, however, within the same paragraph, the inconsistent professor observes:

If the IPCC says something, you had better believe it—and then allow for the likelihood that things are far worse than it says they are. [loc. cit.]

AndyG55
May 5, 2013 3:17 pm

“97% of scientists – 100% wrong!”
Please, make that “97% of CLIMATE scientists – 100% wrong!

May 5, 2013 3:28 pm

Thanks, Vukcevic.
Your graph is an improvement on the Mail on Sunday original.

AndyG55
May 5, 2013 3:28 pm

@ vukcevic.. You do good work. 🙂
I do notice that you use the words “official world average temperature.”
I assume that you mean GISS or Hadcrud.
The problem for the warmists is that by hindcasting to pre-1979 GISS or Hadcrud, they are NOT hindcasting to the real temperatures, but to a highly adjusted record, adjusted to make it seem as if there was more warming than there actually was (if any).
By doing this, they have not the slightest hope of ever making decently correct projections.
Hoisted on their own petard, one might say. or ………………… Karma !! 🙂

John Tillman
May 5, 2013 3:46 pm

Seventy-five out of 77 “active climate scientists”, cherry-picked from among more than 3,000 respondents to a simplistic, biased survey sent to over 10,000 scientists, will soon be shown 100% wrong.
Accurate, but not very catchy.

Werner Brozek
May 5, 2013 4:44 pm

If I were to vote, I would vote for Dr. Spencer’s graph.
But I wish to make a note about the other graph with the 75% and 95% lines. Assume the line is at the bottom of the 95% line. Is it then correct to say that there is a 5% chance the IPCC is correct? The reason I am asking is that there is a 2.5% chance the temperatures could be at the upper 95% line and a 2.5% chance it could be at the lower 95% line. So could we say 97.4% of cherry picked climate scientists are 97.5% wrong?

Douglas Proctor
May 5, 2013 4:49 pm

And which of the scenarios starts where we are now and goes to catastrophe?
The only scenarios of concern are those that include the recent past unless all can flip to anything anytime.

Roger Knights
May 5, 2013 4:53 pm

As careeristic climbatologers climb down, one by one, in coming years, WUWT headlines might all have a suffix-phrase attached: “another bottle of beer off the wall.”
Meantime, here’s a start:

“97 bottles of beer on the wall
97 bottles of beer on the wall
If one of those bottles should happen to fall
96 bottles of beer on the wall
(Repeat until down to none, which is where we are heading)”

Bill K
May 5, 2013 5:22 pm

Graphs should also have a CO2 trend line on them, to show the disconnect with temperature.
I like the “97% of Climate Scientists were 100% Wrong” line.

May 5, 2013 6:33 pm

Commenters above have suggested that Lord Monckton’s graphs are too complex for public viewing. That may be true for science-ignorant journalists, but not for the readership at WUWT.
This scientific illiterate has been slowly educated by this, the very, very best website on the WWW, to the point where I am increasingly confident in perusing and dissecting quite complex scientific papers and posts. This osmosis to my and other brains may be the greatest of Anthony Watts’ many achievements.

Roger Knights
May 5, 2013 6:57 pm

Here’s an improved set of captions for that graph once its black line falls out of pink territory:

Upper caption: “An Inconvenient Goof”
Lower caption: “The 97% Climate Consensus was dead certain—and dead wrong.
Don’t Let Them Fool You Twice.”

Konrad
May 5, 2013 6:58 pm

Well, you all missed it.
Viscount Monckton writes –
“It is also possible that the models are inherently too sensitive to changes in radiative forcing and are taking insufficient account of the cooling effect of non-radiative transports.”
That would have to be the understatement of the century.
Radiative gases are of course critical for continued convective circulation in the troposphere. This has been established science for some time. A simple explanation of the role of radiative gases in convective circulation can be found here –
http://www.st-andrews.ac.uk/~dib2/climate/tropics.html
Without radiative cooling at altitude and convective circulation below the tropopause, our atmosphere would heat dramatically.
So how did the pseudo science that adding radiative gases to the atmosphere causes warming get established? AGW supporter site Scienceofdoom has one answer –
http://scienceofdoom.com/2012/12/23/clouds-water-vapor-part-five-back-of-the-envelope-calcs-from-pierrehumbert/
– which includes the following summary of Pierrehumberts wholly un-empirical 1995 claims –
“So increasing the emissivity from zero (increasing “greenhouse” gases) cools the climate to begin with. Then as the emissivity increases past a certain point the warm pool surface temperatures start to increase again.”
A “certain point” was it? How many ppm is that? Empirical evidence? Not likely. This attempt to write the role of radiative gases in convective circulation out of atmospheric science will not stand up to scrutiny. It is no wonder that AGW supporters keep running back to static atmosphere two shell radiative models to justify their absurd claims. The AGW hypothesis fails for an atmosphere in which the gases are free to move.

TomR,Worc,MA,USA
May 5, 2013 7:16 pm

The “97%” were 95% Certain–but 100% Wrong. Watts up with that? ; ^ )

barry
May 5, 2013 10:04 pm

David L,

I love how their models have year-to-year granularity. They can’t predict next year’s data but they know there will be a little uptick around 2033 (model with highest projections), for example.

The model outputs are expressed in year to year, or month to month changes (depending on the model). In no way do the modelers assert that any particular model, or even the ensemble, is meant to be a prediction of what will actually happen on any given year in the future. It is about climatic averages.

RS
May 5, 2013 10:10 pm

Who cares if the models are wrong, what counts is the answer, the answer to EVERYTHING, a powerful global government with vast powers to redistribute wealth, control all aspects of business and life.
Who cares about the QUESTION, it’s the ANSWER that counts. Obey the elite, they know what’s best for everyone. Especially themselves.

barry
May 5, 2013 10:19 pm

It looks to me like the obs have remained within the envelope of projections. In 1998 global temps went higher than any of the projections. In recent years temps have been near the bottom of the projections (individual model runs).
I also see individual model runs with neutral temp trends over almost 20 years that end up being warm, and cooling trends of 10-15 years here and there that also wind up warmer in the long run. So the models predict ~15-year time periods with no warming that eventually end up with warming as time progresses.
The short-term graph (from 2002) covers a ludicrously short time period. To be on the safe side, 20-year blocks should be a minimum to winnow statistically significant results (25 years if using satellite data, which has more variance than the surface records). Model outputs for global temperature are for the surface temperature, not the lower troposphere, so one must use the instrumental records, not the satellite data, to compare apples with apples.
I see nothing here that convincingly demonstrates observations bust the ensemble results.

Rhys Jaggar
May 5, 2013 11:12 pm

The IPCC’s approach is completely flawed since it has all the veracity of asking 100 blokes in the same room to say how big their penises are, then forming a mean male penile size without bothering to get them all to drop their trousers and apply a ruler to their assertions.
Anyone who knows anything about groupthink, psychology and the like knows that no man will say in public to another bunch of blokes that they have a minuscule penis, and few will admit to having a smaller than average one.
Equally, no scientist will say they don’t think much warming will happen if they are talking amongst grant awarding bodies. It’s not the way to get funded in the current regime and their Vice Chancellors or other senior University worthies will be on their case if they don’t keep shovelling millions into the University coffers.
In my opinion, the modellers are all equivalent to ambitious young Tories wanting to get on in the party. As a result, nationalisation is per se evil, management is never to be criticised and workers are acceptable collateral damage.
You don’t need to be Einstein to see that this is bullshit.
I could use the Labour Party as a similar example, please don’t ascribe my example to implying what my political views are. I merely illuminate by highlighting situations where you can’t be dispassionate if you want preferment. Climate modelling is a scenario which fits that reality.

May 6, 2013 12:40 am

How about a chart that illustrates the entire 160 year instrument record,100 years of model projections and that 3.2°C CO2 Climate Sensitivity?
http://oi56.tinypic.com/f3tlb6.jpg

BrendanC
May 6, 2013 2:07 am

It is totally impossible to prove (using valid physics) that the climate is in any way inherently “sensitive to changes in radiative forcing”, as is claimed in this post. Radiation cannot force changes in surface temperature. All that radiation from a cooler atmosphere can do to a warmer surface is slow the one-third of its rate of cooling that is itself due to radiation. The back radiation does this by supplying electromagnetic energy for most of the radiation being emitted from the surface; that portion of the radiation is thus not transferring any thermal energy from the surface.
When you understand this, you can understand the original NASA net-energy diagram, which showed (in terms of incident solar radiation) 19% absorbed by the atmosphere and clouds on the way in, 51% absorbed by the surface, and the remaining 30% reflected. The 51% then exits the surface: 23% by evaporative cooling and thence latent heat, 7% by conduction and thence rising convection, 6% radiated direct to space, and 15% radiated and then absorbed by the atmosphere.
So the atmosphere absorbs more thermal energy from radiation on its way in (19%) than on its way out (15%) and hence there is more of an umbrella effect than a blanket effect.
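[A quick arithmetic check of the percentages quoted above, taking the comment’s figures at face value; whether they match the current NASA budget is a separate question. The incoming shares sum to 100%, and the surface-leaving shares sum back to the 51%.]

incoming = {"absorbed by atmosphere and clouds": 19,
            "absorbed by surface": 51,
            "reflected": 30}
surface_out = {"evaporation / latent heat": 23,
               "conduction / convection": 7,
               "radiated direct to space": 6,
               "radiated, absorbed by atmosphere": 15}
assert sum(incoming.values()) == 100
assert sum(surface_out.values()) == incoming["absorbed by surface"]
print("absorbed by atmosphere on the way in:",
      incoming["absorbed by atmosphere and clouds"], "%")
print("surface radiation absorbed on the way out:",
      surface_out["radiated, absorbed by atmosphere"], "%")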
Now, that’s not all. We find that there is an underlying thermal gradient which has formed autonomously over many years due to the effect of gravity upon individual molecules in free flight between collisions. This is indisputable, because the only way the Second Law of Thermodynamics can establish the required thermodynamic equilibrium with maximum available entropy is for there to be no entropy gradient; but there cannot be both isothermal conditions and no entropy gradient in a sealed, insulated cylinder in a vertical plane. Although there was a post on here some time back saying that a wire outside a cylinder proves the gradient cannot happen, the logic in that was false, simply because a thermal gradient also develops in the wire. It doesn’t matter if the gradients are different: a thermodynamic equilibrium (without any cyclic energy flow) will evolve in the total system, cylinder and wire.
Until people can understand what happens on the planet Uranus, they will probably not understand what happens on Earth. Uranus receives hardly any solar radiation, and far less reaches the base of its troposphere, where the temperature is about 320 K. Yet there is radiative equilibrium, and thus no evidence of any cooling-off process, or of radioactive decay or fission or whatever. What happens is that, when the atmosphere absorbs new energy (such as at dawn), that energy can actually move in all directions, including up the thermal gradient toward the surface, because the thermodynamic equilibrium has been disturbed. In fact any convection, up, down or sideways, happens because of an extra supply of energy, such as when the surface transmits energy by conduction to the atmosphere on Earth.
This is the only way that sufficient energy gets down to heat the Uranus atmosphere at that depth. And it keeps on heating more at greater depths, following the autonomous thermal gradient which occurs in solids, liquids and gases, though weather conditions or extreme absorption (such as in the stratosphere) can override the slow diffusion process that establishes the atmospheric gradient in calm conditions.
Only if you understand (and accept) that the gravitationally induced gradient does occur will you understand how temperatures on Uranus, Venus and Earth reach the observed values. Even temperatures in Earth’s crust and mantle continue to follow this upward gravitational gradient, which varies with the specific heat.
However, water vapor can reduce the gradient, not so much by the release of latent heat, but by the transfer of heat to higher cooler layers by way of radiation. This has an obvious levelling effect, working against the gravity gradient.
Finally, the whole picture comes into focus when you realise that this gradient (in combination with insolation levels) actually predetermines the thermal plot in the troposphere, and thus the surface temperature. Hence, we would expect a supporting temperature which would be lower in more moist regions. The Sun could never have raised the surface temperature to what we observe without such a supporting temperature which slows the rate of cooling in the early hours before dawn. We know this happens, because the surface does not keep cooling at the same rate all night. Furthermore, actual climate data does in fact show that moist regions have lower daily maximum and minimum temperatures than dry ones with similar latitude and altitude – contrary to what is often claimed. But if you think about it, according to the greenhouse effect, water vapour is supposedly doing nearly all of that 33 degrees of warming. So it ought to be doing much more warming in some moist regions than in dry ones. We don’t find this happening, and so the greenhouse effect is not functioning as proposed and, in fact, does not control mean surface temperatures at all.

Myrrh
May 6, 2013 2:16 am

Konrad says:
May 5, 2013 at 6:58 pm
Well, you all missed it.
Viscount Monckton writes –
“It is also possible that the models are inherently too sensitive to changes in radiative forcing and are taking insufficient account of the cooling effect of non-radiative transports.”
That would have to be the understatement of the century.
Radiative gases are of course critical for continued convective circulation in the troposphere. This has been established science for some time. A simple explanation of the role of radiative gases in convective circulation can be found here –
http://www.st-andrews.ac.uk/~dib2/climate/tropics.html
Without radiative cooling at altitude and convective circulation below the tropopause, our atmosphere would heat dramatically.
So how did the pseudo-scientific idea that adding radiative gases to the atmosphere causes warming get established? AGW supporter site Scienceofdoom has one answer –
http://scienceofdoom.com/2012/12/23/clouds-water-vapor-part-five-back-of-the-envelope-calcs-from-pierrehumbert/
– which includes the following summary of Pierrehumbert’s wholly un-empirical 1995 claims –
“So increasing the emissivity from zero (increasing “greenhouse” gases) cools the climate to begin with. Then as the emissivity increases past a certain point the warm pool surface temperatures start to increase again.”
A “certain point” was it? How many ppm is that? Empirical evidence? Not likely. This attempt to write the role of radiative gases in convective circulation out of atmospheric science will not stand up to scrutiny. It is no wonder that AGW supporters keep running back to static atmosphere two shell radiative models to justify their absurd claims. The AGW hypothesis fails for an atmosphere in which the gases are free to move.

==============================
Even their AGW static atmosphere doesn’t exist. The reason they don’t have convection, and consequently no weather at all, is that they have substituted the imaginary “ideal” gas of pre-Van der Waals physics for real gas. They have really, actually, created an “atmosphere” from an ideal-gas scenario without volume. They have nothing to convect, and their ideal gases are zooming off into outer space. There’s nothing static about that.
Their fictional ideal-gas molecules are without mass, so they have no gravity in their world, which is what gives gases relative weight; all their ideal gases go directly from the Earth’s surface to empty space.
They are climate scientists with no climate.
Their AGW gases are non-condensable. Of course they’re not: they have no real gas molecules, only the imaginary ideal-gas hard dots of no mass, nothing, diffusing instantly by their own molecular momentum, zipping at great speeds through empty space miles apart from each other and mixing thoroughly by bouncing off each other in elastic collisions. They don’t have gases rising and sinking in air as they expand when heated and contract when cooled, which is how we get our winds and weather.
Their imaginary ideal gases are not buoyant in air. Of course they’re not: they don’t have any air for a start, but their gases are not buoyant in air because they are hard dots of nothing with no volume to expand, so they cannot become less dense and lighter than air, and so cannot rise, and so cannot be buoyant in air.
Their AGW ideal gases have no attraction; they are hard dots of nothing with no mass, not real molecules of gas. So they have no rain in their carbon cycle, which is the attraction of water and carbon dioxide: all natural, clean, unpolluted rain is carbonic acid.
It is pointless explaining to them in terms of the real physical world around us, because they don’t have any of this. Your explanations don’t make sense in their ideal gas world. Their gases can’t do what your gases do.
There is no internal logic in their fisics (it cannot be called physics, because their AGW world is purely imaginary): in their empty-space atmosphere scenario, without gravity, and with gases without volume and attraction, all their ideal carbon dioxide would continue to diffuse at great speed into outer space, so it could not accumulate in their empty-space atmosphere for the hundreds and thousands of years they claim it does.
They pretend to talk “physics”, giving as an example their ideal gases bouncing off the sides of a container and creating pressure. Where is their invisible container around the Earth keeping in their ideal-gas dots of no-mass nothing, which gravity cannot pull in?
Is this the same invisible barrier at the TOA which they say stops the direct heat of thermal longwave infrared from the Sun?
[Either their Sun doesn’t produce any longwave infrared (the Sun’s thermal energy in transfer by radiation, radiant heat), or its direct radiant heat is stopped by some invisible barrier, like the glass of a greenhouse.]
The cooling by the non-radiative gases in the real world (practically the whole of the atmosphere: the real nitrogen and oxygen that make up our real gas air) is how we get our winds, as hot air rises and cold air sinks.
Volumes (packets) of real gas air expand when heated, become less dense and lighter than the surrounding air under gravity, and rise; volumes of colder air, heavier and denser under gravity, spontaneously sink and flow beneath the less dense.
Rising air takes heat away from the surface, where the heated molecules create areas of low pressure because they are less dense; air sinks from colder areas of high pressure, which it creates by contracting when cold and becoming denser and heavier, in conjunction with gravity giving the volumes weight relative to each other. So: winds flow from high to low.
This is bog standard basic meteorology in the real world. Built on the understanding of the actual properties and processes of real gases under gravity.
AGWScienceFiction’s Greenhouse Effect doesn’t have real gases. It is an imaginary world, a fiction.
What Monckton isn’t taking into account is that water in the atmosphere is also a real gas, and its cooling by convection far exceeds any radiative properties it has (and exceeds the convective cooling of dry air), and that oxygen and nitrogen are the real thermal blanket around our Earth, trapping heat.
Water has a great heat capacity, which means it stores, traps, a great deal of heat before changing temperature.
So water heated at the surface evaporates, taking huge amounts of heat away from the surface, as it becomes even lighter than the real air around it under gravity, expands and, less dense, rises.
In the cooler heights this heat-laden water vapour releases its heat (heat flows spontaneously from hot to cold) and condenses back to liquid water and ice, and so, colder and heavier than air, precipitates out as rain.
AGWScienceFiction has excised the Water Cycle, the cooling cycle of the real Earth.
They have no way to get the clouds they keep rabbiting on about.
Now, it is important to note here exactly what AGWSF has done by sleight of hand to create its Greenhouse Effect illusion, in its claim that “IR-imbibing greenhouse gases, mostly water and carbon dioxide, warm the Earth 33°C from the -18°C it would be without them.”
That -18°C figure is from real world traditional physics and is the temperature of the Earth without any atmosphere at all, not, “without these AGW greenhouse gases”, but without the rest of the whole atmosphere which is practically all nitrogen and oxygen.
The comparison in real traditional physics is with the Moon without an atmosphere, and the comparable figure for the Moon is around -23°C.
AGWSF has committed science fraud here by claiming the -18°C figure relates to absence only of their “greenhouse gases” leaving “the rest of the atmosphere in place”.
Earth with atmosphere: 15°C
Earth without any atmosphere at all: -18°C
Moon without any atmosphere: -23°C
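[For reference, the -18°C figure is reproducible from the standard effective-temperature calculation for a planet with no, or a fully transparent, atmosphere. A minimal sketch, assuming the usual solar constant and a Bond albedo of about 0.30 for Earth; note that the Moon’s measured mean surface temperature is lower than its effective temperature because of its extreme day-night swings and the T^4 nonlinearity, which is one source of the differing Moon figures quoted in threads like this.]

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0       # solar constant, W m^-2

def effective_temp_c(albedo):
    # Blackbody temperature that balances absorbed sunlight, in Celsius.
    return ((S0 * (1.0 - albedo)) / (4.0 * SIGMA)) ** 0.25 - 273.15

print(f"Earth, albedo 0.30: {effective_temp_c(0.30):.1f} C")  # about -18 C
print(f"Moon,  albedo 0.11: {effective_temp_c(0.11):.1f} C")  # about -3 C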
Now, here’s the interesting bit about why they took the real gases out of their “empty space atmosphere”:
In real-world physics, with the rest of the atmosphere in place, the Earth with its atmosphere of real gas, with volume, weight and attraction, of mainly nitrogen and oxygen, would be 67°C if water were absent.
Think deserts.
The Earth’s real-gas heavy atmosphere of nitrogen and oxygen, weighing down on us at 14 lb per square inch, is what is really acting as a thermal blanket around the Earth. These are the real greenhouse gases warming the real world, preventing the extreme cold of the Moon, which lacks such an atmosphere.
So, without the “AGW IR-imbibing greenhouse gases of mainly water”, the Earth would be 52°C hotter,
not “33°C colder”.
AGWScienceFiction has changed the meaning of “greenhouse gases” and changed the meaning of “greenhouse”.
The Earth’s real Greenhouse is the whole of its atmosphere of real gases around it, which both warm and cool the Earth, preventing the extremes of the Moon, which has no atmosphere. Just like a real greenhouse, which is why the analogy was first used in traditional physics: real greenhouses both heat and cool to give optimum growing conditions for the plants. AGWSF has changed this to mean “only warm”.
Earth with atmosphere: 15°C
Earth without any atmosphere: -18°C
Earth with atmosphere in place but without water: 67°C
AGWSF has taken out the Water Cycle, which in the real world cools the Earth, bringing the temperature down from the 67°C it would otherwise be to 15°C.
The conclusion is obvious: the “AGW greenhouse IR-imbibing gas warming of 33°C” is an illusion,
created by sleight-of-hand science-fraud changes to real physics, manipulating properties and processes.

peter_dtm
May 6, 2013 4:41 am

barry says:
May 5, 2013 at 10:19 pm
It looks to me like the obs have remained within the envelope of projections. In 1998 global temps went higher than any of the projections. In recent years temps have been near the bottom of the projections (individual model runs).
/end quote
barry, EVERYTHING before 2005 is NOT PROJECTIONS.
It is HISTORY, known at the time, so it is MEANINGLESS to look at the pre-2005 values for ANY indication of how well the models work. You may prefer to think of the pre-2005 values as ‘training’ for the models, to see if they can recreate what was known to have ALREADY happened.
See vukcevic’s version with the training runs removed and only the PROJECTIONS shown.

peter_dtm
May 6, 2013 4:58 am

barry says:
May 5, 2013 at 10:19 pm
It looks to me like the obs have remained within the envelope of projections. In 1998 global temps went higher than any of the projections. In recent years temps have been near the bottom of the projections (individual model runs).
/end quote
just another thought for you
given that the pre-2005 data compares the models against known facts, and these known facts were used to ‘train’ the models, wouldn’t you expect them to produce a 100% match?
I occasionally use simple models in my job; most of them are abject failures, totally u/s (unsuitable for use).

Richard M
May 6, 2013 5:39 am

I suspect the diagram would look a lot better if only the model runs for the emissions scenario that actually occurred were used. This would eliminate 75% of the noise, as well as many of the lower lines that make it look as if the models were closer to reality than they were.
Why bother showing lines from emissions scenarios that have not happened and will not happen?

rilfeld
May 6, 2013 6:07 am

A hurricane makes landfall in a single location; for each one, I hope that location is not my house. After landfall, the hurricane models are discarded, but the data points are added, and over time the models have improved. This suggests at least one branch of climate science where “making spaghetti” is properly done. Perhaps, in a future, more enlightened era, the temperature folks could learn some methodology from their wind-tracking colleagues (who also note no recent increase in storm frequency or severity).

sagi
May 6, 2013 7:45 am

@ vukcevic
Nice graphic! My only suggestion would be to substitute “actual predictions” for “true predictions” in the side note.
“True” may imply to some of our media people here in the States that these predictions are accurate or correct.

barry
May 6, 2013 8:29 am

peter_dtm,
barry EVERYTHING before 2005 is NOT PROJECTIONS.
it is HISTORY, known at the time, so it is MEANINGLESS to look at the pre 2005 values for ANY indication as to how well the models work.
Yes, ‘projections’ is the wrong term, but the pre-2005 model runs are not based on temperature data. They had a much better idea of the forcings, of course, but the hindcasting is still an estimate based on the models. You can see that there are different model runs, can’t you? You’re not looking at the instrumental record there, you’re looking at the same types of models that make the projections, just with some more certain data (not actual temps).
I wonder why Monckton compares the trend since 2001 with the post-2005 model ensemble. The time-frame is too short to get meaningful results: the linear trend since 2001 is definitely not statistically significant, at 0.047 C/decade +/- 0.281. The uncertainty is six times larger than the trend!
It is absolutely vital, rather, to look at how models do at hindcasting, because there we have something concrete to test the models against. Hindcasting is one of the best possible ways to improve models.
In any event, individual model runs (projections) show 10- to 20-year neutral trends with long-term warming, and the recent temps are still within the envelope. What we’re currently seeing shows up (at various times) in model runs that wind up with warming over the long term, so unless anyone was expecting the models to predict the temperature for every year accurately, the obs don’t bust the models. Yet.
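[For what it’s worth, a figure like “0.047 C/decade +/- 0.281” is typically produced along these lines: an OLS slope whose standard error is inflated for lag-1 autocorrelation via an effective sample size (a common adjustment, used e.g. by Santer et al. 2008). A sketch only; the series below is placeholder red noise, not the actual UAH or HadCRUT data, and the function name is hypothetical.]

import numpy as np

def trend_with_ar1_ci(anoms):
    # Trend and 2-sigma CI in C/decade for a monthly anomaly series,
    # with the standard error widened by an AR(1) effective sample size.
    n = len(anoms)
    t = np.arange(n) / 120.0  # time in decades
    X = np.vstack([t, np.ones(n)]).T
    beta, _, _, _ = np.linalg.lstsq(X, anoms, rcond=None)
    resid = anoms - X @ beta
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation
    n_eff = n * (1 - r1) / (1 + r1)                # effective sample size
    s2 = np.sum(resid ** 2) / (n_eff - 2)
    se_slope = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))
    return beta[0], 2.0 * se_slope

# Placeholder: 148 months of red noise around a tiny trend (hypothetical data).
rng = np.random.default_rng(1)
noise = np.zeros(148)
for i in range(1, 148):
    noise[i] = 0.7 * noise[i - 1] + rng.normal(0.0, 0.1)
anoms = 0.005 * np.arange(148) / 12.0 + noise
trend, ci = trend_with_ar1_ci(anoms)
print(f"trend = {trend:+.3f} +/- {ci:.3f} C/decade")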

Beta Blocker
May 6, 2013 9:53 am

Barry said: May 6, 2013 at 8:29 am
It is absolutely vital, rather, to look at how models do at hindcasting, because there we have something concrete to test the models against. Hindcasting is one of the best possible ways to improve models.

The earth has been warming since the end of the Little Ice Age, with localized accelerations and plateaus along the way, just as the earth has done many times before, and will do again.
So the question must be asked ….. are the “improvements” being applied to the models through hindcasting giving us a better understanding of how the earth’s climate system actually works, or are these “improvements” giving us instead nothing more than a better fit to a known curve, but without necessarily telling us anything more useful than we had before about what is actually happening up there in the atmosphere and down there in the ocean?

BillK
May 6, 2013 11:06 am

Plant a Seed of Doubt in Their Minds
The true believers, and the average person who has not been following climate change, generally will not read anything that might upset their world view. There is too much conflicting information, too many competing scientists, theories and studies. Therefore I have condensed the strongest argument into the short letter below. I have had amazing success opening minds with this simple message, which plants a seed of doubt and leads them to the Economist article. A simple graph like Vuk’s will also work, even better if it has a CO2 line on it.
Climate Sensitivity May Have Been Overestimated.
The Economist Magazine has a new article on Climate Sensitivity that is a must read.
See http://www.economist.com/news/science-and-technology/21574461-climate-may-be-heating-up-less-response-greenhouse-gas-emissions
The top climate scientists in the world have acknowledged that global temperatures are trending well below their forecasts despite higher CO2 releases. The climate sensitivity to changes in CO2 may have been overestimated. This means that something may be wrong with the theories in the computer models.
Anthropogenic Global Warming (AGW) has three main theories that each depend on the previous theory. The first theory is that the first doubling of CO2 will cause about 1 C of warming due to back radiation from the increased CO2. Subsequent doublings have minimal effect due to the logarithmic decline in back radiation.
The second theory is called the amplification or positive feedback theory. The 1 C warming should cause higher humidity and more low clouds which should trap more heat. The problem is that clouds can also reflect sunlight or condense into precipitation which will cause cooling. The net effect may even be negative so the models may be way off. The article refers to various new peer reviewed studies that now estimate climate sensitivity to be less than 2 C.
The third theory is that the estimated warming will be large enough to be bad. The world has warmed about 0.8 C, so climate sensitivity estimates totalling 2 C are very unlikely to lead to extreme weather, as there is no scientific mechanism for CO2 to influence the climate without warming. Mild warming has many benefits: less fuel use, fewer cold deaths (see Europe for the last two winters), minor sea-level rise and easier lives. Mild warming combined with higher CO2 concentrations also increases crop yields and greens the earth.
This will be great news for the world if the climate crisis has been overestimated and overstated. The 150 billion dollars that the world has spent to date is gone (not counting hundreds of billions on wind and solar), but the world may not have to spend the trillions that scientists and politicians forecast. The climate sensitivity questions need to be resolved as quickly as possible, but we may have to wait for actual temperatures to be the judge.
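[The “logarithmic decline” in the first theory can be made concrete with the widely used simplified forcing expression F = 5.35 ln(C/C0) W/m² (Myhre et al. 1998) and an approximate no-feedback Planck response of about 0.3 K per W/m². A sketch of textbook numbers, not a verdict on the feedback question.]

import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    # Simplified CO2 radiative forcing relative to a 280 ppm baseline, W/m^2.
    return 5.35 * math.log(c_ppm / c0_ppm)

PLANCK = 0.30  # K per W/m^2, approximate no-feedback (Planck) response

for c in (280, 400, 560, 1120):
    f = co2_forcing(c)
    print(f"{c:5d} ppm: {f:5.2f} W/m^2 -> ~{PLANCK * f:4.2f} K without feedbacks")
# Each doubling adds the same ~3.7 W/m^2: that is the logarithmic point,
# and 3.7 x 0.3 gives roughly the 1 C per doubling quoted above.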

barry
May 6, 2013 4:27 pm

Beta Blocker,

So the question must be asked ….. are the “improvements” being applied to the models through hindcasting giving us a better understanding of how the earth’s climate system actually works, or are these “improvements” giving us instead nothing more than a better fit to a known curve…

The answers to your questions are in the model inputs, something you can easily look up if you’re curious. I wouldn’t presume to school anyone on that.
Notice that none of the hindcasts ‘predicted’ the 1998 anomaly; it falls outside the envelope of the hindcast spread. Clearly they aren’t just punching in the temperature data. So if they’re tweaking some parameterisations of turbulent phenomena that aren’t well constrained by physics in order to get a better fit, and they test those phenomena with multiple runs while changing other parameters, then this is a good way to bound a component of the system when the physics does not constrain it well. Obviously they cannot account for every molecule in the climate system, so they must generalise phenomena. There is nothing wrong with training in this way if it is done honestly to improve the models (and of course it is – or what would be the point?).
I’m not sure what the LIA has to do with anything. The Earth’s climate is not some piece of elastic that bounces back from every perturbation. There must be causes that change the global temperature, and these are the forcings that are investigated. Attributing cause is a major part of the research, obviously.

eyesonu
May 6, 2013 8:13 pm

This thread went totally off the rails or OT if you will.
But let me say to Christopher Monckton, thank you. That was a very well put letter and hopefully the powers that be may grasp what you have presented.

dp
May 6, 2013 9:27 pm

What might the mathematical symbol be for faith, and is it treated as a function or an operator? If there is none such, may I suggest an addition sign modified with an exaggerated descender.

dp
May 6, 2013 11:11 pm

I think too there should be boundaries of incredibility beyond which plots are discontinued for being meaningless. In the noodle plot above that happened during the training period.

David
May 7, 2013 6:15 am

So – how can the ‘training period’ graphs lie 90% above the actual outturn?
‘Rubbish in; rubbish out…’

Beta Blocker
May 7, 2013 8:28 am

Beta Blocker said: …… So the question must be asked ….. are the “improvements” being applied to the models through hindcasting giving us a better understanding of how the earth’s climate system actually works, or are these “improvements” giving us instead nothing more than a better fit to a known curve?
Barry said: ………. The answer to your questions are in the model inputs. ……… they test those phenomena with multiple runs and changing other parameters, then this would be a good way to bound a component of the system when the physics do not well contain it. Obviously, they cannot account for every molecule in the climate system, so they must generalise phenomena. ….. I’m not sure what the LIA has to do with anything. The Earth’s climate is not some piece of elastic that bounces back from every perturbation. There must be causes that change the global temperature, and these are the forcings that are investigated. Attributing cause is a major part of the research, obviously.

Barry, my point is this:
The GCMs are composed of layer upon layer of assumptions about how the earth’s climate system operates, assumptions which are translated into model inputs, computational algorithms, internal data transformations, and eventually, the model run final outputs.
If we were to examine every input, every initialised parameter, every computational algorithm, and every internal data transformation, could we honestly say that we have enough observational knowledge of how the climate system actually works to believe we have all the elements needed to model it accurately? And could we say that each element lies reasonably within the bounds of what occurs in the physical system, both individually as single elements and collectively as an aggregate of the operative real-world physical processes?
If we haven’t yet reached critical mass in our basic understanding of the actual climate system as it works in the physical world, enough to properly construct a computational model for research purposes, then my fundamental question remains: what do we gain, in terms of a better understanding of the real-world climate system, if we spend the greatest part of our time and energy making changes to computer models in order to fit a curve, changes to inputs, process algorithms, and data transformations which may or may not have anything in common with what actually happens in the operative physical system?

barry
May 7, 2013 4:43 pm

Beta Blocker,

what do we gain in terms of a better understanding of the real-world climate system if we spend the greatest part of our time and energy making changes to computer models in order to fit a curve — changes to inputs, process algorithms, and data transformations which may or may not have anything in common with what actually happens in the operative physical system?

But you focus on only one part of the endeavour. Most of the modelling in GCMs is physics-based, not statistics-based. The equations for hydrological processes, conservation of mass and energy, radiative transfer (generalised) and many basic physical processes underpin the models, and these are not left untouched but are refined as understanding of the physics – not curve fitting – grows. For some components where the physics is not well bounded, yes, the processes are parametrised by reference to obs – not trends, by the way, just the average behaviour. No processes are trained to match long-term trends. It is not a curve-fitting exercise in that regard.
Why should there be anything wrong with trying to improve the models in every way that is reasonable? Why spurn experimental science? Why are we so absolutist about the physics in climate science, but not in the other sciences (e.g. medicine)? In truth, there is no model in any science that is a perfect mirror of nature. What we gain by parametrising certain phenomena in climate models is a short-cut to representing the climate system when we can’t account for every molecule.
Let me make a very important point again – the models are not trained to trends. That is a common misconception. The ‘curve-fitting’ is done to approximate average conditions, not trends over time.
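[To illustrate barry’s distinction in miniature: a toy zero-dimensional energy-balance model (all numbers and names illustrative, not from any actual GCM) in which a poorly constrained parameter is tuned to the observed mean state only. Any warming the model then produces under changed forcing is an output of the physics, not something fitted.]

import numpy as np

SIGMA, S0 = 5.670e-8, 1361.0  # Stefan-Boltzmann constant; solar constant

def equilibrium_temp(albedo, eps):
    # Surface temperature (K) of a one-slab grey-atmosphere energy balance:
    # Ts^4 = S0 (1 - albedo) / (4 sigma (1 - eps/2)).
    return ((S0 * (1 - albedo)) / (4 * SIGMA * (1 - eps / 2))) ** 0.25

# "Training": pick the poorly constrained albedo so the model reproduces the
# observed MEAN temperature (288 K), holding the emissivity parameter fixed.
target, eps = 288.0, 0.78
albedos = np.linspace(0.2, 0.4, 2001)
albedo = albedos[np.argmin(np.abs([equilibrium_temp(a, eps) - target for a in albedos]))]

# "Projection": run the tuned model with higher emissivity (a stand-in for
# added greenhouse gases). The warming is an output, not something fitted.
print(f"tuned albedo = {albedo:.3f}, baseline T = {equilibrium_temp(albedo, eps):.2f} K")
print(f"with eps = {eps + 0.02:.2f}: T = {equilibrium_temp(albedo, eps + 0.02):.2f} K")

[Nothing in the tuning step ever saw a trend; the only thing fitted was the mean state, which is the point being made above.]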