AR5 Chapter 11; Hiding the Decline (Part II)

Guest post by David M. Hoffer

In my first two articles on the leaked AR5 Chapter 11 (near-term projections) I looked at the caveats with which the IPCC is now surrounding their projections, and the lengths to which they are going to preserve the alarmist narrative. The caveats go to such ridiculous lengths that there is actually a quote suggesting that reality may well be within, above, or below the range projected by the models. Falsify that! To maintain the alarmist narrative, they characterize record ice extent in the Antarctic as a “slight increase” and make no mention in the executive summary of the projection buried deep in the report that tropical cyclones may decrease in frequency by as much as one third by 2100.

But what of their temperature projections? Do they say how much they expect it to warm up in the next few decades? They do. But these are the high-stakes projections for the IPCC because, unlike most of their projections, these will be falsified (or not) within the lifetimes of most of this readership. True to form, they’ve surrounded their temperature projections with caveats while taking an interesting approach to maintaining the alarmist narrative.

The projection is for between 0.4 and 1.0 degrees of warming for the period 2016-2035 compared to the period 1986-2005. Now normally when the IPCC gives a range, we expect that their “best guess” is in the centre of the range. But oddly we find this phrase in Chapter 11:

[…] it is more likely than not that actual warming will be closer to the lower bound of 0.4°C than the upper bound of 1.0°C

In fact, they go out of their way elsewhere to suggest that the most likely outcome will be about 0.2 degrees per decade. With 2035 only a smidge over two decades away, how do they justify an upper bound 2.5 times their most likely scenario? While delving into this, I came across some rather interesting information. Here are the graphs they provide with their projections, running from the beginning of the reference period (1986-2005) through to the year 2050:

[Figure 11.33, panels a) to c); see caption below]

Figure 11.33: Synthesis of near-term projections of global mean surface air temperature. a) Projections of global mean, annual mean surface air temperature (SAT) 1986–2050 (anomalies relative to 1986–2005) under all RCPs from CMIP5 models (grey and coloured lines, one ensemble member per model), with four observational estimates (HadCRUT3: Brohan et al., 2006; ERA-Interim: Simmons et al., 2010; GISTEMP: Hansen et al., 2010; NOAA: Smith et al., 2008) for the period 1986–2011 (black lines); b) as a) but showing the 5–95% range for RCP4.5 (light grey shades, with the multi-model median in white) and all RCPs (dark grey shades) of decadal mean CMIP5 projections using one ensemble member per model, and decadal mean observational estimates (black lines). The maximum and minimum values from CMIP5 are shown by the grey lines. An assessed likely range for the mean of the period 2016–2035 is indicated by the black solid bar. The ‘2°C above pre-industrial’ level is indicated with a thin black line, assuming a warming of global mean SAT prior to 1986–2005 of 0.6°C. c) A synthesis of ranges for the mean SAT for 2016–2035 using SRES CMIP3, RCPs CMIP5, observationally constrained projections (Stott et al., 2012; Rowlands et al., 2012; updated to remove simulations with large future volcanic eruptions), and an overall assessment. The box and whiskers represent the likely (66%) and very likely (90%) ranges. The dots for the CMIP3 and CMIP5 estimates show the maximum and minimum values in the ensemble. The median (or maximum likelihood estimate for Rowlands et al., 2012) is indicated by a grey band.

Is the first graph serious? 154 data plots all scrambled together are supposed to have some meaning? So I started to focus on the second graph, which is presented in a fashion that makes it useful. But in examining it, I noticed that something is missing. I’ll give everyone 5 minutes to go back and see if they can spot it for themselves.

Tick

Tick

Tick

Did you spot it?

They hid the decline! In the first graph, observational data ends about 2011 or 12. In the second graph, though, it ends about 2007 or 8. There are four or five years of observational data missing from the second graph. Fortunately the two graphs are scaled identically, which makes it very easy to use a highly sophisticated tool called “cut and paste” to move the observational data from the first graph to the second graph and see what it should have looked like:

[Image: Figure 11.33(b) with the annual observational data from panel (a) pasted in, extending to 2011/12]

Well oops. Once one brings the observational data up to date, it turns out that we are currently below the entire 5% to 95% confidence range of the models across all emission scenarios. The light gray shading is for RCP 4.5, the most likely emission scenario. But we’re also below the dark gray, which is all emission scenarios for all models, including the ones where we strangle the global economy.

It gets worse.

I did a little back of the envelope math (OK, OK, a spreadsheet, who has envelopes anymore these days?) and calculated that, assuming a linear warming starting today, we’d need to get to 1.58 degrees above the reference period by 2035 in order to average +1.0 over the course of the 2016-2035 projection period itself. If my calcs are correct, extrapolating a straight line from the end of current observations through 1.6 degrees in 2035 ought to just catch the top of that black bar showing the “Likely Range” in the centre of the graph:

[Image: the pasted-up graph with a straight line drawn from the end of the observations through 1.6 degrees in 2035, just catching the top of the “Likely Range” bar]

Hah! Nailed it!
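For anyone who wants to check the arithmetic, here is a minimal sketch of the calculation. The ramp’s start point (mid-2012 at roughly 0.2 degrees above the 1986-2005 mean) is my own eyeballed assumption; nudge it and the endpoint moves only slightly.

```python
# Back-of-envelope check: how high must a linear warming ramp, starting
# today, reach by 2035 so that the AVERAGE over 2016-2035 comes out at
# the IPCC's upper bound of +1.0 C (relative to 1986-2005)?
# Assumed (eyeballed from the graph): ramp starts mid-2012 at ~0.2 C.
import numpy as np

t0, T0 = 2012.5, 0.2                 # assumed start of the ramp
years = np.arange(2016, 2036)        # the 2016-2035 projection period

# The mean of a linear ramp over the period equals its value at the
# period's midpoint, so solve T0 + slope*(midpoint - t0) = 1.0.
slope = (1.0 - T0) / (years.mean() - t0)
T_2035 = T0 + slope * (2035 - t0)

print(f"required rate: {slope * 100:.1f} degrees per century")  # ~6.2
print(f"required 2035 anomaly: {T_2035:.2f} degrees")           # ~1.58
```

For comparison, the IPCC’s own “most likely” 0.2 degrees per decade, pushed through the same averaging, gives a period mean of only about 0.46 degrees.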

But now it is even worse for the IPCC. To meet the upper bound of their estimated range, warming that is currently (according to their own data) running below the projections of all their models in all emission scenarios would have to suddenly accelerate to a rate higher than all of those same projections. In brief, the upper range of their estimate cannot be supported by their own data from their own models.

In fact, just based on their own graph, we’ve seen less than 0.4 degrees over the last 26 years or so, less than 2 degrees per century. That brown line I’ve drawn in represents a warming trend beginning right now and continuing through 2035 of 6 degrees per century, triple recent rates. Since the range in their own graph already includes scenarios such as drastic reductions in aerosols as well as major increases in CO2, there is simply nothing in their own data and their own models to justify an upper bound of 1.0 degrees.

That’s not to say it is impossible; I suppose it is possible. It is also possible that I will be struck by lightning twice tomorrow and survive, only to die in an airplane crash made all the more unlikely by the fact that I’m not flying anywhere tomorrow, so that plane will have to come and find me. Of course with my luck, the winning Powerball ticket will be found in my wallet just to cap things off.

Is it possible? Sure. Is it likely?

Not according to their own data and their own models. The current version of IPCC AR5 Chapter 11 takes deception (intended or otherwise) to new heights. First, by hiding the fact that observational data lies outside the 95% confidence range of their own models, and second by estimating an upper range of warming that their own models say is next to impossible.

BanKimoon

Just send us $3 Trillion by March and we’ll fix the graph. Thanks.

lsvalgaard

Well, if you inspect the 2nd Figure you should notice that it shows ‘decadal means’ and as such is not supposed to show anything for the last five years, which is the reason the observed curve does not go up to the present year. This is as it should be. The first Figure shows clearly that all their projections are on the high side of observations, so they are doing too good.

lsvalgaard

so they are not doing too good

Steve (Paris)

I wonder how The Team will explain this away?

John Mason

In a normal world with honest peer review, that whole section would be ‘rejected’!
Now that these models have been disproven using their own pages with the un-snipped graph, they’ll have time to ‘correct’ the propaganda before the official release.
Oh, wait, that never stopped the Paleo Climate graph, which showed that temp rises in the historical record always preceded CO2 rises, from being used in An Inconvenient Truth either.
If we are at the cusp of another cooling phase as many solar watchers believe, by the time the final report is ready, the snipping action to ‘hide the decline’ should make any news journalist ready to jump at the story of the century.
Oh, wait, that hasn’t been working either.
We live in interesting times.

HaroldW

The second graph is labelled “decadal means”, presumably computed as a centered 10-year average. The most recent 10-year average would be centered at 2008; at the time of preparation of the graph it would have been around 2007.5.
You can tell it’s not the same “observations” curve in the second graph because it’s much smoother than the annual mean plotted in the first graph.
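A minimal sketch of HaroldW’s point, with synthetic numbers (the centred-window convention is an assumption about how the means were computed):

```python
# A centred 10-year mean of an annual series necessarily has no value for
# the most recent ~5 years. The anomaly numbers here are synthetic.
import numpy as np
import pandas as pd

years = np.arange(1986, 2012)                  # annual data through 2011
rng = np.random.default_rng(0)
anoms = pd.Series(np.linspace(0.1, 0.5, years.size)
                  + rng.normal(0, 0.1, years.size), index=years)

decadal = anoms.rolling(window=10, center=True).mean().dropna()
print(anoms.index.max())    # 2011 -- last annual datum
print(decadal.index.max())  # 2007 -- last available centred decadal mean
```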

eqibno

They will likely “decline” to address these issues.

Hank Zentgraf

Not qualified to check your math, but I await the IPCC response.

John

David, what a great catch. These guys never cease to amaze me; they are so corrupt it is scary. Perhaps your catch is just one more reason The Age published an actual sceptic article today. See it at Jo Nova.

MikeTheDenier

DaveA

They must be feeling tempted to hit the reset button and start the model runs further up.

DocMartyn

I never saw the missing line; thanks for bringing this to our attention. They use the same tricks time after time.

Mickey Reno

Interesting and damning. It’s way past time to stop all this silliness and disband the IPCC.
A couple of corrections
– “…extarpolating a straight line…” should be “extrapolating a straight line…”
– “That brown line I’ve drown in …” should be “That brown line I’ve drawn in…”

Sean Houlihane

I guess this shows how tricky these models can be. They still seem to be taking models where time starts at about 1998, the point where models match observations. In order to reach the high value, the models which are outliers today need to remain as valid as the others – in effect implying that the uncertainty in the observations can justify the divergence. This is the problem with people not understanding what different model runs correspond to and how they can usefully be combined.
I guess that the stats to reconcile the models and observations are pretty complex stuff, and it is not trivial to come up with a good ‘best estimate’ (even accepting that it is likely to be inaccurate). Maybe one method would be to do a PC analysis and see if it picks the lowest-trending model runs as favourable, but that doesn’t really take us any further forward in rationalising the ‘likely range’.
This worst-case analysis is fundamentally nonsense in my view. Planning for worst case just isn’t done in practice, and even when you try, missing one trivial connection invalidates the whole approach (see the recent financial collapse of banks for details). We’re planning for a 1:1,000,000 scenario when the models are wrong 1:10,000 (or even more often if you accept the lukewarm canon).

Bill Illis

I think the RCP 6.0 scenario is actually the most realistic. The 6.0 stands for the forcing in W/m2 that will eventually be reached (in 2130), which is higher than the standard 3.7 W/m2 we hear about for a doubled CO2 forcing; that 3.7 W/m2 level is reached by 2060, and forcing then continues rising, as CO2 is expected to continue rising until at least 2100. Then there are also the CH4, N2O and other forcings to add in.
RCP 4.5 assumes that the forcing will max out at 4.5 W/m2 in 2070 and then stay there, which is probably not realistic. RCP 6.0 and 4.5 are basically the same out to 2060, and then 6.0 is the one that gets to +3.25C of warming by 2100, the standard IPCC forecast, so try to use the 6.0 scenario.
Just a note about Forcing, Concentration and Emission data availability in the scenarios.
One can get the forcing, concentration and emissions data for all scenarios here. Nice, easy-to-use Excel spreadsheets by year from 1765 to 2500. (If you want to use them some time in the future, you should probably save them now, because this data has been moved around and put under strict download restrictions by the IPCC previously – I’m not sure how long this page will be up.)
http://www.pik-potsdam.de/~mmalte/rcps/index.htm
And the Climate Explorer has put up a nice page of the multi-model means of the 4 scenarios here (just a note that the data should start in 1861 while it says 1860 on the downloads, just move it forward one year).
http://climexp.knmi.nl/cmip5_indices.cgi?id=someone@somewhere
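As a side note on the forcing figures Bill quotes, the oft-cited 3.7 W/m2 per doubling comes from the standard simplified expression F = 5.35 ln(C/C0) of Myhre et al. (1998); a quick sketch (the concentration values are my round numbers):

```python
# Simplified CO2 radiative forcing, F = 5.35 * ln(C/C0), Myhre et al. (1998).
import math

def co2_forcing(c_ppm, c0_ppm=278.0):
    """Forcing (W/m2) at concentration c_ppm relative to baseline c0_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(f"{co2_forcing(2 * 278):.2f} W/m2 for doubled CO2")     # ~3.71
print(f"{co2_forcing(393):.2f} W/m2 at roughly 2012 levels")  # ~1.85
```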

Richard M

I’d suggest it’s even worse than Dave indicated. The temperature data is likely over-stated by poorly-thought-out adjustments and UHI. The “real” numbers are probably lower than shown when corrected for UHI and poor siting, making the task of reaching the claimed warming even more difficult.

Paul Schnurr

I don’t understand the wisdom of including model runs in the “spaghetti” that are significantly at variance with observations. Doesn’t that prove that they are incorrect? Of course that would reduce the plate of spaghetti to only a few left-over strands.

HaroldW says:
December 30, 2012 at 5:27 am
You can tell it’s not the same “observations” curve in the second graph because it’s much smoother than the annual mean plotted in the first graph.
Obviously a 10-yr smoothed curve will be a lot smoother than the unsmoothed one.

Richard M

The concept of “decadal means” is simply another technique to hide the decline. It allows the early 21st century warmth to contaminate the cooler years that follow. This can be useful for noisy data but tends to hide cyclic data until well past the cycle changes. In this case the warmth from the warm PDO is still factored into the latest years even after it changed modes.

Ian H

The CAGW story is now a teetering house of cards. It WILL collapse. That is no longer in question. The only question is when. It would have collapsed already were it not being propped up by the complete unwillingness of the media to report honestly on what is going on.
The fact that CAGW is a bankrupt theory is now an open secret. At some point this story is going to go mainstream. It just needs someone in the media to break ranks. A single well-made and honest documentary would do it. Could be a Pulitzer prize in it for someone. Come on guys. Somebody is going to break this story. It could be you! Who will take the risk? Who will claim the prize? This is a massive story that needs to be told. Are you journalists or what?
When this does break it could get very ugly indeed. The fortunes of green parties all over the world are likely to take a hit. Science too will take a hit. Serves us right, I guess, for being too chicken to rein these cowboys in. And the scientists most directly fronting this need to watch out. When the public finally understands what has been going on they are going to be pretty pissed. You just have to look at those poor geologists locked up in Italy for making foolish authoritative statements about earthquakes to see the kind of thing that can happen in that situation.

I could be mistaken, but isn’t it a bit the same discussion as here?:
http://wattsupwiththat.com/2012/12/18/dr-david-whitehouse-on-the-ar5-figure-1-4/
As far as I am concerned, there seems nothing hidden there.

Oh dear, more Denial rubbish daring to question those above your pay grade who you are not worthy enough to lick the boots of. Have you not seen the pie chart on SkS? Is that not proof enough to refute this big-oil-funded climate criminal (death to deniers etc.)? Seriously, how can you ignore a pizza pie?
Again the Kraken and his effect on climate has not been factored in, and as we all know the Kraken makes the effect of CO2 but a drop in the ocean. The Kraken will make temps rise 10C in a week, let alone this namby pamby IPCC decadal nonsense, which is not alarming people enough as evidenced by the lack of people in the streets screaming that the sky is falling and that there is a mouse on the moon eating all the cheese.
The only decline is in our collective sanity.
/sarc

herkimer

It is clear to anyone looking at the IPCC graphs that all of the predicted curves are going to be well above the observed data. As the global cooling continues for the next several decades, the divergence between the two will be so great that it will be embarrassing to even show it. My guess is that the observed point will be around 0.2C and their predicted mid line [white line] will be 0.7C by 2030. Their predicted upper range will be 1.5C by 2030. The longer they try to hide their flawed science and simply wrong predictions, the worse it will look for them. You will note that they no longer show the curves at 2100, because by then the divergence could look even worse. How this kind of bad science is portrayed as climate science must be an embarrassment to all scientists. Where are all the scientific bodies now who rubber-stamped this flawed science before? There is nothing wrong with making mistakes. Every scientist makes them some time in their career; they admit the error, learn from the mistake and go on to make better science. The error is made worse when you try to hide your mistakes and still claim this as “solid” science. This was the centerpiece of their flawed agw science.

Steve from Rockwood

It would be interesting to know why the IPCC thinks the temperature increases will be on the low side of their projections. What is happening in their minds to lead to such a statement?

it might be a good idea to wait for Hoffer’s answer to lsvalgaard (December 30, 2012 at 5:24 am).

Roger Longstaff

Excellent post! It articulates what many of us have being saying for years – climate models are completely useless, and always will be!
Like the alchemists of old, all that the climate modellers will find is fool’s gold – for exactly the same reasons.

Tom Jones

The current version of IPCC AR5 Chapter 11 takes deception (intended or otherwise) to new heights. First, by hiding the fact that observational data lies outside the 95% confidence range of their own models, and second by estimating an upper range of warming that their own models say is next to impossible.
You just have to be dumbstruck. The public cannot but feel that they are being had, or at least that the IPCC is trying hard. But this underlies what the PR strategy has to be, given the facts. Change the subject. If you have to talk about global warming, talk statistics and estimates. Global weirding is always good.

TImothy Sorenson

@lsvalgaard, the IPCC models are expected to produce annual mean temperatures, NOT decadal means. It is clear they chose to plot a decadal mean curve against the models to make it look better, when the means graphs should not be compared against the models.

John Archer

lsvalgaard (December 30, 2012 at 5:25 am) says:

so they are not doing too good

Yep — they’ve run plum outta luck coz they ain’t doing too well either. 🙂
By the way, I’m delighted at their pain. But then that’s the kind of man I am — selfless to a fault, that’s me.

DirkH

The people who still contribute to the IPCC, has anyone measured their average IQ as time goes by? Because I would expect a serious decline. Any intelligent person would these days avoid that career, they must be scraping the bottom of the barrel, brain-wise. Even intelligent crooks would much rather go into the CO2AGW-induced crony capitalism schemes.
So what kind of academic lowlife is still pursuing that GCM career? Neither honest nor efficient I fear. Schneider took the secret to that with him into his grave.

Ed Reid

Mr. Hoffer,
” In brief, the upper range of their estimate cannot be supported by their own data from their own models.”
One “nitpick”. The outputs from models are not data.

mpainter

Hoffer’s synthesis shows that the warming is projected at an absurd rate, exposing the fallacy of IPCC methods of science. This trick of initiating a warming projection from a point six years previous to the release of the report (due in 2014) allows them to project an absurd rate of warming, with outdated graphics that hide the absurdity. This is a new twist on an old trick: now we have “hide the *incline*”. Very nifty – should get them another Nobel Prize. David Hoffer has exposed more “legerdemain” by the IPCC authors.

DirkH

lsvalgaard says:
December 30, 2012 at 5:24 am
“Well, if you inspect the 2nd Figure you should notice that it shows ‘decadal means’ and as such is not supposed to show anything for the last five years, which is the reason the observed curve does not go up to the present year. This is as it should be. ”
BEST also used a decadal mean in their non-peer-reviewed pre-release to the media, which conveniently obscures the temperature plateau of the last 16 years.
It’s all the rage in warmist circles. Expect 20 year means next.

Phil's Dad

I would like to see more on why the IPCC themselves now state that reality is likely to be at the lower end of the projections (“it is more likely than not that actual warming will be closer to the lower bound”) and what Working Group 2 think that will mean in terms of “catastrophe”.

Larry Ledwick (hotrod)

It would of course be much easier to get the public’s attention on this issue if the majority of them got any useful instruction in school regarding science and math, and even simple graphing. Many of these stunts are classic “how to lie with statistics” tricks, things that shoddy sales agents do all the time to mislead buyers of new cars and other products. But our educational system no longer teaches simple graphing and analysis. The kids just punch numbers into a graphical calculator and accept blindly what shows up on the screen. No critical thinking involved, never asking “does this make sense?”
We need a course in public schools along the lines of “Limitations of data, and data presentation”. This is where you could draw together topics like significant digits, precision, uncertainty etc. Teach the kids to read the fine print on contracts, and note the deceptive practices used in advertising to hide what is really going on.
Larry

Keith Guy

I notice that in the second diagram the IPCC provide a line which they purport to be the temperature at 2 degrees C above pre-industrial levels. I wonder which pre-industrial period this is. Obviously not the medieval warm period.

Kev-in-Uk

Perhaps David missed the ‘decadal means’ annotation? But, in truth, his analysis is essentially still valid. They almost certainly chose to use a decadal mean chart to ‘smooth’ out the curve, but it also provides a perfect excuse to ‘ignore’ the last 10 years of (flatlining) temperature data.
I still get pissed off that they produce these charts without including the previous historical temperature data – if this graph was extended back to 1900 or so, the uppy/downy natural climate variability would be clearly evident and would instantly suggest the climate sensitivity to CO2 is not nearly as high as they think.

richard verney

Steve from Rockwood says:
December 30, 2012 at 6:27 am
It would be interesting to know why the IPCC thinks the temperature increases will be on the low side of their projections. What is happening in their minds to lead to such a statement?
////////////////////////////////////////////////////////////////////////
Your observation is worth emphasising. I consider it to be material.
Perhaps it is, on the part of the ‘scientists’, a realisation (which presently will not be admitted) that sensitivity to CO2 (at current concentrations) is significantly less than the ‘accepted’ figure/range and that natural variability plays a larger role in all of this than was ‘accepted’ to be the case when the early reports were written.
If one accepts the satellite data (then subject to fudging/corrections for aerosols) there is 33 years’ worth of data that shows no discernible CO2-induced temperature fluctuation. One would have to conclude that no CO2 signal can be read (with present instrument sensitivity) and/or that natural variation swamps any CO2 signal which may otherwise be contained within the data.
Whilst there are some issues with the satellite data, an impartial observer would prefer this source of data to the land-based data sets, which have obvious problems (such as siting issues, UHI and the like). Whilst climate ‘scientists’ may not openly acknowledge that the satellite data is the best evidence of recent temperature data, one can only presume that in their gut they know this to be the case, and the satellite data is very damning.
It is doubtful that one will need to wait until 2035 to see the jig is up. The next decade is likely to be a game changer, especially given the economic downturn in the West and the fact that emissions in developing countries will rise unabated. It will become more and more difficult to blame the West (and in any event the West will have no money to pay for mitigation/adaptation/climate change projects in developing nations).
A lot of questions will be asked within the next decade and I do not expect the blame game to be a pretty sight, given that basic schoolboy errors have been made at all stages, not simply in the science, but also in historical perspective (including whether a warming of a few degrees would be a very good thing rather than a bad thing) and in the political response (such as the rush for wind and solar, and hindering shale and nuclear etc).
Hindsight is illuminating and, with the clarity of vision that this provides, the ‘scientists’ who promoted it and/or gave it a free ride, and the politicians who jumped on the bandwagon, will have nowhere to hide. Of course, it is impossible to turn back the clock and unpick the harm that all of this has caused.

GaryP

I see others have already spotted the change from annual to decadal means. However, this is merely how they accomplished their “clever trick to hide the decline”. There is absolutely no reason to use ten-year averaging on the measured data. That data is not swinging widely, so there is no reason to smooth it beyond the annual averaging. It is perfectly valid to show the annual mean of the real data compared to the decadal mean of the models when the purpose is to show trends across the models as an extrapolated prediction. There is no reason other than to deceive for their “statistical trick.”

Man Bearpig

Steve from Rockwood says:
December 30, 2012 at 6:27 am
It would be interesting to know why the IPCC thinks the temperature increases will be on the low side of their projections. What is happening in their minds to lead to such a statement?
————————————-
Excellent point Steve, I was going to ask a similar question.
Even though there is no evidence that the man-made contribution to GHGs has decreased (in fact the opposite is true), why have the IPCC predictions moved so dramatically southward?
There have been no serious volcanic eruptions to make temperatures stagnate or fall, ENSO is about mid range, but temperatures seem to be dropping slightly. Can anyone on either side (or both preferably) offer an explanation as to why this is?

John Archer

Ed Reid (December 30, 2012 at 7:33 am) says:

One “nitpick”. The outputs from models are not data.

True, but then most of their data aren’t data either.
Still, it’s important to split hairs!
Just kidding. You’re right. 🙂

Man Bearpig

DirkH says:
December 30, 2012 at 7:35
It’s all the rage in warmist circles. Expect 20 year means next.
—————————–
Yes, it is strange to choose an averaging period that extends beyond the span of the reports. I could understand this a bit more if it were trying to show some historic analyses of noisy stochastic data, but for ‘global’ temperature a running 13-month average would be sufficient – unless of course the global temperature is so noisy and stochastic; but no, it can’t be, otherwise the error margin would be too great and temperatures to 2 decimal places would be invalidated by the error margin.
Besides that point, it is now a moot argument anyway; it’s just more straw stuffed into the same old shirt. Whatever way the alarmists want to try to iron this crease out, the analysis seems to be correct.

Green Sand

DirkH says:
December 30, 2012 at 7:35 am
“It’s all the rage in warmist circles. Expect 20 year means next.”

—————————————————–
Already got some quoting 30 year means. They think it negates the fact that the 30 year trend has been reducing for at least 9 years.

As explained in my paper:
Scafetta N., 2012. Testing an astronomically based decadal-scale empirical harmonic climate model versus the IPCC (2007) general circulation climate models. Journal of Atmospheric and Solar-Terrestrial Physics 80, 124-137.
http://www.sciencedirect.com/science/article/pii/S1364682611003385
which is discussed also on this blog
http://wattsupwiththat.com/reference-pages/research-pages/scafettas-solar-lunar-cycle-forecast-vs-global-temperature/
the GCM models used by the IPCC are not able to reproduce the natural climate oscillations (periods at about 9.1, 10-11, 20, 60 yr etc.) which have a likely astronomical origin. As a consequence they completely mistake the climate attribution problem by overestimating the anthropogenic effect.
The steady temperature since 2000 is due to the cooling phase of the 60 and 20 year cycles, which compensates some GHG warming.
It is now one year since my paper was published, and my forecast has correctly predicted the temperature pattern. See here
http://people.duke.edu/~ns2002/#astronomical_model_1
To be precise, the model calculations were done in such a way that the forecast started in 2000, but of course only the correct prediction since Nov/2011 matters.
Please inform the IPCC that their models are flawed; the peer-reviewed literature has already demonstrated it.

In addition to the previous post,
note that in the figure published in my original paper, which is here,
http://wattsupwiththat.files.wordpress.com/2012/02/scafetta_figure-original1.png
there are two model curves, the black and the blue.
The blue curve uses a calibration period of 1850-1950. Thus, the period after 1950 is actually a partial forecast, and it predicted quite well the temperature patterns observed from 1950 to now.
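For readers curious what a calibration/forecast split of this kind looks like mechanically, here is a generic sketch; emphatically not Scafetta’s actual code or data. The periods come from the comment above, the series is synthetic, and ordinary least squares is my choice of fitting method.

```python
# Generic harmonic regression: fit a linear trend plus fixed-period
# harmonics (~9.1, 10.4, 20, 60 yr) on 1850-1950 only, then project the
# fit forward so that everything after 1950 is out-of-sample.
import numpy as np

def design(t, periods=(9.1, 10.4, 20.0, 60.0)):
    cols = [np.ones_like(t), t - 1900.0]       # intercept + linear trend
    for P in periods:
        cols += [np.cos(2 * np.pi * t / P), np.sin(2 * np.pi * t / P)]
    return np.column_stack(cols)

t = np.arange(1850.0, 2013.0)
rng = np.random.default_rng(1)
y = (0.004 * (t - 1900.0)                      # synthetic stand-in series
     + 0.12 * np.cos(2 * np.pi * (t - 1880.0) / 60.0)
     + rng.normal(0.0, 0.08, t.size))

cal = t <= 1950.0                              # calibration mask
beta, *_ = np.linalg.lstsq(design(t)[cal], y[cal], rcond=None)
pred = design(t) @ beta                        # post-1950 part is a forecast

print(f"in-sample residual std:     {np.std(y[cal] - pred[cal]):.3f}")
print(f"out-of-sample residual std: {np.std(y[~cal] - pred[~cal]):.3f}")
```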

Leif (and others) are correct, I missed the change to decadal means in the second graph. I knew the second graph had been smoothed in some way different from the first one, but I never really felt the need to figure out what it was because, at day’s end, it makes no difference.
If they truncated the data by leaving it out, the result is deceptive. If they hid it by using decadal means, then it is deceptive too; it is simply a different means of accomplishing the deception.
I challenge them or anyone else to provide an explanation of why the top graph is presented the way it is and the bottom graph differently.
1. The top and bottom graphs are identical in size and scaling. What purpose can be ascribed to smoothing the data in the second one differently from the first one?
2. How does one justify decadal means on a graph with only 30 years of data?
3. On a graph with only 30 years of data, how does one justify not using the actual data itself (annual or even monthly)? They can plot 154 model outputs on top of each other, but four temperature series with 30 data points each they smear together into decadal means?
All of which changes nothing in my final analysis. They need rates of warming to TRIPLE starting RIGHT NOW to get to the top end of their range. That’s 6 degrees per century for the next 2 decades plus. To be fair, they blame aerosol reductions for some of this, and natural variation for some. They pretty much have to. We’re only at 40% of one doubling of CO2 now; we might hit 50% or 55% by 2035. For the bulk of that 6 degrees per century to be blamed on CO2, we’d need an astronomical sensitivity to CO2, far higher than any of their estimates.
Bottom line? The temperature NOW is outside the 95% confidence range for all models and all scenarios.
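To put a rough number on that last point, here is a sketch of the implied per-doubling response under assumptions that are entirely mine: a 2012 anomaly of about 0.2 degrees, the 1.58-degree endpoint from earlier, roughly 450 ppm CO2 by 2035, and the simplified log forcing rule.

```python
# If ALL of the warming needed to reach the IPCC's upper bound were pinned
# on CO2, what response per doubling would that imply? Assumptions (mine):
# +0.2 C in mid-2012 rising to +1.58 C in 2035; CO2 going 393 -> 450 ppm.
import math

dT = 1.58 - 0.2                       # required warming, degrees C
dF = 5.35 * math.log(450.0 / 393.0)   # added CO2 forcing, ~0.72 W/m2
F2x = 5.35 * math.log(2.0)            # ~3.71 W/m2 per doubling

print(f"implied response: {dT / dF * F2x:.1f} C per doubling")  # ~7
```

Roughly seven degrees per doubling, well above even the high end of the published ranges.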

Bob Koss

That top graphic didn’t even use actual observations. They used “observational estimates”.

. . . with four observational estimates (HadCRUT3: Brohan et al., 2006; ERA-Interim: Simmons et al., 2010; GISTEMP: Hansen et al., 2010; NOAA: Smith et al., 2008) for the period 1986–2011 (black lines); . . .”

Going by the dates on the papers above, none of the referenced papers can contain observations through the end of the period in 2011. Additionally, ERA-Interim is a reanalysis of observations. It appears to me to be an analysis which fills in the blanks and fudges observations where necessary.

David Harrington

Surprised? Nope

Andrejs Vanags

I actually find it encouraging that instead of showing predictions starting from ~2010 actual temperatures, they show predictions starting from actuals from the late 80’s, and display how the models diverge from actuals over time.
It would be so easy to ignore historical data if they really wanted to ‘deceive’ and just show predictions starting from ~2010 (claiming better models, lack of needed historical inputs for backfitting, etc), but they are not doing so.