An animated analysis of the IPCC AR5 graph shows 'IPCC analysis methodology and computer models are seriously flawed'

This post made me think of this poem, The Arrow and the Song. The arrows are the forecasts, and the song is the IPCC report – Anthony

I shot an arrow into the air,

It fell to earth, I knew not where;

For, so swiftly it flew, the sight

Could not follow it in its flight.

I breathed a song into the air,

It fell to earth, I knew not where;

For who has sight so keen and strong,

That it can follow the flight of song?

– Henry Wadsworth Longfellow

Guest Post by Ira Glickstein.

The animated graphic is based on Figure 1-4 from the recently leaked IPCC AR5 draft document. This one chart is all we need to prove, without a doubt, that IPCC analysis methodology and computer models are seriously flawed. They have way over-estimated the extent of Global Warming since the IPCC first started issuing Assessment Reports in 1990, continuing through the fourth report, issued in 2007.

When actual observations over a period of up to 22 years substantially contradict predictions based on a given climate theory, that theory must be greatly modified or completely discarded.

IPCC AR5 draft figure 1-4 with animated central Global Warming predictions from FAR (1990), SAR (1996), TAR (2001), and AR4 (2007).

IPCC SHOT FOUR “ARROWS” – ALL HIT WAY TOO HIGH FOR 2012

The animation shows arrows representing the central estimates of how much the IPCC officially predicted the Earth surface temperature “anomaly” would increase from 1990 to 2012. The estimates are from the First Assessment Report (FAR-1990), the Second (SAR-1996), the Third (TAR-2001), and the Fourth (AR4-2007). Each arrow is aimed at the center of its corresponding colored “whisker” at the right edge of the base figure.

The circle at the tail of each arrow indicates the Global temperature in the year the given assessment report was issued. The first head on each arrow represents the central IPCC prediction for 2012. They all over-predict warming from 1990 to 2012, by a factor of roughly two to four. The dashed line and second arrow head represent the central IPCC predictions for 2015.

Actual Global Warming, from 1990 to 2012 (indicated by black bars in the base graphic) varies from year to year. However, net warming between 1990 and 2012 is in the range of 0.12 to 0.16˚C (indicated by the black arrow in the animation). The central predictions from the four reports (indicated by the colored arrows in the animation) range from 0.3˚C to 0.5˚C, which is roughly two to four times the actual measured net warming.
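
For concreteness, here is the arithmetic behind those multiples, as a minimal Python sketch using only the ranges quoted above (not the underlying data sets):

    # Ratio of predicted to observed 1990-2012 warming, using the
    # ranges quoted in this post (not the underlying data sets).
    actual = (0.12, 0.16)      # observed net warming, deg C
    predicted = (0.3, 0.5)     # central IPCC predictions, deg C

    lowest_ratio = predicted[0] / actual[1]    # best case for the IPCC
    highest_ratio = predicted[1] / actual[0]   # worst case for the IPCC
    print(f"{lowest_ratio:.1f}x to {highest_ratio:.1f}x")  # 1.9x to 4.2x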

The colored bands in the base IPCC graphic indicate the 90% range of uncertainty above and below the central predictions calculated by the IPCC when they issued the assessment reports. 90% certainty means there is only one chance in ten the actual observations will fall outside the colored bands.

The IPCC has issued four reports, so, if each report's 90% band were independent of the others, there should be only one chance in 10,000 (ten times ten times ten times ten) that they got it wrong four times in a row. But they did! Please note that the colored bands, wide as they are, do not go low enough to contain the actual observations for Global Temperature reported by the IPCC for 2012.
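
The 1-in-10,000 figure follows directly from that independence assumption, as this short Python check shows (whether the four reports really are independent is debated in the comments below):

    # Chance that four predictions, each with a stated 90% probability
    # of covering the observation, ALL miss, assuming the four are
    # statistically independent. The reports share models and data, so
    # the true joint probability of four misses is likely higher.
    p_one_miss = 0.10
    p_all_miss = p_one_miss ** 4
    print(p_all_miss)       # 0.0001, i.e. one chance in 10,000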

Thus, the IPCC predictions for 2012 are high by a multiple of the warming actually observed! Although the analysts and modelers claimed their predictions were 90% certain, it is now clear they were far from that mark with each and every prediction.

IPCC PREDICTIONS FOR 2015 – AND IRA’S

The colored bands extend to 2015 as do the central prediction arrows in the animation. The arrow heads at the ends of the dashed portion indicate IPCC central predictions for the Global temperature “anomaly” for 2015. My black arrow, from the actual 1990 Global temperature “anomaly” to the actual 2012 temperature “anomaly” also extends out to 2015, and let that be my prediction for 2015:

  • IPCC FAR Prediction for 2015: 0.88˚C (0.56 to 1.2)
  • IPCC SAR Prediction for 2015: 0.64˚C (0.52 to 0.75)
  • IPCC TAR Prediction for 2015: 0.77˚C (0.55 to 0.98)
  • IPCC AR4 Prediction for 2015: 0.79˚C (0.61 to 0.96)
  • Ira Glickstein’s Central Prediction for 2015: 0.46˚C

Please note that the temperature “anomaly” for 1990 is 0.28˚C, so that amount must be subtracted from the above estimates to calculate the amount of warming predicted for the period from 1990 to 2015.
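
A minimal Python sketch of that subtraction, using the central values listed above:

    # Convert the 2015 anomaly predictions above into predicted warming
    # since 1990 by subtracting the 1990 anomaly of 0.28 C.
    baseline_1990 = 0.28
    central_2015 = {"FAR": 0.88, "SAR": 0.64, "TAR": 0.77,
                    "AR4": 0.79, "Ira": 0.46}
    for report, anomaly in central_2015.items():
        print(f"{report}: {anomaly - baseline_1990:.2f} C of 1990-2015 warming")
    # FAR: 0.60, SAR: 0.36, TAR: 0.49, AR4: 0.51, Ira: 0.18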

IF THEORY DIFFERS FROM OBSERVATIONS, THE THEORY IS WRONG

As Feynman famously pointed out, when actual observations over a period of time contradict predictions based on a given theory, that theory is wrong!

Global temperature observations over the more than two decades since the First IPCC Assessment Report demonstrate that the IPCC climate theory, and models based on that theory, are wrong. Therefore, they must be greatly modified or completely discarded. Looking at the scattershot “arrows” in the graphic, the IPCC has not learned much from its misguided theories and flawed models, nor improved them, over the past two decades, so I cannot hold out much hope for the final version of Assessment Report #5 (AR5).

Keep in mind that the final AR5 is scheduled to be issued in 2013. It is uncertain if Figure 1-4, the most honest IPCC effort of which I am aware, will survive through the final cut. We shall see.

Ira Glickstein


117 Comments
TimC
December 20, 2012 10:30 pm

Dr Glickstein (and mikerossander): many thanks and (referring to your December 20, 7:38 pm comment) I can no longer fault your analysis. In particular, I agree that there is likely to be a (probably self-serving) bias to the models, which to me also suggests “the dice are loaded”, leading to a higher probability of error than the original 1/10 for any single run – but not so high as 1/10,000 for 4 truly independent events.
Apropos of nothing, what actually first came to my mind (as a lawyer here in the UK for many years) was the notorious Sally Clark case here in the UK. She was wrongly convicted of the murder of two of her sons both of whom died suddenly within a few weeks of birth. A paediatrician gave evidence that in his opinion the chance of two children from her well-off background suffering sudden infant death syndrome was 1 in 73 million, taken by squaring 1/8500 (his estimate of the likelihood of a single cot death occurring in similar circumstances). The jury convicted on that evidence, despite the judge giving a warning of the possible “prosecutor’s fallacy”. She was imprisoned for life and had served 4 years when it emerged that the pathologist failed to disclose microbiological reports implying the possibility that at least one of her sons had died of (unlinked) natural causes. Her convictions were then overturned but (having lost two sons, then having been wrongly convicted of their murders) she never recovered – she died just 4 years later, aged 42. A very sad case.

JazzyT
December 21, 2012 1:07 am

The title of the post,
“An animated analysis of the IPCC AR5 graph shows ‘IPCC analysis methodology and computer models are seriously flawed’”
raises a question: Flawed for what purpose?
Obviously, current global temperatures are below what the models would have led us to believe. But the models can’t predict specific ENSO events in advance, or long-term solar output trends, at all. People who work with them, or are used to examining their output, know this, and can allow for the fact that unexpected ENSO events or solar forcings will give a real-life result that the models didn’t predict. But when the model results are presented to non-specialists, it’s hard to avoid this point being lost.
Foster and Rahmstorf have taken a stab at adjusting the temperature history for the ENSO/solar/volcanic history, with the aim of isolating the CO2 effects. They used a multivariate regression analysis, so the accuracy of their results will depend on whether the factors they examined affecting temperature (CO2, ENSO, solar output, and aerosols) leave out any significant contributors, and the extent to which their effects can, for the metrics they chose, be thrown together as linear, independent influences on temperature. (A skeleton of this kind of regression is sketched at the end of this comment.)
Models do include ENSO events at random, and it would be interesting to see what predictions came out when selecting runs with a strong El Nino bias in the late 1990s, and a strong La Nina bias recently. What I’d really like to see would be some models run using the known ENSO history and solar influences, for hindcasting. That would give a better idea of how well the models work, and what we might expect under various scenarios for future ENSO and solar influences.
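
In skeleton form, the Foster and Rahmstorf adjustment mentioned above is just a multiple regression. Here is a minimal sketch, with synthetic placeholder series standing in for the real MEI/TSI/aerosol indices (the actual analysis also fits lags and treats uncertainty more carefully):

    # Sketch of a Foster-and-Rahmstorf-style multivariate regression:
    # temperature regressed on a linear trend plus ENSO, solar, and
    # volcanic indices. All inputs below are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 264                                  # months, e.g. 1990-2011
    t = np.arange(n) / 12.0                  # time in years
    enso = rng.standard_normal(n)            # stand-in for an ENSO index
    solar = np.sin(2 * np.pi * t / 11.0)     # stand-in for the solar cycle
    volcanic = np.zeros(n)
    volcanic[12:48] = -1.0                   # stand-in for volcanic aerosols

    # Synthetic "observed" temperature built from known influences
    temp = (0.017 * t + 0.08 * enso + 0.05 * solar + 0.10 * volcanic
            + 0.05 * rng.standard_normal(n))

    # Least-squares fit: intercept, trend, and three exogenous factors
    X = np.column_stack([np.ones(n), t, enso, solar, volcanic])
    coef, *_ = np.linalg.lstsq(X, temp, rcond=None)

    # "Adjusted" temperature with the exogenous influences removed
    adjusted = temp - X[:, 2:] @ coef[2:]
    print(f"recovered trend: {coef[1]:.4f} C/yr")  # close to the 0.017 used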

Roger Longstaff
December 21, 2012 2:29 am

unha says: December 20, 2012 at 7:39 pm:
Thank you for raising the exponential rate of error accumulation in GCM time step integrations.
When I could not understand how climatologists thought that they could get sensible data from GCMs I did some checking and found out that the models use low pass filters between integration steps in order to preserve conservation of energy, mass and momentum, and to maintain “stability”. Even worse, they use pause/reset/restart techniques when physical laws are violated, or the “climate trajectory” breaches boundary conditions.
All of this tells me that what they are trying to do is mathematically impossible.
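
To illustrate the error-growth problem with a toy example: in any chaotic system, two numerical integrations that differ infinitesimally in their initial state diverge exponentially. The Lorenz system below is a classroom stand-in, not a GCM, but the qualitative sensitivity is the same:

    # Two Euler integrations of the Lorenz system, differing by 1e-9 in
    # the initial condition, diverge completely within a few dozen model
    # time units. A toy system, not a GCM.
    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        return state + dt * np.array([sigma * (y - x),
                                      x * (rho - z) - y,
                                      x * y - beta * z])

    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-9, 0.0, 0.0])   # perturbed initial condition

    for step in range(4000):
        a, b = lorenz_step(a), lorenz_step(b)
        if step % 1000 == 999:
            print(f"t={0.01 * (step + 1):5.1f}  separation={np.linalg.norm(a - b):.3e}")
    # Separation grows roughly exponentially until it saturates at the
    # size of the attractor; pointwise prediction fails even though the
    # attractor's statistics remain stable.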

mpainter
December 21, 2012 3:56 am

davidmhoffer says: December 20, 2012 at 8:56 pm
************
“While there is high agreement that the initialization consistently improves several aspects of climate (like North Atlantic SST, with more than 75% of the models agreeing on the improvement signal), there is also high agreement that it can consistently degrade others (like the equatorial Pacific temperatures).”
************
How much more obvious could it be?
=========================================
Exactly. A frank admission of the inadequacy of the models. Tinker here, and – oops! Contradicts the cite above from Ch. One, as given by Dr. Glickstein. The crack of Doom? Or a case of indiscipline?

herkimer
December 21, 2012 6:11 am

An analysis of past climate history shows that during the period 1870 to 1910, global air temperatures and global ocean surface temperatures both declined as the sunspot number declined. From 1910 to 1940 all three again moved up together. From 1940 to the 1970s, global ocean surface temperatures declined as they entered their cool mode and wiped out the global surface temperature rise from the continuing solar sunspot increase. From 1980 to 2000 all three variables again moved up in unison. During the last decade, 2000-2010, all three climate variables are again going down as global cooling again gets underway. This declining pattern is likely to continue until at least 2030. It would appear that a decadal average yearly sunspot number of about 30-45 is the tipping point: any level below this figure causes global cooling, and any level above it causes global warming, unless ocean cycles happen to be out of sync and override any warming [like the 1950s-1970s]. Most recently we have been running at an average yearly decadal sunspot number of 29.2 over the last 10 years. This low figure would explain why there has been no warming for the last 16 years and why instead we are starting to see global cooling like that of the period 1880-1910. Not enough solar energy is being put into the planet to cause any warming.
The average yearly sunspot numbers during the Dalton Minimum decades [1790 to 1837], a period of much colder temperatures like the period 1880-1910, were 27.5, 16.5, 19.3 and 39. So there is some convincing evidence that low solar sunspot numbers and declining global temperatures are directly linked, and we are already in a cooling phase like those before.

herkimer
December 21, 2012 6:23 am

IRA
The IPCC has been completely wrong in its winter climate predictions.
UNITED STATES
The winter temperatures for the contiguous United States have been dropping since 1990 at -0.26°F per decade [per NCDC].
The annual temperature for the contiguous United States has been dropping since 1998 at -0.80°F per decade [per NCDC].
Basically, US winter temperatures have been flat, with no warming, for 20 years.
CANADA
The annual temperature departure from the 1961-1990 averages has been flat since 1998.
The winter temperature anomaly has been rising, mostly due to warming of the far north and the Atlantic coast only.
8 of the 11 climate regions in other parts of Canada showed declining winter temperature departures since 1998.
During the 2011/2012 winter the Canadian Arctic showed declining winter temperature departures
Yet the IPCC assessment for North America was:
All of North America is very likely to warm during this century, and the annual mean warming is likely to exceed the global mean warming in most areas. In northern regions, warming is likely to be largest in winter, and in the southwest USA largest in summer. The lowest winter temperatures are likely to increase more than the average winter temperature in northern North America
EUROPE
The winter temperature departures from the 1961-1990 mean normals for land and sea regions of Europe have been flat, or even slightly dropping, for 20 years, i.e. since 1990.
Yet the IPCC assessments of projected climate change for Europe was:
Annual mean temperatures in Europe are likely to increase more than the global mean. The warming in northern Europe is likely to be largest in winter and that in the Mediterranean area largest in summer. The lowest winter temperatures are likely to increase more than average winter temperature in northern Europe
It is not happening

GregF
December 21, 2012 7:55 am

Ira,
I’m in general a sceptic, and I find the graph you pulled from the report highly confusing. I think the poor quality of the graph has led you to totally misread it and, worse, to misapply it.
For AR4 as an example, the starting point for the hindcast / forecast is clearly 1990.
If you want to eliminate the hindcast portion of the AR4 fan, then you need to start your AR4 line from the middle of the AR4 hindcast for 2007, then connect that line to the center of the 2012 forecast. The slope of that line would be totally different from the slope of the line you got by mixing a 2007 actual temp with a 2012 hindcast/forecast with a 1990 starting point.
As it currently is, I think your entire blog post should be withdrawn as simply being a misinterpretation of a really poorly done graph.
[GregF, you are entitled to your opinion. However, I find it somewhat ridiculous that the middle of the AR4 prediction fan (the brown and rust-colored band) is so far above the actual observations for the year before AR4 was issued as well as for the year AR4 was issued. It seems to me that a prediction should start with a known situation and predict the future from that point. Nevertheless, thanks for your input. – Ira]

davidmhoffer
December 21, 2012 9:10 am

Ira Glickstein;
Your comment led me to look at Chapter 11 and I found this amazing statement
>>>>>>>>>>>>>>
I’m only part way through it, but there are a few more beauts in there. One is that they predict 0.4 to 1.0 degrees of warming for 2016-2035 compared to 1986-2005, and they expect to be at the low end of that range. For starters, we are right now today at +0.2 compared to 1986-2005, so they only need +0.2 by 2016-2035 to hit their projection range. But they then hedge their bets further by stating that this is all based on the assumption that there will be rapid decreases in aerosol emissions over the next few years. No justification for the assumption that I can find, and it makes little sense to make such an assumption given the rapidly industrializing economies in China, India and Brazil, which will ramp up emissions far beyond what we can reduce them in the western world. Talk about a get out of jail free card! Nor can I find (so far anyway) how much of the warming they project is due to the decrease in aerosols that they project, so how much is actually left to attribute to CO2 is currently a mystery to me.
But here’s one that got the expletives going big time:
“It is virtually certain that globally-averaged surface and upper ocean (top 700m) temperatures averaged over 2016–2035 will be warmer than those averaged over 1986–2005”
Well duh! Since CURRENT temps are ALREADY 0.2 degrees above 1986-2005, we’d have to see a COOLING of 0.2 degrees by 2016-2035 for this to NOT be true!
And you have just got to love this one on surface ozone:
“There is high confidence that baseline surface ozone (O3) will change over the 21st century, although projections across the RCP, SRES, and alternative scenarios for different regions range from –4 to +5 ppb by 2030 and –14 to +15 ppb by 2100.”
Are they kidding? They are highly confident that it will be either higher, or lower, or about the same, but not exactly the same?
The more of it I read, the sadder it gets.

Tim Clark
December 21, 2012 12:03 pm

{ davidmhoffer says:
December 21, 2012 at 9:10 am }
RE:
“–4 to +5 ppb by 2030 and –14 to +15 ppb by 2100.”
LOL…Nice catch. But the real question is……(drumroll)
{ There is high confidence }
The best they can do is “high confidence”. I think it’s “very highly likely”, or maybe “almost certainly”, or even so far as, dare I go there, “irrefutably robust”.
;<)

JazzyT
December 21, 2012 11:13 pm

Ira Glickstein, PhD says:
December 21, 2012 at 7:57 am

It seems to me that we “non-specialists” who are not invested in the meme of human-caused Global Warming are more attuned to the abject failures of the IPCC models.

Well first, the bit about “non-specialists” was not intended as a jab at anyone, and I regret it if that’s how it came through.
But I’ll try to clarify what I meant. Suppose a model prediction persistently fails to match reality within a stated tolerance. (I say “persistently” because one excursion could be a statistical fluke.) Now, if the model diverges from reality because processes that were modeled gave incorrect answers, then the model is not working. However, if reality does not match prediction solely because of processes that were not modeled, then it’s not the model that’s failed, although the prediction has failed.
Is this what has been happening? I don’t know. ENSO processes can’t be predicted, so they are modeled randomly. The real-life events of a super El Nino in 1998 and double-dip La Ninas recently tend to flatten out temperatures. These won’t match the mean of model runs using random ENSO processes, some of which would raise the trend and others lower or flatten it. Weak solar output over this cycle and the last contributes more to the temperature flattening. How would the temperature curve have looked without these? Would it have matched the model predictions?
There’s been one statistical attempt to deal with all these processes, which could not have been included in model predictions (because they’re unpredictable). But that gives the best fit to the data, which is not necessarily the most physically plausible interpretation. That’s why I’d like to see some model runs that actually include the ENSO and solar events of the last 15 years, as they actually happened. That would have a lot to say about how well the model is working in general.
Now, the climate modelers understand these issues very well. They may be exposed to the risk of confusing models with reality, but they do know what’s in the models and what isn’t. When I see a peer-reviewed article about models, the language seems appropriately cautious, trying to state simplifying assumptions and areas of uncertainty. When it gets into the IPCC scientific summary, it gets compressed and these caveats lose detail. In the summary for policymakers, these technical details are likely to be left out. By the time it has been digested by the mass media, possibly several times, it has been passed on to people who have no reason to worry about how the models work. At this point, they see the prediction, but none of the caveats.
So, the divergence of models from reality is clearly due, partly, to things that just weren’t modeled. But the predictions, as communicated to the public, didn’t include that as a possibility. So, if you want to define a model at each stage–modelers, two (or three) layers of IPCC, and one or more runs through the mass media–well, the end prediction could be called a model too. And, the predictions that came out at the end certainly didn’t work. And that’s a problem. How much of it was in the code and how much in the communication–that’s what I’d like to find out.

Martin Lewitt
December 22, 2012 7:58 am

JazzyT, The communication ignores more than just the possibility of divergence because of things that weren’t modeled, like volcanoes, ENSO and a change in solar activity. It also generally ignores the diagnostic literature documenting problems in the things that were modeled. Models may not seem that far wrong when consideration is given to the things that could not be modeled in advance, but they can achieve that by just following the trend linearly for a while. They diverge from that in longer range projections, and are not credible when we know they have “matched” the climate incorrectly. They have documented correlated errors larger than the phenomenon of interest.

JazzyT
December 23, 2012 2:42 am

Ira Glickstein, PhD says:
December 22, 2012 at 8:08 am

[Quoting me]

However, if reality does not match prediction solely because of processes that were not modeled, then it’s not the model that’s failed, although the prediction has failed.

In other words, “the operation was a success but the patient died.” :^)

This happens sometimes. But if the patient died in a traffic accident as their spouse was driving them home from the hospital, it would take a rather brazen lawyer to sue the surgeon for malpractice. :->
But we’re on the same page as far as what’s in and out of the models.
I couldn’t help noticing something else, and I’m surprised I didn’t see it come up in the thread. With the arrow metaphor, of course, we score a hit when the arrow hits the target. The target, in this case, could be the actual temperature…or, you could say that the temperature was the bulls-eye, and the scoring rings extend to the edge of the error bars. But 2012 has no error bars, and when viewing the animation, the eye naturally goes to the last year with error bars, 2011. Two of the arrows, SAR and AR4, actually hit 2011, not in the bulls-eye, by any means, but still in scoring range. It’s the same for 2010. The arrows would probably not hit the error bars for 2012 once those are available, but insisting on using 2012, and disregarding the two previous years would invite a charge of cherry-picking.
Others have covered things like picking the starting point, how to get the slope, etc. I’ll only add that I’m old enough to have learned how to do a linear fit to the data by eye (and, in fact, they still have students do this at least once or twice in a college physics lab, to make the students interact with their data). When I do that, I get a slope that is, by eye, slightly lower than that of FAR, higher than SAR, lower than TAR, and distinctly lower than AR4.
But it seems strange to compare the slope for the entire series with the slopes for each model. Why would each model’s predictions for the future be tested against the past? It seems that you’d want four slopes for measured data that start at the time of each model’s predictions. But then, AR4’s would be completely impractical due to the short time interval, and TAR’s could be dodgy as well. (A quick numerical version of this slope comparison is sketched below.)
If you want to do this again when 2012 data are complete, well, those are the issues I noticed, which others would surely notice if this is released to a wider audience. Now they’re in the same pile as everyone else’s comments; some stuff from that pile will probably be useful for the next version.
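
The slope comparison above is easy to reproduce numerically. A minimal sketch, using an illustrative placeholder anomaly series rather than the actual data:

    # Least-squares slope of the observed anomalies, fit from each
    # report's start year rather than over the whole 1990-2011 span.
    # The anomaly series below is an illustrative placeholder.
    import numpy as np

    years = np.arange(1990, 2012)
    anoms = 0.28 + 0.007 * (years - 1990) + 0.05 * np.sin(years)

    for start in (1990, 1996, 2001, 2007):   # FAR, SAR, TAR, AR4
        mask = years >= start
        slope = np.polyfit(years[mask], anoms[mask], 1)[0]
        print(f"from {start}: {slope * 10:+.3f} C/decade ({mask.sum()} points)")
    # The shorter the interval, the noisier the fitted slope, which is
    # why testing AR4 against only five years of data is impractical.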

Roger Knights
December 23, 2012 5:18 am

davidmhoffer says:
December 21, 2012 at 9:10 am
Ira Glickstein;
Your comment led me to look at Chapter 11 and I found this amazing statement
>>>>>>>>>>>>>>

“It is virtually certain that globally-averaged surface and upper ocean (top 700m) temperatures averaged over 2016–2035 will be warmer than those averaged over 1986–2005″

Well duh! Since CURRENT temps are ALREADY 0.2 degrees above 1986-2005, we’d have to see a COOLING of 0.2 degrees by 2016-2035 for this to NOT be true!

Wouldn’t it be a hoot if that’s what actually happens! (I suspect the Pranksters on Olympus are thinking the same way.)

Roger Knights
December 23, 2012 5:30 am

JazzyT says:
But 2012 [correction: 2011] has no error bars, and when viewing the animation, the eye naturally goes to the last year with error bars, 2011 [correction: 2010].

As per my comment upthread:

Roger Knights says:
December 19, 2012 at 11:02 pm
There’s an error in the chart. The oval labeled “2012” should read “2011,” and the heading “1990 to 2012” should read “1990 thru 2011”. The last year, shown by vertical bars or dots on the chart, is 2011, not 2012. (2012 will be somewhere between 2010 and 2011.)

herkimer
December 23, 2012 5:48 am

Ira
“Those of us who come up with scientific theories and make predictions about the future know that no model can capture the total reality, because, if it did, it would BE the reality.”
In my opinion, there is nothing wrong with scientists doing model work to understand the climate. Personally I think one is trying to model something that has too many variables that cannot be predicted or modeled completely. However, where I have a more serious concern is when unproven and purely experimental models are portrayed as solid science and are thrust into the public domain to shape public policy. This is very expensive, wasteful and burdensome on society. These models should remain experimental until there is sufficient evidence that they have a high level of success. In my judgment, we are decades away from that point when it comes to climate.

herkimer
December 23, 2012 6:52 am

There used to be a rule of thumb in engineering work that one should make all changes or alternate-options studies during the conceptual design stage, because if you make major changes as you progress from concept to detail design to procurement and finally construction, the costs go up progressively and can be 100 to 1000 fold higher than during the concept stage. Yet when it comes to climate science we are doing exactly the opposite. We are into the implementation and construction stage when it comes to energy changes, environmental actions and public policy, while the models are still in the concept and unproven stage. So the whole planet is now like a big experiment where these scientists are allowed to play around with public resources, energy options and taxpayers’ money based only on questionable science and unproven models, most of which have been seriously wrong predicting just the first few years ahead. Successful hindcasting does not prove a model, as it is too easy to feed in fudge factors and tweak the model to give a known answer. Successfully predicting decades into the future is the only true test, in my opinion.

Gail Combs
December 23, 2012 7:36 am

herkimer says: December 23, 2012 at 6:52 am
There used to be a rule of thumb in engineering work , that one should make all your changes or alternate options studies during the conceptual design stages because if you make major changes as you progress from concept to detail design to procurement and finally construction, the costs go up progressively and they can be 100 to a 1000 fold higher than during the concept stage…..
>>>>>>>>>>>>>>>>>>>>>>>>>
And any company that has its head on straight gathers all of its technical personnel together to have a go at ripping the design to shreds while it is in the pilot stage, BEFORE it gets expensive.
This is what the most successful company I worked for did, with very good results. Sadly it is not common, because of the delicate sensibilities of the scientists/engineers who head projects and who cannot stand criticism. It takes a brave soul to present his ‘baby’ to the critiquing wolves.

Gail Combs
December 23, 2012 11:41 am

Ira Glickstein, PhD says:
December 23, 2012 at 10:51 am
….As you point out, blindly accepting the catastrophic predictions of climate models based on flawed Climate Theory has wasted taxpayer money. IMHO, public funding of harebrained “green” energy schemes has benefited no one but the Official Climate Establishment and politically-connected industries. Theories must be VALIDATED before predictions based on them are implemented on any large scale.
>>>>>>>>>>>>>>>>>>>>>>>>>
Too bad the run-of-the-mill taxpayer who is being scammed cannot see that. One wonders just how bad the backlash will be when realization hits. Given the acceptance of the banker bailout fiasco by those who were conned, it looks like everyone will take it lying down... or maybe not.
I think a friend’s four year old had the right idea when she said she wanted to grow up to be a government. (She now works in DC)

herkimer
December 23, 2012 12:42 pm

This entire exercise, I think, has been made considerably worse by having the scientific and political mandates together at the UN/IPCC, where the political objective of collecting money and redistributing wealth dictates the scientific mandate and clouds scientific objectivity. Things are being rushed where there is no reason to rush, as we now see that the warming will not be anywhere near the rate predicted. We have the time to do things right, with the right science.

Rob Nicholls
December 23, 2012 1:47 pm

I’m assuming that the data points in the graphic only go as far as 2011 (?)
How was the increase in temperature of between 0.12 and 0.16 degrees C, between 1990 and 2011, calculated in the animated graphic? It appears to me that this was done using only the first and last data points in the chart (1990 and 2011). If so, then I don’t think this is the best method for estimating the increase in temperature. I think linear regression would be better, as it uses all of the data points and thus reduces the influence of year-to-year variability.
Using annual global combined (land and ocean) surface temperature anomaly data from 3 data sets (GISS, HadCrut4, NOAA/NCDC), I calculated the slope of the regression line between 1990 and 2011, and estimated the increase in temperature in degrees C between 1990 and 2011 to be 0.33 for HadCrut4, 0.33 for NOAA/NCDC, and 0.37 for GISS.
Admittedly, the estimates obtained above are most likely too high, as the slope of the regression line would be steepened by Mount Pinatubo erupting in 1991, so I did 2 very simple alternative analyses to adjust for this:
Firstly, I re-calculated the temperature anomalies for 1991, 1992, 1993 and 1994 as the average of the anomalies for 1990 and 1995. When I did this, the increase in temperature in degrees C between 1990 and 2011 was estimated to be 0.23 for HadCrut4, 0.23 for NOAA/NCDC, and 0.25 for GISS.
Secondly, I re-calculated the temperature anomalies for 1991, 1992, 1993 and 1994 using simple linear interpolation (from the temperature anomalies for 1990 and 1995). This gave identical results to 2 decimal places (i.e. 0.23 degrees C for HadCrut4, 0.23 for NOAA/NCDC, and 0.25 for GISS). (Both approaches are sketched in code at the end of this comment.)
Therefore, unless I’m missing something, or unless I’ve made a mistake in my calculations, the graphic’s suggestion that the actual increase in global surface temperature from 1990 to 2011 was between 0.12 and 0.16 degrees C seems misleading to me.
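
The two methods compared above, endpoint difference versus regression with the Pinatubo years interpolated away, look like this in skeleton form. The anomaly series here is a synthetic placeholder, not the actual HadCrut4/NOAA/GISS data:

    # (1) endpoint difference 1990 vs 2011, and (2) least-squares trend
    # over 1990-2011 with 1991-1994 re-filled by linear interpolation
    # between 1990 and 1995. Synthetic placeholder data throughout.
    import numpy as np

    years = np.arange(1990, 2012)
    anoms = (0.25 + 0.015 * (years - 1990)
             + 0.06 * np.random.default_rng(1).standard_normal(years.size))

    # (1) Endpoint method: difference of first and last points only
    endpoint_rise = anoms[-1] - anoms[0]

    # (2) Regression method, after interpolating across 1991-1994
    adj = anoms.copy()
    adj[1:5] = np.interp(years[1:5], [1990, 1995], [anoms[0], anoms[5]])
    slope = np.polyfit(years, adj, 1)[0]
    regression_rise = slope * (years[-1] - years[0])

    print(f"endpoint estimate:   {endpoint_rise:+.2f} C")
    print(f"regression estimate: {regression_rise:+.2f} C")
    # Regression uses all points, so a single anomalous endpoint year
    # cannot dominate the estimate the way it can with method (1).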