An animated analysis of the IPCC AR5 graph shows 'IPCC analysis methodology and computer models are seriously flawed'

This post made me think of this poem, The Arrow and the Song. The arrows are the forecasts, and the song is the IPCC report – Anthony

I shot an arrow into the air,

It fell to earth, I knew not where;

For, so swiftly it flew, the sight

Could not follow it in its flight.

I breathed a song into the air,

It fell to earth, I knew not where;

For who has sight so keen and strong,

That it can follow the flight of song?

– Henry Wadsworth Longfellow

Guest Post by Ira Glickstein.

The animated graphic is based on Figure 1-4 from the recently leaked IPCC AR5 draft document. This one chart is all we need to prove, without a doubt, that IPCC analysis methodology and computer models are seriously flawed. They have way over-estimated the extent of Global Warming ever since the IPCC first started issuing Assessment Reports in 1990, continuing through the fourth report issued in 2007.

When actual observations over a period of up to 22 years substantially contradict predictions based on a given climate theory, that theory must be greatly modified or completely discarded.

IPCC AR5 draft figure 1-4 with animated central Global Warming predictions from FAR (1990), SAR (1996), TAR (2001), and AR4 (2007).

IPCC SHOT FOUR “ARROWS” – ALL HIT WAY TOO HIGH FOR 2012

The animation shows arrows representing the central estimates of how much the IPCC officially predicted the Earth surface temperature “anomaly” would increase from 1990 to 2012. The estimates are from the First Assessment Report (FAR-1990), the Second (SAR-1996), the Third (TAR-2001), and the Fourth (AR4-2007). Each arrow is aimed at the center of its corresponding colored “whisker” at the right edge of the base figure.

The circle at the tail of each arrow indicates the Global temperature in the year the given assessment report was issued. The first head on each arrow represents the central IPCC prediction for 2012. They all over-predict warming from 1990 to 2012 by a factor of two to three. The dashed line and second arrowhead represent the central IPCC predictions for 2015.

Actual Global Warming from 1990 to 2012 (indicated by black bars in the base graphic) varies from year to year. However, net warming between 1990 and 2012 is in the range of 0.12 to 0.16˚C (indicated by the black arrow in the animation). The central predictions from the four reports (indicated by the colored arrows in the animation) range from 0.3˚C to 0.5˚C, which is about two to five times greater than the actual measured net warming.

The colored bands in the base IPCC graphic indicate the 90% range of uncertainty above and below the central predictions calculated by the IPCC when they issued the assessment reports. 90% certainty means there is only one chance in ten the actual observations will fall outside the colored bands.

The IPCC has issued four reports, so, given 90% certainty for each report, there should be only one chance in 10,000 (ten times ten times ten times ten) that they got it wrong four times in a row. But they did! Please note that the colored bands, wide as they are, do not go low enough to contain the actual observations for Global Temperature reported by the IPCC for 2012.
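For readers who want to check the arithmetic, here is a minimal sketch of the calculation above. It assumes, as the post does (and as commenter TimC questions further down), that the four reports' 90% ranges can be treated as independent trials; the numbers are illustrative only.

```python
# Minimal sketch of the probability arithmetic above, assuming (as the post
# does) that the four reports' 90% ranges behave like independent trials.
p_miss_one = 1 - 0.90          # chance observations fall outside a single report's band
p_miss_all_four = p_miss_one ** 4

print(f"Chance a single report misses: {p_miss_one:.2f}")          # 0.10
print(f"Chance all four reports miss:  {p_miss_all_four:.4f}")     # 0.0001
print(f"That is 1 chance in {1 / p_miss_all_four:,.0f}")           # 1 in 10,000
```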

Thus, the IPCC predictions for 2012 are several times higher than the warming that actually occurred! Although the analysts and modelers claimed their predictions were 90% certain, it is now clear they were far from that mark with each and every prediction.

IPCC PREDICTIONS FOR 2015 – AND IRA’S

The colored bands extend to 2015, as do the central prediction arrows in the animation. The arrowheads at the ends of the dashed portions indicate the IPCC central predictions for the Global temperature “anomaly” for 2015. My black arrow, from the actual 1990 Global temperature “anomaly” to the actual 2012 temperature “anomaly”, also extends out to 2015; let that be my prediction for 2015:

  • IPCC FAR Prediction for 2015: 0.88˚C (1.2 to 0.56)
  • IPCC SAR Prediction for 2015: 0.64˚C (0.75 to 0.52)
  • IPCC TAR Prediction for 2015: 0.77˚C (0.98 to 0.55)
  • IPCC AR4 Prediction for 2015: 0.79˚C (0.96 to 0.61)
  • Ira Glickstein’s Central Prediction for 2015: 0.46˚C

Please note that the temperature “anomaly” for 1990 is 0.28˚C, so that amount must be subtracted from the above estimates to calculate the amount of warming predicted for the period from 1990 to 2015.
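As a quick worked example (a sketch only, using the central values listed above and the 0.28˚C baseline the post gives for 1990), the implied 1990-to-2015 warming for each report can be computed like this:

```python
# Sketch: subtract the 0.28 C anomaly for 1990 from each central 2015
# prediction listed above to get the warming implied for 1990-2015.
baseline_1990 = 0.28  # deg C, the 1990 "anomaly" quoted in the post

central_2015 = {
    "FAR": 0.88,
    "SAR": 0.64,
    "TAR": 0.77,
    "AR4": 0.79,
    "Ira": 0.46,
}

for report, anomaly in central_2015.items():
    warming = anomaly - baseline_1990
    print(f"{report}: {anomaly:.2f} - {baseline_1990:.2f} = {warming:.2f} C of predicted warming")
# FAR 0.60, SAR 0.36, TAR 0.49, AR4 0.51, Ira 0.18
```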

IF THEORY DIFFERS FROM OBSERVATIONS, THE THEORY IS WRONG

As Feynman famously pointed out, when actual observations over a period of time contradict predictions based on a given theory, that theory is wrong!

Global temperature observations over the more than two decades since the First IPCC Assessment Report demonstrate that the IPCC climate theory, and models based on that theory, are wrong. Therefore, they must be greatly modified or completely discarded. The scattershot “arrows” in the graphic show that the IPCC has neither learned much from its misguided theories and flawed models nor improved them over the past two decades, so I cannot hold out much hope for the final version of their Assessment Report #5 (AR5).

Keep in mind that the final AR5 is scheduled to be issued in 2013. It is uncertain if Figure 1-4, the most honest IPCC effort of which I am aware, will survive through the final cut. We shall see.

Ira Glickstein

Pat Frank

It’s very important in this debate to not accept IPCC outputs at face value. Doing so yields far too much ground.
None of the IPCC predictions include physically valid error bars. Therefore: none of the IPCC predictions are predictions. Those T vs time projections are physically meaningless.
We’ve all known for years that models are unreliable. Demetris Koutsoyiannis’ papers showed that unambiguously.
For example: Anagnostopoulos, G. G., D. Koutsoyiannis, A. Christofides, A. Efstratiadis, and N. Mamassis (2010), A comparison of local and aggregated climate model outputs with observed data, Hydrological Sciences Journal, 55(7), 1094–1110.
Abstract: We compare the output of various climate models to temperature and precipitation observations at 55 points around the globe. We spatially aggregate model output and observations over the contiguous USA using data from 70 stations, and we perform comparison at several temporal scales, including a climatic (30-year) scale. Besides confirming the findings of a previous assessment study that model projections at point scale are poor, results show that the spatially integrated projections do not correspond to reality any better.
I’ve not checked yet, but would be unsurprised if that paper does not appear in the AR5 SOD reference list.

So all they have to do to make their models work is divide their CO2 sensitivity (fudge factor) by two or three. That still would not explain a probable future downward trend in global temperature.

The FAR, SAR and TAR arrows appear to me to be drawn landing slightly above the midpoints of their target ranges. It appears to me that this can create an appearance of exaggerating the IPCC projections.

I see a need for the IPCC to adjust itself to some recent helpings of reality, and for their favored scientists to adjust themselves to reality, as opposed to totally discarding their previous findings.
Let’s see what the next decade or two brings. We are going into a combined minimum of the ~60-year and ~210-year solar cycles, likely to bottom out close to the minimum of the ~11-year cycle and the ~22-year “Hale cycle”, which will probably be in the early (possibly mid) 2030s. It looks to me like this will be a short, steep-and-deep solar minimum as far as ~210-year-class ones go.
As for the effect on global temperature: I expect global temperature sensitivity to solar activity to be just high enough, and global temperature sensitivity to CO2 to be just low enough (after applicable feedbacks), that global temperature will roughly hold steady over the next 20 years. There is a fair chance it will decrease by 1/10 of a degree C.
I feel sorry for England and nearby parts of “continental Europe”, and for the northeastern USA and some nearby parts of Canada. It appears to me that dips in solar activity, including the otherwise-probably-insignificant ~22-year Hale cycle, hit these regions hard.

John West

Dr. Ira Glickstein
This is great! If I could suggest a possible improvement to the visualization: a separate “actual” line starting at each IPCC release point, or perhaps at the submission cut-off dates. The observed lines would get progressively flatter from FAR to AR4, illustrating, even to those less scientifically inclined, that the IPCC reports get farther and farther from reality.

Goldie

I do wish people would stop drawing straight lines through this stuff as if it proved anything. What is the likelihood that a system as complex as the Earth’s climate system responds in a linear fashion?

Lew Skannen

“As Feynman famously pointed out, when actual observations over a period of time contradict predictions based on a given theory, that theory is wrong!”
A rather radical idea. I can’t see that catching on at the IPCC.

Paul Linsay

Not to belittle Feynman, but he was just explaining how science has been done since Galileo’s time.

Bob

The facts are that the news speakers quote unprecedented heat and continued warming. This year was the warmest in history. Heck I heard a representative of the ski industry bemoan warm weather and attribute it to global warming which if we don’t do something now will go up 4-10 degrees by 2100. The news reader agreed. Hard to imagine a spokesman for CO2 reduction representing a leisure industry with a higher carbon footprint.
Logic has lost. End of the world, doomsday, repent-the-end-is-nigh has won.

u.k.(us)

The Sirens song…..is for another post.
But, Anthony started it 🙂
It has its parallels.
Sorry all.

thingadonta

Yeah, been reading some alarmist excuses, which essentially state that the predictions of the IPCC in 1990 were right, even though they are now wrong, because once you have made ‘adjustments’ to the temperature trend since 1990 due to the lack of volcanic activity and ENSO, the IPCC predictions of 1990 are spot on.
In other words, what the alarmists are saying is this: I predict the New York Giants will beat the San Francisco 49ers. But when the 49ers win, I can say my prediction was correct, because the New York Giants would have won if the 49ers hadn’t scored so many touchdowns, kicked so many goals, and intercepted so many passes.
This is where science has passed into fantasyland.

taxed

I think things are only going to get worse for the IPCC, as I am seeing increasing signs of climate cooling within the global weather system.
I think the best they can hope for is that the temps will remain flat.

Mike Bromley the Canucklehead

I stand in awe of the IPCC. An organization that, over a period of nearly 25 years, has produced more meaningless fluff than can be imagined. I’d like to say “you just can’t make this stuff up”, but it really looks as if they have. Remember that this so-called ‘global’ warming is 0.16 of a degree. You cannot actually measure this change with instruments; you have to coax it out of data purporting to represent an ‘average’ temperature relative to an arbitrarily-determined baseline (oh, sure, you could argue that the baseline is somehow meaningful, but c’mon! In relation to what?). We are talking billions of dollars and millions of air miles to determine something so tiny? And just how, in the minds of the warm-mongers, can such a small amount of heat translate into such a dramatic scenario of destruction like Hurricane Sandy? Or all the other grand leaps in intensity caused by a basically immeasurable change? It boggles the imagination.
Listening to the meme-spouters shriek and wring their hands, while “Prominent Professors” at Berkeley and elsewhere translate this into the stuff of moral decay, makes one wonder: Has academia gone insane? Better yet, haven’t they something better to do than to force-feed us all of this snake oil?
Scepticism about this dog-and-pony show is almost silly if you look at it this way, but sceptics must keep revealing the truth as much as they can…even if it means an apparent waste of time. The alternative is the insidious creeping cancer of control by organizations like the UNFCCC. This cannot be permitted, ultimately. How can it continue….? I am glad that AR5 leaked. It shows, once again, the inner workings of a juggernaut swollen with special interests and agenda scientists, continuing gleefully–despite exposés like Donna’s book–to produce reams of meaningless drivel aimed at the ignorant and fearful.

Common sense and ice-core data are sufficient to demonstrate that CO2 sensitivity MUST be low.
First, in the core data, T always changes direction before CO2 changes. So CO2 cannot be the leading factor.
Second, T always starts to rise when CO2 is at its lowest concentration. Similarly, T always begins to fall when CO2 is at its highest concentration. QED, CO2 cannot be the driving factor.
Tmax in interglacials and Tmin in full glacial periods are always about the same values. So the factors that affect T ranges must operate independently of humans, who have only [potentially] had any influence in the last 70 years.
Can we now dispense with this dross and actually focus on real problems ???

hikeforpics

Ha ha – now that graph is a very “Inconvenient Truth”.
Of course, CO2 lagging temperature increases in the ice-core graphs was, in truth, ignored by the movie of the same name, since it falsified their basic premise.

Bob says:
December 19, 2012 at 6:43 pm
The facts are that the news speakers quote unprecedented heat and continued warming. This year was the warmest in history. Heck I heard a representative of the ski industry bemoan warm weather and attribute it to global warming which if we don’t do something now will go up 4-10 degrees by 2100.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I wouldn’t worry too much about the ski industry; at least in western Canada and Utah, we are having record snowfalls for this time of year. Skiing is as good as mid-season already in lots of areas.
http://www.revelstokemountainresort.com/conditions/historical-snowfall

Gunga Din

I’ve noticed some people saying the models are getting better. (Didn’t they just use 2 supercomputers on the latest and greatest?) That implies the past ones needed improvement. Have any warmists ever admitted even that, that the models need improvement? Let alone admit they’ve been just plain wrong? Yet they still insist we take immediate action based on the past flawed models.
It seems this whole mess started with Hansen’s predictions. Yet people still cling to them and his solutions to what hasn’t happened as he said it would.
I think I’ll buy a snowblower after all.

TimC

Dr Glickstein said “The IPCC has issued four reports, so, given 90% certainty for each report, there should be only one chance in 10,000 (ten times ten times ten times ten) that they got it wrong four times in a row. But they did! Please note that the colored bands, wide as they are, do not go low enough to contain the actual observations for Global Temperature reported by the IPCC for 2012.”
Steady on: isn’t that an example of “prosecutor’s fallacy”, in treating the small probabilities multiplicatively? Surely it’s more likely that there was just systematic bias in the separate reports (which were of course ultimately under political control).
[Tim, thanks for your comment. I looked up “prosecutor’s fallacy” and it did not seem to me to apply in this case. Consider throwing a single fair die four times. The probability of getting a “1” on any throw is one in six, so the probability of getting four “1” results in a row is 1/(6 x 6 x 6 x 6) = 1/1296. If a prediction based on a given theory and associated computer model is supposed to be 90% certain, the probability it is wrong is one in ten. If the same theory and computer model is run again several years later, the chance that both are wrong is one in ten times ten, and so on for the four IPCC Assessment Reports. Please be more specific on where you think I went wrong with this simple mathematical reasoning. advTHANKSance.
Of course I know that the IPCC changed their computer models to some extent each time, and the data they used included some new observations, but the fact they missed the mark four times in a row indicates that they have not changed their underlying climate model, based on an over-estimate of climate sensitivity to CO2 levels and an under-estimate of natural cycles of the Earth and Sun. They are wedded to the same -now discredited- climate theory because they are politically motivated (IMHO) to want to believe that human activities, such as our unprecedented burning of fossil fuels and land use that changes the albedo of the Earth, are the main cause of the Global Warming we have experienced over the past century or so. If they change their theory, and accept the Svensmark explanation that solar cycles, not under human control or influence, affect cosmic rays and that cosmic rays affect cloud formation that, in turn, affects net solar radiation absorbed by the Earth/Atmosphere system, they will lose their government funding and their political goals will be frustrated. Ira]

William Tell

I shot an arrow into the air,
It fell to earth, I knew not where;
I lose more damn arrows that way!

G. Karst

If the CO2 glove does not fit… then we must acquit. GK

Justthinkin

At a total loss for words… ERCK… UGH… PFFFT… And W. Tell, it fell into the butt of some greenie screaming for more… you are being hacked… or WordPress needs better servers.

northernont

Without the fudged data supporting the alarmist view, the IPCC becomes irrelevant. Does anybody really think the IPCC will advocate themselves out of existence?

mpainter

Ira Glickstein:
Thanks for this. The models are even worse than I imagined. I note the AR4 projection has the steepest slope of all, as if they hope to make up for lost time. The modelers’ great strength is that they don’t care how ridiculous they appear.

RobW

Sorry if this question is spelled out somewhere, but please tell me why the graph of temp v. time starts at +0.25 degrees instead of the 0 point for 1990?

jayhd

The IPCC and its contributing “scientists” have only been following this corollary of Murphy’s Law – First draw your graph, then plot the data that agrees with the graph. Until recently, I thought only high school students and undergraduates did this.

Werner Brozek

Are you sure you are going to 2012 and not 2011? 2010 was a very warm year and the next one would be 2011. However in the end, the conclusion is about the same since 2012 is just a bit warmer than 2011 so far, but since the graphs move up as well, the effects almost cancel. You do not say which data set is being used, but the latest 2012 anomaly and the 2011 anomalies for 6 sets are shown below.
2012 in Perspective so far on Six Data Sets
Note the bolded numbers for each data set where the lower bolded number is the highest anomaly recorded so far in 2012 and the higher one is the all time record so far. There is no comparison.

With the UAH anomaly for November at 0.281, the average for the first eleven months of the year is (-0.134 -0.135 + 0.051 + 0.232 + 0.179 + 0.235 + 0.130 + 0.208 + 0.339 + 0.333 + 0.281)/11 = 0.156. This would rank 9th if it stayed this way. 1998 was the warmest at 0.42. The highest ever monthly anomaly was in April of 1998 when it reached 0.66. The anomaly in 2011 was 0.132.
With the GISS anomaly for November at 0.68, the average for the first eleven months of the year is (0.32 + 0.37 + 0.45 + 0.54 + 0.67 + 0.56 + 0.46 + 0.58 + 0.62 + 0.68 + 0.68)/11 = 0.54. This would rank 9th if it stayed this way. 2010 was the warmest at 0.63. The highest ever monthly anomalies were in March of 2002 and January of 2007 when it reached 0.89. The anomaly in 2011 was 0.514.
With the Hadcrut3 anomaly for October at 0.486, the average for the first ten months of the year is (0.217 + 0.193 + 0.305 + 0.481 + 0.475 + 0.477 + 0.448 + 0.512 + 0.515 + 0.486)/10 = 0.411. This would rank 9th if it stayed this way. 1998 was the warmest at 0.548. The highest ever monthly anomaly was in February of 1998 when it reached 0.756. One has to go back to the 1940s to find the previous time that a Hadcrut3 record was not beaten in 10 years or less. The anomaly in 2011 was 0.340.
With the sea surface anomaly for October at 0.428, the average for the first ten months of the year is (0.203 + 0.230 + 0.241 + 0.292 + 0.339 + 0.351 + 0.385 + 0.440 + 0.449 + 0.428)/10 = 0.336. This would rank 9th if it stayed this way. 1998 was the warmest at 0.451. The highest ever monthly anomaly was in August of 1998 when it reached 0.555. The anomaly in 2011 was 0.273.
With the RSS anomaly for November at 0.195, the average for the first eleven months of the year is (-0.060 -0.123 + 0.071 + 0.330 + 0.231 + 0.337 + 0.290 + 0.255 + 0.383 + 0.294 + 0.195)/11 = 0.200. This would rank 11th if it stayed this way. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. The anomaly in 2011 was 0.147.
With the Hadcrut4 anomaly for October at 0.518, the average for the first ten months of the year is (0.288 + 0.209 + 0.339 + 0.526 + 0.531 + 0.501 + 0.469 + 0.529 + 0.516 + 0.518)/10 = 0.443. This would rank 9th if it stayed this way. 2010 was the warmest at 0.54. The highest ever monthly anomaly was in January of 2007 when it reached 0.818. The anomaly in 2011 was 0.399.
On all six of the above data sets, a record is out of reach.
[Werner Brozek: Thanks, you are correct that the base chart shows observed temperature “anomaly” only up to 2011, not 2012. I used 2012 in my annotations with the hope that, when the official AR5 is released in 2013, they will include an updated version of this Figure 1-4 with 2012 observed data. Please notice that I drew my black arrow through the higher of the two black temperature observations for 2011, which kind of allows for 2012 being a bit warmer than 2011. As you point out, “… in the end, the conclusion is about the same since 2012 is just a bit warmer than 2011 so far, but since the graphs move up as well, the effects almost cancel.” – Ira]
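A minimal sketch of how Werner Brozek’s year-to-date averages can be reproduced, using the UAH monthly anomalies he lists above; the values are copied from his comment, not pulled from the UAH dataset itself, and the other data sets follow the same pattern.

```python
# Reproduce the UAH Jan-Nov 2012 year-to-date average from the monthly
# anomalies listed in the comment above (values copied from the comment).
uah_2012 = [-0.134, -0.135, 0.051, 0.232, 0.179, 0.235,
            0.130, 0.208, 0.339, 0.333, 0.281]   # Jan through Nov

ytd_average = sum(uah_2012) / len(uah_2012)
print(f"UAH 2012 year-to-date average anomaly: {ytd_average:.3f} C")   # ~0.156
```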

MattS

“IPCC SHOT FOUR “ARROWS” – ALL HIT WAY TOO HIGH FOR 2012”
Not completely accurate: the 4th arrow went so high it didn’t hit anything and is currently chasing the Voyager space probes.

A Crooks

Hey, you couldn’t do the same type of animation for the methane predictions, could you? I think that would be even funnier. Talk about desperation in the face of real data.
Cheers

tokyoboy

I believe they should compare the trend of “business as usual” scenario, and not that of the “center line”, let alone the lower end, with the measured temp trend. This is because things (esp. CO2 emission) have proceeded at least in a BAU mode, and actually in a faster-than-BAU mode, due to rapid industrialization of China, India etc.
But then, it is unmistakably clear that the two trends are far, far, far apart from each other.
IIRC, Lance Wallace said similarly on another thread today or yesterday.

E.M. Smith

They are about to miss even more (further?)
http://rt.com/news/russia-freeze-cold-temperature-379/

Russia is enduring its harshest winter in over 70 years, with temperatures plunging as low as -50 degrees Celsius. Dozens of people have already died, and almost 150 have been hospitalized.
The country has not witnessed such a long cold spell since 1938, meteorologists said, with temperatures 10 to 15 degrees lower than the seasonal norm all over Russia.
Across the country, 45 people have died due to the cold, and 266 have been taken to hospitals. In total, 542 people were injured due to the freezing temperatures, RIA Novosti reported.
The Moscow region saw temperatures of -17 to -18 degrees Celsius on Wednesday, and the record cold temperatures are expected to linger for at least three more days. Thermometers in Siberia touched -50 degrees Celsius, which is also abnormal for December.

h/t to BobN who pointed me at it…
So about those land temperatures… which way they gonna go?…

john robertson

By the time Hansen and friends massage the Russian and Arctic winter temperatures, 2012 will be a new record high, just ignore the minus sign again or invert the data no problem at all.
Are politicians and bureaucrats capable of remorse?
So much ado over so little, an almost unmeasurable imagined change.

tokyoboy says:
December 19, 2012 at 9:00 pm
I believe they should compare the trend of “business as usual” scenario, and not that of the “center line”, let alone the lower end, with the measured temp trend. This is because things (esp. CO2 emission) have proceeded at least in a BAU mode,

It would be a good addition.

AndyG55

E.M.Smith says “So about those land temperatures… which way they gonna go?…”
Now that depends on who does the calculations !!
In Hansenworld, for example, freezing causes global temperatures to go upwards!!!

AndyG55

William Tell says:
“I shot an arrow into the air,”
Hey wait there, I thought you used a cross-bow??
so you should say “I fired a ‘bolt’ into the air”

taxed

E.M. Smith
I don’t think it will be just Russia who will be suffering.
The jet looks to be setting up eastern Canada for some of the same treatment around the 25th-27th Dec. I think it’s going to be a long hard winter for many in the NH this season.
I hope climate science will be sitting up and paying attention to this winter, because it’s looking like it could be the shape of things to come.

davidmhoffer

Wars prevented: 0
Genocides prevented: 0
Climate catastrophes prevented: 0
The United Nations. Where never before have so many been paid so much to do so little. But they are determined to set a new record next year.

There’s an error in the chart. The oval labeled “2012” should read “2011,” and the heading “1990 to 2012” should read “1990 thru 2011”. The last year, shown by vertical bars or dots on the chart, is 2011, not 2012. (2012 will be somewhere between 2010 and 2011.)

Rob Dawg

It is important to understand that even if temperatures should suddenly rise and start resembling the predicted values, the theory is still wrong. The models have failed. There is no allowance for going back and adjusting values after the fact. My guess is that with a dozen years of new data it is possible to hindcast a close fit, but that in doing so future values are in no way worth worrying about.

jorgekafkazar

I suspect the IPCC will repaint the side of the barn to add a bullseye where the arrows hit.
I sneezed a sneeze into the air.
It fell to earth, I know not where.
But cold and hard were the looks of those
In whose vicinity I snoze.

–S. Lee Crump, Boys Life, Aug. 1957

Tom B.

Somehow I thought that most of those predictions were actually a range of predictions, each one based on different levels of projected CO2? Am I confusing this with other projections? If not, can we remove the predictions that were based on reduced CO2 levels and only show the ones that were based on ‘business as usual’ (the closest to the actual record) emissions?

Jimbo

Sorry for re-posting this again, but their time for continued failed predictions/projections has to run out sooner or later. They can’t keep missing the mark and fail to revisit the ‘theory’. Remember that we have had 16 years of statistically insignificant warming – unless it begins to rewarm to a significant degree, then what next?

“The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”
http://www1.ncdc.noaa.gov/pub/data/cmb/bams-sotc/climate-assessment-2008-lo-rez.pdf

—————————–

“A single decade of observational TLT data is therefore inadequate for identifying a slowly evolving anthropogenic warming signal. Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature. ”
http://www.agu.org/pubs/crossref/2011/2011JD016263.shtml

—————————–

“The multimodel average tropospheric temperature trends are outside the 5–95 percentile range of RSS results at most latitudes. The likely causes of these biases include forcing errors in the historical simulations (40–42), model response errors (43), remaining errors in satellite temperature estimates (26, 44), and an unusual manifestation of internal variability in the observations (35, 45). These explanations are not mutually exclusive. Our results suggest that forcing errors are a serious concern.”
http://www.pnas.org/content/early/2012/11/28/1210514109.full.pdf
http://www.pnas.org/content/early/2012/11/28/1210514109/
http://landshape.org/enm/santer-climate-models-are-exaggerating-warming-we-dont-know-why

Mindert Eiting

Thanks, Ira. Or do it as follows. Determine the slope of the linear regression at which we would have concluded from the data that there was warming, using significance level alpha. Plot that regression line on the figure with the colored bands. The colored area below that line, relative to the total colored area and divided by 0.9, estimates beta, the probability of a type II error. Both the IPCC and skeptics have a right to equal error rates. If beta <= alpha, the model is falsified.

Lance Wallace

Ira–
As Tokyoboy (9 PM above) and Roger Knights (9:43) point out, picking the middle point of each set of IPCC projections is not correct. The reason is that their projections are based on scenarios (estimates of what will happen, such as “business as usual” or CO2 regulation of some sort). So the single estimate you should pick in each case is the one corresponding most closely to the associated scenario. In the case of the first Assessment Report (FAR) that estimate is the uppermost line associated with their “Business as Usual” assumption, since hardly any regulation is evident when one looks at the exponential rise in CO2. In general, probably an estimate close to the highest one in the next three reports is the one that most closely approximates what actually happened.
Picking the middle estimate as though it was the IPCC “best” estimate is actually picking an estimate based on a failed scenario. The entire graph (particularly the addition of the even larger “error bounds” in gray) was prepared by the IPCC to allow them to say their estimates were within the uncertainty bounds. But it is simply another case of hiding the decline (the decline in this case being the refusal of the observed temperature to match the projections.)
Ira has fallen into the trap set by the IPCC. Ira or someone should carry out the program outlined above, which is not quite as easy for the later reports as for FAR.
[Lance Wallace, Tokyoboy, and Roger Knights: Of course you are correct that, had I chosen the “business as usual” scenario predictions which correspond to the actual rise in CO2, my animated arrows would have had a higher slope and the separation of the IPCC from reality would have been greater. I used the central IPCC predictions (which correspond to the centers of the colored “whiskers” at the right of the chart) to avoid being accused of “cherry picking”. In other words, if the IPCC is off the mark based on my central predictions, they would have been even more off the mark had I used “business as usual”. Ira]

LazyTeenager

Ira quotes
As Feynman famously pointed out, when actual observations over a period of time contradict predictions based on a given theory, that theory is wrong!
———
Hmmm. Yes if your observations are in fact correct.
The trouble with Ira’s observation is that he has done a straight line fit with the starting point constrained to be the starting point of the aligned series. If he did a straight line fit without that constraint he would get a very different answer.
Aligning all of the series at some arbitrary time is somewhat arbitrary and is not a sensible way of comparing the various trends.
Maybe Ira needs some statistical expertise. Go and talk to McIntyre. He’ll sort you out.
[LazyTeenager: I do not claim to be any kind of statistical expert, though I do have a working knowledge of statistics from my long career as a system engineer and from my PhD dissertation. However, all the temperature “observations” are on the IPCC base chart and were done by the IPCC researchers and authors. All I did was draw some animated arrows atop the IPCC data. I started my arrows at the center of the Global temperature “anomaly” value as graphed by the IPCC. You say “If [Ira] did a straight line fit without that constraint he would get a very different answer.” I have no idea where one would start a “straight line fit” other than at the starting point of each analysis. Please be more specific about the “very different answer” you expect from a different “straight line fit”. To me, “very different answer” implies that it would show that the IPCC actually hit the mark four times (or even once :^). advTHANKSance. Ira]
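To make the disagreement concrete, here is a minimal sketch of the two fits being argued about: an ordinary unconstrained least-squares line versus a line forced through the series’ starting value. The anomaly numbers below are invented purely to show the mechanics; they are not the IPCC projections or any observational record.

```python
# Illustrative sketch: unconstrained least-squares trend vs. a trend line
# forced through the 1990 starting point. The data below are made up to
# show the mechanics only; they are NOT the IPCC or observed series.
import numpy as np

np.random.seed(0)
years = np.arange(1990, 2012)
anoms = 0.28 + 0.006 * (years - 1990) + np.random.normal(0, 0.08, years.size)

x = years - 1990

# Unconstrained fit: slope and intercept both free.
slope_free, intercept_free = np.polyfit(x, anoms, 1)

# Constrained fit: line pinned to the first data point, only the slope fitted.
y = anoms - anoms[0]
slope_pinned = np.sum(x * y) / np.sum(x * x)   # least squares through the origin

print(f"Unconstrained trend:   {slope_free * 10:+.3f} C per decade")
print(f"Pinned-to-start trend: {slope_pinned * 10:+.3f} C per decade")
```

With real data the two slopes can differ noticeably when the starting year happens to be unusually warm or cool, which is the substance of LazyTeenager’s objection and of Ira’s reply that the arrows deliberately start at each report’s issue-year value.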

Camburn

Lazy Teenager:
Expound please?
Are you having a problem with a linear average starting at the date the report was put into effect?
Maybe you are seeing something here that I missed.

rgbatduke

LazyTeenager is, in this instance, dead right. Given the data in this figure and its error bars, an unconstrained linear fit would not falsify the predictions. One has to hindcast the models to 1980 (when it was almost exactly the same temperature as it was in 1990) to do that.
However, LT (presumably skilled in statistical analysis himself, teenager and all) also knows at a glance that even an unconstrained linear fit is bogus. The “error bars” on the data points are clearly meaningless. The data points themselves are not iid samples drawn from the same process. The shaded regions are bogus — they are nothing like a statistically meaningful confidence interval. The centroids of the shaded regions are not even plotted so that one cannot even determine and compare the linear trend to the presumably nonlinear trends plotted. And if one attempted to fit a nonlinear function to the data using the bogus error bars, one might not get one that has positive curvature at the present time, presenting a real problem for the models!
What, exactly, are these models? They aren’t. They are composite predictions of many models. In fact, they are composite predictions of many runs each of many distinct models. Some of the runs of some of the contributing models no doubt came close to the data (enough to produce their lower-shaded boundaries, presuming that those boundaries aren’t freehand art drawn by someone seeking to create a pretty graphic and were actually produced by some sort of computational process — I leave it to LT to tell me if he thinks that there is the slightest chance that this figure was produced by means of performing an actual objective statistical process of any sort, as it makes precisely the error it accuses Mr. Glickstein of making by starting at the year 1990 with a constrained point). Which models were, to some extent, verified by the data? Why are they not given increased weight in the report? Which models were completely and utterly falsified by the data? Why are they not aggressively omitted and the model predictions retroactively repaired?
LT, Mr. Glickstein is, as you have observed, not a statistics god. However, a large part of statistics isn’t math, it is common sense. It is having the common sense to look at (and, if one is honest, present) the data robustly, not a cherrypicked 12 year segment on a fifteen year graph. I don’t have the energy to grab the graph, overlay it with all 33 years of UAH LTT and/or RSS, and invert the model wedgies into the past, still pegged at 1990, but then, I don’t need to. You know exactly what it would look like. It would be a complete and utter disaster — for the models. Mr. Glickstein has the common sense to see that the data and the models are not in good agreement, even in the narrow time frame plotted.
Do you?
rgb
[rgbatduke: THANKS for your conclusion that “…Glickstein is … not a statistics god. However … [he] has the common sense to see that the data and the models are not in good agreement, even in the narrow time frame plotted. Do you?” – Ira]

chinook

LazyTeenager says:
December 20, 2012 at 2:04 am
I have a sneaking suspicion that Dr. Hansen knows how to correct any observations to fit with his failed models/predictions. There, problem solved!

davidmhoffer

LazyTeenager;
The trouble with Ira’s observation is that he has done a straight line fit with the starting point constrained to be the starting point of the aligned series.
>>>>>>>>>>>>>>>>>>>>>>>>>
Starting it in the year it started at the temperature it started at is arbitrary? I tried reading what you wrote by examining random words in your comment and it turns out it makes more sense that way than just using arbitrary starting points like the beginning of sentences and following the words in sequential order. Very clever.

Kelvin Vaughan

CET trend from 2006 to November 2012: Minimum Temperature approximately MINUS 1°C; Maximum Temperature approximately MINUS 1.25°C.

eco-geek

Before too long, Ira’s prediction as well as the IPCC projections will turn out to be far too optimistic (where optimism correlates with rising temperatures). The ocean buffer has had a slight thermal top-up after the solar cycle 23 minimum, but with the peak of cycle 24 currently upon us this top-up will be rapidly exhausted as the solar magnetic fields and solar activity collapse on the downside of 24 and into the all-but-absent solar cycle 25. This winter will not be so bad, and maybe next winter (relatively speaking) in the northern hemisphere, but thereafter there will be a major collapse in global temperatures for several decades, with harsh winters and collapsing grain harvests. Mean temperatures will fall by 2.5 degrees Celsius in the temperate latitudes, and more at higher latitudes, by 2021.
It is all going to be very unpleasant as we will be thoroughly unprepared because of Piltdown Mann and the Team.
That is my prediction.
Stay Cool!