An animated analysis of the IPCC AR5 graph shows 'IPCC analysis methodology and computer models are seriously flawed'

This post made me think of this poem, The Arrow and the Song. The arrows are the forecasts, and the song is the IPCC report – Anthony

I shot an arrow into the air,

It fell to earth, I knew not where;

For, so swiftly it flew, the sight

Could not follow it in its flight.

I breathed a song into the air,

It fell to earth, I knew not where;

For who has sight so keen and strong,

That it can follow the flight of song?

– Henry Wadsworth Longfellow

Guest Post by Ira Glickstein.

The animated graphic is based on Figure 1-4 from the recently leaked IPCC AR5 draft document. This one chart is all we need to prove, without a doubt, that IPCC analysis methodology and computer models are seriously flawed. They have way over-estimated the extent of Global Warming since the IPCC first started issuing Assessment Reports in 1990, and continuing through the fourth report issued in 2007.

When actual observations over a period of up to 22 years substantially contradict predictions based on a given climate theory, that theory must be greatly modified or completely discarded.

IPCC AR5 draft figure 1-4 with animated central Global Warming predictions from FAR (1990), SAR (1996), TAR (2001), and AR4 (2007).

IPCC SHOT FOUR “ARROWS” – ALL HIT WAY TOO HIGH FOR 2012

The animation shows arrows representing the central estimates of how much the IPCC officially predicted the Earth surface temperature “anomaly” would increase from 1990 to 2012. The estimates are from the First Assessment Report (FAR-1990), the Second (SAR-1996), the Third (TAR-2001), and the Fourth (AR4-2007). Each arrow is aimed at the center of its corresponding colored “whisker” at the right edge of the base figure.

The circle at the tail of each arrow indicates the Global temperature in the year the given assessment report was issued. The first head on each arrow represents the central IPCC prediction for 2012. They all mispredict warming from 1990 to 2012 by a factor of two to three. The dashed line and second arrow head represent the central IPCC predictions for 2015.

Actual Global Warming from 1990 to 2012 (indicated by black bars in the base graphic) varies from year to year. However, net warming between 1990 and 2012 is in the range of 0.12 to 0.16˚C (indicated by the black arrow in the animation). The central predictions from the four reports (indicated by the colored arrows in the animation) range from 0.3˚C to 0.5˚C, which is about two to four times greater than actual measured net warming.
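
To make that comparison explicit, here is a minimal Python sketch that uses only the ranges quoted in the paragraph above (no other data assumed):

# Predicted vs. observed 1990-2012 warming, using the ranges given above (deg C)
predicted = (0.3, 0.5)   # central predictions from the four reports
observed = (0.12, 0.16)  # net observed warming
low_ratio = min(predicted) / max(observed)
high_ratio = max(predicted) / min(observed)
print(round(low_ratio, 1), round(high_ratio, 1))  # roughly 1.9 to 4.2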

The colored bands in the base IPCC graphic indicate the 90% range of uncertainty above and below the central predictions calculated by the IPCC when they issued the assessment reports. 90% certainty means there is only one chance in ten the actual observations will fall outside the colored bands.

The IPCC has issued four reports, so, given 90% certainty for each report, there should be only one chance in 10,000 (ten times ten times ten times ten) that they got it wrong four times in a row. But they did! Please note that the colored bands, wide as they are, do not go low enough to contain the actual observations for Global Temperature reported by the IPCC for 2012.
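
As a quick check on that arithmetic, a minimal Python sketch: it simply multiplies the per-report miss probabilities, i.e. it treats the four misses as independent events, which is the assumption this post makes (some commenters below question that assumption).

# Probability that four independent 90%-confidence predictions all miss,
# under the independence assumption used in the post
p_single_miss = 0.10
p_all_four_miss = p_single_miss ** 4
print(p_all_four_miss)      # 0.0001
print(1 / p_all_four_miss)  # 10000.0, i.e. one chance in 10,000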

Thus, the IPCC predictions for 2012 are high by multiples of what they thought they were predicting! Although the analysts and modelers claimed their predictions were 90% certain, it is now clear they were far from that mark with each and every prediction.

IPCC PREDICTIONS FOR 2015 – AND IRA’S

The colored bands extend to 2015 as do the central prediction arrows in the animation. The arrow heads at the ends of the dashed portion indicate IPCC central predictions for the Global temperature “anomaly” for 2015. My black arrow, from the actual 1990 Global temperature “anomaly” to the actual 2012 temperature “anomaly” also extends out to 2015, and let that be my prediction for 2015:

  • IPCC FAR Prediction for 2015: 0.88˚C (1.2 to 0.56)
  • IPCC SAR Prediction for 2015: 0.64˚C (0.75 to 0.52)
  • IPCC TAR Prediction for 2015: 0.77˚C (0.98 to 0.55)
  • IPCC AR4 Prediction for 2015: 0.79˚C (0.96 to 0.61)
  • Ira Glickstein’s Central Prediction for 2015: 0.46˚C

Please note that the temperature “anomaly” for 1990 is 0.28˚C, so that amount must be subtracted from the above estimates to calculate the amount of warming predicted for the period from 1990 to 2015.
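
A minimal Python sketch of that subtraction, using the central 2015 values listed above (the 0.28˚C baseline is the 1990 value given in this post; "Glickstein" labels the author's own central prediction):

# Central 2015 anomaly predictions (deg C) from the list above, minus the
# 1990 baseline anomaly of 0.28 deg C, giving the implied 1990-2015 warming
baseline_1990 = 0.28
central_2015 = {"FAR": 0.88, "SAR": 0.64, "TAR": 0.77, "AR4": 0.79, "Glickstein": 0.46}
for report, anomaly in central_2015.items():
    print(report, round(anomaly - baseline_1990, 2))  # FAR 0.6 ... Glickstein 0.18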

IF THEORY DIFFERS FROM OBSERVATIONS, THE THEORY IS WRONG

As Feynman famously pointed out, when actual observations over a period of time contradict predictions based on a given theory, that theory is wrong!

Global temperature observations over the more than two decades since the First IPCC Assessment Report demonstrate that the IPCC climate theory, and models based on that theory, are wrong. Therefore, they must be greatly modified or completely discarded. Looking at the scattershot “arrows” in the graphic, the IPCC has not learned much about their misguided theories and flawed models or improved them over the past two decades, so I cannot hold out much hope for the final version of their Assessment Report #5 (AR5).

Keep in mind that the final AR5 is scheduled to be issued in 2013. It is uncertain if Figure 1-4, the most honest IPCC effort of which I am aware, will survive through the final cut. We shall see.

Ira Glickstein

December 19, 2012 5:40 pm

It’s very important in this debate to not accept IPCC outputs at face value. Doing so yields far too much ground.
None of the IPCC predictions include physically valid error bars. Therefore: none of the IPCC predictions are predictions. Those T vs time projections are physically meaningless.
We’ve all known for years that models are unreliable. Demetris Koutsoyiannis’ papers showed that unambiguously.
For example: Anagnostopoulos, G. G., D. Koutsoyiannis, A. Christofides, A. Efstratiadis, and N. Mamassis (2010). A comparison of local and aggregated climate model outputs with observed data. Hydrological Sciences Journal, 55(7), 1094–1110.
Abstract: We compare the output of various climate models to temperature and precipitation observations at 55 points around the globe. We spatially aggregate model output and observations over the contiguous USA using data from 70 stations, and we perform comparison at several temporal scales, including a climatic (30-year) scale. Besides confirming the findings of a previous assessment study that model projections at point scale are poor, results show that the spatially integrated projections do not correspond to reality any better.
I’ve not checked yet, but would be unsurprised if that paper does not appear in the AR5 SOD reference list.

December 19, 2012 6:02 pm

So all they have to do to make their models work is divide their CO2 sensitivity (fudge factor) by two or three. That still would not explain a probable future downward trend in global temperature.

December 19, 2012 6:06 pm

The FAR, SAR and TAR arrows appear to me to be shown landing slightly above the midpoints of their target ranges. It appears to me that this can create an appearance of exaggeration of the IPCC projections.

December 19, 2012 6:22 pm

I see need for IPCC to adjust itself to some recent helpings of reality, and their favored scientists to adjust themselves to reality, as opposed to totally discarding their previous findings.
Let’s see what the next decade or two brings. We are going into a combined minimum of ~60-year and ~210-year solar cycles, likely to bottom out close to the minimum of the ~11-year cycle and the ~22-year “Hale cycle”, which will probably be in the early (possibly mid) 2030’s. It’s looking to me that this will be a short, steep-&-deep solar minimum as far as ~210-year-class ones go.
As for effect on global temperature: I expect global temperature sensitivity to solar activity to be just high enough, and global temperature sensitivity to CO2 to be just low enough (after applicable feedbacks), that global temperature will roughly hold steady over the next 20 years. Fair chance of a decrease by 1/10 degree C.
I feel sorry for England and nearby parts of “continental Europe”, and northeastern USA and some nearby parts of Canada. It appears to me that dips in solar activity, including the otherwise-probably-insignificant ~22-year Hale cycle, hit these regions hard.

John West
December 19, 2012 6:23 pm

Dr. Ira Glickstein
This is great! If I could suggest a possible improvement on the visualization: a separate “actual” line starting at each IPCC release point, or perhaps at the submission cut-off dates. The observed lines would get progressively flatter from FAR to AR4, illustrating the IPCC reports getting farther and farther from reality even to those less scientifically inclined.

Goldie
December 19, 2012 6:27 pm

I do wish people would stop drawing straight lines through this stuff as if it proved anything. What is the likelihood that a system as complex as the Earth’s climate system responds in a linear fashion?

Lew Skannen
December 19, 2012 6:39 pm

“As Feynman famously pointed out, when actual observations over a period of time contradict predictions based on a given theory, that theory is wrong!”
A rather radical idea. I can’t see that catching on at the IPCC.

Paul Linsay
December 19, 2012 6:40 pm

Not to belittle Feynman, but he was just explaining how science has been done since Galileo’s time.

Bob
December 19, 2012 6:43 pm

The facts are that the news speakers quote unprecedented heat and continued warming. This year was the warmest in history. Heck, I heard a representative of the ski industry bemoan warm weather and attribute it to global warming which, if we don’t do something now, will go up 4-10 degrees by 2100. News reader agreed. Hard to imagine a speaker for CO2 reduction representing a leisure industry with a higher carbon footprint.
Logic has lost. End of the world, doomsday, repent-the-end-is-nigh has won.

u.k.(us)
December 19, 2012 6:44 pm

The Sirens’ song… is for another post.
But, Anthony started it 🙂
It has its parallels.
Sorry all.

thingadonta
December 19, 2012 6:46 pm

Yeah, been reading some alarmist excuses, which essentially state that the predictions of the IPCC in 1990 were right, even though they are now wrong, because once you have made ‘adjustments’ to the temperature trend since 1990 due to the lack of volcanic activity and ENSO, the IPCC predictions of 1990 are spot on.
In other words, what the alarmists are saying is this: I predict the New York Giants will beat the San Francisco 49ers. But when the SF 49ers win, I can say my prediction was correct, because the New York Giants would have won if the 49ers hadn’t scored so many touchdowns, kicked so many goals, and intercepted so many passes.
This is where science has passed into fantasyland.

taxed
December 19, 2012 6:52 pm

I think things are only going to get worse for the IPCC, as I am seeing increasing signs of climate cooling within the global weather system.
I think the best they can hope for is that the temps will remain flat.

December 19, 2012 6:57 pm

I stand in awe of the IPCC. An organization which, over a period of nearly 25 years, has produced more meaningless fluff than can be imagined. I’d like to say “you just can’t make this stuff up”, but it really looks as if they have. Remember that this so-called ‘global’ warming is 0.16 of a degree. You cannot actually measure this change with instruments; you have to coax it out of data purporting to represent an ‘average’ temperature relative to an arbitrarily-determined baseline (oh, sure, you could argue that the baseline is somehow meaningful, but c’mon! In relation to what?). We are talking billions of dollars and millions of air miles to determine something so tiny? And just how, in the minds of the warm-mongers, can such a small amount of heat translate into such a dramatic scenario of destruction like Hurricane Sandy? Or all the other grand leaps in intensity caused by a basically immeasurable change? It boggles the imagination.
Listening to the meme-spouters shriek and wring their hands, while “Prominent Professors” at Berkeley and elsewhere translate this into the stuff of moral decay, makes one wonder: Has academia gone insane? Better yet, haven’t they something better to do than to force-feed us all of this snake oil?
Scepticism about this dog-and-pony show is almost silly if you look at it this way, but sceptics must keep revealing the truth as much as they can…even if it means an apparent waste of time. The alternative is the insidious creeping cancer of control by organizations like the UNFCCC. This cannot be permitted, ultimately. How can it continue….? I am glad that AR5 leaked. It shows, once again, the inner workings of a juggernaut swollen with special interests and agenda scientists, continuing gleefully–despite exposés like Donna’s book–to produce reams of meaningless drivel aimed at the ignorant and fearful.

geologyjim
December 19, 2012 6:57 pm

Common sense and ice-core data are sufficient to demonstrate that CO2 sensitivity MUST be low.
First, in the core data, T always changes direction before CO2 changes. So CO2 cannot be the leading factor.
Second, T always starts to rise when CO2 is at its lowest concentration. Similarly, T always begins to fall when CO2 is at its highest concentration. QED, CO2 cannot be the driving factor.
Tmax in interglacials and Tmin in full glacial periods are always about the same values. So the factors that affect T ranges must operate independent of humans, who have only [potentially] had any influence in the last 70 years.
Can we now dispense with this dross and actually focus on real problems ???

hikeforpics
December 19, 2012 7:00 pm

Ha ha – now that graph is a very “Inconvenient Truth”.
Of course, the fact that CO2 lags temperature increases in the ice core graphs was, in truth, ignored by the movie of the same name, since it falsified their basic premise.

December 19, 2012 7:05 pm

Bob says:
December 19, 2012 at 6:43 pm
The facts are that the news speakers quote unprecedented heat and continued warming. This year was the warmest in history. Heck I heard a representative of the ski industry bemoan warm weather and attribute it to global warming which if we don’t do something now will go up 4-10 degrees by 2100.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I wouldn’t worry too much about the ski industry; at least in western Canada and Utah, we are having record snowfalls for this time of year. Skiing is as good as mid-season already in lots of areas.
http://www.revelstokemountainresort.com/conditions/historical-snowfall

December 19, 2012 7:21 pm

I’ve noticed some people saying the models are getting better. (Didn’t they just use 2 supercomputers on the latest and greatest?) That implies the past ones needed improvement. Have any warmists ever admitted even that, that the models need improvement? Let alone admit they’ve been just plain wrong? Yet they still insist we take immediate action based on the past flawed models.
It seems this whole mess started with Hansen’s predictions. Yet people still cling to them and his solutions to what hasn’t happened as he said it would.
I think I’ll buy a snowblower after all.

TimC
December 19, 2012 7:35 pm

Dr Glickstein said “The IPCC has issued four reports, so, given 90% certainty for each report, there should be only one chance in 10,000 (ten times ten times ten times ten) that they got it wrong four times in a row. But they did! Please note that the colored bands, wide as they are, do not go low enough to contain the actual observations for Global Temperature reported by the IPCC for 2012.”
Steady on: isn’t that an example of “prosecutor’s fallacy”, in treating the small probabilities multiplicatively? Surely it’s more likely that there was just systematic bias in the separate reports (which were of course ultimately under political control).
[Tim, thanks for your comment. I looked up “prosecutor’s fallacy” and it did not seem to me to apply in this case. Consider throwing a single fair die four times. The probability of getting a “1” on any throw is one in six, so the probability of getting four “1” results in a row is 1/(6 x 6 x 6 x 6) = 1/1296. If a prediction based on a given theory and associated computer model is supposed to be 90% certain, the probability it is wrong is one in ten. If the same theory and computer model is run again several years later, the chance that both are wrong is one in ten times ten, and so on for the four IPCC Assessment Reports. Please be more specific on where you think I went wrong with this simple mathematical reasoning. advTHANKSance.
Of course I know that the IPCC changed their computer models to some extent each time, and the data they used included some new observations, but the fact they missed the mark four times in a row indicates that they have not changed their underlying climate model, based on an over-estimate of climate sensitivity to CO2 levels and an under-estimate of natural cycles of the Earth and Sun. They are wedded to the same (now discredited) climate theory because they are politically motivated (IMHO) to want to believe that human activities, such as our unprecedented burning of fossil fuels and land use that changes the albedo of the Earth, are the main cause of the Global Warming we have experienced over the past century or so. If they change their theory, and accept the Svensmark explanation that solar cycles, not under human control or influence, affect cosmic rays and that cosmic rays affect cloud formation that, in turn, affects net solar radiation absorbed by the Earth/Atmosphere system, they will lose their government funding and their political goals will be frustrated. Ira]

William Tell
December 19, 2012 7:38 pm

I shot an arrow into the air,
It fell to earth, I knew not where;
I lose more damn arrows that way!

G. Karst
December 19, 2012 7:44 pm

If the CO2 glove does not fit… then we must acquit. GK

Justthinkin
December 19, 2012 7:58 pm

At a total loss for words…ERCK…UGH…PFFFT…And W. Tell, it fell into that butt of some greenie screaming for more………. you are being hacked…. or WordPress needs a betterbuck servers.

northernont
December 19, 2012 8:10 pm

Without the fudged data supporting the alarmist view, the IPCC becomes irrelevant. Does anybody really think the IPCC will advocate themselves out of existence?

mpainter
December 19, 2012 8:20 pm

Ira Glickstein:
Thanks for this. The models are even worse than I imagined. I note the AR4 projection has the steepest slope of all, as if they hope to make up for lost time. The modelers’ great strength is that they don’t care how ridiculous they appear.

RobW
December 19, 2012 8:23 pm

Sorry if this question is spelled out somewhere, but please tell me why the graph of temp v time starts at +0.25 degrees instead of the 0 point for 1990?

jayhd
December 19, 2012 8:25 pm

The IPCC and its contributing “scientists” have only been following this corollary of Murphy’s Law – First draw your graph, then plot the data that agrees with the graph. Until recently, I thought only high school students and undergraduates did this.

Werner Brozek
December 19, 2012 8:43 pm

Are you sure you are going to 2012 and not 2011? 2010 was a very warm year and the next one would be 2011. However in the end, the conclusion is about the same since 2012 is just a bit warmer than 2011 so far, but since the graphs move up as well, the effects almost cancel. You do not say which data set is being used, but the latest 2012 anomaly and the 2011 anomalies for 6 sets are shown below.
2012 in Perspective so far on Six Data Sets
Note the bolded numbers for each data set where the lower bolded number is the highest anomaly recorded so far in 2012 and the higher one is the all time record so far. There is no comparison.

With the UAH anomaly for November at 0.281, the average for the first eleven months of the year is (-0.134 -0.135 + 0.051 + 0.232 + 0.179 + 0.235 + 0.130 + 0.208 + 0.339 + 0.333 + 0.281)/11 = 0.156. This would rank 9th if it stayed this way. 1998 was the warmest at 0.42. The highest ever monthly anomaly was in April of 1998 when it reached 0.66. The anomaly in 2011 was 0.132.
With the GISS anomaly for November at 0.68, the average for the first eleven months of the year is (0.32 + 0.37 + 0.45 + 0.54 + 0.67 + 0.56 + 0.46 + 0.58 + 0.62 + 0.68 + 0.68)/11 = 0.54. This would rank 9th if it stayed this way. 2010 was the warmest at 0.63. The highest ever monthly anomalies were in March of 2002 and January of 2007 when it reached 0.89. The anomaly in 2011 was 0.514.
With the Hadcrut3 anomaly for October at 0.486, the average for the first ten months of the year is (0.217 + 0.193 + 0.305 + 0.481 + 0.475 + 0.477 + 0.448 + 0.512 + 0.515 + 0.486)/10 = 0.411. This would rank 9th if it stayed this way. 1998 was the warmest at 0.548. The highest ever monthly anomaly was in February of 1998 when it reached 0.756. One has to go back to the 1940s to find the previous time that a Hadcrut3 record was not beaten in 10 years or less. The anomaly in 2011 was 0.340.
With the sea surface anomaly for October at 0.428, the average for the first ten months of the year is (0.203 + 0.230 + 0.241 + 0.292 + 0.339 + 0.351 + 0.385 + 0.440 + 0.449 + 0.428)/10 = 0.336. This would rank 9th if it stayed this way. 1998 was the warmest at 0.451. The highest ever monthly anomaly was in August of 1998 when it reached 0.555. The anomaly in 2011 was 0.273.
With the RSS anomaly for November at 0.195, the average for the first eleven months of the year is (-0.060 -0.123 + 0.071 + 0.330 + 0.231 + 0.337 + 0.290 + 0.255 + 0.383 + 0.294 + 0.195)/11 = 0.200. This would rank 11th if it stayed this way. 1998 was the warmest at 0.55. The highest ever monthly anomaly was in April of 1998 when it reached 0.857. The anomaly in 2011 was 0.147.
With the Hadcrut4 anomaly for October at 0.518, the average for the first ten months of the year is (0.288 + 0.209 + 0.339 + 0.526 + 0.531 + 0.501 + 0.469 + 0.529 + 0.516 + 0.518)/10 = 0.443. This would rank 9th if it stayed this way. 2010 was the warmest at 0.54. The highest ever monthly anomaly was in January of 2007 when it reached 0.818. The anomaly in 2011 was 0.399.
On all six of the above data sets, a record is out of reach.
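For readers who want to reproduce the year-to-date arithmetic above, a minimal Python sketch using the UAH monthly values as listed in this comment (the same pattern applies to the other data sets):
# UAH monthly anomalies for January through November 2012, as listed above
uah_2012 = [-0.134, -0.135, 0.051, 0.232, 0.179, 0.235,
            0.130, 0.208, 0.339, 0.333, 0.281]
# Year-to-date average over the eleven months reported so far
ytd_average = sum(uah_2012) / len(uah_2012)
print(round(ytd_average, 3))  # 0.156, matching the figure quoted above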
[Werner Brozek: Thanks, you are correct that the base chart shows observed temperature “anomaly” only up to 2011, not 2012. I used 2012 in my annotations with the hope that, when the official AR5 is released in 2013, they will include an updated version of this Figure 1-4 with 2012 observed data. Please notice that I drew my black arrow through the higher of the two black temperature observations for 2011, which kind of allows for 2012 being a bit warmer than 2011. As you point out, “… in the end, the conclusion is about the same since 2012 is just a bit warmer than 2011 so far, but since the graphs move up as well, the effects almost cancel.” – Ira]

MattS
December 19, 2012 8:54 pm

“IPCC SHOT FOUR “ARROWS” – ALL HIT WAY TOO HIGH FOR 2012”
Not completely accurate, the 4th arrow went so high it didn’t hit anything and is currently chasing the Voyager space probes.

A Crooks
December 19, 2012 8:56 pm

Hey, you couldn’t do the same type of animation for the methane predictions, could you? I think that would be even funnier. Talk about desperation in the face of real data.
Cheers

tokyoboy
December 19, 2012 9:00 pm

I believe they should compare the trend of “business as usual” scenario, and not that of the “center line”, let alone the lower end, with the measured temp trend. This is because things (esp. CO2 emission) have proceeded at least in a BAU mode, and actually in a faster-than-BAU mode, due to rapid industrialization of China, India etc.
But then, it is unmistakably clear that the two trends are far, far, far apart from each other.
IIRC, Lance Wallace said similarly on another thread today or yesterday.

E.M.Smith
Editor
December 19, 2012 9:04 pm

They are about to miss even more (further?)
http://rt.com/news/russia-freeze-cold-temperature-379/

Russia is enduring its harshest winter in over 70 years, with temperatures plunging as low as -50 degrees Celsius. Dozens of people have already died, and almost 150 have been hospitalized.
The country has not witnessed such a long cold spell since 1938, meteorologists said, with temperatures 10 to 15 degrees lower than the seasonal norm all over Russia.
Across the country, 45 people have died due to the cold, and 266 have been taken to hospitals. In total, 542 people were injured due to the freezing temperatures, RIA Novosti reported.
The Moscow region saw temperatures of -17 to -18 degrees Celsius on Wednesday, and the record cold temperatures are expected to linger for at least three more days. Thermometers in Siberia touched -50 degrees Celsius, which is also abnormal for December.

h/t to BobN who pointed me at it…
So about those land temperatures… which way they gonna go?…

john robertson
December 19, 2012 9:29 pm

By the time Hansen and friends massage the Russian and Arctic winter temperatures, 2012 will be a new record high; just ignore the minus sign again, or invert the data, no problem at all.
Are politicians and bureaucrats capable of remorse?
So much ado over so little, an almost unmeasurable imagined change.

Roger Knights
December 19, 2012 9:43 pm

tokyoboy says:
December 19, 2012 at 9:00 pm
I believe they should compare the trend of “business as usual” scenario, and not that of the “center line”, let alone the lower end, with the measured temp trend. This is because things (esp. CO2 emission) have proceeded at least in a BAU mode,

It would be a good addition.

AndyG55
December 19, 2012 9:52 pm

E.M.Smith says “So about those land temperatures… which way they gonna go?…”
Now that depends on who does the calculations !!
In Hansenworld, for example, freezing causes global temperatures to go upwards!!!

AndyG55
December 19, 2012 9:56 pm

William Tell says:
“I shot an arrow into the air,”
Hey wait there, I thought you used a cross-bow??
so you should say “I fired a ‘bolt’ into the air”

taxed
December 19, 2012 9:58 pm

E.M. Smith
I don’t think it will be just Russia who will be suffering.
The jet looks to be setting up eastern Canada for some of the same treatment around the 25th-27th Dec. I think it’s going to be a long hard winter for many in the NH this season.
I hope Climate science will be sitting up and paying attention to this winter, because it’s looking like it could be the shape of things to come.

davidmhoffer
December 19, 2012 10:23 pm

Wars prevented: 0
Genocides prevented: 0
Climate catastrophes prevented: 0
The United Nations. Where never before have so many been paid so much to do so little. But they are determined to set a new record next year.

Roger Knights
December 19, 2012 11:02 pm

There’s an error in the chart. The oval labeled “2012” should read “2011,” and the heading “1990 to 2012” should read “1990 thru 2011”. The last year, shown by vertical bars or dots on the chart, is 2011, not 2012. (2012 will be somewhere between 2010 and 2011.)

Rob Dawg
December 19, 2012 11:13 pm

It is important to understand that even if temperatures should suddenly rise and start resembling the predicted values, the theory is still wrong. The models have failed. There is no allowance for going back and adjusting values after the fact. My guess is that with a dozen years of new data it is possible to hindcast a close fit, but that in doing so future values are in no way worth worrying about.

jorgekafkazar
December 19, 2012 11:31 pm

I suspect the IPCC will repaint the side of the barn to add a bullseye where the arrows hit.
I sneezed a sneeze into the air.
It fell to earth, I know not where.
But cold and hard were the looks of those
In whose vicinity I snoze.

–S. Lee Crump, Boys’ Life, Aug. 1957

Tom B.
December 20, 2012 12:50 am

Somehow I thought that most of those predictions were actually a range of predictions, each one based on different levels of projected CO2? Am I confusing this with other projections? If not, can we remove the predictions that were based on reduced CO2 levels and only show the ones that were based on ‘business as usual’ (the closest to the actual record) emissions?

Jimbo
December 20, 2012 1:21 am

Sorry for re-posting this again, but their time for continued failed predictions projections has to run out sooner or later. They can’t keep missing the mark and fail to re-visit the ‘theory’. Remember that we have had 16 years of statistically insignificant warming – unless it begins to rewarm to a significant degree, then what next?

“The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”
http://www1.ncdc.noaa.gov/pub/data/cmb/bams-sotc/climate-assessment-2008-lo-rez.pdf

—————————–

“A single decade of observational TLT data is therefore inadequate for identifying a slowly evolving anthropogenic warming signal. Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature. ”
http://www.agu.org/pubs/crossref/2011/2011JD016263.shtml

—————————–

“The multimodel average tropospheric temperature trends are outside the 5–95 percentile range of RSS results at most latitudes. The likely causes of these biases include forcing errors in the historical simulations (40–42), model response errors (43), remaining errors in satellite temperature estimates (26, 44), and an unusual manifestation of internal variability in the observations (35, 45). These explanations are not mutually exclusive. Our results suggest that forcing errors are a serious concern.”
http://www.pnas.org/content/early/2012/11/28/1210514109.full.pdf
http://www.pnas.org/content/early/2012/11/28/1210514109/
http://landshape.org/enm/santer-climate-models-are-exaggerating-warming-we-dont-know-why

Mindert Eiting
December 20, 2012 1:33 am

Thanks, Ira. Or do it as follows. Determine the slope of linear regression at which we would have concluded from the data that there was warming, using significance level alpha. Plot the regression line on the figure with colored bands. The colored area below that line, relative to the total colored area and divided by 0.9, estimates beta, the probability of a type II error. Both the IPCC and skeptics have a right to equal error rates. If beta <= alpha, the model is falsified.

Lance Wallace
December 20, 2012 1:59 am

Ira–
As Tokyoboy (9 PM above) and Roger Knights (9:43) point out, picking the middle point of each set of IPCC projections is not correct. The reason is that their projections are based on scenarios (estimates of what will happen, such as “business as usual” or CO2 regulation of some sort). So the single estimate you should pick in each case is the one corresponding most closely to the associated scenario. In the case of the first Assessment Report (FAR) that estimate is the uppermost line associated with their “Business as Usual” assumption, since hardly any regulation is evident when one looks at the exponential rise in CO2. In general, probably an estimate close to the highest one in the next three reports is the one that most closely approximates what actually happened.
Picking the middle estimate as though it was the IPCC “best” estimate is actually picking an estimate based on a failed scenario. The entire graph (particularly the addition of the even larger “error bounds” in gray) was prepared by the IPCC to allow them to say their estimates were within the uncertainty bounds. But it is simply another case of hiding the decline (the decline in this case being the refusal of the observed temperature to match the projections.)
Ira has fallen into the trap set by the IPCC. Ira or someone should carry out the program outlined above, which is not quite as easy for the later reports as for FAR.
[Lance Wallace, Tokyoboy, and Roger Knights: Of course you are correct that, had I chosen the “business as usual” scenario predictions which correspond to the actual rise in CO2, my animated arrows would have had a higher slope and the separation of the IPCC from reality would have been greater. I used the central IPCC predictions (which correspond to the centers of the colored “whiskers” at the right of the chart) to avoid being accused of “cherry picking”. In other words, if the IPCC is off the mark based on my central predictions, they would have been even more off the mark had I used “business as usual”. Ira]

LazyTeenager
December 20, 2012 2:04 am

Ira quotes
As Feynman famously pointed out, when actual observations over a period of time contradict predictions based on a given theory, that theory is wrong!
———
Hmmm. Yes if your observations are in fact correct.
The trouble with Ira’s observation is that he has done a straight line fit with the starting point constrained to be the starting point of the aligned series. If he did a straight line fit without that constraint he would get a very different answer.
Aligning all of the series at some arbitrary time is somewhat arbitrary and is not a sensible way of comparing the various trends.
Maybe Ira needs some statistical expertise. Go and talk to McIntyre. He’ll sort you out.
[LazyTeenager: I do not claim to be any kind of statistical expert, though I do have a working knowledge of statistics from my long career as a system engineer and from my PhD dissertation. However, all the temperature “observations” are on the IPCC base chart and were done by the IPCC researchers and authors. All I did was draw some animated arrows atop the IPCC data. I started my arrows at the center of the Global temperature “anomaly” value as graphed by the IPCC. You say “If [Ira] did a straight line fit without that constraint he would get a very different answer.” I have no idea where one would start a “straight line fit” other than at the starting point of each analysis. Please be more specific about the “very different answer” you expect from a different “straight line fit”. To me, “very different answer” implies that it would show that the IPCC actually hit the mark four times (or even once :^). advTHANKSance. Ira]
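
For readers trying to follow this exchange, a minimal Python sketch (numpy assumed, and purely synthetic data rather than the IPCC series) of the two fits being argued about: an ordinary unconstrained least-squares line versus a line forced to pass through the 1990 starting value.

import numpy as np

# Synthetic annual anomalies for 1990-2012 (illustrative only, not real observations)
rng = np.random.default_rng(0)
years = np.arange(1990, 2013)
x = years - 1990
anoms = 0.28 + 0.007 * x + rng.normal(0.0, 0.08, x.size)

# Unconstrained ordinary least squares: slope and intercept both free
slope_u, intercept_u = np.polyfit(x, anoms, 1)

# Constrained fit: intercept pinned to the 1990 value, so only the slope is estimated
slope_c = np.sum(x * (anoms - anoms[0])) / np.sum(x * x)

print("unconstrained slope:", round(slope_u, 4), "deg C/yr")
print("slope through the 1990 point:", round(slope_c, 4), "deg C/yr")

The disagreement above is over which of these lines is the fairer comparison: pinning the line to 1990 makes the trend depend on how representative that single starting year happens to be, while the unconstrained fit lets the data choose both the level and the slope.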

Camburn
December 20, 2012 2:54 am

Lazy Teenager:
Expound please?
Are you having a problem with a linear average starting at the date the report was put into effect?
Maybe you are seeing something here that I missed.

rgbatduke
December 20, 2012 3:15 am

LazyTeenager is, in this instance, dead right. Given the data in this figure and its error bars, an unconstrained linear fit would not falsify the predictions. One has to hindcast the models to 1980 (when it was almost exactly the same temperature as it was in 1990) to do that.
However, LT (presumably skilled in statistical analysis himself, teenager and all) also knows at a glance that even an unconstrained linear fit is bogus. The “error bars” on the data points are clearly meaningless. The data points themselves are not iid samples drawn from the same process. The shaded regions are bogus — they are nothing like a statistically meaningful confidence interval. The centroids of the shaded regions are not even plotted so that one cannot even determine and compare the linear trend to the presumably nonlinear trends plotted. And if one attempted to fit a nonlinear function to the data using the bogus error bars, one might not get one that has positive curvature at the present time, presenting a real problem for the models!
What, exactly, are these models? They aren’t. They are composite predictions of many models. In fact, they are composite predictions of many runs each of many distinct models. Some of the runs of some of the contributing models no doubt came close to the data (enough to produce their lower-shaded boundaries, presuming that those boundaries aren’t freehand art drawn by someone seeking to create a pretty graphic and were actually produced by some sort of computational process — I leave it to LT to tell me if he thinks that there is the slightest chance that this figure was produced by means of performing an actual objective statistical process of any sort, as it makes precisely the error it accuses Mr. Glickstein of making by starting at the year 1990 with a constrained point). Which models were, to some extent, verified by the data? Why are they not given increased weight in the report? Which models were completely and utterly falsified by the data? Why are they not aggressively omitted and the model predictions retroactively repaired?
LT, Mr. Glickstein is, as you have observed, not a statistics god. However, a large part of statistics isn’t math, it is common sense. It is having the common sense to look at (and, if one is honest, present) the data robustly, not a cherrypicked 12 year segment on a fifteen year graph. I don’t have the energy to grab the graph, overlay it with all 33 years of UAH LTT and/or RSS, and invert the model wedgies into the past, still pegged at 1990, but then, I don’t need to. You know exactly what it would look like. It would be a complete and utter disaster — for the models. Mr. Glickstein has the common sense to see that the data and the models are not in good agreement, even in the narrow time frame plotted.
Do you?
rgb
[rgbatduke: THANKS for your conclusion that “…Glickstein is … not a statistics god. However … [he] has the common sense to see that the data and the models are not in good agreement, even in the narrow time frame plotted. Do you?” – Ira]

chinook
December 20, 2012 3:16 am

LazyTeenager says:
December 20, 2012 at 2:04 am
I have a sneaking suspicion that Dr. Hansen knows how to correct any observations to fit with his failed models/predictions. There, problem solved!

davidmhoffer
December 20, 2012 3:17 am

LazyTeenager;
The trouble with Ira’s observation is that he has done a straight line fit with the starting point constrained to be the starting point of the aligned series.
>>>>>>>>>>>>>>>>>>>>>>>>>
Starting it in the year it started at the temperature it started at is arbitrary? I tried reading what you wrote by examining random words in your comment and it turns out it makes more sense that way than just using arbitrary starting points like the beginning of sentences and following the words in sequential order. Very clever.

Kelvin Vaughan
December 20, 2012 3:50 am

CET trend from 2006 to November 2012: Minimum Temperature approximately MINUS 1°C. Maximum Temperature approximately MINUS 1.25°C.

eco-geek
December 20, 2012 4:12 am

Before too long Ira’s prediction as well as the IPCC projections will turn out to be far too optimistic (where optimism correlates with rising temperatures). The ocean buffer has had a slight thermal top up after the solar cycle 23 minimum, but with the peak of cycle 24 currently upon us this top up will be rapidly exhausted as the solar magnetic fields and solar activity collapse on the downside of 24 and into the all but absent solar cycle 25. This winter will not be so bad and maybe next winter (relatively speaking) in the northern hemisphere, but thereafter there will be a major collapse in global temperatures for several decades with harsh winters and collapsing grain harvests. Mean temperatures will fall by 2.5 degrees Celsius in the temperate latitudes and more at higher latitudes by 2021.
It is all going to be very unpleasant as we will be thoroughly unprepared because of Piltdown Mann and the Team.
That is my prediction.
Stay Cool!

Andyj
December 20, 2012 4:25 am

Camburn,
Maybe you are missing something about LazyTeenager.
The arrows do fly straight and true. Published and predicted. What happens between the date a prediction is released (based on theory) and the proof of observation has only one straight line that matters: the temperature line. The most recent release is the most ridiculous.

Bill Illis
December 20, 2012 4:55 am

One can also add the IPCC AR5 multimodel means to the projections as well. They would have had access to temperatures up until 2010 so that is when the projections start. AR5 is almost the same as AR4, there is very little difference.
The Climate Explorer has recently added a nice summary download page for AR5 multi-model means. I use the RCP 6.0 scenario which is the most realistic in terms of where we are going with GHGs. Be sure to set the base period to 1961 to 1990 in order to be able to compare to Hadcrut temperatures for example (everyone is using different base periods now so one has to be careful that they are all comparable – someone post this comment over at Skeptical Science since they do not seem to get this idea).
http://climexp.knmi.nl/cmip5_indices.cgi?id=someone@somewhere

Bill Illis
December 20, 2012 4:58 am

Sorry, I should have added that the Climate Explorer’s dataset starts in 1860 (when I think it is actually 1861 – there is a small bug somewhere – just move forward one year).

Frank K.
December 20, 2012 5:50 am

E.M.Smith says:
December 19, 2012 at 9:04 pm
They are about to miss even more (further?)
http://rt.com/news/russia-freeze-cold-temperature-379/
Hi E.M. Smith – I also pointed to this story yesterday in another thread (I saw the story first at Instapundit).
What I find interesting is that CAGW devotees appear to believe that the mean temperature of the Earth is slowly increasing over time, which can be expressed simply as:
T_earth(t) = T_cagw(t) + T_stf(t)
where t is time, T_cagw(t) is the slow increase in mean temperature due to “global warming”, with a time scale on the order of multiple decades, and T_stf(t) are “short term fluctuations” due to ENSO, volcanoes, weather “noise”, and other natural variations. What I don’t understand is this: if multidecade-scale “global warming,” as expressed above, does exist, we should NOT be breaking low temperature records established many decades ago over large, broad regions like Russia. It will be interesting to see if more low temperature records are broken as we move into winter 2013…

lemiere jacques
December 20, 2012 5:52 am

Well, I am not sure that the debate is right. I am more interested in comparing the shape of the curve.
Clearly any single model is not able to fit the data.
Who can explain why they use so many models? What is the meaning of that??? Why is it called uncertainty?

TLM
December 20, 2012 6:48 am

The last entry for the “Observed” data set is 2011 not 2012. Also, the graph does not say which data set “Observed” is. I suspect HadCrut3 or 4 as the HadCrut set has been their preferred one for all previous reports.
Data for the year so far suggests that 2012 will be warmer than 2011 but actually only about the same as 2009. That means the two dots will be at the bottom end of the green shaded area (TAR) and the upper end of their error bars is likely to sneak into the orange AR4 range. Of course the IPCC will say that because the single data point for the Observed 2012 data could have fallen within the bottom of the AR4 predicted range that it is “consistent” with their forecast. Of course they will ignore the fact that the trend in the data is clearly flat compared with the predicted upward trend.
That will, of course, not stop Tamino claiming that he has “pre-bunked” this argument by removing the effect of the dominant La Nina during the period, and then stating that the climate would have warmed. That translates to me as “if the climate had not cooled then it would be warmer than it is now”. The problem for Tamino is that ENSO is not a “cycle” where the warm and cool spells cancel out, it is a random fluctuation and can have a negative or positive trend of its own. Just because ENSO has biased cool in the last 10 years does not mean that it will bias warm to an equal extent in the future and that temperatures will somehow “catch up” through the effect of a series of El Ninos. They might, or they might not, it is a random fluctuation and it will now take a series of quite monster El Ninos to cancel out the last few La Ninas.

Andy W
December 20, 2012 6:49 am

So Lazy, are you going to try and show us how the models actually got it right? I’d love to see the twisted mathematics you’re going to employ to convince us. Perhaps you could use Hansen’s A, B, and C scenarios he once touted 😉
As others have commented here, we should be looking at the BAU predictions the models have made as that is the scenario we are currently living in (in fact, I believe our evil ‘SeeOhToo’ emissions are higher than the BAU scenarios). I’d REALLY love to see you try and reconcile those predictions with the real-world temps!
Over to you Lazy…

Leo Morgan
December 20, 2012 6:59 am

@ fhhaynie
You said: “That still would not explain a probable future downward trend in global temperature.”
As you know, there is no forecast of a near-term downturn in temperature in the purview of mainstream science. Certainly I don’t know of such a forecast, and I am therefore confident that many others who read your contribution will likewise be unaware.
I went to your website and found nothing that led me to judge that such a decline is likely.
The great thing about WUWT’s (specifically Anthony Watts’s) determined light moderation stance is that, within reason, everybody has a chance to have their say. The heretic, the dissenter, the lone true voice in the crowd, the voice of orthodoxy, the honestly mistaken and the outright crackpot all get heard.
It’s embarrassing to have crackpots interjecting in a discussion. It would be even more embarrassing to exclude honest, possibly even correct viewpoints by wrongly judging them to be crackpot.
With respect, no matter how correct you might actually be, when you allude to a forecast not supported by conventional science, if you don’t give a citation then the reader has little choice but to include you among the crackpots. From visiting your blog, this would be an unfair characterisation of you.
I therefore ask you to always include a citation to your calculations about your expected temperature decline with every post you make that alludes to it, no matter how much you feel we ‘ought’ to know it.
Sincerely,
Leo Morgan

Reply to  Leo Morgan
December 20, 2012 7:42 am

Leo,
That probable downturn may not occur in my life time, but it will happen. We will have another ice age. Also, consider the probability on a short term basis that the last sixteen years of no temperature rise is the top of a temperature cycle that is following a 200 year cycle of solar activity. Time will tell and reveal the true crackpots.

RACookPE1978
Editor
December 20, 2012 7:18 am

Andy W says:
December 20, 2012 at 6:49 am
(replying to LazyTeenager)
So Lazy, are you going to try and show us how the models actually got it right? I’d love to see the twisted mathematics you’re going to employ to convince us.

I think you have that wrong. I really don’t even care anymore “how” his precious models may have accidentally got it right.
Your question actually needs to be: “So Lazy, are you going to try and show us which of the models actually got it right?”
See, we still have not seen ANY of the 23 some odd “officially acceptable models” actually produce even ONE single model run (of the many thousands they supposedly average to get their results) that has “reproduced reality” and predicts/projects/outputs/calculates ANY single 16 year steady temperature period during ANY part of the 225 years between 1975 and 2200.
Its not that the “CAGW modelers” need to produce hundreds (or thousands) of model runs that lay right down the middle of the real world temperatures: clearly there are error bands and the global circulation models will be slightly different each run. Nobody anywhere questions that.
They cannot even produce ONE run of ONE model that fits inside the error band of ONE standard deviation.
But for the IPCC to claim “certainty” (more than 3 standard deviations, of what outputs??? from what sample set??? using what “data”???) that their GCM models are correct 100 years in the future – when not even ONE result of 23 models x 1000 runs/model is inside the 16 years of real world measurements between 1996 and 2012 – is ludicrous!

Roger Knights
December 20, 2012 7:24 am

Beth Cooper says: November 5, 2011 at 11:16 pm
Oh the rate of warmin’s slowin’
And the skepticism’s growin’
And the snow it keeps on snowin’
And the data it is showin’
Which way the wind is blowin….

G. Karst
December 20, 2012 7:53 am

E.M.Smith says:
December 19, 2012 at 9:04 pm
They are about to miss even more (further?)
http://rt.com/news/russia-freeze-cold-temperature-379/
Russia is enduring its harshest winter in over 70 years, with temperatures plunging as low as -50 degrees Celsius. Dozens of people have already died, and almost 150 have been hospitalized.
The country has not witnessed such a long cold spell since 1938, meteorologists said, with temperatures 10 to 15 degrees lower than the seasonal norm all over Russia.

It only makes logical sense: most of the world’s warming happened in the northern latitudes, so it shouldn’t be a surprise when cooling is realized in this same locale. Unfortunately, these same areas are the global breadbaskets. GK

Roger Knights
December 20, 2012 8:04 am

Lance Wallace says:
December 20, 2012 at 1:59 am
Ira– As Tokyoboy (9 PM above) and Roger Knights (9:43) point out, picking the middle point of each set of IPCC projections is not correct. . . .

To which Ira responded:

[Lance Wallace, Tokyoboy, and Roger Knights: Of course you are correct that, had I chosen the “business as usual” scenario predictions which correspond to the actual rise in CO2, my animated arrows would have had a higher slope and the separation of the IPCC from reality would have been greater. I used the central IPCC predictions (which correspond to the centers of the colored “whiskers” at the right of the chart) to avoid being accused of “cherry picking”. In other words, if the IPCC is off the mark based on my central predictions, they would have been even more off the mark had I used “business as usual”. Ira]

However, Lance Wallace mis-reported what my criticism was, which was quite different and which must be addressed:

Roger Knights says:
December 19, 2012 at 11:02 pm
There’s an error in the chart. The oval labeled “2012″ should read “2011,” and the heading “1990 to 2012″ should read “1990 thru 2011″. The last year, shown by vertical bars or dots on the chart, is 2011, not 2012. (2012 will be somewhere between 2010 and 2011.)

[Roger Knights: Thanks, you are correct about the oval. I should have moved it and the arrow heads to the right by one year. Please see my embedded reply to Werner Brozek (December 19, 2012 at 8:43 pm) that I used 2012 instead of 2011 “… with the hope that, when the official AR5 is released in 2013, they will include an updated version of this Figure 1-4 with 2012 observed data. Please notice that I drew my black arrow through the higher of the two black temperature observations for 2011, which kind of allows for 2012 being a bit warmer than 2011. – Ira]

TimC
December 20, 2012 8:44 am

Dr Glickstein – many thanks for your comment immediately following mine above at 7:35 pm.
I quite agree that if you take truly random events such as throwing dice, the probability of throwing the same number N times will be 1/(6^N).
However, what I have problems with is where you say “If a prediction based on a given theory and associated computer model is supposed to be 90% certain, the probability it is wrong is one in ten. If the same theory and computer model is run again several years later, the chance that both are wrong is one in ten times ten …”.
The same theory and model implies the same result, if you use the same starting and boundary conditions. Even with different starting conditions I don’t think you can regard any two runs as truly random – so I personally have doubts that the probabilities can simply be multiplied in the way you suggest (one in ten times ten, etc).
But I will be happy to be corrected, if my grasp of probability theory here is wrong …!

mpainter
December 20, 2012 9:08 am

Don’t anyone hold their breath, waiting for LT to respond. He never does. His strength is that he doesn’t mind being wrong. Nonetheless, he serves a good purpose in parroting the dubious scientists who brought us AGW, and so exposing their dubious science to public inspection.

mikerossander
December 20, 2012 11:40 am

Tim’s critique about the “prosecutor’s fallacy” (Dec 19 7:35 pm) is correct (and the rebuttal unfortunately is not). Four incorrect predictions, each with a 90% confidence (and therefore, a 10% chance of being wrong), does not lead to a 1 in 10,000 chance of all four being wrong. The fallacy is that the predictions are not independent events – that is, they are not separate throws of the dice.
If, for example, the 10% uncertainty includes some component of systemic error and that systemic error is propagated through all four trials, the calculated error considering all four trials may still be as high as the original 10%.
To go back to the rebuttal’s dice example, there is a one in six chance of rolling a “1” and a one in 6^4 chance of rolling four “1”s in a row if you have no prior knowledge or reason to suspect that the dice are unevenly balanced. Once you have four “1”s in a row, you have competing hypotheses, however – a) that you’re really unlucky or b) that the dice are skewed. Now you need to assess the probability of systemic error and recalculate. That is, given that you know that trial A was exceeded, what is the probability that trial B will be exceeded.
Unless you pick the extremes of either 0 or 100% component of skewing, the final properly-multiplied error of all four reports considered as a unit will be less than one in ten but substantially greater than one in ten thousand.
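
To make that concrete, here is a minimal Python sketch (numpy assumed; the standard deviations are illustrative numbers only, not estimates from any climate data). It draws a systematic bias shared by all four “reports” plus independent noise for each, calibrates a 90% band to the combined error, and counts how often all four miss.

import numpy as np

rng = np.random.default_rng(42)
n_trials = 200_000

# Each report's error = a systematic bias shared by all four reports + independent noise.
sigma_systematic = 0.8   # illustrative only
sigma_independent = 0.6  # illustrative only
sigma_total = np.hypot(sigma_systematic, sigma_independent)

# Half-width of a two-sided 90% band calibrated to the marginal error distribution
band = 1.645 * sigma_total

bias = rng.normal(0.0, sigma_systematic, size=(n_trials, 1))    # shared across the 4 reports
noise = rng.normal(0.0, sigma_independent, size=(n_trials, 4))  # independent per report
miss = np.abs(bias + noise) > band

print("single-report miss rate:", miss.mean())       # about 0.10 by construction
print("all four miss:", miss.all(axis=1).mean())     # well above 0.1**4 = 0.0001

With any appreciable shared component, the joint miss probability lands where the comment says: below one in ten, but far above one in ten thousand.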

Andy W
December 20, 2012 12:50 pm

RACookPE1978 says:
December 20, 2012 at 7:18
You’re absolutely right, RACookPE1978. No matter how many times they run the models the results are always duff.
We still haven’t heard from LT 🙂

Doug Proctor
December 20, 2012 1:51 pm

Let’s be generous, say SAR got it right. Doesn’t that still mean the GCMs that produce high forecasts have been proven inappropriate? Doesn’t all this still mean that the “C” part of CAGW has fallen off the table?
Even if the aerosol component in the prior GCMs is considered wrong, to account for the discrepancy, doesn’t this mean that the science is not settled?
Connolly says the SAR, at least, is correct, but doesn’t concede the Catastrophic part has been invalidated by time.

December 20, 2012 2:17 pm

Gunga Din says:
December 19, 2012 at 7:21 pm
“Have any warmests ever admitted even that, that the models need improvement? Let alone admit they’ve been just plain wrong? Yet they still insist we take immediate action based on the past flawed models.”
Well, NASA doesn’t admit that they are wrong, just that some of the answers weren’t right 🙂
http://icp.giss.nasa.gov/research/ppa/2002/mcgraw/
http://icp.giss.nasa.gov/research/ppa/2001/mconk/

pete
December 20, 2012 2:30 pm

RACookPE1978 almost gets to the issue.
For any of these projections to be valid, they need to not only reproduce the forward temperature but also the components of the projection need to be correct. If they get the temperature correct but CO2, water vapour, ENSO, clouds, aerosol, TSI etc are wrong then the model isn’t correct at all, it’s got the temperature correct by pure chance. You can do this with virtually any ensemble of models you like.
So when the IPCC puts together these ensembles they are trying to hide the fact that their underlying models have zero predictive power from the get go. Not only do they not have a single model that can be run and produce any kind of predictive output, they don’t have a single model that can be run to get even a hindcast of temperature correct with all of the underlying variables also being correct.
The temperature analysis here is a good starting point, but if it is also taken to a component analysis of the models then it will be quickly shown that they are rubbish.

Lance Wallace
December 20, 2012 6:21 pm

Ira, I think you are still not grappling with the main point here. The point, as rgb and many others have said, is that this graph is NOT showing a range of predictions with a “best” value somewhere in the middle and uncertainties around the best value shown by colored bands. That is what the IPCC wants people to think! When you accept that, as you implicitly do by picking the central estimate, you are now open to the IPCC response (e.g., see Connelly) that at least the actual values are within the uncertainty. But these values are not even close to the uncertainty if you use reasonable uncertainty values enclosing the ACTUAL SCENARIO that ensued following the IPCC projection. That is, one would see four lines (probably lying close to the upper boundaries of each band of colors), with NARROW bands associated with each line, and the measured temperatures would lie far outside those narrow bands. This would give the IPCC no wiggle room.
Roger Knights, I did not “mis-report” what you said. I quoted your response and gave the time of 9:43 PM. That post of yours simply quoted Tokyoboy and said “it would be a nice addition.” You made two posts and it is the second one you are thinking of.

Gail Combs
December 20, 2012 7:28 pm

pete says: @ December 20, 2012 at 2:30 pm
…The temperature analysis here is a good starting point, but if it is also taken to a component analysis of the models then it will be quickly shown that they are rubbish.
____________________________________
Yes, this chart alone shows that the premise on which the models are built is rubbish. They put in airplane contrails, but they ignore clouds, and water is bundled into CO2 as a “feedback”!
And it is not like they do not have any real world data either.

Parameterization of atmospheric long-wave emissivity in a mountainous
site for all sky conditions
J. Herrero and M. J. Polo
Received: 14 February 2012 – Accepted: 11 March 2012 – Published: 21 March 2012
ABSTRACT
Long-wave radiation is an important component of the energy balance of the Earth’s surface. The downward component, emitted by the clouds and aerosols in the atmosphere, is rarely measured, and is still not well understood. In mountainous areas, the models existing for its estimation through the emissivity of the atmosphere do not give good results, and worse still in the presence of clouds….. This study analyzes separately three significant atmospheric states related to cloud cover, which were also deduced from the screen-level meteorological data. Clear and totally overcast skies are accurately represented by the new parametric expressions, while the intermediate situations corresponding to partly clouded skies, concentrate most of the dispersion in the measurements and, hence, the error in the simulation. Thus, the modeling of atmospheric emissivity is greatly improved thanks to the use of different equations for each atmospheric state.
——–
Introduction Long-wave radiation has an outstanding role in most of the environmental processes that take place near the Earth’s surface (e.g., Philipona, 2004). Radiation exchanges at wavelengths longer than 4 μm between the Earth and the atmosphere above are due to the thermal emissivity of the surface and atmospheric objects, typically clouds, water vapor and carbon dioxide. This component of the radiation balance is responsible for the cooling of the Earth’s surface, as it closely equals the shortwave radiation absorbed from the sun. The modeling of the energy balance, and, hence, of the long-wave radiation balance at the surface, is necessary for many different meteorological and hydrological problems, e.g., forecast of frost and fog, estimation of heat budget from the sea (Dera, 1992), simulation of evaporation from soil and canopy, or simulation of the ice and snow cover melt (Armstrong and Brun, 2008)….
Downward long-wave radiation is difficult to calculate with analytical methods, as they require detailed measurements of the atmospheric profiles of temperature, humidity, pressure, and the radiative properties of atmospheric constituents (Alados et al., 1986; Lhomme et al., 2007). To overcome this problem, atmospheric emissivity and temperature profile are usually parameterized from screen level values of meteorological variables. The use of near surface level data is justified since most incoming long-wave radiation comes from the lowest layers of the atmosphere (Ohmura, 2001).
…the effect of clouds and stratification on atmospheric emissivity is highly dependent on regional factors which may lead to the need for local expressions (e.g., Alados et al., 1986; Barbaro et al., 2010). … on environmental processes, especially if snow is present. As existing measurements are scarce (e.g., Iziomon et al., 2003; Sicart et al., 2006), a correct parameterization of downward long-wave irradiance under all sky conditions is essential for these areas….
Conclusions
The long-wave measurements recorded in a weather station at an altitude of 2500 m in a Mediterranean climate are not correctly estimated by the existing models and frequently used parameterizations. These measurements show a very low atmospheric emissivity for long-wave radiation values with clear skies (up to 0.5) and a great facility for reaching the theoretical maximum value of 1 with cloudy skies.….

unha
December 20, 2012 7:39 pm

The problem with any model is as follows: with every iteration, the error tends to grow. When one runs a model through thousands of iterations, errors accumulate.
Simply put: if my model makes a 90% good prediction of the temperature on day one, what will it do for day two, assuming the same skill? 0.9 * 0.9.
And on day three? 0.9 * 0.9 * 0.9.
Anyone tried 0.9^100?
It is about 2.7 × 10^-5, i.e. roughly 26 × 10^-6.
And the models do much more than cycle through 100 steps.
Please, do not consider models as if they were experiments. They are not. Discard models.
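For what it is worth, the arithmetic in that comment can be checked in a few lines of Python; the 0.9-per-step skill figure is the commenter’s illustration, not a measured property of any GCM.

```python
# Minimal sketch of geometric skill decay: if each step retains 90% of the
# previous step's predictive skill, the skill after n steps is 0.9**n.
skill_per_step = 0.9
for n in (1, 2, 3, 10, 100):
    print(f"after {n:3d} steps: {skill_per_step ** n:.2e}")
# after 100 steps: ~2.66e-05, i.e. roughly 26 x 10^-6, the figure quoted above.
```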

mpainter
December 20, 2012 8:55 pm

Ira Glickstein, PhD says:
December 20, 2012 at 7:38 pm

The very first sentence of Chapter 1 of the leaked AR5 says:
Since the fourth Assessment Report (AR4) of the IPCC, the scientific knowledge derived from observations, theoretical evidence, and modelling studies has continued to increase and to further strengthen the basis for HUMAN ACTIVITIES being the PRIMARY driver in climate change. At the same time, the capabilities of the observational and modelling tools have continued to improve. [EMPHASIS mine]
====================================
It looks as though they intend to brazen it out. Is any more proof needed that the IPCC reports are the vehicle of a particularist agenda?

davidmhoffer
December 20, 2012 8:56 pm

Ira Glickstein;
It seems to me that the opposite conclusion has been increased and strengthened, namely that the IPCC-supported Climate Theory and models derived from that theory were wrong to start with (like “loaded” dice) and, after four tries, are still wrong.
>>>>>>>>>>>>>>>>>>
Ch11 of AR5 is about the models and shorter term (a few decades) predictions. There’s a section on initialization as a technique to make the models more accurate, in which they make the most astounding (to me anyway) statement:
************
“While there is high agreement that the initialization consistently improves several aspects of climate (like North Atlantic SST, with more than 75% of the models agreeing on the improvement signal), there is also high agreement that it can consistently degrade others (like the equatorial Pacific temperatures).”
************
How much more obvious could it be? They adjust to make one part to be more accurate and it makes another part worse. They don’t even seem to consider that this is an indication that the models contain one or more fatal flaws which render them incapable of producing an accurate result. It is direct evidence that the things the model gets right, it gets right for the wrong reasons.

TimC
December 20, 2012 10:30 pm

Dr Glickstein (and mikerossander): many thanks and (referring to your December 20, 7:38 pm comment) I can no longer fault your analysis. Particularly, I agree that there is likely to be a (probably self-serving) bias to the models, which to me also suggests “the dice are loaded”, leading to a higher probability of error than the original 1/10 for any single run – but not so high as 1/10,000 for 4 truly independent events.
Apropos of nothing, what actually first came to my mind (as a lawyer here in the UK for many years) was the notorious Sally Clark case here in the UK. She was wrongly convicted of the murder of two of her sons both of whom died suddenly within a few weeks of birth. A paediatrician gave evidence that in his opinion the chance of two children from her well-off background suffering sudden infant death syndrome was 1 in 73 million, taken by squaring 1/8500 (his estimate of the likelihood of a single cot death occurring in similar circumstances). The jury convicted on that evidence, despite the judge giving a warning of the possible “prosecutor’s fallacy”. She was imprisoned for life and had served 4 years when it emerged that the pathologist failed to disclose microbiological reports implying the possibility that at least one of her sons had died of (unlinked) natural causes. Her convictions were then overturned but (having lost two sons, then having been wrongly convicted of their murders) she never recovered – she died just 4 years later, aged 42. A very sad case.

JazzyT
December 21, 2012 1:07 am

The title of the post,
“An animated analysis of the IPCC AR5 graph shows ‘IPCC analysis methodology and computer models are seriously flawed’”
raises a question: Flawed for what purpose?
Obviously, current global temperatures are below what the models would have led us to believe. But the models can’t predict specific ENSO events in advance, or long-term solar output trends, at all. People who work with them, or are used to examining their output, know this, and can allow for the fact that unexpected ENSO events or solar forcings will give a real-life result that the models didn’t predict. But when the model results are presented to non-specialists, it’s hard to avoid this point being lost.
Foster and Rahmstorf have taken a stab at adjusting the temperature record for ENSO, solar, and volcanic influences, with the aim of isolating the CO2 effects. They used a multivariate regression analysis, so the accuracy of their results will depend on whether the factors they examined affecting temperature (CO2, ENSO, solar output, and aerosols) leave out any significant contributors, and the extent to which their effects can, for the metrics they chose, be thrown together as linear, independent influences on temperature.
Models do include ENSO events at random, and it would be interesting to see what predictions came out when selecting runs with a strong El Nino bias in the late 1990s, and a strong La Nina bias recently. What I’d really like to see would be some models run using the known ENSO history and solar influences, for hindcasting. That would give a better idea of how well the models work, and what we might expect under various scenarios for future ENSO and solar influences.

Roger Longstaff
December 21, 2012 2:29 am

unha says: December 20, 2012 at 7:39 pm:
Thank you for raising the exponential rate of error accumulation in GCM time step integrations.
When I could not understand how climatologists thought that they could get sensible data from GCMs I did some checking and found out that the models use low pass filters between integration steps in order to preserve conservation of energy, mass and momentum, and to maintain “stability”. Even worse, they use pause/reset/restart techniques when physical laws are violated, or the “climate trajectory” breaches boundary conditions.
All of this tells me that what they are trying to do is mathematically impossible.

mpainter
December 21, 2012 3:56 am

davidmhoffer says: December 20, 2012 at 8:56 pm
************
“While there is high agreement that the initialization consistently improves several aspects of climate (like North Atlantic SST, with more than 75% of the models agreeing on the improvement signal), there is also high agreement that it can consistently degrade others (like the equatorial Pacific temperatures).”
************
How much more obvious could it be?
=========================================
Exactly. A frank admission of the inadequacy of the models: tinker here, and – oops! – it degrades there. It contradicts the citation from Chapter 1 given above by Dr. Glickstein. The crack of Doom? Or a case of indiscipline?

herkimer
December 21, 2012 6:11 am

An analysis of past climate history shows that during the period 1870 to 1910, global air temperatures and global ocean surface temperatures both declined as the sunspot number declined. From 1910 to 1940 all three moved up together. From 1940 to the 1970s, global ocean surface temperatures declined as they entered their cool mode, offsetting the rise in global surface temperatures from the continuing increase in sunspots. From 1980 to 2000 all three variables again moved up in unison. During the last decade, 2000-2010, all three climate variables have again been going down as global cooling gets underway. This declining pattern is likely to continue until 2030 at least. It would appear that a decadal-average yearly sunspot number of about 30-45 is the tipping point: any level below this figure causes global cooling, and any level above it causes global warming, unless ocean cycles happen to be out of sync and override the warming [as in the 1950s-1970s]. Most recently we have been running at an average yearly sunspot number of 29.2 over the last 10 years. This low figure explains why there has been no warming for the last 16 years and why instead we are starting to see global cooling like that of the period 1880-1910. Not enough solar energy is being put into the planet to cause any warming.
The average yearly sunspot numbers during the Dalton Minimum decades [1790 to 1837], a period of much colder temperatures like the period 1880-1910, were 27.5, 16.5, 19.3 and 39. So there is some convincing evidence that low sunspot numbers and declining global temperatures are directly linked, and that we are already in a cooling phase like the one we had before.

herkimer
December 21, 2012 6:23 am

IRA
The IPCC has been completely wrong in its winter climate predictions.
UNITED STATES
The winter temperature for the contiguous United States has been dropping since 1990 at -0.26 F per decade [per NCDC]
The annual temperature for the contiguous United States has been dropping since 1998 at -0.80 F per decade [per NCDC]
Basically, US winter temperatures have been flat, with no warming, for 20 years
CANADA
The annual temperature departure from 1961-1990 averages has been flat since 1998
The winter temperature anomaly has been rising mostly due to the warming of the far north and Atlantic coast only
8 of the 11 climate regions in other parts of Canada showed declining winter temperature departures since 1998
During the 2011/2012 winter the Canadian Arctic showed declining winter temperature departures
Yet the IPCC assessment for North America was:
All of North America is very likely to warm during this century, and the annual mean warming is likely to exceed the global mean warming in most areas. In northern regions, warming is likely to be largest in winter, and in the southwest USA largest in summer. The lowest winter temperatures are likely to increase more than the average winter temperature in northern North America
EUROPE
The winter temperature departures from the 1961-1990 mean normals for land and sea regions of Europe have been flat or even slightly dropping for 20 years, i.e., since 1990
Yet the IPCC assessments of projected climate change for Europe was:
Annual mean temperatures in Europe are likely to increase more than the global mean. The warming in northern Europe is likely to be largest in winter and that in the Mediterranean area largest in summer. The lowest winter temperatures are likely to increase more than average winter temperature in northern Europe
It is not happening

GregF
December 21, 2012 7:55 am

Ira,
I’m in general a sceptic and very much find the graph you pulled from the report highly confusing. I think the poor quality of the graph has led you to totally mis-read it and worse to misapply it.
For AR4 as an example, the starting point for the hindcast / forecast is clearly 1990.
If you want to eliminate the hindcast portion of the AR4 fan, then you need to start your AR4 line from the middle of the AR4 hindcast for 2007, then connect that line to the center of the 2012 forecast. The slope of that line would be totally different from the slope of the line you got by mixing a 2007 actual temperature with a 2012 hindcast/forecast anchored to a 1990 starting point.
As it currently is, I think your entire blog post should be withdrawn as simply being a misinterpretation of a really poorly done graph.
[GregF, you are entitled to your opinion. However, I find it somewhat ridiculous that the middle of the AR4 prediction fan (the brown and rust-colored band) is so far above the actual observations for the year before AR4 was issued as well as for the year AR4 was issued. It seems to me that a prediction should start with a known situation and predict the future from that point. Nevertheless, thanks for your input. – Ira]

davidmhoffer
December 21, 2012 9:10 am

Ira Glickstein;
Your comment led me to look at Chapter 11 and I found this amazing statement
>>>>>>>>>>>>>>
I’m only part way through it, but there are a few more beauts in there. One is that they predict 0.4 to 1.0 degrees of warming for 2016-2035 compared to 1986-2005, and they expect to be at the low end of that range. For starters, we are right now today at +0.2 compared to 1986-2005, so they only need +0.2 by 2016-2035 to hit their projection range. But they then hedge their bets further by stating that this is all based on the assumption that there will be rapid decreases in aerosol emissions over the next few years. No justification for the assumption that I can find, and it makes little sense to make such an assumption given the rapidly industrializing economies in China, India and Brazil, which will ramp up emissions far beyond what we can reduce them in the western world. Talk about a get out of jail free card! Nor can I find (so far anyway) how much of the warming they project is due to the decrease in aerosols that they project, so how much is actually left to attribute to CO2 is currently a mystery to me.
But here’s one that got the expletives going big time:
“It is virtually certain that globally-averaged surface and upper ocean (top 700m) temperatures averaged over 2016–2035 will be warmer than those averaged over 1986–2005”
Well duh! Since CURRENT temps are ALREADY 0.2 degrees above 1986-2005, we’d have to see a COOLING of 0.2 degrees by 2016-2035 for this to NOT be true!
And you have just got to love this one on surface ozone:
“There is high confidence that baseline surface ozone (O3) will change over the 21st century, although projections across the RCP, SRES, and alternative scenarios for different regions range from –4 to +5 ppb by 2030 and –14 to +15 ppb by 2100.”
Are they kidding? They are highly confident that it will be either higher, or lower, or about the same, but not exactly the same?
The more of it I read, the sadder it gets.

Tim Clark
December 21, 2012 12:03 pm

{ davidmhoffer says:
December 21, 2012 at 9:10 am }
RE:
–4 to +5 ppb by 2030 and –14 to +15 ppb by 2100.”
LOL…Nice catch. But the real question is……(drumroll)
{ There is high confidence }
The best they can do is “high confidence”. I think it’s “very highly likely”, or maybe “almost certainly”, or even so far as, dare I go there, “irrefutably robust”.
;<)

JazzyT
December 21, 2012 11:13 pm

Ira Glickstein, PhD says:
December 21, 2012 at 7:57 am

It seems to me that we “non-specialists” who are not invested in the meme of human-caused Global Warming are more attuned to the abject failures of the IPCC models.

Well first, the bit about “non-specialists” was not intended as a jab at anyone, and I regret it if that’s how it came through.
But I’ll try to clarify what I meant. Suppose a model prediction persistently fails to match reality within a stated tolerance. (I say “persistently” because one excursion could be a statistical fluke.) Now, if the model diverges from reality because processes that were modeled gave incorrect answers, then the model is not working. However, if reality does not match prediction solely because of processes that were not modeled, then it’s not the model that’s failed, although the prediction has failed.
Is this what has been happening? I don’t know. ENSO processes can’t be predicted, so they are modeled randomly. The real-life events of a super El Nino in 1998 and double-dip La Ninas recently tend to flatten out temperatures. These won’t match the mean of model runs using random ENSO processes, some of which would raise the trend and others lower, or flatten it. Weak solar output over this cycle and the last contributes more to temperature flattening. How would the temperature curve have looked without these? Would it have matched the model predictions?
There’s been one statistical attempt to deal with all these processes that could not have been included in model predictions (because they’re unpredictable). But that gives the best fit to the data, which is not necessarily the most physically plausible interpretation. That’s why I’d like to see some model runs that actually include the ENSO and solar events of the last 15 years, as they actually happened. That would have a lot to say about how well the model is working in general.
Now, the climate modelers understand these issues very well. They may be exposed to the risk of confusing models with reality, but they do know what’s in the models and what isn’t. When I see a peer-reviewed article about models, the language seems appropriately cautious, trying to state simplifying assumptions and areas of uncertainty. When it gets into the IPCC scientific summary, it gets compressed and these caveats lose detail. In the summary for policymakers, these technical details are likely to be left out. By the time it has been digested by the mass media, possibly several times, and passed on to people who have no reason to worry about how the models work, readers see the prediction but none of the caveats.
So, the divergence of models from reality is clearly due, partly, to things that just weren’t modeled. But the predictions, as communicated to the public, didn’t include that as a possibility. So, if you want to define a model at each stage–modelers, two (or three) layers of IPCC, and one or more runs through the mass media–well, the end prediction could be called a model too. And, the predictions that came out at the end certainly didn’t work. And that’s a problem. How much of it was in the code and how much in the communication–that’s what I’d like to find out.

Martin Lewitt
December 22, 2012 7:58 am

JazzyT, the communication ignores more than just the possibility of divergence because of things that weren’t modeled, like volcanoes, ENSO and a change in solar activity. It also generally ignores the diagnostic literature documenting problems in the things that were modeled. Models may not seem that far wrong when consideration is given to the things that could not be modeled in advance, but they can achieve that by just following the trend linearly for a while. They diverge from that in longer-range projections, and are not credible when we know they have “matched” the climate incorrectly. They have documented correlated errors larger than the phenomenon of interest.

JazzyT
December 23, 2012 2:42 am

Ira Glickstein, PhD says:
December 22, 2012 at 8:08 am

[Quoting me]

However, if reality does not match prediction solely because of processes that were not modeled, then it’s not the model that’s failed, although the prediction has failed.

In other words, “the operation was a success but the patient died.” :^)

This happens sometimes. But if the patient died in a traffic accident as their spouse was driving them home from the hospital, it would take a rather brazen lawyer to sue the surgeon for malpractice. :->
But we’re on the same page as far as what’s in and out of the models.
I couldn’t help noticing something else, and I’m surprised I didn’t see it come up in the thread. With the arrow metaphor, of course, we score a hit when the arrow hits the target. The target, in this case, could be the actual temperature…or, you could say that the temperature was the bulls-eye, and the scoring rings extend to the edge of the error bars. But 2012 has no error bars, and when viewing the animation, the eye naturally goes to the last year with error bars, 2011. Two of the arrows, SAR and AR4, actually hit 2011, not in the bulls-eye, by any means, but still in scoring range. It’s the same for 2010. The arrows would probably not hit the error bars for 2012 once those are available, but insisting on using 2012, and disregarding the two previous years would invite a charge of cherry-picking.
Others have covered things like picking the starting point, how to get the slope, etc. I’ll only add that I’m old enough to have learned how to do a linear fit to the data by eye (and, in fact, they still have students do this at least once or twice in a college physics lab, to make the students interact with their data). When I do that, I get a slope that is, by eye, slightly lower than that of FAR, higher than SAR, lower than TAR, and distinctly lower than AR4.
But it seems strange to compare the slope for the entire series with the slopes for each model. Why would each model’s predictions for the future be tested against the past? It seems that you’d want four slopes for the measured data, each starting at the time of a given model’s predictions. But then AR4’s would be completely impractical due to the short time interval, and TAR’s could be dodgy as well.
If you want to do this again when 2012 data are complete, well, those are the issues I noticed, which others would surely notice if this is released to a wider audience. Now they’re in the same pile as everyone else’s comments; some stuff from that pile will probably be useful for the next version.

Roger Knights
December 23, 2012 5:18 am

davidmhoffer says:
December 21, 2012 at 9:10 am
Ira Glickstein;
Your comment led me to look at Chapter 11 and I found this amazing statement
>>>>>>>>>>>>>>

“It is virtually certain that globally-averaged surface and upper ocean (top 700m) temperatures averaged over 2016–2035 will be warmer than those averaged over 1986–2005″

Well duh! Since CURRENT temps are ALREADY 0.2 degrees above 1986-2005, we’d have to see a COOLING of 0.2 degrees by 2016-2035 for this to NOT be true!

Wouldn’t it be a hoot if that’s what actually happens! (I suspect the Pranksters on Olympus are thinking the same way.)

Roger Knights
December 23, 2012 5:30 am

JazzyT says:
But 2012 [corrected: 2011] has no error bars, and when viewing the animation, the eye naturally goes to the last year with error bars, 2011 [corrected: 2010].

As per my comment upthread:

Roger Knights says:
December 19, 2012 at 11:02 pm
There’s an error in the chart. The oval labeled “2012″ should read “2011,” and the heading “1990 to 2012″ should read “1990 thru 2011″. The last year, shown by vertical bars or dots on the chart, is 2011, not 2012. (2012 will be somewhere between 2010 and 2011.)

herkimer
December 23, 2012 5:48 am

Ira
“Those of us who come up with scientific theories and make predictions about the future know that no model can capture the total reality, because, if it did, it would BE the reality.”
In my opinion, there is nothing wrong with scientists doing model work to understand the climate. Personally I think one is trying to model something that has too many variables that cannot be predicted or modeled completely. Where I have a more serious concern, however, is when unproven and purely experimental models are portrayed as solid science and thrust into the public domain to shape public policy. This is very expensive, wasteful and burdensome on society. These models should remain experimental until there is sufficient evidence that they have a high level of success. In my judgment, we are decades away from that point when it comes to climate.

herkimer
December 23, 2012 6:52 am

There used to be a rule of thumb in engineering work that one should make all changes, or alternative-option studies, during the conceptual design stage, because if you make major changes as you progress from concept to detailed design to procurement and finally construction, the costs go up progressively and can be 100 to 1000 times higher than during the concept stage. Yet when it comes to climate science we are doing exactly the opposite. We are into the implementation and construction stage for energy changes, environmental actions and public policy while the models are still at the concept and unproven stage. So the whole planet is now like a big experiment in which these scientists are allowed to play around with public resources, energy options and taxpayers’ money based only on questionable science and unproven models, most of which have been seriously wrong in predicting just the first few years ahead. Successful hindcasting does not prove a model, as it is too easy to feed in fudge factors and tweak the model to give a known answer. Successfully predicting decades into the future is the only true test, in my opinion.

Gail Combs
December 23, 2012 7:36 am

herkimer says: @ December 23, 2012 at 6:52 am
There used to be a rule of thumb in engineering work , that one should make all your changes or alternate options studies during the conceptual design stages because if you make major changes as you progress from concept to detail design to procurement and finally construction, the costs go up progressively and they can be 100 to a 1000 fold higher than during the concept stage…..
>>>>>>>>>>>>>>>>>>>>>>>>>
And any company that has its head on straight gathers all of its technical personnel together to have a go at ripping the design to shreds while it is in the pilot stage, BEFORE it gets expensive.
This is what the most successful company I worked for did, with very good results. Sadly it is not common, because of the delicate sensibilities of the scientists/engineers who head projects and who cannot stand criticism. It takes a brave soul to present his ‘baby’ to the critiquing wolves.

Gail Combs
December 23, 2012 11:41 am

Ira Glickstein, PhD says:
December 23, 2012 at 10:51 am
….As you point out, blindly accepting the catastrophic predictions of climate models based on flawed Climate Theory has wasted taxpayer money. IMHO, public funding of harebrained “green” energy schemes has benefited no one but the Official Climate Establishment and politically-connected industries. Theories must be VALIDATED before predictions based on them are implemented on any large scale.
>>>>>>>>>>>>>>>>>>>>>>>>>
Too bad the run-of-the-mill taxpayer who is being scammed can not see that. One wonders just how bad the backlash will be when realization hits. Given the acceptance of the Banker bailout fiasco by those who were conned it looks like everyone will take it lying down or maybe not….
I think a friend’s four year old had the right idea when she said she wanted to grow-up to be a government. (She now works in DC)

herkimer
December 23, 2012 12:42 pm

This entire exercise has, I think, been made considerably worse by housing the scientific and political mandates together at the UN/IPCC, where the political objective of collecting money and redistributing wealth dictates the scientific mandate and clouds scientific objectivity. Things are being rushed when there is no reason to rush, as we now see that the warming will not be anywhere near the rate predicted. We have time to do things right, with the right science.

Rob Nicholls
December 23, 2012 1:47 pm

I’m assuming that the data points in the graphic only go as far as 2011 (?)
How was the increase in temperature of between 0.12 and 0.16 degrees C between 1990 and 2011 calculated in the animated graphic? It appears to me that this was done using only the first and last data points in the chart (1990 and 2011). If so, then I don’t think this is the best method for estimating the increase in temperature. I think linear regression would be better, as it uses all of the data points and thus reduces the influence of year-to-year variability.
Using annual global combined (land and ocean) surface temperature anomaly data from 3 data sets (GISS, HadCRUT4, NOAA/NCDC), I calculated the slope of the regression line between 1990 and 2011, and estimated the increase in temperature in degrees C between 1990 and 2011 to be 0.33 for HadCRUT4, 0.33 for NOAA/NCDC, and 0.37 for GISS.
Admittedly, the estimates obtained above are most likely to be too high, as the slope of the regression line would be steepened due to mount Pinatubo erupting in 1991, so I did 2 very simple alternative analyses to adjust for this:
Firstly, I re-calculated the temperature anomalies for 1991, 1992, 1993 and 1994 as the average of the anomalies for 1990 and 1995. When I did this, the increase in temperature in degrees C between 1990 and 2011 was estimated to be 0.23 for HadCRUT4, 0.23 for NOAA/NCDC, and 0.25 for GISS.
Secondly, I re-calculated the temperature anomalies for 1991, 1992, 1993 and 1994 using simple linear interpolation (from the temperature anomalies for 1990 and 1995). This gave identical results to 2 decimal places (i.e. 0.23 degrees C for HadCRUT4, 0.23 for NOAA/NCDC, and 0.25 for GISS).
Therefore, unless I’m missing something, or unless I’ve made a mistake in my calculations, the graphic’s suggestion that the actual increase in global surface temperature from 1990 to 2011 was between 0.12 and 0.16 degrees C seems misleading to me.
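To make the difference between the two methods concrete, here is a minimal Python sketch contrasting endpoint subtraction with an ordinary least-squares fit. The anomaly series is synthetic (an assumed small linear trend plus noise), purely to show the mechanics; it is not any of the HadCRUT4/NOAA/GISS series quoted above.

```python
import numpy as np

# Synthetic anomalies for 1990-2011: an assumed 0.015 C/yr trend plus noise.
rng = np.random.default_rng(0)
years = np.arange(1990, 2012)
anoms = 0.015 * (years - 1990) + rng.normal(0.0, 0.10, years.size)

# Method 1: endpoint subtraction (uses only 2 of the 22 points).
endpoint_change = anoms[-1] - anoms[0]

# Method 2: linear regression over all points, expressed as change over the period.
slope = np.polyfit(years, anoms, 1)[0]          # degrees C per year
regression_change = slope * (years[-1] - years[0])

print(f"endpoint subtraction : {endpoint_change:+.2f} C")
print(f"regression estimate  : {regression_change:+.2f} C")
```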

RACookPE1978
Editor
December 23, 2012 2:16 pm

Rob Nicholls says:
December 23, 2012 at 1:47 pm
Therefore, unless I’m missing something, or unless I’ve made a mistake in my calculations, the graphic’s suggestion that the actual increase in global surface temperature from 1990 to 2011 was between 0.12 and 0.16 degrees C seems misleading to me.
Let’s assume you are trying to evaluate a trend from “good” data. (And, in the judgement of most independent observers – those not in the pay of the CAGW community – none of those datasets is particularly accurate with respect to the real-world temperatures…)
Regardless, let us assume those are valid.
You ARE making an error: you are trying to artificially create a conclusion that the world’s temperatures are linear! They are NOT linear. There is a very evident inflection point – a bend in the curve – at 1997-1998-1999. Your “method” creates an “anomaly” (an average needed to calculate differences) early in the time series, creates a second anomaly (a second average) at the end, then fits a single least-squares line between anomalies based on the start and end values.
If you insist on using straight lines to analyze cyclic trends, do this: run TWO least-squares linear trends, one based on 1990 (or better yet 1975) through 1998, the second using 1996 through 2012 values.
Now plot the two different straight lines.
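A rough sketch of that two-trend suggestion, again on synthetic data, might look like the following; the split ranges (1990-1998 and 1996-2012) follow the comment, while the series itself is an assumed piecewise shape, not real observations.

```python
import numpy as np

# Synthetic anomalies: rising to 1998, then roughly flat, plus noise.
rng = np.random.default_rng(2)
years = np.arange(1990, 2013)
anoms = np.where(years <= 1998, 0.02 * (years - 1990), 0.16) + rng.normal(0.0, 0.08, years.size)

def trend(y0, y1):
    """Least-squares slope (deg C per year) over the inclusive year range."""
    mask = (years >= y0) & (years <= y1)
    return np.polyfit(years[mask], anoms[mask], 1)[0]

print(f"1990-1998 trend: {trend(1990, 1998):+.3f} C/yr")
print(f"1996-2012 trend: {trend(1996, 2012):+.3f} C/yr")
```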

Rob Nicholls
December 24, 2012 8:19 am

Ira and RACookPE1978, thanks for your prompt responses – these are much appreciated. I’ll try to do some further analyses around this at a later date. I hope all here will have a good Christmas.
[THANKS, and I’ll look forward to further analysis. Also, a Merry Christmas to all as well as a slightly belated Happy Chanukah. (Let us keep CHRIST in CHRISTmas and the “CH” in CHanukah -pronounced like the “CH” in the Scottish “loCH” or the German “aCHtung!” :^) – Ira]

Jacob
December 24, 2012 11:37 am

Ira —
“Perhaps I am too simple-minded, but if my doctor told me I was eating too much over the past two decades and had therefore gained weight, I would simply subtract my weight in 1990 from my weight in 2011 to determine how much I had gained, net-net, over that period. ”
Imagine that your body weight fluctuates by as much as 15% on a day-to-day basis, let alone year-to-year. Would weighing yourself one Tuesday in 1990 and again on a Tuesday in 2011 be enough to conclude that your weight had increased by 30% over that period?
Probably not.
Temperature seems to be like that.
If you look at the up-to-date GISTEMP record for 2011 and take the (black) data points for 1990 and 2011, you can see about 0.17 to 0.19 C increase.
http://data.giss.nasa.gov/gistemp/2011/Fig2.gif
But if you compare that against the 5-year running average of the data (red trend line), you’ll see that it’s quite unrepresentative to do that two-point estimate. No climate model is expected to produce accurate year-to-year temperature estimates, but if they can capture the average trend, that’s a success.
So if you’re going to do just a two-point trend estimate, at the very least you should base your start and end temperatures not on the instantaneous temperature in the particular year you chose, but on the 5- or 10-year average temperature centered about that year.

JazzyT
December 24, 2012 4:26 pm

Ira Glickstein, PhD says:
December 23, 2012 at 10:19 am

JazzyT: You seem bound and determined to ignore the role of the IPCC Climate Theorists in the failure of the Climate Theory that underlies the Climate Models.

I’m determined to figure out whether the theory has failed or not. If I become convinced that it has, I’ll want to know why. I can’t help feeling that a lot of people are leapfrogging this step. There is nothing wrong with considering various known and potential errors, but I want to know how much of the divergence between prediction and measurement actually arises from an unusual combination of ENSO and solar events. The well-known attempt to do this is Rahmstorf and Foster’s paper:
http://iopscience.iop.org/1748-9326/6/4/044022/pdf/1748-9326_6_4_044022.pdf
You know this one, I’m sure; but there it is as a reference (and thanks to Louis Hooffstetter on another thread for the link). Now, this is the result of a multivariate regression, which will work better or worse depending on a number of things, including whether anything was left out. Also, it’s all statistics, and necessarily includes simplifying assumptions. That’s why I’d like, in the end, to see the known ENSO and solar events go into a few models, and see whether the modelers end up with a happy face or a sad one. (Or a puzzled face.)
The really warm people seem to like this paper a lot. On the skeptic side, maybe a few dismissive comments, but mostly just dead air on that frequency. The only one I’ve seen really acknowledge and engage this paper, or even the issue at all, is Bob Tisdale. I don’t think his counterargument is ready for prime time yet, but he’s plugging away at it and taking the issue on.

What if there was a pattern of MISDIAGNOSIS and evidence the hospital had done that four times in a row? What if there was reason to suspect the MISDIAGNOSIS was not an accident but rather a way to increase the income of the hospital or to help the surgeon make a payment on his yacht?

This certainly happens in the hospital setting, and it’s worth lawsuits, and sometimes prosecutions. But in the analogous modeling field, I don’t think that there have been four failures. FAR’s arrow shot high, but it was the first, and it clipped the tops of some error bars. SAR actually did pretty well right up until about 2000, the beginning of the strange era I want explained. For TAR we could say the same thing. AR4 gets within a couple of error bars, but it’s too early to say anything, no matter what the data.
So, if you correct, or model, the ENSO and solar events of the last 15 years. what does it look like? I don’t know, but I want to. I know many on the skeptical side would dismiss this issue as being unlikely to matter. But it will definitely matter in the debate, because the Warm Ones will be all over it, every time the “flat decade” argument comes out.
People do get attached to their pet theories, and the careers that they engender. I think most of this is unconscious; my thesis advisor was a highly ethical man, but he could talk himself into anything. By the same token, I see a lot of people who really want the whole process to be flawed, dishonest, and/or failed, so that they won’t see a threat to their lifestyles, freedoms, etc. This is equally unconscious. Me, I want to know what’s going on. But then, in the end, so does everyone else.
Martin Lewitt says:
December 22, 2012 at 7:58 am

JazzyT, The communication ignores more than just the possibility of divergence because of things that weren’t modeled like volcanoes, ENSO and an change in solar activity. It also generally ignores the diagnostic literature documenting problems in the things that were modeled.

Not that I need another subject to read deeply…but if you have a ref to a good starting place, I’d be grateful.
Beyond that, I hope everyone enjoys whatever they’re doing for the holiday. I hope I don’t have too many typos; dinner is called, and I have to hit “send”.

RACookPE1978
Editor
December 24, 2012 6:41 pm

So, if you correct, or model, the ENSO and solar events of the last 15 years. what does it look like? I don’t know, but I want to. I know many on the skeptical side would dismiss this issue as being unlikely to matter. But it will definitely matter in the debate, because the Warm Ones will be all over it, every time the “flat decade” argument comes out.
OK, but it is not 4 models that are not tracking the real world.
It’s ALL 23 models, in EVERY ONE of their hundreds of runs. To date, NO model run at ANY time using conventional consensus state-of-the-art “science” has duplicated the real world over the past 16 years.
SO, I would grant this is important, but I would caution that the analysis needs to be “real world”: pollution and aerosols need to match real-world measurements, not conveniently “canned” assumptions that “aerosols increased between 1955 and 1975 so solar radiation decreased by xxx.yyy% over that time frame.” When modeled, ENSO events need to be as small and as short as they actually were, and to follow their actual rise, plateau, and fall patterns – not “light switch” on-and-off “high and low” positive and negative inputs.
Perhaps the result will be instructive.
Then again, if it were instructive, the “scientific” CAGW theists would have run their latest models with … maybe … the past 16 years of “real” data, wouldn’t they?

Rob Nicholls
December 27, 2012 5:16 am

The problem with estimating the trend in global temperatures by using just 2 data points at the start and end of the relevant time period (subtracting one data point from the other) is that this method is heavily influenced by year-to-year fluctuations in temperature (such fluctuations are caused by, among other things, the El Nino Southern Oscillation, the 11 year solar cycle, and volcanic activity).
This is illustrated if we move the timescale of the analysis by just one or two years:
The 1990 to 2011 global temperature difference calculated by subtracting the 1990 global temperature anomaly from the 2011 global temperature anomaly was 0.14 for NCDC/NOAA, 0.11 for HadCRUT4, and 0.15 for GISS (GISTEMP) (all temperatures are in degrees C).
But, if the timescale shifts backwards by one year, and we subtract the 1989 global temperature anomaly from the 2010 global temperature anomaly, we get 0.40 for NCDC/NOAA, 0.42 for HadCRUT4 and 0.43 for GISS. (The animated graphic in the article would look quite different with these figures.)
If we shift backwards by another year, and we subtract the 1988 global temperature anomaly from the 2009 global temperature anomaly, we get 0.26 for NCDC/NOAA, 0.29 for HadCRUT4 and 0.25 for GISS.
(I note that Jacob already commented along these lines on 24th December).
I appreciate what RACookPE1978 said about linear regression on 23rd December. I can see that if the underlying temperature trend is markedly non-linear then linear regression may not provide a reliable estimate of the underlying trend in global temperatures from 1990 to 2011.
One method for calculating the trend in global average temperature over a 21-year period, which does not assume a linear trend, and which reduces the influence of year-to-year variability, is to perform a subtraction of moving averages. I’ve done the calculations with 5 and 10 year moving averages.
10-year centred moving averages can be calculated up to 2006, and 5-year centred moving averages can be calculated up to 2009. So, the latest 21-year period for which subtraction of 10-year moving averages can be performed is 1985 to 2006. The latest 21-year period for which subtraction of 5-year moving averages can be performed is 1988 to 2009. (Fortuitously, the calculations for these time periods do not involve data from the years 1991-1994, which seem to have been heavily influenced by the Pinatubo eruption).
Subtracting the 10 year moving average centred on 1985 from that centred on 2006 gives 0.36 for NCDC/NOAA, 0.37 for HadCRUT4, and 0.36 for GISS (in degrees C).
Subtracting the 5 year moving average centred on 1988 from that centred on 2009 gives 0.27 for NCDC/NOAA, 0.29 for HadCRUT4, and 0.29 for GISS (in degrees C).
I also used similar methods to look at the temperature changes after 1990 (again in degrees C; this time the calculations involve the years 1991-4, so I’ve included adjustments for the Pinatubo eruption in 1991):
Subtracting the 10-year moving average centred on 1990 from that centred on 2006 gives 0.30 for NCDC/NOAA, 0.31 for HadCRUT4 and 0.32 for GISS (without adjustment for Pinatubo’s 1991 eruption), and 0.26 for NCDC/NOAA, 0.27 for HadCRUT4 and 0.27 for GISS (with adjustment for Pinatubo’s eruption).
Subtracting the 5-year moving average centred on 1990 from that centred on 2009 gives 0.26 for NCDC/NOAA, 0.26 for HadCRUT4 and 0.28 for GISS (without adjustment for Pinatubo’s 1991 eruption), and 0.22 for NCDC/NOAA, 0.23 for HadCRUT4 and 0.24 for GISS (with adjustment for Pinatubo’s eruption.)
[The method I used for adjusting for Pinatubo’s 1991 eruption in this case was to re-calculate the temperature anomalies for 1991 to 1994 as the average of the anomalies for 1986 to 1990 and 1995 to 1999 (i.e. 5 years either side of the time period 1991-4). This is the best method that I could come up with given my limited skills (I think it’s better than the methods that I mentioned in my previous comment), but I’m not very happy with it and I’m sure there must be better ways of adjusting for these kinds of events. It’s pretty arbitrary to adjust for Pinatubo and not for other short-term causes of fluctuation in temperature. My reason for performing this one particular adjustment is that the Pinatubo eruption seems to have had a big influence on temperatures between 1991 and 1994, and to leave out the adjustment would, I think, lead to an over-estimate of the underlying trend in global temperatures since 1990, if the data from 1991-4 is used in calculating the estimate. I don’t have the time or the expertise to try to replicate Foster & Rahmstorf’s 2011 paper].
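For readers who want to reproduce the mechanics of that calculation, here is a minimal Python sketch of the centred moving-average subtraction described above. The anomaly series is synthetic and the window is an odd number of years so the centre is symmetric; the method, not the numbers, is the point.

```python
import numpy as np

def centred_mean(anoms, years, centre, k):
    """Mean of the k-year window centred on `centre` (k assumed odd)."""
    half = k // 2
    mask = (years >= centre - half) & (years <= centre + half)
    return anoms[mask].mean()

# Synthetic anomalies for 1980-2011 (an assumed trend plus noise), for illustration only.
years = np.arange(1980, 2012)
anoms = 0.015 * (years - 1980) + np.random.default_rng(1).normal(0.0, 0.10, years.size)

change = centred_mean(anoms, years, 2009, 5) - centred_mean(anoms, years, 1988, 5)
print(f"5-year centred means, 1988 -> 2009: {change:+.2f} C")
```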

RACookPE1978
Editor
December 27, 2012 9:38 pm

Ira Glickstein, PhD says:
December 27, 2012 at 8:36 pm

Nearly two years ago, I did a survey of WUWT commenters and, taking the average of their estimates: Data Bias = 0.28˚C, Natural Variations = 0.33˚C, and Human Caused = 0.18˚C.
What do you think of these estimates?

Reasonable estimates.
However, the actual satellite measurements show a significant but random month-to-month change of +/- 0.2. That is, a temperature measurement (expressed as an anomaly) in May has been as much as 0.2 degrees different from the temperature anomaly for August or March. This is not really an error band – an error band would describe instrument or sensor differences that “record” or “report” a difference from the actual temperature.
In the satellite record, it appears to be the actual measurements – the temperatures – that vary. In turn this raises two questions: is the temperature actually varying in this kind of random way over a very short-term interval, or do these variations stem from a detector/analysis flaw?
If the temperatures do vary over such short terms, is it valid to even consider a difference of 1/3 of a degree as a significant “symptom” of climate change?

JazzyT
December 27, 2012 11:35 pm

Roger Knights says:
December 23, 2012 at 5:30 am

But 2012 [corrected: 2011] has no error bars, and when viewing the animation, the eye naturally goes to the last year with error bars, 2011 [corrected: 2010].

Thanks Roger, belatedly, for the correction.
Ira Glickstein, PhD says:
December 25, 2012 at 8:33 pm

I see no mention of CO2 Climate Sensitivity in your comments. IMHO, CO2 is the key THEORETICAL issue that plagues the IPCC researchers.

The CO2 forcing seems uncontroversial among scientists, skeptical and warm alike. It’s the feedbacks, especially clouds, where controversy lies. Tropical cloud formation influences how much heat is available to warm the temperate zones, melt ice caps and decrease albedo, etc. Cloud formation is the least understood process in the game, and clouds are difficult to model on the grid size used, which is limited by available computing power. As I understand it, the way to deal with such uncertainties is to make a number of runs, using different parameters to (hopefully) cover the range of possible values. Meanwhile, smaller models can be run over limited areas, e.g., just a small tropical ocean zone, for a month, to give a snapshot that can be compared to observations. Finally, to insure against errors, and because each model has somewhat different algorithms that may do better or worse in different situations, they use 23 different models, as noted in several posts above.

If they had assumed that doubling atmospheric CO2, all else being equal, would raise average Global temperatures by only 1˚C, their predictions would have been pretty close to the truth for 2011. On the other hand, had they used 1˚C, their whole “tipping point” panic argument would have evaporated.

Over the long term, say, 100 years, I would view any tipping points, scary or otherwise, as part of the overall CO2 sensitivity. Of course, it’s useful to define a shorter-term CO2 sensitivity that specifically excludes tipping points. At this point, I haven’t heard of it being officially defined this way; it’s just what makes sense to me. Meanwhile, it may be true that 1 degree for CO2 doubling would fit the recent data, but after correction for ENSO and solar, this probably would not be true anymore.
So, to make a short story long, I’d say that whether they eventually get CO2 sensitivity right or wrong, sensitivity is something that comes out of the model, and it can be used as a diagnostic if extraneous influences (ENSO, solar) have been accounted for. So “do they get CO2 sensitivity right?” is indeed a good question. On the input end, it’s not CO2 sensitivity but rather its components that they have to get right: CO2 forcing, which is pretty well nailed down, and all of the feedbacks, especially clouds. As others point out from time to time, it’s in the feedbacks that you’ll find the remaining controversies, which will determine CO2 sensitivity and, for various emission scenarios, future warming. It’s all in the feedbacks – especially clouds.
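Since the comment turns on the feedbacks, a small worked illustration may help: in the standard textbook picture, equilibrium warming for a CO2 doubling is the no-feedback (Planck-only) response scaled by 1/(1−f), where f is the net feedback fraction. The numbers below are illustrative textbook-style values, not figures taken from AR5.

```python
# Minimal sketch: how the net feedback fraction f scales the no-feedback
# response to a CO2 doubling. Values are illustrative assumptions.
no_feedback_response = 1.2  # deg C per doubling, Planck response only (approx.)

for f in (0.0, 0.3, 0.5, 0.65):
    sensitivity = no_feedback_response / (1.0 - f)
    print(f"feedback fraction {f:.2f} -> sensitivity {sensitivity:.1f} C per doubling")
# f = 0.00 -> ~1.2 C (close to the 1 C case discussed above)
# f = 0.65 -> ~3.4 C (near the middle of the canonical IPCC range)
```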

Jacob
December 28, 2012 1:04 am

Hi Ira,
(OT: How do you get a real “quote” in italics like that, to make it a proper response? I don’t see any rich-text formatting options. I’ll just have to quote you the traditional way)
“Thanks for sharing your ideas and I agree that it would be foolish to compare my weight (or the temperature) on “one Tuesday in 1990 and again on a Tuesday in 2011″ which is why I compared not a day or a month but the whole IPCC YEARLY average temperature anomaly reports for the YEAR of 1990 with the YEAR of 2011.”
I know that you are not comparing averages over a day or month, but a year. My point (perhaps I can stress it better than I did) is that even a yearly average is still not good enough. The reason I can say this confidently is well illustrated by Rob Nicholls’ post, or better yet, by glancing at any graph of the temperature record (for example the GISTEMP graph that I linked to). You can clearly see just by looking at the graph that the year-to-year temperature variations are extremely noisy. Those are already averaged over an entire year (and the entire globe). Simply from this, it is totally apparent that a comparison of “not a day or a month but the whole IPCC YEARLY average” is still not going to give you any useful result. Rob’s example shows how taking your exact approach, but shifting the years (arbitrarily) by one or two, gives completely different answers.
I do believe that you have good reasons to pick 1990 as your starting point and 2011 as your ending point, and that you aren’t cherry-picking those particular years to support your argument. But still, picking just two individual years (even if they’re a decade or more apart) won’t give you any useful result, because the signal-to-noise ratio is simply too low.
“Your answers range from 0.22˚C to 0.43˚C. That is about 50% uncertainty.”
That is a very unusual definition of uncertainty that you are using. Uncertainty in a dataset is not the minimum value divided by the maximum value, not even as a rough estimate. It doesn’t take more than 5 minutes to do this calculation properly if you have Excel, so no excuses for being lazy! 😉
Furthermore, his 0.43˚C value was a one-year to one-year two-point estimate (the same type that you used in your original post), which he used as an example of why your method does not work. I don’t think he is claiming this to be a valid estimate, or an “answer”. He’s claiming it as an example of what not to do.
His actual answers have an average of 0.29˚C with a standard deviation of 0.043˚C (counting his data both with and without Pinatubo’s correction). In other words, his estimated trend is 0.29˚C +/- 0.09˚C, 19 times out of 20. That’s an “uncertainty” of about 30%, keeping in mind that one would truthfully expect a range of different trends because he’s looking at a range of different starting and ending years.
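The summary statistic quoted there (a mean plus a two-standard-deviation spread) is easy to reproduce; the sketch below uses the moving-average figures listed in Rob Nicholls’ earlier comment, though the exact subset Jacob averaged is not stated, so treat it as a reconstruction of the calculation rather than his precise inputs.

```python
import statistics

# Trend estimates (deg C) transcribed from the moving-average figures in the
# earlier comment, with and without the Pinatubo adjustment.
estimates = [0.36, 0.37, 0.36, 0.27, 0.29, 0.29,
             0.30, 0.31, 0.32, 0.26, 0.27, 0.27,
             0.26, 0.26, 0.28, 0.22, 0.23, 0.24]

mean = statistics.mean(estimates)
spread = 2 * statistics.stdev(estimates)  # ~95% coverage if roughly normal
print(f"{mean:.2f} +/- {spread:.2f} C")   # about 0.29 +/- 0.09 C
```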
You also wrote:
“What is your prediction for 2015? For 2020 and beyond?”
The exact point I am making is that nobody and no model can make a prediction for 2015. And nobody and no model tries to make a prediction for 2015. As a non-expert though I would be willing to hazard a guess for the 5-year running average centered on 2015, which we will be able to calculate in 2018, when the previous year’s data is released. My guess is that it will be 0.3˚C warmer than the 5-year running average centered on 2005. In other words, I predict the average of the anomaly from 2013 to 2017 will be 0.87˚C (on GISTEMP scale). I hope it’s an overestimate though; this is a bet I would rather lose.
I’m really not qualified to comment on the IPCC itself, and for all I know it is as corrupt and misrepresentative as you say. To be honest, I hope it is, and that this whole global warming deal is a scam. I would much rather the IPCC were wrong than that they were right. And if they turn out to be wrong (and knew so) I will be angry at them for misleading me, and yet I do hope this is how it turns out, because I would prefer this to them being right. In other words, I acknowledge a bias and preference for the truth to turn out to be that they’re lying about everything. I’m unfortunately not convinced yet, though.

Rob Nicholls
December 29, 2012 1:14 pm

Thanks v much Ira for your response to my last comment, and thanks for Jacob’s subsequent response. Jacob correctly surmised that I did not mean 0.43 degrees C to be a realistic estimate of the warming since 1990 – I was just trying to illustrate how susceptible to year-to-year variation an estimate is when it’s only based on two years of data.
The question you ask about how close to reality I think the “Hockey Team’s” estimate of 0.8 degrees C of warming since 1880 is, is important. It’s way beyond my expertise (and the free time that I have) to deduce this from first principles as I’m not a climate scientist, although I have tried to follow all the arguments as well as I can. I have to say I’ve never found any evidence or argument that casts serious doubt in my mind on the IPCC’s assertion (in AR4) that the vast majority of the warming is real and anthropogenic (Of course I hope that I’m wrong and that you are right.) The much maligned adjustments to temperature data series seem, as far as I can tell, to be scientifically rigorous and necessary to correct for known biases and to make the data comparable so that trends in temperature can be assessed. I seem to remember that there’s strong peer-reviewed science suggesting that the contribution from urban heat island effects is very small (this is somewhat counter-intuitive for me). The contribution to warming from changes in solar irradiance since 1880 seems to me to be small, and I don’t think there’s any convincing evidence that galactic cosmic rays play a significant role. (sorry I cannot quote the papers to back up any of this, I don’t have time to pull it all together at the moment, but there’s plenty of websites on both sides of the debate which link the relevant papers). I don’t think there’s a conspiracy of mainstream climate scientists making up the evidence for anthropogenic global warming (I’ve had a good look for evidence of such a conspiracy, ever since Climategate in 2009).
Admittedly, as a lot of the arguments go over my head, I cannot say with 100% certainty who is right and who is wrong with respect to climate change, but I’d be extremely surprised if the IPCC, which seems to involve hundreds of experts and which seems to summarise the science cautiously and honestly, have got it very wrong. I may of course be wrong!
Anyway, thanks for taking the time to respond so thoroughly to my comments. Your responses have been thought-provoking for me. Best wishes for 2013 to you and all at this site.