This post made me think of this poem, The Arrow and the Song. The arrows are the forecasts, and the song is the IPCC report – Anthony
I shot an arrow into the air,
It fell to earth, I knew not where;
For, so swiftly it flew, the sight
Could not follow it in its flight.
I breathed a song into the air,
It fell to earth, I knew not where;
For who has sight so keen and strong,
That it can follow the flight of song?
– Henry Wadsworth Longfellow
Guest Post by Ira Glickstein.
The animated graphic is based on Figure 1-4 from the recently leaked IPCC AR5 draft document. This one chart is all we need to prove, without a doubt, that IPCC analysis methodology and computer models are seriously flawed. They have way over-estimated the extent of Global Warming from the time the IPCC first started issuing Assessment Reports in 1990 through the fourth report, issued in 2007.
When actual observations over a period of up to 22 years substantially contradict predictions based on a given climate theory, that theory must be greatly modified or completely discarded.

IPCC SHOT FOUR “ARROWS” – ALL HIT WAY TOO HIGH FOR 2012
The animation shows arrows representing the central estimates of how much the IPCC officially predicted the Earth surface temperature “anomaly” would increase from 1990 to 2012. The estimates are from the First Assessment Report (FAR-1990), the Second (SAR-1996), the Third (TAR-2001), and the Fourth (AR4-2007). Each arrow is aimed at the center of its corresponding colored “whisker” at the right edge of the base figure.
The circle at the tail of each arrow indicates the Global temperature in the year the given assessment report was issued. The first head on each arrow represents the central IPCC prediction for 2012. They all overshoot the warming actually observed from 1990 to 2012, roughly by a factor of two to four. The dashed line and second arrowhead represent the central IPCC predictions for 2015.
Actual Global Warming, from 1990 to 2012 (indicated by black bars in the base graphic) varies from year to year. However, net warming between 1990 and 2012 is in the range of 0.12 to 0.16˚C (indicated by the black arrow in the animation). The central predictions from the four reports (indicated by the colored arrows in the animation) range from 0.3˚C to 0.5˚C, roughly two to four times the actual measured net warming.
The colored bands in the base IPCC graphic indicate the 90% range of uncertainty above and below the central predictions calculated by the IPCC when they issued the assessment reports. 90% certainty means there is only one chance in ten the actual observations will fall outside the colored bands.
The IPCC has issued four reports, so, given 90% certainty for each report and assuming the four errors are independent, there should be only one chance in 10,000 (ten times ten times ten times ten) that they would get it wrong four times in a row. But they did! Please note that the colored bands, wide as they are, do not go low enough to contain the actual observations for Global Temperature reported by the IPCC for 2012.
Thus, the IPCC predictions for 2012 are high by a multiple of the warming actually observed! Although the analysts and modelers claimed their predictions were 90% certain, it is now clear they were far from that mark with each and every prediction.
IPCC PREDICTIONS FOR 2015 – AND IRA’S
The colored bands extend to 2015, as do the central prediction arrows in the animation. The arrow heads at the ends of the dashed portion indicate IPCC central predictions for the Global temperature “anomaly” for 2015. My black arrow, from the actual 1990 Global temperature “anomaly” to the actual 2012 temperature “anomaly”, also extends out to 2015, and let that be my prediction for 2015:
- IPCC FAR Prediction for 2015: 0.88˚C (1.2 to 0.56)
- IPCC SAR Prediction for 2015: 0.64˚C (0.75 to 0.52)
- IPCC TAR Prediction for 2015: 0.77˚C (0.98 to 0.55)
- IPCC AR4 Prediction for 2015: 0.79˚C (0.96 to 0.61)
- Ira Glickstein’s Central Prediction for 2015: 0.46˚C
Please note that the temperature “anomaly” for 1990 is 0.28˚C, so that amount must be subtracted from the above estimates to calculate the amount of warming predicted for the period from 1990 to 2015.
IF THEORY DIFFERS FROM OBSERVATIONS, THE THEORY IS WRONG
As Feynman famously pointed out, when actual observations over a period of time contradict predictions based on a given theory, that theory is wrong!
Global temperature observations over the more than two decades since the First IPCC Assessment Report demonstrate that the IPCC climate theory, and models based on that theory, are wrong. Therefore, they must be greatly modified or completely discarded. Looking at the scattershot “arrows” in the graphic, the IPCC has not learned much from its misguided theories and flawed models, nor improved them, over the past two decades, so I cannot hold out much hope for the final version of their Assessment Report #5 (AR5).
Keep in mind that the final AR5 is scheduled to be issued in 2013. It is uncertain whether Figure 1-4, the most honest IPCC effort of which I am aware, will survive the final cut. We shall see.
Ira Glickstein
Rob Nicholls says:
December 23, 2012 at 1:47 pm
Therefore, unless I’m missing something, or unless I’ve made a mistake in my calculations, the graphic’s suggestion that the actual increase in global surface temperature from 1990 to 2011 was between 0.12 and 0.16 degrees C seems misleading to me.
Let’s assume you are trying to evaluate a trend from “good” data. (And, in the judgement of most independent observers – those not in the pay of the CAGW community – none of those records is particularly accurate with respect to real-world temperatures…)
Regardless, let us assume those are valid.
You ARE making an error: You are trying to artificially create a conclusion that the world’s temperatures are linear! They are NOT linear. There is a very evident inflection point – a bend in the curve – at 1997-1998-1999. Your “method” creates an “anomaly” (an average needed to calculate differences) early in the time series; creates a second anomaly (a second average) at the end; then fits a single least-squares line between the two, based on the start and end values.
If you insist on using straight lines to analyze cyclic trends, do this: Run TWO least-squares linear trends. One based on 1990 (or better yet 1975) through 1998. The second using 1996 through 2012 values.
Now plot the two different straight lines.
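For readers who want to try this, a minimal sketch of the two-trend calculation in Python (numpy). The anomaly series here is a SYNTHETIC placeholder with a deliberate bend near 1998, so only the method, not the numbers, is meaningful; substitute real yearly anomalies from HadCRUT4, GISTEMP, or NCDC/NOAA:

import numpy as np

# SYNTHETIC placeholder anomaly series with a bend near 1998.
rng = np.random.default_rng(0)
years = np.arange(1975, 2013)
anoms = np.where(years <= 1998,
                 0.020 * (years - 1975),           # steeper pre-1998 rise
                 0.46 + 0.002 * (years - 1998))    # near-flat after 1998
anoms = anoms + rng.normal(0.0, 0.05, years.size)  # year-to-year noise

def trend(y0, y1):
    """Least-squares slope over years y0..y1, in deg C per decade."""
    m = (years >= y0) & (years <= y1)
    slope, _intercept = np.polyfit(years[m], anoms[m], 1)
    return 10.0 * slope

print(f"1975-1998 trend: {trend(1975, 1998):+.2f} C/decade")
print(f"1996-2012 trend: {trend(1996, 2012):+.2f} C/decade")

Plotting both fitted lines over the data makes the bend visible at a glance.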
Rob Nicholls and RACookPE1978: Thanks for your comments and I hope you continue posting and comparing specific linear regression and straight-line trends. I find it interesting that the linear regression results calculated by Rob are either in the range of 0.33˚C to 0.37˚C or 0.23˚C to 0.25˚C, ranges about as far from each other as my simple delta from start (1990) to end (2011) of 0.12˚C to 0.16˚C is from your lowest estimates. I would be interested in the results you might get if you took RACook’s suggestion and did the calculation that way.
Perhaps I am too simple-minded, but if my doctor told me I was eating too much over the past two decades and had therefore gained weight, I would simply subtract my weight in 1990 from my weight in 2011 to determine how much I had gained, net-net, over that period.
In any case Rob, yes, simple subtraction is how I determined the change in temperature anomaly from 1990 to 2011. (And, yes, my graphic is misleading when it mentions 2012 as was brought to my attention earlier in this comment thread. The graphic should say 2011 and the oval and arrow heads that say 2012 should be moved a year to the right.)
Regardless of the exact numbers and how they are calculated, the bottom line for me is that the IPCC made four predictions, basically implying that Global Warming would end human life and civilization within our lifetimes unless our governments took drastic actions to reduce the rapid rise in CO2 levels. Despite the fact that CO2 has continued its rapid rise, each of those four predictions turned out to be way too high with the result that actual observations in 2011 are outside the supposed 90% certainty limits of all four predictions.
Where I come from (System Engineering), predictions generally improve as we get more data and learn the folly of past predictions. This has not happened in the world of the IPCC. Although SAR turned out closer to the truth than FAR, the last two predictions, TAR and AR4, have turned out worse than SAR.
For me, that confirms the most likely explanation that the Official Climate “hockey” Team is not motivated primarily by a scientific agenda, but rather by a political one.
If you have any evidence to the contrary, please bring it forth. I’m listening.
Ira
Ira and RACookPE1978, thanks for your prompt responses – these are much appreciated. I’ll try to do some further analyses around this at a later date. I hope all here will have a good Christmas.
[THANKS, and I’ll look forward to further analysis. Also, a Merry Christmas to all as well as a slightly belated Happy Chanukah. (Let us keep CHRIST in CHRISTmas and the “CH” in CHanukah -pronounced like the “CH” in the Scottish “loCH” or the German “aCHtung!” :^) – Ira]
Ira —
“Perhaps I am too simple-minded, but if my doctor told me I was eating too much over the past two decades and had therefore gained weight, I would simply subtract my weight in 1990 from my weight in 2011 to determine how much I had gained, net-net, over that period. ”
Imagine that your body weight fluctuates by as much as 15% on a day-to-day basis, let alone year-to-year. Would weighing yourself one Tuesday in 1990 and again on a Tuesday in 2011 be enough to conclude that your weight had increased by 30% over that period?
Probably not.
Temperature seems to be like that.
If you look at the up-to-date GISTEMP record for 2011 and take the (black) data points for 1990 and 2011, you can see about 0.17 to 0.19 C increase.
http://data.giss.nasa.gov/gistemp/2011/Fig2.gif
But if you compare that against the 5-year running average of the data (red trendline), you’ll see that such a two-point estimate is quite unrepresentative. No climate model is expected to produce accurate year-to-year temperature estimates, but if it can capture the average trend, that’s a success.
So if you’re going to do just a two-point trend estimate, at the very least you should base your start and end temperatures not on the instantaneous temperature in the particular year you chose, but on the 5- or 10-year average temperature centered about that year.
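A sketch of that smoothed two-point estimate, in Python (numpy), with a SYNTHETIC stand-in for the real GISTEMP yearly anomalies:

import numpy as np

# Two-point trend estimate based on centered 5-year means rather than
# single-year values. SYNTHETIC placeholder series; substitute real
# GISTEMP (or HadCRUT4/NCDC) yearly anomalies.
rng = np.random.default_rng(1)
years = np.arange(1985, 2013)
anoms = 0.015 * (years - 1985) + rng.normal(0.0, 0.08, years.size)

def centered_mean(year, window=5):
    """Mean anomaly over `window` years centered on `year`."""
    half = window // 2
    m = (years >= year - half) & (years <= year + half)
    return anoms[m].mean()

naive = anoms[years == 2011][0] - anoms[years == 1990][0]
# 2010 is the last year in this series with a full centered 5-year window.
smoothed = centered_mean(2010) - centered_mean(1990)
print(f"single-year delta, 1990 to 2011:        {naive:+.2f} C")
print(f"5-yr centered-mean delta, 1990 to 2010: {smoothed:+.2f} C")

The smoothed delta is far less sensitive to which particular start and end years happen to be chosen.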
Ira Glickstein, PhD says:
December 23, 2012 at 10:19 am
I’m determined to figure out whether the theory has failed or not. If I become convinced that it has, I’ll want to know why. I can’t help feeling that a lot of people are leapfrogging this step. Nothing wrong with considering various known and potential errors, but I want to know how much of the divergence between prediction and measurement actually arises from an unusual combination of ENSO and solar events. The well-known attempt to do this is Rahmstorf and Foster’s paper:
http://iopscience.iop.org/1748-9326/6/4/044022/pdf/1748-9326_6_4_044022.pdf
You know this one, I’m sure; but there it is as a reference (and thanks to Louis Hooffstetter on another thread for the link). Now, this is the result of a multivariate regression, which will work better or worse depending on a number of things, including whether anything was left out. Also, it’s all statistics, and necessarily includes simplifying assumptions. That’s why I’d like, in the end, to see the known ENSO and solar events go into a few models, and see whether the modelers end up with a happy face or a sad one. (Or a puzzled face.)
The really warm people seem to like this paper a lot. On the skeptic side, maybe a few dismissive comments, but mostly, just dead air on that frequency. The only one I’ve seen really acknowledge and engage this paper, or even the issue at all, is Bob Tisdale. I don’t think his counterargument is ready for prime time yet, but he’s plugging away at it, and taking the issue on.
This certainly happens in the hospital setting, and it’s worth lawsuits, and sometimes, prosecutions. But in the analogous modeling field, I don’t think that there have been four failures. FAR’s arrow shot high, but it was the first, and clipped the tops of some error bars. SAR actually did pretty well right up until about 2000, the beginning of the strange era I want explained. For TAR we could say the same thing. AR4 gets within a couple of error bars, but it’s too early to say anything, no matter what the data show.
So, if you correct for, or model, the ENSO and solar events of the last 15 years, what does it look like? I don’t know, but I want to. I know many on the skeptical side would dismiss this issue as being unlikely to matter. But it will definitely matter in the debate, because the Warm Ones will be all over it, every time the “flat decade” argument comes out.
People do get attached to their pet theories, and the careers that they engender. I think most of this is unconscious; my thesis advisor was a highly ethical man, but he could talk himself into anything. By the same token, I see a lot of people who really want the whole process to be flawed, dishonest, and/or failed, so that they won’t see a threat to their lifestyles, freedoms, etc. This is equally unconscious. Me, I want to know what’s going on. But then, in the end, so does everyone else.
Martin Lewitt says:
December 22, 2012 at 7:58 am
Not that I need another subject to read deeply…but if you have a ref to a good starting place, I’d be grateful.
Beyond that, I hope everyone enjoys whatever they’re doing for the holiday. I hope I don’t have too many typos; dinner is called, and I have to hit “send”.
“So, if you correct for, or model, the ENSO and solar events of the last 15 years, what does it look like? I don’t know, but I want to. I know many on the skeptical side would dismiss this issue as being unlikely to matter. But it will definitely matter in the debate, because the Warm Ones will be all over it, every time the ‘flat decade’ argument comes out.”
OK, but it is not 4 models that are not tracking the real world.
It’s ALL 23 models, in EVERY ONE of their hundreds of runs. To date, NO model run at ANY time using conventional consensus state-of-the-art “science” has duplicated the real world over the past 16 years.
SO, I would grant this is important, but I would caution that the analysis needs to be “real world”: pollution and aerosols need to match real-world measurements, not conveniently “canned” assumptions that “aerosols increased between 1955 and 1975, so solar radiation decreased by xxx.yyy% over that time frame.” When modeled, ENSO events need to be as small and as short as they actually were, and to follow their actual rise, plateau, and fall patterns – not “light switch” on-and-off “high and low” positive and negative inputs.
Perhaps the result will be instructive.
Then again, if it were instructive, the “scientific” CAGW theists would have run their latest models with … maybe … the past 16 years of “real” data, wouldn’t they?
Jacob: Thanks for sharing your ideas and I agree that it would be foolish to compare my weight (or the temperature) on “one Tuesday in 1990 and again on a Tuesday in 2011”, which is why I compared not a day or a month but the whole IPCC YEARLY average temperature anomaly reports for the YEAR of 1990 with the YEAR of 2011.
I chose 1990 because that was the IPCC’s First Assessment Report (FAR) and the first actual temperature anomaly data point on the AR5 draft Figure 1-4 graphic. I chose 2011 because it was the last actual anomaly data point. Yes, taking a five- or ten-year average would have yielded a different result. For example, taking the five-year average from 1990-1995 (which includes the low point at 1992) and comparing it to the average of the most recent five years available (2006-2011) would have yielded about double the delta from what I presented. However, three out of the four IPCC reports have central estimates for 2011 or 2012 that are higher than the five-year averages you suggest.
The key message of Figure 1-4 is that the IPCC has been wrong on the high side four times over a period of up to 22 years. I do not think they are that incompetent, so I have to conclude their primary motivation has been political. They may have really thought they were saving human life and civilization from a Global Warming “tipping point” by making exaggerated predictions in the warming direction when they issued the FAR. However, by the third or fourth report, they should have figured out that their assumption of ultra-high CO2 sensitivity was unjustified. I think CO2 sensitivity is closer to 1˚C than the approximately 3˚C they seem to have used. What would their four projections look like if they re-ran them with lower CO2 sensitivity? I’ll bet much closer to the truth.
Those of us who think low solar activity is related to lower-than-usual temperature anomalies are encouraged by the most recent Solar Cycle #24, which seems likely to peak this coming year at about 60% below Solar Cycle #23. That, taken together with the lack of statistically significant warming over the past decade and a half, leads me to expect a continuation of a low level of warming and perhaps even some cooling. All this despite the continued high rate of atmospheric CO2 growth.
What is your prediction for 2015? For 2020 and beyond?
Ira
JazzyT: I read your latest comment with interest and perhaps it is all a run of bad luck that will turn around over the coming decade. Kind of like the guy who bets his rent money on a “sure shot” and, each time he loses, doubles down because the law of averages says he can’t lose forever.
I see no mention of CO2 Climate Sensitivity in your comments. IMHO, CO2 is the key THEORETICAL issue that plagues the IPCC researchers.
If they had assumed that doubling atmospheric CO2, all else being equal, would raise average Global temperatures by only 1˚C, their predictions would have been pretty close to the truth for 2011. On the other hand, at 1˚C their whole “tipping point” panic argument would have evaporated.
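As a back-of-envelope check (my rough arithmetic, not any official calculation): warming from CO2 alone is commonly approximated as S × log2(C/C0). Using approximate Mauna Loa annual means for 1990 and 2011, and ignoring lags and non-CO2 forcings:

import math

# Back-of-envelope: warming-from-CO2 ~ S * log2(C / C0), ignoring lags
# and non-CO2 forcings. CO2 values are approximate Mauna Loa annual means.
C0, C1 = 354.0, 392.0            # approx. CO2 ppm in 1990 and 2011
doublings = math.log2(C1 / C0)   # ~0.15 of a doubling
for S in (1.0, 3.0):             # sensitivity in deg C per CO2 doubling
    print(f"S = {S:.0f} C/doubling -> {S * doublings:.2f} C over 1990-2011")

At S = 1 this gives roughly 0.15˚C, close to the 0.12 to 0.16˚C observed delta; at S = 3 it gives roughly 0.44˚C, in line with the central IPCC predictions.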
Ira
Thanks RACookPE1978 for the information that the four IPCC Assessment Reports are actually based on hundreds of runs of some 23 different models. Every one aimed too high.
When will they ever learn?
In this Topic back in 2010, I included a take on that refrain, with apologies to Pete Seeger.
On my personal blog, back in January 2009, I predicted that Solar Cycle #24 would peak at about 80 in 2013. It looks like I was a bit high since the likely peak is below 70. In 2006, NASA was predicting a peak of about 146 and, in 2008, they reduced it to about 137. When I made my prediction of 80, NASA was already down to 104. If Solar Cycle #24 comes in low as expected, and if it happens to be followed by another couple of low cycles, we could be in for some Global Cooling and future generations may thank us for putting a cushion of atmospheric CO2 up to moderate the cooling temperatures.
Ira
The problem with estimating the trend in global temperatures by using just 2 data points at the start and end of the relevant time period (subtracting one data point from the other) is that this method is heavily influenced by year-to-year fluctuations in temperature (such fluctuations are caused by, among other things, the El Nino Southern Oscillation, the 11 year solar cycle, and volcanic activity).
This is illustrated if we move the timescale of the analysis by just one or two years:
The 1990 to 2011 global temperature difference calculated by subtracting the 1990 global temperature anomaly from the 2011 global temperature anomaly was 0.14 for NCDC/NOAA, 0.11 for HadCRUT4, and 0.15 for GISS (GISTEMP) (all temperatures are in degrees C).
But, if the timescale shifts backwards by one year, and we subtract the 1989 global temperature anomaly from the 2010 global temperature anomaly, we get 0.40 for NCDC/NOAA, 0.42 for HadCRUT4 and 0.43 for GISS. (The animated graphic in the article would look quite different with these figures.)
If we shift backwards by another year, and we subtract the 1988 global temperature anomaly from the 2009 global temperature anomaly, we get 0.26 for NCDC/NOAA, 0.29 for HadCRUT4 and 0.25 for GISS.
(I note that Jacob already commented along these lines on 24th December).
I appreciate what RACookPE1978 said about linear regression on 23rd December. I can see that if the underlying temperature trend is markedly non-linear then linear regression may not provide a reliable estimate of the underlying trend in global temperatures from 1990 to 2011.
One method for calculating the trend in global average temperature over a 21-year period, which does not assume a linear trend, and which reduces the influence of year-to-year variability, is to perform a subtraction of moving averages. I’ve done the calculations with 5 and 10 year moving averages.
10-year centred moving averages can be calculated up to 2006, and 5-year centred moving averages can be calculated up to 2009. So, the latest 21-year period for which subtraction of 10-year moving averages can be performed is 1985 to 2006. The latest 21-year period for which subtraction of 5-year moving averages can be performed is 1988 to 2009. (Fortuitously, the calculations for these time periods do not involve data from the years 1991-1994, which seem to have been heavily influenced by the Pinatubo eruption).
Subtracting the 10 year moving average centred on 1985 from that centred on 2006 gives 0.36 for NCDC/NOAA, 0.37 for HadCRUT4, and 0.36 for GISS (in degrees C).
Subtracting the 5 year moving average centred on 1988 from that centred on 2009 gives 0.27 for NCDC/NOAA, 0.29 for HadCRUT4, and 0.29 for GISS (in degrees C).
I also used similar methods to look at the temperature changes after 1990 (again in degrees C; this time the calculations involve the years 1991-4, so I’ve included adjustments for the Pinatubo eruption in 1991):
Subtracting the 10-year moving average centred on 1990 from that centred on 2006 gives 0.30 for NCDC/NOAA, 0.31 for HadCRUT4 and 0.32 for GISS (without adjustment for Pinatubo’s 1991 eruption), and 0.26 for NCDC/NOAA, 0.27 for HadCRUT4 and 0.27 for GISS (with adjustment for Pinatubo’s eruption).
Subtracting the 5-year moving average centred on 1990 from that centred on 2009 gives 0.26 for NCDC/NOAA, 0.26 for HadCRUT4 and 0.28 for GISS (without adjustment for Pinatubo’s 1991 eruption), and 0.22 for NCDC/NOAA, 0.23 for HadCRUT4 and 0.24 for GISS (with adjustment for Pinatubo’s eruption.)
[The method I used for adjusting for Pinatubo’s 1991 eruption in this case was to re-calculate the temperature anomalies for 1991 to 1994 as the average of the anomalies for 1986 to 1990 and 1995 to 1999 (i.e. 5 years either side of the time period 1991-4). This is the best method that I could come up with given my limited skills (I think it’s better than the methods that I mentioned in my previous comment), but I’m not very happy with it and I’m sure there must be better ways of adjusting for these kinds of events. It’s pretty arbitrary to adjust for Pinatubo and not for other short-term causes of fluctuation in temperature. My reason for performing this one particular adjustment is that the Pinatubo eruption seems to have had a big influence on temperatures between 1991 and 1994, and to leave out the adjustment would, I think, lead to an over-estimate of the underlying trend in global temperatures since 1990, if the data from 1991-4 is used in calculating the estimate. I don’t have the time or the expertise to try to replicate Foster & Rahmstorf’s 2011 paper].
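For concreteness, the infill step looks like this in Python (numpy), applied to a SYNTHETIC placeholder series with an artificial post-1991 dip; substitute real yearly anomalies from NCDC/NOAA, HadCRUT4, or GISS:

import numpy as np

# Pinatubo adjustment as described above: replace the 1991-1994 anomalies
# with the mean of the five years on either side (1986-1990 and 1995-1999).
rng = np.random.default_rng(2)
years = np.arange(1980, 2013)
anoms = 0.015 * (years - 1980) + rng.normal(0.0, 0.06, years.size)
anoms[(years >= 1991) & (years <= 1994)] -= 0.3   # mimic the eruption dip

flanks = ((years >= 1986) & (years <= 1990)) | ((years >= 1995) & (years <= 1999))
adjusted = anoms.copy()
adjusted[(years >= 1991) & (years <= 1994)] = anoms[flanks].mean()
print("1991-1994 before:", np.round(anoms[(years >= 1991) & (years <= 1994)], 2))
print("1991-1994 after: ", np.round(adjusted[(years >= 1991) & (years <= 1994)], 2))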
Thanks, Rob Nicholls, for doing all these alternative calculations using five- and ten year moving averages. Your answers range from 0.22˚C to 0.43˚C. That is about 50% uncertainty.
I note that your results are based on the excellent satellite data available over the two most recent decades. Given that range of uncertainty for data that is far more reliable than the old manual thermometer readings prior to the 1970’s, how close to reality do you think the current Official “hockey” Team estimate of Global Warming of about 0.8˚C since the 1880’s is?
My estimate is that about 0.3˚C of the 0.8˚C is due to data bias: measurement stations that were originally located far from artificial heat sources but, over the decades, have been encroached upon by expanding human civilization, as well as “adjustments” of previously reported data that, in the late 1990’s, seem to have systematically reduced temperature readings from before 1960 and raised readings taken after 1980, increasing net warming by up to 0.3˚C. About 0.4˚C is due to natural variations not under human control or influence, such as multi-decadal ocean oscillations and sequences of high or low 11-year solar cycles. IMHO, only about 0.1˚C of the total reported Global Warming since the 1880’s is due to human activities, including our unprecedented burning of fossil fuels and land use changes that affect the Earth’s albedo.
Nearly two years ago, I did a survey of WUWT commenters and, taking the average of their estimates: Data Bias = 0.28˚C, Natural Variations = 0.33˚C, and Human Caused = 0.18˚C.
What do you think of these estimates?
Ira
Ira Glickstein, PhD says:
December 27, 2012 at 8:36 pm
Reasonable estimates.
However, the actual satellite measurements show a significant but random month-to-month change of +/- 0.2. That is, a temperature measurement (expressed as an anomaly) in May has been as much as 0.2 degrees different from the temperature anomaly for August or March. This is not really an error band – an error band would describe instrument or sensor differences that “record” or “report” a difference from the actual temperature.
In the satellite record, it appears to be the actual measurements – the temperatures – that vary. In turn, this creates two questions: Is the temperature actually varying in this kind of random way over a very short interval, or do these variations stem from a detector/analysis flaw?
If the temperatures do vary over such short terms, is it valid even to treat a difference of 1/3 of a degree as a significant “symptom” of climate change?
Roger Knights says:
December 23, 2012 at 5:30 am
Thanks Roger, belatedly, for the correction.
Ira Glickstein, PhD says:
December 25, 2012 at 8:33 pm
The CO2 forcing seems uncontroversial among scientists, skeptical and warm alike. It’s the feedbacks, especially clouds, where controversy lies. Tropical cloud formation influences how much heat is available to warm the temperate zones, melt ice caps and decrease albedo, etc. Cloud formation is the least understood process in the game, and clouds are difficult to model on the grid size used, which is limited by available computing power. As I understand it, the way to deal with such uncertainties is to make a number of runs, using different parameters to (hopefully) cover the range of possible values. Meanwhile, smaller models can be run over limited areas, e.g., just a small tropical ocean zone, for a month, to give a snapshot that can be compared to observations. Finally, to insure against errors, and because each model has somewhat different algorithms that may do better or worse in different situations, they use 23 different models, as noted in several posts above.
Over the long term, say, 100 years, I would view any tipping points, scary or otherwise, as part of the overall CO2 sensitivity. Of course, it’s useful to define a shorter-term CO2 sensitivity that specifically excludes tipping points. At this point, I haven’t heard of it being officially defined this way; it’s just what makes sense to me. Meanwhile, it may be true that 1 degree for CO2 doubling would fit the recent data, but after correction for ENSO and solar, this probably would not be true anymore.
So, to make a short story long: whether they get CO2 sensitivity right or wrong, sensitivity is something that comes out of the model, and it can be used as a diagnostic if extraneous influences (ENSO, solar) have been accounted for. So, “do they get CO2 sensitivity right” is indeed a good question. On the input end, it’s not CO2 sensitivity but its components that they have to get right: CO2 forcing, which is pretty well nailed down, and all of the feedbacks, especially clouds. As others point out from time to time, “it’s all in the feedbacks”: that is where you’ll find the remaining controversies that will determine CO2 sensitivity, and, for various emission scenarios, future warming. It’s all in the feedbacks – especially clouds.
Hi Ira,
(OT: How do you get a real “quote” in italics like that, to make it a proper response? I don’t see any rich-text formatting options. I’ll just have to quote you the traditional way)
“Thanks for sharing your ideas and I agree that it would be foolish to compare my weight (or the temperature) on “one Tuesday in 1990 and again on a Tuesday in 2011″ which is why I compared not a day or a month but the whole IPCC YEARLY average temperature anomaly reports for the YEAR of 1990 with the YEAR of 2011.”
I know that you are not comparing averages over a day or month, but a year. My point (perhaps I can stress it better than I did) is that even a yearly average is still not good enough. The reason I can say this confidently is well illustrated by Rob Nicholls’ post, or better yet, by glancing at any graph of the temperature record (for example the GISTEMP graph that I linked to). You can clearly see just by looking at the graph that the year-to-year temperature variations are extremely noisy. Those are already averaged over an entire year (and the entire globe). Simply from this, it is totally apparent that a comparison of “not a day or a month but the whole IPCC YEARLY average” is still not going to give you any useful result. Rob’s example shows how taking your exact approach, but shifting the years (arbitrarily) by plus or minus one, gives completely different answers.
I do believe that you have good reasons to pick 1990 as your starting point and 2011 as your ending point and that you aren’t cherry-picking those particular years to support your argument. But still, picking just two individual years (even if they’re two decades apart) won’t give you any useful result, because the signal-to-noise ratio is simply too low.
“Your answers range from 0.22˚C to 0.43˚C. That is about 50% uncertainty.”
That is a very unusual definition of uncertainty that you are using. Uncertainty in a dataset is not the minimum value divided by the maximum value, not even as a rough estimate. It doesn’t take more than 5 minutes to do this calculation properly if you have Excel, so no excuses for being lazy! 😉
Furthermore, his 0.43˚C value was a one-year to one-year two-point estimate (the same type that you used in your original post), which he used as an example of why your method does not work. I don’t think he is claiming this to be a valid estimate, or an “answer”. He’s claiming it as an example of what not to do.
His actual answers have an average of 0.29˚C with a standard deviation of 0.043˚C (counting his data both with and without Pinatubo’s correction). In other words, his estimated trend is 0.29˚C +/- 0.09˚C, 19 times out of 20. That’s an “uncertainty” of about 30%, keeping in mind that one would truthfully expect a range of different trends because he’s looking at a range of different starting and ending years.
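For anyone who wants to check, the whole calculation takes only a few lines in Python; the eighteen values below are Rob’s, copied from his comments above:

import statistics

# Rob Nicholls's eighteen moving-average trend estimates (deg C), with and
# without the Pinatubo adjustment, as listed in the comments above.
trends = [0.36, 0.37, 0.36,   # 10-yr means, 1985 -> 2006
          0.27, 0.29, 0.29,   # 5-yr means, 1988 -> 2009
          0.30, 0.31, 0.32,   # 10-yr means, 1990 -> 2006, unadjusted
          0.26, 0.27, 0.27,   # 10-yr means, 1990 -> 2006, Pinatubo-adjusted
          0.26, 0.26, 0.28,   # 5-yr means, 1990 -> 2009, unadjusted
          0.22, 0.23, 0.24]   # 5-yr means, 1990 -> 2009, Pinatubo-adjusted

mean = statistics.mean(trends)   # ~0.29
sd = statistics.stdev(trends)    # ~0.043 (sample standard deviation)
print(f"{mean:.2f} +/- {2 * sd:.2f} C (about 19 times out of 20)")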
You also wrote:
“What is your prediction for 2015? For 2020 and beyond?”
The exact point I am making is that nobody and no model can make a prediction for 2015. And nobody and no model tries to make a prediction for 2015. As a non-expert though I would be willing to hazard a guess for the 5-year running average centered on 2015, which we will be able to calculate in 2018, when the previous year’s data is released. My guess is that it will be 0.3˚C warmer than the 5-year running average centered on 2005. In other words, I predict the average of the anomaly from 2013 to 2017 will be 0.87˚C (on GISTEMP scale). I hope it’s an overestimate though; this is a bet I would rather lose.
I’m really not qualified to comment on the IPCC itself, and for all I know it is as corrupt and misrepresentative as you say. To be honest, I hope it is, and that this whole global warming deal is a scam. I would much rather the IPCC were wrong than that they were right. And if they turn out to be wrong (and knew so) I will be angry at them for misleading me, and yet I do hope this is how it turns out, because I would prefer this to them being right. In other words, I acknowledge a bias and preference for the truth to turn out to be that they’re lying about everything. I’m unfortunately not convinced yet, though.
RACookPE1978, Rob Nicholls, and Jacob: Thanks for your latest comments and I accept them as reasonable and science-based.
For Jacob, a brief HTML “lesson” on how to do a Blockquote. To make quoted text appear indented and in italics, I just put the text I copied from your comment between HTML blockquote tags:
<blockquote> any text you want </blockquote>
and it automagically appears indented and in italics.
Good luck.
Ira
Thanks v much Ira for your response to my last comment, and thanks for Jacob’s subsequent response. Jacob correctly surmised that I did not mean 0.43 degrees C to be a realistic estimate of the warming since 1990 – I was just trying to illustrate how susceptible to year-to-year variation an estimate is when it’s only based on two years of data.
The question you ask about how close to reality I think the “Hockey Team’s” estimate of 0.8 degrees C of warming since 1880 is, is important. It’s way beyond my expertise (and the free time that I have) to deduce this from first principles as I’m not a climate scientist, although I have tried to follow all the arguments as well as I can. I have to say I’ve never found any evidence or argument that casts serious doubt in my mind on the IPCC’s assertion (in AR4) that the vast majority of the warming is real and anthropogenic (Of course I hope that I’m wrong and that you are right.) The much maligned adjustments to temperature data series seem, as far as I can tell, to be scientifically rigorous and necessary to correct for known biases and to make the data comparable so that trends in temperature can be assessed. I seem to remember that there’s strong peer-reviewed science suggesting that the contribution from urban heat island effects is very small (this is somewhat counter-intuitive for me). The contribution to warming from changes in solar irradiance since 1880 seems to me to be small, and I don’t think there’s any convincing evidence that galactic cosmic rays play a significant role. (sorry I cannot quote the papers to back up any of this, I don’t have time to pull it all together at the moment, but there’s plenty of websites on both sides of the debate which link the relevant papers). I don’t think there’s a conspiracy of mainstream climate scientists making up the evidence for anthropogenic global warming (I’ve had a good look for evidence of such a conspiracy, ever since Climategate in 2009).
Admittedly, as a lot of the arguments go over my head, I cannot say with 100% certainty who is right and who is wrong with respect to climate change, but I’d be extremely surprised if the IPCC, which seems to involve hundreds of experts and which seems to summarise the science cautiously and honestly, have got it very wrong. I may of course be wrong!
Anyway, thanks for taking the time to respond so thoroughly to my comments. Your responses have been thought-provoking for me. Best wishes for 2013 to you and all at this site.