Guest post by Frank Lansner
IPCC – How not to compare temperatures – if you seek the truth.
There are numerous issues discussed intensely when it comes to IPCC illustrations of historic temperatures; here, for example, is the illustration from the IPCC Third Assessment Report:
Fig 1. Taken from IPCC TAR
In short, we have heard of problems with:
1) the Mann material,
2) the Briffa material,
3) the cherry-picking done by the IPCC to predominantly choose data supporting a colder Medieval Warm Period,
4) problems joining proxy data with temperature data mostly obtained from cities or airports etc.,
5) cutting proxy data off when it doesn't fit temperatures from cities,
6) creating and using programs that induce global warming in the data, and finally
7) reusing for example the Mann and Briffa data endlessly (Moberg, Rutherford, Kaufmann, AR4 etc.).
But I believe another, more banal error needs attention:
8) Wrong comparison.
Imagine for a moment that none of the above-mentioned problems 1)–7) has any impact, and let's just focus on the comparison itself. The proxy data suffer from two effects:
A) “Technical averaging” – the impact of many series of data being summarized.
Check out what happens when many temperature datasets are summarized; as an example, the cooling episode 8,200 years ago:
Data taken from: https://wattsupwiththat.com/2009/04/11/making-holocene-spaghetti-sauce-by-proxy/
The white graph with the red squares is the resulting average graph: adding more temperature sets together tends to flatten the average. Notice, for example, how many of the datasets certainly have a down-peak between 8,000 and 9,000 years ago, but the timing of these datasets is slightly off, and so the down-peak is almost gone from the average.
So, to some degree we can expect multi-proxy reconstructions to yield a flattened, averaged overall graph.
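This flattening effect can be sketched with synthetic data: twenty hypothetical proxy series all record the same 1 K cold dip, but with dating offsets of up to ±150 years. The dip amplitude, width, and offset range here are illustrative assumptions, not values taken from any real proxy:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(7000, 10000, 10)  # hypothetical proxy time axis, 10-yr steps

def proxy_series(dip_center):
    """One synthetic proxy: flat baseline with a 1.0 K cold dip ~200 yr wide."""
    return -1.0 * np.exp(-((years - dip_center) / 100.0) ** 2)

# 20 proxies all record the same event, but each dates it slightly
# differently (random offsets of up to +/-150 years, purely illustrative).
centers = 8200 + rng.uniform(-150, 150, size=20)
series = np.array([proxy_series(c) for c in centers])
average = series.mean(axis=0)

print("deepest individual dip: %.2f K" % series.min())
print("deepest dip in average: %.2f K" % average.min())
```

Every individual series dips to about -1.0 K, but because the dips are slightly misaligned in time, the averaged curve dips noticeably less: the event is still there, just muted.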
B) “Direct averaging” – on top of the technical averaging, the data series are often averaged further using 30-, 40- and 50-year Gaussian filters.
The result of averaging by A) and B) is that the variability of the IPCC graphs on a decadal timescale is limited to just tenths of a degree K. But in reality, if there were any real temperature peaks on a decadal timescale in the Medieval period, we would not see much of them in the typical data series the IPCC shows.
Is this a problem?
Well, it certainly becomes a problem if these “super averaged” data are compared with data that is NOT nearly as “super averaged”. And this faulty comparison is just what the IPCC does.
The IPCC's “super averaged” proxy data are typically compared to “observed” temperatures – that is, recent temperatures not at all subjected to the same degree of averaging.
Technical averaging – type A) – largely does not happen for observed temperatures, so how about type B), the direct averaging/filtering?
Well, for the IPCC graph shown in fig 1 above, the IPCC text says: “All series were smoothed with a 40-year Hamming-weights lowpass filter, with boundary constraints imposed by padding the series with its mean values during the first and last 25 years.”
Explanation: If your data end in the year 2000, then the last genuine 40-year averaged/filtered point on the graph would be a point for 1980, with the average of 1960–2000 near a +0.2 K anomaly. But the IPCC graph for observed temperatures ends at +0.43 K around the year 2000. This more closely resembles the normal five-year average of the GISS year-2000 data:
Fig 3. Giss temperatures illustrated in year 2001.
So for the IPCC/Mann etc. to get a year-2000 temperature as high as +0.43 K, they must have used just a normal 5-year average. A longer averaging period would yield a lower temperature for the last year.
So, when the IPCC wrote “with boundary constraints imposed by padding the series with its mean values during the first and last 25 years”, what they mean is: “We don't use the 40-year average/filter in the last 25 years…!”
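The size of this endpoint effect can be illustrated with a toy warming series – an assumed linear 0.01 K/yr trend from 1960 to 2000, not real GISS data – by comparing a genuine trailing 40-year average against a trailing 5-year average:

```python
import numpy as np

# Illustrative only: flat to 1960, then a linear 0.01 K/yr warming,
# reaching +0.40 K in 2000 (NOT real GISS data).
years = np.arange(1900, 2001)
temps = np.where(years >= 1960, 0.01 * (years - 1960), 0.0)

avg_40yr = temps[-40:].mean()   # genuine 40-year average, 1961-2000
avg_5yr  = temps[-5:].mean()    # 5-year average, 1996-2000

print("last 40-yr average: %.2f K" % avg_40yr)   # ~0.21 K
print("last 5-yr average:  %.2f K" % avg_5yr)    # 0.38 K
```

For a warming series, the honest 40-year endpoint sits near +0.2 K while the 5-year endpoint sits near +0.4 K – roughly the same gap as between the +0.2 K a genuine filter would give and the +0.43 K the IPCC graph actually shows.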
So the bottom line is: the IPCC compares “super averaged” temperatures of the medieval period with a peak in modern temperatures subjected to only a 5-year average.
The IPCC basically compares a peak in recent temperatures with super-averaged medieval data, where such peaks are suppressed, to conclude how much warmer it is today than in the MWP.
This is a problem!
From this illustration, the peak after 1998 appears to some degree related to the big 1998 El Niño peak; here from appinsys:
So, were there no El Niño peaks in the medieval period that could have affected the comparison with recent temperatures? Yes, there were: http://co2science.org/articles/V12/N5/C2.php
So we have every reason to believe that there were also temperature peaks in the medieval period – peaks that just might resemble the recent El Niño peak.
So there is no excuse for the IPCC to compare a modern temperature peak with medieval average temperatures.
This is banal, of course, and even the IPCC must have been aware of it, one should think.
Here: an illustration where the single year 2004 of observed temperature data is explicitly used in comparison with the super-averaged medieval temperature data.
Fig 5. (from here)