IPCC – How not to compare temperatures

Guest post by Frank Lansner

IPCC – How not to compare temperatures – if you seek the truth.

There are numerous issues discussed intensely when it comes to IPCC-illustrations of historic temperatures, here for example the illustration from IPCC Third Assessment Report:

Fig 1. Taken from IPCC TAR

In short we have heard of problems with 1) the Mann material, 2) the Briffa material, 3) The cherry picking done by IPCC to predominantly choose data supporting colder Medieval Warm Period, 4) Problems joining proxy data with temperature data mostly obtained from cities or airports etc, 5) Cutting proxy data of when it doesn’t fit temperatures from cities, 6) Creating and Using programs that induces global warming to the data and finally 7) reusing for example Mann and Briffa data endlessly (Moberg, Rutherford, Kaufmann, AR4 etcetcetc).

But, as I believe another banal error needs more attention:

8) Wrong compare.

Imagine for a moment that none of the above mentioned problems 1) – 7) has any impact and then lets just focus on the comparing itself. The data of the proxies suffer from 2 impacts:

A) “Technical averaging” – Impact of many series of date summarized.

Check out what happens when summarizing many datasets of temperatures, an example, the cooling episode 8200 years ago:

Fig 2.

Data taken from: http://wattsupwiththat.com/2009/04/11/making-holocene-spaghetti-sauce-by-proxy/

The white graph with the red squares are the resulting average graph: More temperature sets added together tends to flatten the average. Notice for example how many datasets certainly has a down peak between 8000 and 9000 years ago, but the timing for these datasets are slightly off, and so the down peak is almost gone.

So, to some degree we can expect multi proxies to yield an averaged overall graph.

B) “Direct averaging” – on top of the technical averaging, the data series are often averaged further to some degree using 30, 40 and 50 years Gaussian filters.

The result of averaging by A) and B) is, that the variability of the IPCC graphs on a decadal ´timescale are limited to just tenths of a degree K. But in reality, if there where any real temperature peaks on decadal time scale in the Medieval period, we will would not see these much in the typical data series IPCC shows.

Is this a problem?

Well, it certainly becomes a problem if these “super averaged” data are compared with data that is NOT quite as “super averaged”. And this faulty compare is just what IPCC do.

IPCC “Super averaged” data from proxies, are typically compared to “Observed” temperatures, that is, recent temperatures not at all submitted to the same degree of averaging.

Technical averaging – type A) – is to some degree not happening for observed temperatures, so how about type B), the direct averaging, filtering?

Well, For the IPCC graphis shown in fig 1 above, the IPCC text says: “All series were smoothed with a 40-year Hamming-weights lowpass filter, with boundary constraints imposed by padding the series with its mean values during the first and last 25 years.”

Explanation: If your data ends in year 2000, then the last genuine 40-year averaged/filtered point on the graph would be a point for 1980 with average of 1960-2000 near +0,2K anomaly. But the IPCC graph for observed temperatures ends at +0,43 K around year 2000. This more resembles the normal five years average of GISS year 2000 data:

Fig 3. Giss temperatures illustrated in year 2001.

So for IPCC/Mann etc. to get a year 2000 temperature as high as +0,43K, they must have used just normal 5 yr avg. A longer average period would yield lower temperature for the last year.

So, when IPCC wrote “with boundary constraints imposed by padding the series with its mean values during the first and last 25 years.” – they mean: “We don’t use 40 year average/filter in the last 25 years…!”

So the bottom line is: IPCC compares “super averaged” temperatures of the medieval period with a peak in modern temperatures only submitted to 5 years average.

IPCC basically compares a peak in temperatures in recent years with super averaged medieval data where peaks are more suppressed to conclude how much it is warmer today than in the MWP.

This is a problem !

From this illustration it appears that the peak after 1998 to some degree appears related to the big El Nino 1998 peak, here from appinsys:

Fig 4.

So, where there no El Nino peaks in the medieval period that could have affected the compare with recent temperatures? Yes, there where: http://co2science.org/articles/V12/N5/C2.php

So we have every reason to believe that there where also temperature peaks in the medieval period – peaks that just might resemble the recent El Nino Peak.

So no excuse for the IPCC to compare a modern temperature peak with medieval average temperatures.

This is banal, of course, and even IPCC must have been aware of this, one should think.

Here: An illustration where the single year 2004 for observed temperature data explicitly is used in comparison with the super averaged medieval temperature data.

Fig 5. (from here)

Cheers!

0 0 votes
Article Rating
136 Comments
Oldest
Newest Most Voted
Inline Feedbacks
View all comments
April 4, 2010 9:22 pm

depending on the scales you use for each axis you could make that graph look as flat as a pancake or a silhouette of the alps.

Pete Olson
April 4, 2010 9:33 pm

Somebody needs to proofread this stuff before posting it. Legitimate points get lost in poor grammar, spelling, and use of language.

George Turner
April 4, 2010 9:45 pm

Good point!
Small note: A few places after figure 4 the text said “where” instead of “were”.

JinOH
April 4, 2010 9:47 pm

Dang. Now I want spaghetti.

churn
April 4, 2010 9:49 pm

Everyone needs to read the books “How to Lie with Statistics” by Darrell Huff and “Freakonomics” by Dubner and Levitt.

April 4, 2010 9:53 pm

As I recall, that last wikipedia chart above had 8 of the 10 curves generated by card-carrying hockey team members.
Not to mention using Mike’s Trick, and clipping off the unhelpful data.

CRS, Dr.P.H.
April 4, 2010 9:54 pm

Nicely done, Frank! Thanks!

Claude Harvey
April 4, 2010 10:04 pm

Pretty much indecipherable to this reader. My quick read leaves me with the impression that the IPCC and the usual cast of characters have done bad things with the numbers. This is new news? Maybe I shouldn’t have read the thing immediately after struggling for hours with my balky computer which left me “crabby”, but I didn’t quite get it and the piece didn’t excite enough energy to cause me to work at it.

Steve Garcia
April 4, 2010 10:46 pm

Good going, Anthony –
Whether intentional or not, treating recent data differently is not cool, not without mentioning that this is the case. 40-year averages is a really curve-smoothing operation in itself. If that is done for most of the years on top of the blending of many proxies like Anthony shows, yes, you’re going to get a super-smoothed curve, with VERY few peaks and valleys.
The extreme example of long-year averaging would be averaging ALL the years into one value, which would be a flat horizontal line. The more years for the rolling averages, the flatter it gets. The fewer years, the spikier it gets.
It IS pretty heinous to then tack on the less-blended (i.e., multiple data types) and shorter rolling average data from the later years and then act like the spikes mean anything.
I’m no expert, but when I plot even single source data with rolling averages, I don’t even TRY to include the last years. If I do a 10-year average, I stop the rolling average graphs 5 years short – the last year that has real data covering its entire 10-year span. AND I always show both non-rolling and rolling graphs, so people can see what is going on.
Half the 40-year average is just the 20 years that are the years they say the temps have been rising.
From my experience, I’d rate this post VERY high in real world graphing.
But I have to wonder why Steve M hasn’t pointed this out.
.

Steve Garcia
April 4, 2010 10:47 pm

OOPS! I thought this was from Anthony. Sorry!
My kudos should have gone to Frank.

Mikkel
April 4, 2010 10:51 pm

This post is written in good quality Danglish, which should not be difficult for native English speakers to understand.
As native speaker of a world language, one must be able to decipher its use by non-native speakers.
I have seen far worse examples in reports and blog entries linked-to from WUWT.
/Mikkel

wayne
April 4, 2010 11:00 pm

That’s an aspect I hadn’t considered, thanks Frank.
This Hamming filtering with its 25 period mean padding on either end means it matters a whole lot whether it was near a high point or low point at either end. That leaves a hidden way to artificially bias data if the endpoints themselves do not truly have a Gaussian distribution about the mean. That is, if beginning points of multiple sets tend to be up, they would tend to be artificially smoothed downward due to the mean padding, if ending points of all sets tend to be down, they would tend to be artificially smoothed up. That’s a great place to prove bias, if it exists, when many sets of data are merged. Thanks again Frank.

NZ Willy
April 4, 2010 11:08 pm

I’m suspicious that the 1998 El Nino was used as a smokescreen to jack up the global average temperatures. Worth keeping in mind when investigating the “adjustments” done on temperature records.

wayne
April 4, 2010 11:19 pm

Additum to: wayne (23:00:15)
I think I just made a loose statement above. After some thought, I realized it matters greatly exactly which ‘mean’ the IPCC is speaking of. Is it the mean of each data set’s data of 25 years on either end or is it the mean of the endpoints of the collection of all data sets. I wonder exactly which one they were referring to in:

[…] the IPCC text says: “All series were smoothed with a 40-year Hamming-weights lowpass filter, with boundary constraints imposed by padding the series with its mean values during the first and last 25 years.”

wayne
April 4, 2010 11:41 pm

Mikkel (22:51:19) :
[…] As native speaker of a world language, one must be able to decipher its use by non-native speakers. […]
I call it ‘try to make it be correct’ before ‘calling it wrong’.

pat
April 4, 2010 11:42 pm

I think the whole idea of averages is now suspect.

John F. Hultquist
April 4, 2010 11:55 pm

http://wmbriggs.com/blog/?p=195
Do not smooth times series, you hockey puck!
Try this one by William M. Briggs

Capn Jack.
April 5, 2010 12:09 am

I think the best thing that can be said about the IPCC, Mann, and so on, there has been no statistical review by high level Mathematicians, in their own organisations.
It has been up to external review, by sceptics, what ever that means, but by experienced math stat scientists.
Thank you Frank, Poxies must be submitted to the same filtering torture as other sets. Whether it’s a 40 or 5 year setup. Otherwise it’s gibberish.
Me and a mate once created a Roulette program, which we tried at the local casino, we made money but we argued all the way home about the program. I said I have an issue on our random number generator we used. I think it was set constant.
So we checked it we and fixed the bug, we were lucky not to have lost our trousers. However we never experimented at the casino again.
And climate seems to me to a very big complex casino.
AArgh.

Capn Jack.
April 5, 2010 12:12 am

Oops I mean proxies.

April 5, 2010 12:47 am

OT
Powerful earthquake shakes California
Two people have died and at leeast 100 have been injured in a powerful earthquake that hit California two countries and three states on Sunday, swaying buildings from Los Angeles to Phoenix to Tijuana.
http://earthquake.usgs.gov/earthquakes/recenteqsww/Maps/10/245_30.php

John Peter
April 5, 2010 12:59 am

I think that Frank Lansner has touched upon something quite important, which explains to a certain degree how the IPCC has managed to remove the Medieval warm period from their temperature record. The persistent efforts to make the latter part of 20th century and the last ten years look warmer than any time over the last 1000 years is not just a scientific issue but very much a real problem with substantial economic consequences. For us living in United Kingdom it is now becoming an even more immediate and expensive problem. Just read Christopher Booker’s latest column in The Telegraph here to understand how the real cost of “Climate Change” will soon become unbearable for more and more industry and the flight to China will intensify with additional unemployment in the pipeline for no gain.
http://www.telegraph.co.uk/comment/columnists/christopherbooker/7550164/Climate-Change-Act-has-the-biggest-ever-bill.html

David S
April 5, 2010 1:03 am

Looks like this is one to be filed under “weather is not climate”. Averaging the proxies will of course have a further smoothing effect, on top of the 40 year averaging and the use of “Hamming Weights lowpass filter”, to heighten the contrast with a few warm years between 1995 and 2004. Can anyone enlighten us as to the rationale for this gloriously named methodology, normally seen in digital filter processing? It seems to be a long way from home, and as far as I can see there is no innocent reason why it should have been used for weather observations.
It’s a very good spot – well done Frank.

AlanG
April 5, 2010 1:27 am

If you calculate an average P/E ratio for a set of stocks you calculate:
sum(price * shares) / sum(EPS * shares) where all prices and EPS are brought to the same currency unit. You do NOT use average(price/EPS). I know this because I wrote the software for one of the largest investments banks to perform these kind of calculations.
Now look at Fig. 2. All the lines are temperature proxies which are calculated as (something / temp conversion factor) where ‘something’ might be dO18 or tree ring width or whatever. The temperature conversion factor might be a single value or a lookup value from a table if the conversion is non-linear. An example might be (tree ring width/(width per degC)) giving you a proxy temperature series.
Although you have converted everything to a temperature line, you can’t just average these temperatures. The ‘proper’ calculation would be sum(measure)/sum(conversion factors) which would be nonsense because the units are different for each proxy. So the average in Fig 2 is nonsense. The fact the proxies don’t all change at the same time tells you that the temperature proxies are not very good. In fact, I think the only thing you could learn from Fig 2 is how the proxies are correlated.

Ian Cooper
April 5, 2010 1:48 am

Capn Jack, that was a Fruedian slip. I’m sure you meant ‘poxies!’
Frank,
if I am reading this correctly would we expect the temperature plateau shown for the decade that we are just now concluding in Fig. 4 to be subsumed as time rolls on, and this decade is then treated the same as those prior to 1980?
The raw data for my region in New Zealand obtained from NIWA does not support the claims of this current decade being a hot one, in fact this summer has become the third one from this decade to join the ranks of the 20 coldest summers of the past 55 years. I only have data from 1954 onwards, but none of the southern summers from the 1950’s make the list. There is a fairly even spread for all of the following decades. The top two coldest summers are the Pinatubo affected ones of 1991-92 and 1992-93. No surprises there. 1975-76 comes in a close third.
The fact that 1975-76 comes in at 3rd coldest is quite interesting to me when I see that the previous summer is the clear winner in the hottest summer stakes ahead of 1998-99, and 2007-08. As people will no doubt comment, just another fine example of natural variation.
Cheers
Coops.

L
April 5, 2010 2:03 am

Like Pete Olson, I have often taken exception to the atrocious spelling and use of grammar in the posts here, but my complaint is limited to the Americans who obviously spent their time in grammar school picking their noses and failing to learn spelling, punctuation, etc. When some fairly obviously non-native English speaker posts here, I ignore all of the obvious non-native English usages and try to reach the mind of the poster.
People who live in grass houses shouldn’t stow thrones.
Our ‘world’ would be much poorer without the foreigners here who do their level best to get their points across. And, while we’re talking about this, who among the critics I refer to can actually communicate in any language other that, which I will, for sake of fairness, call “Americanese?” Hey, folks, some of the ‘Strines’ here can’t even communicate in that language. Anyone want to see them gone? Didn’t think so. Reserve your vituperation for the American speakers that somehow can’t speak American, leave the English and others alone. L
[Agree. But the most egregious offenders are those who misuse idioms. ☺ ~dbs]

April 5, 2010 2:03 am

Many thanks Frank for this very informative article. Also “mange tak” for the article on http://www.klimadebat.dk/frank-lansner-klimadebat-a-z-del-1-r141.php
A goldmine of information 🙂

SandyInDerby
April 5, 2010 2:24 am

Mikkel (22:51:19) :
This post is written in good quality Danglish, which should not be difficult for native English speakers to understand.
As native speaker of a world language, one must be able to decipher its use by non-native speakers.
I have seen far worse examples in reports and blog entries linked-to from WUWT.
/Mikkel
Well said Mikkel, even in England (not to mention the UK) there many variants of spoken English. If we can cope with that then a few mixed up Where, were, we’re and wear shouldn’t be too hard to cope with. Admonishing a non-native speaker is no way to teach them the nuances of a language.
[Reply: Agree, Sandy. Native English speakers should try to learn the difference between “effect” (verb) and “affect” (noun). Example: “The effect of precipitation affects the climate.” This is the mistake I fix most often. (Many times I don’t bother because it’s hard to keep up.) ~dbs, mod.]

TerrySkinner
April 5, 2010 2:25 am

So much of the debate centers about wavy lines on graphs and what I pick up from this is how much of it is based on trickery and idiocy. I think the point being made here is akin to taking a graph of winter and summer temperatures over say 40 years and doing a chart of the average between winter and summer. But then takking onto the end of the chart the highest temperature of last summer. Result: Instant hockey stick.

Mari Warcwm
April 5, 2010 2:25 am

Many thanks for all your hard work. Very interesting.
I am no mathematician, but I could follow, and I could see typos and cope with them. I am very grateful that someone is putting precious time into disentangling this scam.

SandyInDerby
April 5, 2010 2:26 am

PS meant to add
Neither does bad grammar/spelling mean that the truth of the content is negated.

TerrySkinner
April 5, 2010 2:31 am

Getting away from graphs for a moment, I was reading a book recently and came across some real world scientific evidence for climate change:
Medieval Frontier Societies (1989)
Chapter 1: Frontier and Settlement: Which Influenced Which? England and Scotland 1100 – 1300 by Geoffrey Barrow:
“Archaeological research, and especially intensive aerial photography in recent years, has made it impossible to doubt that the climate was more favourable for cereal cultivation than it is today: we have abundant traces of regular ploughing on ground which no modern farmer would dream of turning over even under the stimulus of huge barley or oil-seed rape subsidies, and at heights above sea level at which it is very doubtful if any modern varieties would ripen. These archaeological findings are particularly interesting because they abundantly corroborate surviving documentary evidence which showed that many Border parishes now drastically depopulated possessed large acreages of arable which must have required a sizeable number of inhabitants to be cultivated and harvested.”
Warmer than today means of course warmer than c 1989 when the book was published. But this is an illustration of how warming after 1989 would have a fair way to go to equal the historical record. It also shows how warmer temperatures supported a larger population without the world being drowned in rising seas.

Stephen Brown
April 5, 2010 2:36 am

Perhaps Mr. Lansner could lend a hand with this project:-
http://www.timesonline.co.uk/tol/news/environment/article7039264.ece
The Met Office are going to “re-examine” their figures to “look at the data in much greater detail than previous attempts and provide more information about which regions are suffering extreme heat waves and the greatest average changes in climate. The Met Office said that this would allow international funding to be directed to where it was most needed.”
It’s going to cost millions and they have already stated the outcome.

April 5, 2010 2:49 am

Pete Olson (21:33:46) :
Somebody needs to proofread this stuff before posting it. Legitimate points get lost in poor grammar, spelling, and use of language.
George Turner (21:45:29) :
Good point!
Small note: A few places after figure 4 the text said “where” instead of “were”.

“SpellCheck: The Model”…

Sleepalot
April 5, 2010 2:57 am

In the first graph, what does it mean, when the smoothed average lines
go outside the grey area?

Stefan
April 5, 2010 3:03 am
Christopher Hanley
April 5, 2010 3:12 am

It is amazing, in retrospect, that the ‘hockey stick’ graph could have been accepted by (allegedly) trained scientists.
The more proxy data you throw in, the flatter the ‘handle’ becomes (the LIA is an accepted fact by all).
Any part of the error bars (grey area) could represent an approximation of the temperature at that particular time (we’re talking about fractions of °C).
Then to graft on to the end the instrumental data (with all its pre-’79 uncertainty together with the more reliable post-satellite data tacked on for good measure), well…….words fail.
I’ve wondered why no one at the time questioned the first part of the ‘blade’ (pre-c.1940)– it could not possibly be due to human CO2 emissions:
http://photos.mongabay.com/09/0323co2emissions_global.jpg

Alan Bates
April 5, 2010 3:25 am

A trivial point, perhaps, but irritating to someone who believes in precision in science:

… the variability of the IPCC graphs on a decadal ´timescale are limited to just tenths of a degree K.

Temperature is measured in Kelvin, not deg. Kelvin.
You are correct when you use 2K or similar. Please, don’t give people an excuse to ignore the key point being made.

April 5, 2010 3:31 am

[Reply: Agree, Sandy. Native English speakers should try to learn the difference between “effect” (verb) and “affect” (noun). Example: “The effect of precipitation affects the climate.” This is the mistake I fix most often. (Many times I don’t bother because it’s hard to keep up.) ~dbs, mod.]
Forget that. What irks me is the inability to differentiate between the effect they’re having on their affect!
(I know it makes no sense, but it was as good as I can get after a longer Easter w/e 😉

April 5, 2010 3:33 am

Sheesh!
“longer” w/e than what?
me <= idiot

Turboblocke
April 5, 2010 3:37 am

You tell us how the Holocene average is calculated.
You tell us how the modern average is calculated.
You don’t tell us how the MWP average is calculated.
You can’t just assume that the method used for the MWP is the same as that used for the Holocene, you have to prove it. Without this step, your conclusion is unproven.

SteveS
April 5, 2010 3:37 am

Wikipedia entry for Maurice Strong. Hello. There was a photograph of Mr.Strong in his entry which was later removed.I wonder if there is some way I can find this picture? Is each Wikipedia editing instance noted or recorded somewhere?

Turboblocke
April 5, 2010 3:43 am

Re; John Peter (00:59:23)
Booker has presented a slanted article: the costs are shown in an assesment report which you can download here http://www.decc.gov.uk/en/content/cms/legislation/cc_act_08/cc_act_08.aspx and range from £14.7 to 18.3 billion/year but our infamous correspondent neglects to mention the benefits which range from £20.7 – 46.2 billion/year.

kwik
April 5, 2010 3:52 am

Christopher Hanley (03:12:18) :
“It is amazing, in retrospect, that the ‘hockey stick’ graph could have been accepted by (allegedly) trained scientists.”
I dont think it is amazing at all. It is done on purpose, Christopher. Remember the IPCC objective paragraph?
Thank you Frank for this interesting post.

Frank Lansner
April 5, 2010 4:04 am

Never mind how carefully one tries to callibrate, the use of different proxies means not only bigger or smaller timing problems, but also different reaction patterns for the proxi on temperature. For example a pollen proxy reflects a response from the entire flora of an area whereas tree rings reflects changes of individual trees. The latter wont show if the same area as a response to heat produces more trees etc. Different tree sorts reacts differently from each other. Ice cores has a very smoothed temperature profile generally etc.etc.etc.
I DO believe that the sum of a large number of different types of proxies are valuable, but perhaps mostly valuable as a mean-temperature-indicator for a period. INot the best conditions to show an El Nino peak.
Perhaps variability should be estimated from individual proxies, whereas mean values from a greater number of proxies?
By the way, the use of 5 year evg. (or similar) of recent temperature data compared with 30-50 year gaussian foltered data is obviously a problem even though one thinks im not correct about the impacts of summing more types of proxy data.
The error of this wrong compare is so obvious that it puzzles me that some comments seems to ignore this?
PS: THANKYOU ALL for comments! Its higly appreciated!
K.R. Frank Lansner

DirkH
April 5, 2010 4:14 am

Frank, this was a great post again!
People are easily fooled by the IPCC’s shiny brochures. You’re doing a great job pointing this out.
The important part is this:
“So, when IPCC wrote “with boundary constraints imposed by padding the series with its mean values during the first and last 25 years.” – they mean: “We don’t use 40 year average/filter in the last 25 years…!”

This is a deception ON PAR WITH THE ORIGINAL HOCKEY STICK !
I have never seen a Hamming filter used like this. You pad your data with averages, extrapolated values, whatever, the best thing is, stop the filter run BEFORE the shoulder of the Hamming filter runs out of data.
But that would mean that the shiny brochure’s horror scenario ends at 1980 – not nearly frightening enough. Reminds me of Mann’s padding with a mirrored image of the end of the data series (mirrored in x and y direction!).
Mannian Padding – see here :
http://climateaudit.org/2009/07/03/the-secret-of-the-rahmstorf-non-linear-trend/
These guys are either clueless or criminals.

Frank Lansner
April 5, 2010 4:27 am

“Agust H. Bjarnason (02:03:58) :
Many thanks Frank for this very informative article. Also “mange tak” for the article on http://www.klimadebat.dk/frank-lansner-klimadebat-a-z-del-1-r141.php
A goldmine of information 🙂

Im greatful and its nice to hear that the A-Z is being used ! Its a wide info source in easy understandable language.
The A-Z has been translated to English (or Danglish) and more articles has been added and can be found using the A-Z letters in the menu line of http://www.hidethedecline.eu
K.R. Frank Lansner

rbateman
April 5, 2010 4:29 am

Stephen Brown (02:36:40) :
Oh, you mean the Money Laundering part of Cap ‘n’ Trade. The most $$$ will go to those places where they are ‘hep’ on taking a healthy fee in exchange for circulating the funds back into the awaiting war chests. Actual investments for the specified purpose of funds will consist of a token facade.
The crazed drooling over piles of money to rake in was evident at Copenhagen.

Oldjim
April 5, 2010 4:44 am

Re Turboblocke
From the report you reference

S2. It should be noted that the benefits of reduced carbon emissions have been valued using the social cost of carbon which estimates the avoided global damages from reduced UK emissions. The benefits of UK action will be distributed across the globe. In the case where the UK acts in concert with other countries then the UK will benefit from other nations reduced emissions and would be expected to experience a large net benefit. Where the UK acts alone, though there would be a net benefit for the world as a whole the UK would bear all the cost of the action and would not experience any benefit from reciprocal reductions elsewhere. The economic case for the UK continuing to act alone where global action cannot be achieved would be weak.

It would seem that the costs are actual and the benefits are largely hypothetical and based upon the Stern Review as stated in Annex H

son of mulder
April 5, 2010 4:46 am

“Turboblocke (03:43:15) :
Re; John Peter (00:59:23)
Booker has presented a slanted article: the costs are shown in an assesment report which you can download here http://www.decc.gov.uk/en/content/cms/legislation/cc_act_08/cc_act_08.aspx and range from £14.7 to 18.3 billion/year but our infamous correspondent neglects to mention the benefits which range from £20.7 – 46.2 billion/year.”
==============================================
The benefits look like fantasy. And they are all predicated on Dangerous Climate Change as can be seen from page 3 of the report which says.
“What are the policy objectives and the intended effects?
1. To avoid dangerous climate change in an economically sound way. In particular by:
Demonstrating the UK’s leadership in tackling climate change – to increase the chances of a binding international emissions reduction agreement that would stabilize concentrations of greenhouse gases at a level that would avoid dangerous climate change;
Establishing an economically credible emissions reduction pathway to 2050; and
Providing greater clarity and predictability for UK industry to plan effectively for, and invest in, a low-carbon economy.
2. To put in place a framework that commits the Government to assess and address climatic impacts so that the UK is better able to respond to the unavoidable impacts of climate change.”
Well nothing dangerous has happened yet and nothing has convinced me that “do nothing” will lead to anything dangerous. So looks like fantasy benefits.

Frank Lansner
April 5, 2010 4:50 am

“DirkH (04:14:41) :
The important part is this:
‘So, when IPCC wrote “with boundary constraints imposed by padding the series with its mean values during the first and last 25 years.” – they mean: “We don’t use 40 year average/filter in the last 25 years…!” ‘
This is a deception ON PAR WITH THE ORIGINAL HOCKEY STICK !
I have never seen a Hamming filter used like this. You pad your data with averages, extrapolated values, whatever, the best thing is, stop the filter run BEFORE the shoulder of the Hamming filter runs out of data.
These guys are either clueless or criminals.”
**
Dirk H, thankyou and Yes…. This is the core problem you point out so well. These errors are obviously so faulty that they must have been carried out by morons or peoble that for some reason wants to twist the outcome. This is a good useful example of the IPCC ways, and yet again we can conclude that things “happens” to be twisted one way and one way only: To fit the global warming message.

Capn Jack.
April 5, 2010 5:02 am

Sleepalot.
For a smoothed plot to leave error bands, needs either a lot of creativity, or a degree logarithmic of incompetence, because the error bands are a part of the smooth plot. The data strip itself can and will leave the error bands, but the smoothing is to deterine trend.
Smoothing is for trend. Smoothing is trend analysis, nothing more nothing less.

Gary Pearse
April 5, 2010 5:04 am

So the IPCC is saying that the temp has risen only 0.6C in 1000 years (graph above) to 2004?

Grumbler
April 5, 2010 5:12 am

“SandyInDerby (02:24:18) :
……… Admonishing a non-native speaker is no way to teach them the nuances of a language.
[Reply: Agree, Sandy. Native English speakers should try to learn the difference between “effect” (verb) and “affect” (noun). Example: “The effect of precipitation affects the climate.” This is the mistake I fix most often. (Many times I don’t bother because it’s hard to keep up.) ~dbs, mod.]”
Sorry to be a pedant but isn’t ‘effect’ the noun and ‘affect’ the verb ;-)? The example is correct though.
And while we are on the subject can people stop using ‘loose’ [opposite of tight] when they mean ‘lose’. Snarl.
cheers David
[Reply: Right. It was 2:20 a.m. here, and I added the parts of speech as an afterthought – in the wrong place. That’s my excuse, and I’m sticking with it. Also right about loose/lose. I correct that all the time. And don’t even get me started on apostrophe use… ~dbs]

Grumbler
April 5, 2010 5:15 am

“churn (21:49:51) :
Everyone needs to read the books “How to Lie with Statistics” by Darrell Huff and “Freakonomics” by Dubner and Levitt.”
Couldn’t agree more. Add to that ‘Understanding Organisations’ Charles Handy All three – life changing experiences.
cheers David
ps – must get a life 🙂

Sleepalot
April 5, 2010 5:23 am

Christopher Hanley (03:12:18) :
How come neither of the world wars show on that graph?

Peter Plail
April 5, 2010 5:54 am

This problem highlights yet again that no matter how capable these scientists are in their field of science, they struggle to use statistics correctly.
I find it illuminating to see how aggressively they dismiss the offers from statisticians, such a Steve McIntyre, to perform analysis of their raw data. They make much of the fact that these people (who offer help) are not climate scientist, whereas the analysis methodologies are statistical and not scientific, and as Frank has shown in this admirable piece, they appear to be regularly misused (or should that be abused? .
These are professional scientists who are in the main amateur statisticians and even more amateur programmers (if the testimony of many IT professionals, who have commented on WUWT about the quality of revealed program code, is to be believed).
I liken statistics to a carpenters tool kit. A DIYer and a timeserved craftsman both use the same tools, but it’s easy to see who would produce the highest quality, most consistent results.

1DandyTroll
April 5, 2010 5:57 am

Sometimes I wonder if they’re not little kids after all. They took to liking splicing and dicing, so they do it 10000 times before bedtime.

Bill in Vigo
April 5, 2010 6:00 am

Frank, Wonderful analysis. I live in the southern part of the USA. For years we in our area have been criticized about how we speak and how we use verbiage. Of course that is because thought we speak a form of bastardized English so do the folks in the other localized areas of the USA, Great Brittan, Australia, NZ, South Africa, and most of the rest of the “English” speaking world. It may be that I am used to reading many things written by folks from other areas than just where I live that I am able to get the gist of your post. For instance my wife would get angry and tell me to not “tip” her. In a few minutes I would take her hand and help her up the stairs and again she would tell me don’t “tip” me. She is from an other area of the USA and there the word “tip” meant touch. That little difference in colloquialism in speach while we were dating got me in trouble many times. I tend to be sympathetic when some one is trying to get a point across when using a language other than their native tongue.
All that considered I can’t spell and my grammar isn’t worth a “toot”. For me spell checker is a wonderful thing.
Yours is a job well done.
Bill Derryberry

Richard P
April 5, 2010 6:19 am

Hello Frank,
This is a great post. I work with DSP filters often, and know what the effects of run on and run off are from a real world applications. When I use these filters, the data supplied is not used until the filtering becomes valid after run on, and ends before run off. To use that data would cause erroneous results, and possible dangerous conditions if what you are controlling is a critical application.
This basic error by the IPCC is indeed banal, and shows either ignorance, or malfeasance. Either does not bode well for the IPCC, since they are the “Official” climate resource. Unfortunately, this type of error in unsurprising from the IPCC. It is indeed unfortunate that they are either incapable of understanding these errors, or complicit in allowing them.
As far as the the format of the post, given my Dyslexia I am lucky to even type straight, no less criticize someone’s posting. I was impressed with the insight and information, and your work is greatly appreciated.

Stephan
April 5, 2010 6:24 am
Jörg Schulze
April 5, 2010 6:38 am

Thanks! Very helpful, cause as a civil engineer I don’t use much statistics and so was not aware of these tricks. Do these people stop at nothing?

April 5, 2010 6:41 am

Clueless or criminals? They are so solid in their belief that man’s activities are harmful to the earth that their minds latched onto an interpretation of the data that supported this worldview. They interpreted the data in a way that made sense to them. It did not even occur to them that this method is the polar opposite of legitimate science.

Tenuc
April 5, 2010 6:42 am

Thank you Frank for another thought provoking post. My view is that all the changes in long-term weather can easily be explained buy the deterministic chaos we observed in Earth’s turbulent systems.
The mistakes made by the IPCC, which you clearly illustrate, are either caused by lack of knowledge, or more likely, yet another deliberate attempt to mislead the public.
“Oh what a tangled web we weave, when first we practice to deceive”

jdn
April 5, 2010 6:53 am

Frank, your article is full of mistakes, and, your point isn’t clear either. If you’re going to criticize the IPCC for making a mistake, you should put your finger on the mistake and not simply take issue with an averaging technique. Your article hasn’t added to my knowledge of any issues.
I have a feeling you are writing outside of your normal pattern of speech. For that, you need an editor.

Henry chance
April 5, 2010 6:59 am

We do not use average snowfall or rainfall for the planet. Why do we insist on averaging temperatures for the planet?
JUst my perspective. We were not sure of the existence of the western hemisphere before it was discovered. Why are the warmists so incredibly convinced of the temperatures during the same period before discovery?

April 5, 2010 6:59 am

jdn (06:53:25),
The article is “full of mistakes”?
What mistakes? List them in order.

Warren
April 5, 2010 7:03 am

Bill in Vigo (06:00:31) :
All that considered I can’t spell and my grammar isn’t worth a “toot”.
For me spell checker is a wonderful thing.
===================
Ode to my spell checker (author unknown):
Eye halve a spelling checker
It came with my pea sea.
It plainly marks four my revue miss steaks eye kin knot sea.
Eye strike a quay and type a word and weight for it to say
Weather eye yam wrong oar write.
It shows me strait a weigh as soon as a mist ache is maid.
It nose bee fore two long and eye can put the error rite.
Its rare lea ever wrong.
Eye have run this poem threw it,
I am shore your pleased to no.
Its letter perfect awl the way.
My checker told me sew.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Craig Loehle
April 5, 2010 7:08 am

In this paper
Loehle, C. 2005. Estimating Climatic Timeseries from Multi-Site Data Afflicted with Dating Error. Mathematical Geology 37:127-140.
I showed that if multiple proxies have dating error and you average them, then past variability (like MWP) is damped out, as this post alludes to. This is particularly true when proxies only have dated samples every hundred years or so and interpolate dates in between, as many do. And yes, it is a REAL geology journal.

Invariant
April 5, 2010 7:10 am

Tenuc (06:42:27) : …all the changes in long-term weather can easily be explained buy the deterministic chaos we observed in Earth’s turbulent systems.
This is my point of view too! What is the relationship between this , and this quote of Dr. Brooks in Climate Through the Ages (1950)pp. 286-287?
The weather of one year differs from that of another year, the weather of one decade from that of another decade ; why should not the climate of one century differ from that of another century ?
Answer
The Navier-Stokes equations for fluid flow, the heart of climate models, are scale invariant. This means that rapid variations on short time scale (years, decades) may resemble slow variations on longer timescales (centuries, millenia).
Chaotic systems may exhibit scale invariance accross a wide range of scales. In our climate, variations in cloud coverage and ocean circulation may lead to temperature fluctuations over many timescales.
I’ve never understood the viewpoint that the climate should not have large oscillations on long timescales when we observe small fluctuations on short timescales. To me, this seems unfair, and I suspect that this may violate the fundamental scale invariance power laws that is seen in many nonlinear dynamical systems.

Pascvaks
April 5, 2010 7:16 am

Global Temperature Graphs are a lot like Real Estate aren’t they?
Location! Location! Location!
Imagine what a Global Real Estate Value Graph for the last 500 years would look like?
(Forget inflation, just for a moment:-)

Steve Keohane
April 5, 2010 7:18 am

Good solid piece Frank, thank you.

Tenuc
April 5, 2010 7:18 am

Re: jdn (Apr 5 06:53)
Give the guy a break JDN, Frank isn’t a native English speaker.
His points are all well made, as anyone with even an ounce of common sense would realise!

John Peter
April 5, 2010 7:19 am

“Turboblocke (03:43:15) :
Re; John Peter (00:59:23)
Booker has presented a slanted article: the costs are shown in an assesment report which you can download here http://www.decc.gov.uk/en/content/cms/legislation/cc_act_08/cc_act_08.aspx and range from £14.7 to 18.3 billion/year but our infamous correspondent neglects to mention the benefits which range from £20.7 – 46.2 billion/year.”
Turboblocke
Please demonstrate the actual (real and measurable) benefits of £20.7-46.2 billion/year to us in the UK. As far as I can see they are at best unreal and will never materialise. They are based on a Brown/Milliband unshakeable belief in man made global warming and that we can have an influence thereon and also that some warming has more dire consequences for the Globe overall than no warming. I am afraid that the consequences will be a further erosion of UK competitivenes as more and more companies are driven out or forced to move intensive industrial activities out of the UK (and Europe). It is typical that few MPs and bureaucrats now have any contact with or appreciation of industry. We are moving towards pre 1990 East Eurpean conditions where more and more of the population watches the rest while productivity and value outputs go down the drain.

Enneagram
April 5, 2010 7:21 am

OT: Yesterday´s magnitude 7.2 Richter in BAJA CALIFORNIA, MEXICO
http://earthquake.usgs.gov/earthquakes/recenteqsww/Quakes/ci14607652.php
Seems as as zipper being opened (in this case “upwards”) like the Ethiopia “zipper”:
http://wattsupwiththat.com/2009/11/03/big-crack-in-ethiopia-beachfront-property-soon-to-be-available/

Marc77
April 5, 2010 7:30 am

If we just look at el-mimo peaks or even decade peaks. The proxies could easily have 5-10 years errors on the time axis. And then, averaging them would eradicate those peaks.

Steve Garcia
April 5, 2010 7:33 am

One of the other issues of proxies is the dating of them. All their dating from previous centuries and millennia are approximate, no matter how careful researchers read the tree rings or ice cores. Just finding a starting point is an art. And Carbon 14 dates are all, or nearly all, approximates for earlier centuries, so their placements along the time scale are critical for preserving the peaks and valleys that MUST have occurred.
Then with different proxies the peak of one graph may be shifted in time versus another, and then when blended/averaged, the peaks and valleys of that averaged graph would become flattened across that span of time.
Certainly some of this happens.
This brings up the question: Do the “combiners” of all this (Mann et al) make any effort to rectify (meaning to shift forward or backward to match other proxy graphs) the timing of these separated graphs, or do they simply take them “as is” from the specialists who have created each database?
Each specialist would have no reason to know his or her own data is shifted in time from other proxies, due to the unknown precision of time for each proxy.
It is not just the height of the graphs (temps) but also the time element that has to be considered. In combining graphs, if no effort is made to rectify the various traces in time, flattening of the resultant graph is inevitable – and MISLEADING.
Method matters – in the combining process perhaps more than any other time – since graphs can only get more flat, not less. Avoiding this flattening should be of prime concern to the combiners of other people’s work.

DirkH
April 5, 2010 7:38 am

“jdn (06:53:25) :
[…]
If you’re going to criticize the IPCC for making a mistake, you should put your finger on the mistake and not simply take issue with an averaging technique.”
He pointed out a very basic mistake, namely the production of a convincing-looking graphic with a technique that is not suitable to deliver reliable results. “Take issue with an averaging technique” ???? You seem not to know what you are talking about here.
The effect of the mistake is that high frequencies – noise if you will are dampened in the past but as we approach the present they are less and less dampened.
Obviously this will hide extrema in the past record.
Thus, the assertion that current extrema are unprecedented has been PROVEN to be without base by Frank Lansner.
If this seems like nit-picking to you you obviously have no understanding of science at all.
You can’t just make the data say what you like and still call it science.

David S
April 5, 2010 7:41 am

jdn (06:53:25)
So you are saying it is OK to mix smoothed data and pure data in the same line, and to express a view that the movement from one to the other represents a trend? And it is OK to use a filtering method completely outside its sphere of operation, when it conveniently gives the answer you want, and simpler methods do not?
I guess you are. By the way, the comma after “and” is redundant. Apologies if you are not a native English speaker.

Robert Austin
April 5, 2010 7:47 am

Re: jdn (Apr 5 06:53),
I thought Frank’s article was clear and understandable and the indictment of IPCC statistical techniques evident. Are you sure that you are not just being willfully obtuse?

April 5, 2010 7:48 am

When geologists first saw the Mann et al. curve, we all just laughed because of the solid geological evidence for the Medieval Warm Period and Little Age published in hundreds of papers for decades. Our conclusion was that either the trees that Mann and others used were not sensitive enough to record climate changes or else it was a total fraud. The Climategate emails showed us the answer to that question. Instead of endless arguing over the details of their bogus curves, it would make more sense to just go to the geologic evidence which shows beyond reasonable doubt the Mann et al. curves are worthless.

April 5, 2010 7:51 am

Good article, Frank, and thanks for all the effort you put in. Another nail in the coffin of IPCC veracity.
Your ‘Danglish’ is very easy to understand and the odd typo is difficult to avoid entirely when one’s writing is driven by ideas.
As a native speaker of English and a former teacher, I get irritated by those who get picky and pedantic over ‘correct’ English useage and who have lost sight of the fact that the essential job of written language is to communicate. For those who didn’t grow up with it, English is a damnably hard language to learn, but everyone who prides themselves on their knowledge of it would do well to remember the old adage – ‘every rule in the English language has an exception which serves to prove that rule’.

vigilantfish
April 5, 2010 7:52 am

jdn (06:53:25) :
Frank, your article is full of mistakes, and, your point isn’t clear either. If you’re going to criticize the IPCC for making a mistake, you should put your finger on the mistake and not simply take issue with an averaging technique. Your article hasn’t added to my knowledge of any issues.
I have a feeling you are writing outside of your normal pattern of speech. For that, you need an editor.
————-
What a mean-spirited comment. I found Dr. Lansner’s revelations to be very enlightening. I struggle to understand all the minutae of some of the statistical arguments put in other posts, but the significance and importance of this guest post are immediately apparent. In a way I’m surprised nobody has pointed this out before! The smoothing of a series of unrelated proxies into a bogus super average which is then compared with a different scale of averaging in recent temperature records offends on two levels, as has been pointed out by earlier commentators. Thank you Dr. Lansner for this illuminating information.
I second Smokey’s demand: list for us the mistakes that offend your delicate sensibilities, jdn.

DirkH
April 5, 2010 7:59 am

“DirkH (07:38:07) :
[…]
You can’t just make the data say what you like and still call it science.”
And i’m fuming about this display of scientific incompetence – but maybe i should be laughing. We’ve all been had by the IPCC and their people; they’ve been enjoying good salaries for such a long time and just messed around with the numbers any way they saw fit. That’s chutzpah, a quality all in itself. The great climate swindle.
Here’s a description of Hamming window coefficients
http://en.wikipedia.org/wiki/Hamming_window#Hamming_window
and here’s a good description with the Fourier transform
https://ccrma.stanford.edu/~jos/sasp/Hamming_Window.html
The reason for using a Hamming window is its optimal suppression of ripples in the transform. So just messing with its width should have some effect on the spectrum, right? Makes you wonder why you ran a filter in the first place.

April 5, 2010 8:01 am

This comment is off topic for this thread but might be of interest to WUWT readers. Many here are aware of William Connolley’s role in keeping the climate change articles on Wikipedia scrubbed of anything he doesn’t like and inserting questionable material in it to support what his point of view. With that in mind there is an article up at:
http://pediawatch.wordpress.com/2010/04/05/a-good-example-of-william-m-connolleys-work-on-wikipedia/
which discusses some minor fiddling he has done with the aid of his friends on other blogs, and which points out, among others, the following diff:
http://en.wikipedia.org/w/index.php?title=The_Real_Global_Warming_Disaster&diff=next&oldid=342554628
in which he is scrubbing mention of WUWT from the article on Christopher Booker’s book “The Real Global Warming Disaster.”

MikeN
April 5, 2010 8:01 am

So they pad beyond the end date, and then use the 40 year filter.
Were is the evidence that they use a 5 year average?

Bill Illis
April 5, 2010 8:01 am

This Climategate email has the raw unsmoothed and untruncated (for the post-1960 decline) data from Briffa’s first reconstruction in 1998 covering 1402 to 1995. It was sent to Michael Mann from Tim Osborne.
It is calibrated to the 1881-1960 average so the temps are comparable over the period. Chart it up and you can see what the raw tree-ring data really shows.
http://eastangliaemails.com/emails.php?eid=146
939154709.txt

Ian W
April 5, 2010 8:12 am

I am concerned that no-one has raised the skewed ‘geographic’ averaging that occurs with the use of the proxy data. The coverage of the proxies is hardly ‘world-wide’ yet they are used as indicators of the entire planet. Is it safe to claim to know the past warmth in the Amazon basin from tree rings measured in Siberia? Most proxies are of small, sometimes very small, geographic spread, and where there has been an attempt at validation they have been shown to be inaccurate. Averaging instrumental temperature over the globe is bad enough (it should be heat content the enthalpy of the atmosphere varies hugely with humidity) but averaging multiple inaccurate proxies for temperature just provides huge scope for cherry picking. That is before the mathematical and statistical games are played.

Steve Garcia
April 5, 2010 8:16 am

BTW –
After that last thought dawned on me (feet2thefire 07:33:30), I am tending to think the CRU people and Mann – especially Mann – went where no man has gone before in accumulating all the work of specialists, and that in the process they made assumptions they didn’t even know they were making. These main assumption was that they could just take the data as found and – as complicated as it was already – process it in a way that DID flatten curves, though they did not see that at the time.
Frank has made a VERY important observation here, pointing us to this flattening issue.
In breaking new ground, they wouldn’t recognize all the pitfalls that lay along the path. In addition, they would have wanted to take at least some shortcuts (read “assumptions”), so as to not take far longer to get their results.
(Yet, having found a nearly flat graph that eliminated the MWP and the LIA, Mann clearly should have seen that he’d done something wrong; he HAD to know that people would jump all over his Hockey Stick. That he was a total jerk when they did only made him look like he had something to hide. That he went to the lengths he did to destroy the efforts of others to correct the graphs made him look vicious and despicable. He should have instead welcomed efforts to improve his work.)
Add that to Mann’s Primary Component failures pointed out by Steve M and (and Ross M?), and the Hockey Team’s work was flawed, even if they didn’t intend it.
With the time-flattening of their failure to rectify different curves in time, added to Mann’s lack of statistical expertise, I am pretty certain that they honestly believe they have done nothing “wrong.” They are perhaps even now ignorant of their errors.
But I think that they were also are insecure about their findings (they WERE blazing trails in the beginning), and when people are insecure about their methods/work, they are very easily caught up in efforts to neurotically defend the work.
I am certain that their work will be overturned, on the basis of better procedures that are already being discovered. In the big picture, their work shouldn’t be so heartily denigrated as most of us do here, but appreciated as a strong first effort, even if it was flawed by misguided assumptions.
On their part, they need to be open to the improvements, not paranoid about someone else “finding flaws in it.” But they shouldn’t just blindly or blithely accept such developments; they should always challenge the proposed improvements, so that other assumptions don’t prolong the entire already drawn-out process of getting a true temperature history of the planet.
Very few pioneers get everything right the first time. This current process of finding and fixing the flaws in the work that began with Mann’s work did not need to be so contentious. The Hockey Stick was just WRONG. Mann should have welcomed Steve M’s corrections. If it wasn’t Steve, then it would have been someone else, sooner or later.
The real problem came when other people – people with agendas – took Mann’s work as gospel, as confirming their environmental concerns, and then who created a “whole cloth” out of it that simply wasn’t true, and then carried it into the political sphere and started laying heavy trips on governments and the public about humans killing the planet.
There was, and IS, no firm and automatic connection between “it looks like the climate is getting warmer” to “humans are the cause of it all.” We here think there is no connection whatsoever, of course.
Seriously, IMHO that is a connection that came out of the environmentalists’ individual and collective psychology, which then was played upon an unsuspecting public’s desire to not kill Bambi’s mother, the environment. Once the “planet killers” concept kicked in, there was no logic that could convince the public that we should all just stop and think this thing through. All arguments for the public became the Cautionary Principle – “But shouldn’t we err on the side of caution? What if they are right?
Once the public thought that, nothing short of the nuclear option was available: Logic and rational discourse were no longer possible. Corrections would be VERY difficult to make, and it would be a long, uphill battle.
It didn’t have to be.

Rod Smith
April 5, 2010 8:19 am

Henry Chance: “Why do we insist on averaging temperatures for the planet?”
GOOD question. My opinion is that it is because the whole scam started with “Global Warming.” When that failed to stand up to scrutiny, the focus was moved to “Anthropogenic Climate Change”, but little was actually changed except the verbiage even though it is not possible to accurately describe (or forecast) “climate” with temperature alone.
I also believe that in the beginning it all boiled down to an easy way to comply with research grants ($$), regardless of its actual merit. Another plus is that it is easy to accomplish and made impressive graphs.
I contend that if you put any stock in “average temperatures,” you need to spend a few days wondering about the high deserts of the US in winter time. I guarantee you won’t even notice the “average” temperature when it goes by as the sun goes down!

James Sexton
April 5, 2010 8:23 am

Frank, thank you for articulating something I’ve been trying to say for years. As pointed out, your statement may need some refining for people that have to have everything spelled out for them.
jdn (06:53:25) : Just because you’re not grasping the obvious statement the article is making doesn’t mean its “full of mistakes”. I’ll try to help, after all the averaging, smoothing, extrapolating, proxy splicing, ect….., when you see a graph comparing historical temps to present day temps, you’re not seeing a proper comparison. It is an “apple to orange” comparison and totally meaningless. It is stating our recent spike is illustrated against smooth historical data. Even the standard set by alarmists is smoothed over a 4 decade period of time. Of course that shouldn’t be confused with a smoothed average over a millennium period of time. THEY ARE NOT THE SAME.
Turboblocke (03:43:15) :
I was going to seriously respond to a couple of your comments until I saw your obvious attempt at humor when you stated, “…….. neglects to mention the benefits which range from £20.7 – 46.2 billion/year.”
Can you please list some of the alleged “benefits” in term of monetary value, please?

DCC
April 5, 2010 8:35 am

@jdn (06:53:25) :”Frank, your article is full of mistakes, and, your point isn’t clear either. If you’re going to criticize the IPCC for making a mistake, you should put your finger on the mistake and not simply take issue with an averaging technique.”
Actually, his points are sufficiently clear for anyone who wants to understand them. He clearly summarized several known IPCC mistakes and then pointed out that one more has not been sufficiently analyzed. That’s why the concentration on averaging technique. As for the “mistakes.” I presume you are referring to his non-native English. Try reading the existing comments before blasting off. For that matter, re-reading the article itself if you don’t understand it the first time.
“Your article hasn’t added to my knowledge of any issues.”
I think we can all agree to that. But it’s not Frank’s fault.

Stephen Wilde
April 5, 2010 8:37 am

Hmmm,
Remove (or be unable to identify) all the short term temperature peaks from the Mediaeval Warm Period.
Note a short term peak that has just occured.
Panic in all directions because there is no known past peak equal to the recent one.
Ignore the historical and proxy evidence that it was warmer than now during the Mediaeval Warm Period, the Roman Warm Period and the Minoan Warm Period
Recommend global energy rationing and world government on that basis ?
Are we governed by the mentally challenged ?

April 5, 2010 8:39 am

IPCC compared average temperatures for MWP and the present, not an average in one case and a peak in another. Your claim here lacks merit.
On a more general note, you are asking us to believe temperature reconstructions put together by amateurs, rather than the ones carefully developed and peer reviewed by hundreds of specialists at NOAA, NASA, CRU, and others. It’s beside the point that your data is not correct. It is unworthy of consideration.
Historical temperature reconstruction, including those for recent decades, is a highly technical and detailed process, requiring extensive training. Questioning the basic expertise and integrity of those who develop these global temperature charts for international organizations is not credible, and is unsupported by actual scientific evidence.

jaypan
April 5, 2010 8:41 am

Frank,
looks to me like another very important contribution to prove AGWT wrong.
It deserves to get reviewed and published in a more official way.

James Sexton
April 5, 2010 9:05 am

Stephen Wilde (08:37:22) :
“Are we governed by the mentally challenged ?”
Go here for an answer. http://wattsupwiththat.com/2010/04/01/congressional-tipping-point-not-an-april-fools-joke/#more-18094
While there are several other examples, this one entry should sufficiently answer your question.

A C Osborn
April 5, 2010 9:19 am

mike roddy (08:39:54) :
Sorry “Historical temperature reconstruction, including those for recent decades, is a highly technical and detailed process, requiring extensive training.”
is absolute Cr*p, are you saying Mathematicians & Scientists using the published procedures are not capable of doing it without “extensive training”?
Of course to do it the “RIGHT WAY”, i.e. to show AGW takes very special training in deed.

Frank Lansner
April 5, 2010 9:29 am

“MikeN (08:01:08) :
So they pad beyond the end date, and then use the 40 year filter.
Were is the evidence that they use a 5 year average?”
The 5 year average is an estimate using GISS global temperature data. If you use around 5 yr average on GISS data published year 2000-2001, this enables an end point that matches the endpoint in the IPCC graphic rather well : 0,43K.
For some reason, the IPCC did not write themselves: “For the last years we use 5 yrs averaging enabling big warm temperature peak, and then we compare with older temperatures that we have submited to 40-years averaging and thus suppressing peaks markedly more.”
IPCC should make it quite clear in their legend text what they do.
K.R. Frank Lansner

James Sexton
April 5, 2010 9:30 am

mike roddy (08:39:54) :
“IPCC compared average temperatures for MWP and the present, not an average in one case and a peak in another. Your claim here lacks merit.
On a more general note, you are asking us to believe temperature reconstructions put together by amateurs, rather than the ones carefully developed and peer reviewed by hundreds of specialists at NOAA, NASA, CRU, and others. It’s beside the point that your data is not correct. It is unworthy of consideration.
Historical temperature reconstruction, including those for recent decades, is a highly technical and detailed process, requiring extensive training. Questioning the basic expertise and integrity of those who develop these global temperature charts for international organizations is not credible, and is unsupported by actual scientific evidence. ”
Mike, they were averaged using different time sequences and periods. THIS ISN’T ASTRO PHYSICS, this is simple base mathematics that any high-school grad should understand. The final comparison illustrated in the graphs are in no way correlated. One can’t average over different lengths of time and not expect different trends in the graphs.
Your appeal to authority lacks any credibility because you didn’t appeal to the proper authority. You could start with any college level algebra instructor, but a statistician would probably be better. You can peer review until the cows come home, but until you include some one with appropriate math skills(or in the way your think) “credentials” it is all meaningless. But then, once someone with the appropriate math skills gets involved, it will be declared meaningless anyway.
Why do you think they had to hide the decline? Because if they didn’t, tree ring proxies become invalidated. THEY DON’T EQUATE WITH RECENT OBSERVED TEMPERATURES(that’s not my observation, that’s the hockey teams). This then calls into question how they KNOW they are even correlated with historical temperatures. They don’t. It isn’t possible for them to know if they don’t correlate with observed temps. So, then, how do they know we are warmer than the past using proxies? They don’t. But they can convince you that apples equal oranges because they work for the NOAA!!!!????!!! Nice.

Steve Garcia
April 5, 2010 9:34 am

Perhaps for those who don’t “get” Frank’s point, this may help or may not:
His graphic of a very brief moment in time confused me at first (the Holocene temperature proxy series for 9000-8000bp).
Perhaps segregating the curves and pointing out that there ARE peaks and valleys would be useful, and then showing the averaged graph, separately – and then asking “Where did the peaks and valleys go?”

Isn’t that Frank’s main point, that the peaks and valleys disappear, the more data sets one includes?
* * * * *
On my own point – of the peaks and valleys perhaps being shifted forward or backward in time:
When I look at that first graphic here, I see that the red, the bright green and the medium-light blue graphs have serious dips in them. The red one is out of phase with the other two. It is probable that the time scale on them is merely shifted a bit, because of the methodology or the inherent errors in each method for determining the time element.
If one averages those three, the red graph’s shift will start to flatten out the other two. Even though there is a clear and severe dip in all three, the averaged graph will show less severe of a dip.
And the more graphs one adds in the averaging – if there are shifts in them like this – the more the peaks and valleys disappear.
IMHO, if the measurements of time in all of them was absolutely correct, then the peaks and valleys would show up in the averaged graph. Why? Because I think that the measurements of temps are all probably pretty accurate – because the warming and cooling DID happen and DID have effects. But if we can’t measure the time of these occurrences well enough, then when we combine/average the data, then the peaks and valleys start to cancel each other out, flattening the curves.

DirkH
April 5, 2010 9:40 am

“mike roddy (08:39:54) :
[…]
It is unworthy of consideration”
Mike, is this you?
http://northwardho.blogspot.com/2008/09/polar-cities-go-hollywood-2112-hopes-to.html
Catastrophe movie 2112? Global warming? Like 2 Emmerich films in one? Who’s going to play the damsel-in-distress? Oh… wait… let me guess… Sandra Bullock?
So may i deduce that you have a vested financial interest in keeping the AGW scare going?

TJA
April 5, 2010 9:41 am

Do you think there would be some kind of peak on a reconstruction if it properly accounted for 1289 the way it putatively does for 1998?
http://www.newscientist.com/article/dn12098-freak-winter-is-europes-warmest-for-700-years.html

George E. Smith
April 5, 2010 9:41 am

My First comment would be, that when a “message” fails to communicate a “meaning”, it could mean that the “writer” of the message is not fully conversant in the finer points of the “language” that was intended to be used.
BUT; A “message” can also fail to communicate a “meaning” if the “reader” of the message is not fully conversant in the finer points of the language that was intended to be used.
So I have no idea where I might fit in the competence level of the English language; specially the American version of it; but I thought I grasped the message of Frank’s essay with no apparent difficulties; so I am somewhat non-plussed by the observations of others.
Secondly; averaging is avery well understood mathematical process. You have a set of (n) numbers; you add all the numbers together, and then divide the sum by (n) and the result is the “average” or “mean” of that set of (n) numbers.
This is true regardless of (or irregardless, as the case may be) of any linkage or relationship between any subset of the (n) numbers.
The numbers can be the output of a random number generator, for (n) trials of that generator; or they could be totally linked, such as successive readings, at say 10 minute intervals of an ordinary thermometer sitting in a swimming pool full of water.
The mathematical process, is exactly the same in either case. In the first example of random numbers, the final result has exactly zero meaning or significance; other than it is the average of that set of numbers. In the second example, the final result, might be a better value for the temperature of that swimming pool of weater, than any single one of the individual readings was.
I don’t understand the point of computing the average of sets of numbers, that have no reason for the average to mean anything. It adds no information.
If the numbers are in fact the result of some linkage process, such as for example in the swimming pool temperature case, it could be that the value being measured actually happens to be time varying, over the course of the total data gathering cycle.
In which case, the true value might be expected to be different for each time epoch for which a value is recorded; so the sequence of numbers represents some sequence of events; such as a changing temperature due to some physical process that is happening.
In that case; averaging also adds no information; in fact it throws away information that was already there, and replaces it with a false number; which may never have been observed at any time in the process. The most we can say; as a result of a very fine argument in Galileo’s “Dialog on the Two World Systems”, is that there must be at least on point during the total interval where the variable had the exact value of that computed average.
Well the bottom line, is I understand the merit in averaging what are presumed to be a set of measurements of some observational value, in order to reduce random errors that crop up in measuring any real physical variable.
But I see no merit at all in averaging a set of values that are all different observations of a changing variable; those values being deliberately different because some physical process is changing the experimental conditions, and hence changing the observed value.
Those numbers are different becaue they are supposed to be different, and the average is not a valid substitute for any of them.

JDN
April 5, 2010 9:43 am

Tenuc (07:18:31) :
Give the guy a break JDN, Frank isn’t a native English speaker.
Didn’t realize. That makes sense. He *really* needs an editor. The “mistakes” are grammatical & spelling mistakes, not scientific so far as I can tell. I was writing that comment over breakfast, and, my comment wasn’t as fully qualified as I would normally make.
DirkH (07:38:07) :
You seem not to know what you are talking about here.
I got the general idea about windowing errors with a running average, but, this is a common problem. I’m not sure the IPCC is doing anything dishonest or wrong here. That’s why I said I didn’t get anything from the article and that it needs better examples of the mistake he is alleging. The argument should be made again with a better organization & a more pointed example of how this alleged error is skewing the IPCC reporting in a way which would be avoidable by other methods, if that is possible.
You folks are being too kind to the man. Given the gross data manipulation which has occurred to get to the hockey stick, this changing of the averaging window seems a second order effect. Most people commenting here seem convinced by this article that the IPCC’s averaging is a problem. Is this really on the same order of error as everything else we’ve heard about? If it is, then this article didn’t demonstrate the severity of the error.

TJA
April 5, 2010 9:49 am

“Questioning the basic expertise and integrity of those who develop these global temperature charts for international organizations is not credible, and is unsupported by actual scientific evidence.”
This is what they have come to. Just handover the trillions, put on this hairshirt, and don’t question “science”, because you are too stupid to understand why the linear algebra and statistics you learned in college was, in fact, correctly applied.
Lets apply a little logic to Mike Ruddy’s statement. By his own standards, can he know whether the science is right or not? No. So, what we have here is the opinion of a blog commenter that, because he can’t understand the issue, nobody else can either. A blog commenter who probably gets his science through a layer of press that is in no way peer reviewed.
I always said that McIntyre was like Martin Luther nailing his manifesto to the IPCC door, and authority loving toadies continue to insist that only they are qualified to interpret the scripture.
Another revolution happened at that time that was likely related. The printing press. A revolution kind of like the internet, which brought orders of magnitude more minds into the discussion of issues.
Face it Mark, the days of Phil Jones issuing Papal Bulls, or Fatwas, from East Anglia are over. The sooner you understand this, the better.

April 5, 2010 9:59 am

JDN (09:43:39),
So you explain: “The ‘mistakes are grammatical & spelling mistakes, not scientific so far as I can tell.'”
Then, you make the same accusation of scientific error that you originally made, again without specifying exactly where the “mistake” is.
FYI: McIntyre & McKittrick falsified Mann’s hokey stick chart to the point that the UN/IPCC is no longer able to use it.
And the IPCC loved Mann’s chart, using it numerous times until it was thoroughly debunked. It was the most alarming chart imaginable. And now it’s been debunked.
If Mann’s hokey stick hadn’t been debunked, the IPCC would still be using it in their assessment reports.

TJA
April 5, 2010 10:00 am

I wonder what would happen, or if it would be possible to calibrate historical proxies to events like 1289, or cold periods due to volcanism? The reason this cannot be done is political, I am betting, not scientific.

Steve Garcia
April 5, 2010 10:09 am

In paragraph 6, meant to say the second graphic.

DirkH
April 5, 2010 10:10 am

“JDN (09:43:39) :
[…]
window seems a second order effect. Most people commenting here seem convinced by this article that the IPCC’s averaging is a problem. Is this really on the same order of error as everything else we’ve heard about? If it is, then this article didn’t demonstrate the severity of the error.”
It’s really got to do with fidelity, quality, error bars. You see, they are trying to filter a 0.6 deg C signal rise out of the temperature record of a planet that’s surface temperature varies in an interval of 130 deg C. So allright, we need averaging or filtering or we have no chance to see this diminuitive rise.
The entire AGW theory is built on the reliable detection of this minuscule signal. So the way you handle your data is deciding about whether your result has any meaning. We are talking about a signal to noise ratio of about 1:256 or 48 db here.
And that’s why i think this improper treatment of the temperature record is highly significant. It renders the IPCC’s result practically meaningless IMHO.

Frank Lansner
April 5, 2010 10:16 am


Don Easterbrook (07:48:29) :
When geologists first saw the Mann et al. curve, we all just laughed because of the solid geological evidence for the Medieval Warm Period and Little Age published in hundreds of papers for decades. Our conclusion was that either the trees that Mann and others used were not sensitive enough to record climate changes or else it was a total fraud. The Climategate emails showed us the answer to that question. Instead of endless arguing over the details of their bogus curves, it would make more sense to just go to the geologic evidence which shows beyond reasonable doubt the Mann et al. curves are worthless.

This comment i find very interesting indeed. I recently here on WUWT made an analysis of MWP temperature results. I compared results before and after IPCC changed opinion in 2001, and found a huge difference in results before and after 2001 when IPCC published the Mann graph:
http://wattsupwiththat.com/2010/03/10/when-the-ipcc-disappeared-the-medieval-warm-period/
Don Easterbrook, you here confirm that we had a consensus PRO a warmer MWP. If you can add any information on this issue I would be very interested.
The thing is, if IPCC in 2001 acted against the consensus back then, this is one of the strongest signals, that IPCC has an agenda that is not entirely scientific.
Is it possible to “prove” that we had a solid MWP-consensus until year 2000?
K.R. Frank Lansner

DirkH
April 5, 2010 10:18 am

“DirkH (09:40:50) :
[…]
Catastrophe movie 2112? Global warming? Like 2 Emmerich films in one? Who’s going to play the damsel-in-distress? Oh… wait… let me guess… Sandra Bullock? ”
Oh, i’ve read it, Penelope Cruz. She’s hot, great if you can get her. But drop that Armageddon guy, that movie was lame. Try to get Mel Gibson involved, i think your script idea sounds pretty much like Mad Max 2, and that was a fun movie to watch.

jorgekafkazar
April 5, 2010 10:23 am

Alan Bates (03:25:30) : “A trivial point, perhaps, but irritating to someone who believes in precision in science: ‘… the variability of the IPCC graphs on a decadal ´timescale are limited to just tenths of a degree K.’ Temperature is measured in Kelvin, not deg. Kelvin. You are correct when you use 2K or similar. Please, don’t give people an excuse to ignore the key point being made.”
Yes, beyond trivial, Alan. Scientists who can’t do actual science now play with nomenclature and symbolism, instead. Thus we have, of late, Pluto demoted from a planet to some other term. The Medieval Warm Period has been turned into the “Medieval Climate Anomaly.” The hoax formerly known as Global Warming has become the Climate Change canard.
Similarly, In ancient days of my youth, when Science and Journalism still existed, we had temperature measured in °K (or °C or °R or °F or °Whatnot). In postmodern times, the little degrees symbol (°) was dropped as too hard, too time-consuming for scientists to make, the latter no doubt being too busy contemplating their navels or doing creative statistics or counterfeiting data or making up new names for old concepts. Now we have, instead of 2°K, “2K,” which I invariably first read as 2000, since K has always stood for kilo or 1000. [I’ll skip allusions to potassium!] That little ° symbol was there for a reason. Science is dead, abandoned in favor of trivial word play and GCM mathematical self-molestation.

jorgekafkazar
April 5, 2010 10:26 am

Mikkel (22:51:19) : “This post is written in good quality Danglish, which should not be difficult for native English speakers to understand. As native speaker of a world language, one must be able to decipher its use by non-native speakers. I have seen far worse examples in reports and blog entries linked-to from WUWT.”
Easily we used to such phrasing and construction get can.

Frank Lansner
April 5, 2010 10:31 am

Hi JDN
Are you a secret agent 🙂 ?
You write “Is this really on the same order of error as everything else we’ve heard about? If it is, then this article didn’t demonstrate the severity of the error.”
Look, i start the article out:
“In short we have heard of problems with 1) the Mann material, 2) the Briffa material, 3) The cherry picking done by IPCC to predominantly choose data supporting colder Medieval Warm Period, 4) Problems joining proxy data with temperature data mostly obtained from cities or airports etc, 5) Cutting proxy data of when it doesn’t fit temperatures from cities, 6) Creating and Using programs that induces global warming to the data and finally 7) reusing for example Mann and Briffa data endlessly (Moberg, Rutherford, Kaufmann, AR4 etcetcetc).
But, as I believe another banal error needs more attention:
8) Wrong compare.

So, my input here is just a little shake under the IPCC graphs that already are ruined. The interesting about this error is, that its so banal that its tells a lot about IPCC having an agenda. Using proper compare might reduce the recent peak compared with MWP temperature with 0,2K or so (very much depending on what you do!) so a correct compare could not itself totaly ruin the Mann graph message. But we see an error so banal that IPCC appears either incompetent or manipulating results. Can you follow me on this?
K.R. Frank Lansner

April 5, 2010 10:58 am

Turboblocke (03:43:15) :
“our infamous correspondent neglects to mention the benefits which range from £20.7 – 46.2 billion/year.”
I can’t find anything about that in the link you mention. As it leads to maybe a dozen other documents, maybe you could be more specific.

Charlie A
April 5, 2010 11:08 am

This comparison of highly smoothed data to lightly smoothed or unsmoothed data shows up in other places also.
Maybe this should be called MANN TRICK #2:
The Penn State press release on the Mann 2009 article on hurricanes, at http://www.essc.psu.edu/essc_web/research/Nature09.html , states
>”In 2005, there were a record 15 Atlantic hurricanes, including
>hurricane Katrina, which devastated the city of New Orleans in
>Louisiana. These peaks contrast with lulls in hurricane frequency
>the study identified before and after the 1000 AD peak, when 8 or 9
>hurricanes occurred each year.”
The data from around 1000 AD of 8 to 9 hurricanes each year is highly smoothed data that should actually be expressed at 80-90 per decade, or more perhaps even more appropriately as 800-900 per century.
A tipoff that the 8 or 9 per year estimates are bogus is that the standard deviation is only about 1 hurricane per year. The observed std deviation in hurricane counts is about 4 per year.
Charlie

April 5, 2010 11:19 am

I stand by my prior comments.
The notion that mathematicians (McIntyre) weathermen (Watts, Bastardi) or economists (Lomborg) understand data better than the climatologists who developed it makes no sense whatsoever. Developing temperature reconstructions requires a lot more than knowledge of algebra.
The hockey stick is not “broken”. It has been vindicated by NSF and any major scientific body that has looked into it. There are at least 20 data sets that show the same trend lines and data points that appeared in Al Gore’s movie and IPCC. For that matter, a new study has debunked McIntyre’s deconstruction of the Yamal data.
Instead of relying on urban legends, I request that WUWT commenters begin the laborious task of reading IPCC IV, with appendices and addenda.
Of course, if you believe that scientists are fraudulent by nature, there’s not much basis for discussion.
REPLY: Mike you miss the biggest point. Climatologists, particularly government funded ones, have no consequences for failure to perform in forecasting. McIntyre, being a mining statistician does – if he overstates a claim, investors will have it in court. Bastardi of Accu-Weather loses subscribers for his company if he fails to deliver. Myself and other TV weatherman will have viewer revolt, sinking ratings, and loss of advertisers if we don’t get it right very often.
Economists, they’ll be ignored if they botch it. For example, Alan Greenspan didn’t get respect by being bad at what he did. The people you name have to perform in the public eye, climatologists just move on to the next grant and the next paper. There’s little if any consequence for being wrong. They got a pass from the media until climategate came along and woke some of them up.
Climatologists may understand certain things “better” but that understanding doesn’t always equate to performance. I tend to see it as people that like you named above understand the same things, but differently. One mans conclusion is another’s unsupportable forecast.
I’ve forecasted sea ice to recover for the third year at summer minimum. I made that forecast right here on WUWT last fall. We’ll see how it performs. – A

tty
April 5, 2010 11:25 am

David S (01:03:23) :
“Can anyone enlighten us as to the rationale for this gloriously named methodology, normally seen in digital filter processing? It seems to be a long way from home, and as far as I can see there is no innocent reason why it should have been used for weather observations.”
The explanation is almost certainly that Matlab has a standard function W=hamming(L). Some climate scientist who knows little about statistics and nothing about signal processing was playing around and found that this filter was the one that gave the result he wanted.
Same thing about using average padding for a smoothing function. This is ONLY allowable for data without a trend, so it proves either that IPCC does not understand statistics (which is not news), or that the IPCC does not think that temperatures change in a non-random way over time.

April 5, 2010 11:31 am

mike roddy (08:39:54) :
“On a more general note, you are asking us to believe temperature reconstructions put together by amateurs, rather than the ones carefully developed and peer reviewed by hundreds of specialists at NOAA, NASA, CRU, and others. It’s beside the point that your data is not correct.”
No amateur ever made a temperature reconstruction (McIntyre keeps saying he never did a temp reconstruction, and hoi polloi keep saying he did.) That’s for the tree-ring guys. Now, as it happens, it seems that these tree-ring guys and temperature homogenizing guys might be statistics amateurs.
The absence of statisticians in their teams has been cited as a reason for their failure, which can be seen above in a simple statistical methodological matter.

Bart
April 5, 2010 11:44 am

mike roddy (08:39:54) :
“On a more general note, you are asking us to believe temperature reconstructions put together by amateurs, rather than the ones carefully developed and peer reviewed by hundreds of specialists at NOAA, NASA, CRU…”
Who are the “amateurs” and who are the “specialists”? Do you have a copy of the curriculum vitae of everyone in the game? Does working for “NOAA, NASA, CRU” immediately, automatically, and magically confer upon an analyst the rank of “specialist”? Posters here have referred to the author as “Dr. Lansner”. Is it your contention that he received his PhD from a Crackerjack box?
Hundreds, even thousands, of peer reviewed “specialists” assured us in days past, against “amateur” opinion, of the physical impossibility of powered flight, of constructing super-sonic airplanes, of operating rocket ships beyond the atmosphere of the Earth, and of the universal invariance of time, to mention just a few.
Your complaint has no merit. It is complete ad verecundiam, one of the most basic logical fallacies.

April 5, 2010 11:55 am

jorgekafkazar (10:23:01) On the K and the ºK
Ah, Jorge, it’s not as you put it either, it’s even worse. You should use K, ºC, ºF, not ºK, C, F.
And don’t forget the space between the quantity and the unit: 2.3 K, 23.7 ºC, etc… If you intend to use the SI, that is: look here. I don’t know about the Imperial System.
I get regularly clobbered for saying these things 🙂 I should know better.

DCC
April 5, 2010 12:00 pm

“Is it possible to “prove” that we had a solid MWP-consensus until year 2000? K.R. Frank Lansner”
The question sounds a lot simpler than it is. When I was a graduate student in geology, dozens of years before 2000, the MWP was taught as fact. So the basic question must be “Did anyone question the areal validity of the MWP before 2000?” Well, my memory isn’t that good, but the MWP was so well-known for so long from both geological and historical evidence, that it seems highly unlikely that nobody would notice that it only applied to Europe and America, especially since it has recently been reiterated that existing data showed it to be a world-wide phenomenon.
I think the reverse is the proper question. When the IPCC (was it Mann?) claimed the MWP was a local phenomenon, did they show any data to back up that claim? The answer is no, they didn’t. That’s when the questions should have been asked and it’s more than peculiar that scientists supporting AGW did not ask. None of them even asked the more obvious question, “How could one hemisphere have a totally different temperature profile than the other?”
It’s prima facie evidence of very poor science, if not outright fraud.

Ralph
April 5, 2010 12:06 pm

>>>Wilde
>>>Are we governed by the mentally challenged ?
You need to ask that question, I thought it axiomatic. 😉
.
P.S. Note to mods – ‘effect’ is the noun and ‘affect’ is the verb, you had it A over T.
.

Peter Taylor
April 5, 2010 12:52 pm

Thanks Frank for highlighting this issue – it does get forgotton. Basically, if IPCC applied the same rules for the current period as they do for proxies, there would be no scary climate story. The instrumental record as a single global average of thousands of stations (with various adjustments!) cannot be directly compared to proxies because those proxies are a) not absolute measures of temperature – especially tree rings; b) only ever regional (even the Greenland Ice Core). If you are going to attempt to do it – by splicing the instrumental record onto the proxies – the Mannian and Jonesian Trickster Methodology then at least there should be transparency over how the calibration is performed – for example, the tree ring data would need to be used over the whole of the recent post-1980 warming (there being no warming from 1940-1978) – and we know that those data did not show warming so they were ‘truncated’ rather than admit the truth that the proxies were not reliable and the two data sets should not be compared.
I have come to the view – post-climategate, that we should abandon all global averages from the instrumental record prior to 1979, and stick with the modern era of satellite measurements – as well as abandon all attempts are computer prediction of temperatures for the next century. In the latter case, the current predicted range and uncertainty it next to useless for policy, especially given the regional levels of uncertainty for rainfall as well as temperature. We need to spend on resilience and adaptation to protect a burgeoning population of people very vulnerable to climate change – natural or otherwise, and mitigation was never going to be relevant over the next few decades at risk.

April 5, 2010 1:08 pm

mike roddy (08:39:54) :
Historical temperature reconstruction, including those for recent decades, is a highly technical and detailed process, requiring extensive training.
Translation: “You’re not smart enough to do it unless you work for the government.”
Questioning the basic expertise and integrity of those who develop these global temperature charts for international organizations is not credible, and is unsupported by actual scientific evidence.
It’s credible when it’s supported by *physical* evidence, though. Just the code and Harry’s commentary were enough to convince even a casual reader that the people who wrote it were as “expert” as a squad of penguins trying to play basketball.

Frank
April 5, 2010 3:25 pm

Figure 5 is from Wikipedia, not the IPCC or a peer-reviewed scientific publication. The data used to construct the lines has been smoothed with a 5-year Gaussian filter, so the 2004 datapoint technically shouldn’t be plotted on the same graph with smoothed data, unless it has been smoothed also. However, we all know that temperature in the past decade hasn’t risen appreciably, so the distortion introduced included unsmoothed data with smoothed data probably isn’t worth complaining about.
The problem with Figure 2 is that it contains no information about uncertainty in the time coordinate. If the data is +/-500 years, then the troughs could coincide. If the data is +/- 50 years, then the troughs can’t overlap (unless a systematic dating error occurred, which is difficult to detect). On the other hand, these lines may represent warming and cooling in different geographic regions of the globe as the last Ice Age ended. In the absence of other evidence, there is no reason to assume that all temperature trends around the globe must move synchronously.
I agree that the instrumental temperature data in Figure 1 doesn’t look like it has been subjected to a 40 year filter with ends of series padded with the mean of the previous 25 years. It looks like it has been smoothed by a method which continues the trend (reflection). Unfortunately, the instrumental data Mann used is for only the Northern Hemisphere and your Figure 3 is for Global Temperature.
There is a huge problem with Figure 1 that was identified by von Storch. http://www.sciencemag.org/cgi/content/abstract/306/5696/679
Von Storch created a millennium of temperature data with a computer model and then created artificial proxy data (tree ring) from that temperature data. Then he added no noise or realistic levels of noise to the proxy data before analyzing it with Mann’s method. He found that the amplitude of the natural variations in the reconstructed temperature was reduced by a factor of 2 (no noise) to 4 (realistic noise) compared with the original data. This explains why the shaft of the original hockey stick is so straight with no trace of an MWP or LIA. There is absolutely no justification for plotting instrumental temperature (with full variation) on the same graph as reconstructed temperature (with suppressed variation). Reconstructions done since von Storch have been done with alternative methods that preserve the amplitude of natural variation.

April 5, 2010 5:00 pm

Frank–the geologic evidence for the MWP and LIA has an extensive literature that has been around for decades. It shows up especially well in the glacial record, which has been well documented by former glacier margins, pollen records from peat bogs, isotope measurements from ice cores, tree rings (not the kind that Mann et al. use!), historic records, and on and on. It’s so well established that geology classes for decades have taught it as beyond any reasonable doubt.
Don
Don Easterbrook (07:48:29) :
When geologists first saw the Mann et al. curve, we all just laughed because of the solid geological evidence for the Medieval Warm Period and Little Age published in hundreds of papers for decades. Our conclusion was that either the trees that Mann and others used were not sensitive enough to record climate changes or else it was a total fraud. The Climategate emails showed us the answer to that question. Instead of endless arguing over the details of their bogus curves, it would make more sense to just go to the geologic evidence which shows beyond reasonable doubt the Mann et al. curves are worthless.

This comment i find very interesting indeed. I recently here on WUWT made an analysis of MWP temperature results. I compared results before and after IPCC changed opinion in 2001, and found a huge difference in results before and after 2001 when IPCC published the Mann graph:
http://wattsupwiththat.com/2010/03/10/when-the-ipcc-disappeared-the-medieval-warm-period/
Don Easterbrook, you here confirm that we had a consensus PRO a warmer MWP. If you can add any information on this issue I would be very interested.
The thing is, if IPCC in 2001 acted against the consensus back then, this is one of the strongest signals, that IPCC has an agenda that is not entirely scientific.
Is it possible to “prove” that we had a solid MWP-consensus until year 2000?
K.R. Frank Lansner

April 5, 2010 5:18 pm

Speaking of Dr. Easterbrook, he provided an update to our interview of two years ago.
Enjoy…
http://www.gather.com/viewArticle.action?articleId=281474977336370

vigilantfish
April 5, 2010 5:33 pm

Ralph (12:06:07) :
P.S. Note to mods – ‘effect’ is the noun and ‘affect’ is the verb, you had it A over T.
————————–
At the risk of making a fool of myself, and to defend the mods, I will wade into the tangled issue of effect vs affect.
Effect is both a verb and a noun: as a verb it denotes the activity of getting something done: “Through his strenuous activity he effected the appropriate transformations…” As a noun it denotes the results of some action: “The effects of the storm were devastating”.
Affect is also both a verb and a noun. As a verb it has several meanings: one indicates putting on a false front or projecting an emotion: eg: “He affected surprise at the results.” It also indicates an emotion induced in the recipient: “The accusation of lying affected her deeply.” Also, as a verb it can indicate that there were consequences to an action or event: “The hail affected the crops.” As a noun affect is used in psychology to talk about the affects of mental illness. Since the use of affect as a noun is more obscure, it is probably safer to argue that affect is primarily a verb.

James Sexton
April 5, 2010 5:50 pm

mike roddy (11:19:46) :
“I stand by my prior comments.
The notion that mathematicians (McIntyre) weathermen (Watts, Bastardi) or economists (Lomborg) understand data better than the climatologists who developed it makes no sense whatsoever. Developing temperature reconstructions requires a lot more than knowledge of algebra. ect….”
Mike, you’re almost there!!! You’re correct, I suppose temp reconstructions do require more knowledge than base algebra. However, you can’t get to temp reconstructions without properly applying algebra, especially in regards to averaging and understanding what a set and a subset is. That is the beauty of mathematics. The rules and laws that govern math don’t change, regardless of the desire of the outcome.
Do you even bother to consider the implications of hiding the decline? IT MEANS IT DOESN’T MATCH!!! They found one particular point in time where they could “meld” the lines together to make them appear as one. (now think of sets and subsets)
Turns out, the chemistry and physics properties of mercury are not related to the biological and botanic properties of tree rings. So, and I’m going out on a limb here, but I don’t believe they belong in the same graph, much less on the SAME LINE!!! Strange, I know, but there it is.

gt
April 5, 2010 6:49 pm

OT, but a masterpiece written by the ever formidable Mike Roddy:
http://www.buffalobeast.com/?p=1237

Stephen Wilde
April 6, 2010 12:27 am

gt (18:49:39)
Thank you for blowing Mike out of the water with that link.
In the light of his emotional committment we cannot take anything he says as a serious attempt to understand the science.

April 6, 2010 5:16 am

Stephen Wilde (00:27:47) :
gt (18:49:39)
Thank you for blowing Mike out of the water with that link.

I’ll echo that.
Mr. Roddy’s evidently never been much a fan of reasoned dialogue…

Editor
April 6, 2010 6:16 am

mike roddy (08:39:54) :
“IPCC compared average temperatures for MWP and the present, not an average in one case and a peak in another. Your claim here lacks merit.
“On a more general note, you are asking us to believe temperature reconstructions put together by amateurs, rather than the ones carefully developed and peer reviewed by hundreds of specialists at NOAA, NASA, CRU, and others. It’s beside the point that your data is not correct. It is unworthy of consideration.
“Historical temperature reconstruction, including those for recent decades, is a highly technical and detailed process, requiring extensive training. Questioning the basic expertise and integrity of those who develop these global temperature charts for international organizations is not credible, and is unsupported by actual scientific evidence.”
—…—…
Rather, and what is actually true, is the following:
On a more general note, you are asking us to believe temperature reconstructions put together by amateurs, rather than the ones carefully developed and peer reviewed by hundreds of specialists at NOAA, NASA, CRU, and others. It’s entirely to the point that their data and their conclusions are not correct. They are unworthy of consideration at any level by any organization.
Historical temperature reconstruction, including those for recent decades, is a highly technical and detailed process, requiring extensive training and an open review policy rather than the amateurish, technically flawed and totally undocumented biased methods of the few so-called specialists in propaganda at NOAA, NASA, CRU, and others..
Questioning the basic expertise and integrity of those who develop these global temperature charts for international organizations is essential – because they are attempting to use their biases and propaganda to control the world’s energy supplies and economic policies to the harm of all, and their biases and illogical fallacies are completely unsupported by actual scientific evidence.

Frank Lansner
April 6, 2010 1:34 pm

One more thing:
When you use 40-year filterup to 1975, then the temperature peak around 1940 is seriously suppressed.
Both before and after 1930-40 peak temperatures where considderably lower, and thus the 40-year filter has indeed done the job to : HIDE THE DECLINE
And perhaps this was indeed Mike´s Nature trick? To hide the decline?
Then after 1975, peak can be seen since no 40-yr filter applied.

The Olde Curmudgeon
April 10, 2010 4:24 pm

“there where” should be “there were”
“where there..?” should be “were there…?”