From the "told ya so" department comes this paper, recently presented at the European Geosciences Union meeting.
Authors Steirou and Koutsoyiannis, after taking homogenization errors into account, find that global warming over the past century was only about one-half [0.42°C] of that claimed by the IPCC [0.7-0.8°C].
Here’s the part I really like: for 67% of the weather stations examined, questionable adjustments were made to the raw data that resulted in:
“increased positive trends, decreased negative trends, or changed negative trends to positive,” whereas “the expected proportions would be 1/2 (50%).”
And…
“homogenization practices used until today are mainly statistical, not well justified by experiments, and are rarely supported by metadata. It can be argued that they often lead to false results: natural features of hydroclimatic time series are regarded as errors and are adjusted.”
The paper abstract and my helpful visualization on homogenization of data follows:
Investigation of methods for hydroclimatic data homogenization
Steirou, E., and D. Koutsoyiannis, Investigation of methods for hydroclimatic data homogenization, European Geosciences Union General Assembly 2012, Geophysical Research Abstracts, Vol. 14, Vienna, 956-1, European Geosciences Union, 2012.
We investigate the methods used for the adjustment of inhomogeneities of temperature time series covering the last 100 years. Based on a systematic study of scientific literature, we classify and evaluate the observed inhomogeneities in historical and modern time series, as well as their adjustment methods. It turns out that these methods are mainly statistical, not well justified by experiments and are rarely supported by metadata. In many of the cases studied the proposed corrections are not even statistically significant.
From the global database GHCN-Monthly Version 2, we examine all stations containing both raw and adjusted data that satisfy certain criteria of continuity and distribution over the globe. In the United States of America, because of the large number of available stations, stations were chosen after a suitable sampling. In total we analyzed 181 stations globally. For these stations we calculated the differences between the adjusted and non-adjusted linear 100-year trends. It was found that in the two thirds of the cases, the homogenization procedure increased the positive or decreased the negative temperature trends.
One of the most common homogenization methods, ‘SNHT for single shifts’, was applied to synthetic time series with selected statistical characteristics, occasionally with offsets. The method was satisfactory when applied to independent data normally distributed, but not in data with long-term persistence.
The above results cast some doubts in the use of homogenization procedures and tend to indicate that the global temperature increase during the last century is between 0.4°C and 0.7°C, where these two values are the estimates derived from raw and adjusted data, respectively.
Conclusions
1. Homogenization is necessary to remove errors introduced in climatic time series.
2. Homogenization practices used until today are mainly statistical, not well justified by experiments and are rarely supported by metadata. It can be argued that they often lead to false results: natural features of hydroclimatic time series are regarded as errors and are adjusted.
3. While homogenization is expected to increase or decrease the existing multiyear trends in equal proportions, the fact is that in 2/3 of the cases the trends increased after homogenization.
4. The above results cast some doubts in the use of homogenization procedures and tend to indicate that the global temperature increase during the last century is smaller than 0.7-0.8°C.
5. A new approach of the homogenization procedure is needed, based on experiments, metadata and better comprehension of the stochastic characteristics of hydroclimatic time series.
- Presentation at EGU meeting PPT as PDF (1071 KB)
- Abstract (35 KB)
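The abstract notes that "SNHT for single shifts" works on independent, normally distributed data but breaks down under long-term persistence. Here is a minimal sketch of that contrast (not the authors' code): the SNHT single-shift statistic applied to plain white noise and to an AR(1) series, which is used here only as a crude stand-in for the Hurst-type persistence the paper actually tested. Both synthetic series contain no shift at all, yet persistence inflates the statistic, so a fixed critical value flags spurious breaks far more often.

```python
import numpy as np

def snht_single_shift(x):
    """Maximum SNHT statistic T(k) = k*z1**2 + (n-k)*z2**2 over candidate
    single-shift points k, where z1 and z2 are the means of the
    standardized series before and after k."""
    z = (x - x.mean()) / x.std(ddof=1)
    n = len(z)
    return max(k * z[:k].mean() ** 2 + (n - k) * z[k:].mean() ** 2
               for k in range(1, n))

def ar1(n, phi, rng):
    """AR(1) series: a simple persistent process (a crude proxy only)."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.standard_normal()
    return x

rng = np.random.default_rng(0)
n, trials = 100, 500

# Independent, normally distributed series, no shift present.
white = [snht_single_shift(rng.standard_normal(n)) for _ in range(trials)]
# Persistent series, also with no shift present.
persistent = [snht_single_shift(ar1(n, 0.8, rng)) for _ in range(trials)]

# Persistence inflates the statistic, so spurious "shifts" get flagged
# far more often at any fixed critical value.
print("median T, independent normal:", round(float(np.median(white)), 1))
print("median T, AR(1) persistent  :", round(float(np.median(persistent)), 1))
```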
h/t to “The Hockey Schtick” and Indur Goklany
UPDATE: The uncredited source of this on the Hockey Schtick was actually Marcel Crok’s blog here: Koutsoyiannis: temperature rise probably smaller than 0.8°C
Here’s a way to visualize the homogenization process. Think of it like measuring water pollution. Here’s a simple visual table of CRN station quality ratings and what they might look like as water pollution turbidity levels, rated as 1 to 5 from best to worst turbidity:
In homogenization, the data is weighted against the nearby neighbors within a radius. So a station that starts out as a “1” data-wise might end up getting polluted with the data of nearby stations and end up at a new value, say a weighted “2.5”. Even single stations can affect many other stations in the GISS and NOAA data homogenization methods carried out on US surface temperature data here and here.
In the map above, if you applied a homogenization smoothing, weighting the nearby stations by distance, what would you imagine the (turbidity) values of the stations with question marks would be? And how close would those two values be for the east coast station in question and the west coast station in question? Each would be closer to a smoothed central average value based on the neighboring stations.
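To put a rough number on the analogy, here is a toy sketch of the distance-weighting idea. The station positions and turbidity ratings are made up, and real homogenization algorithms (reference-series construction, change-point detection, and so on) are far more involved than a simple inverse-distance blend; this only shows how a pristine "1" station drifts toward its noisier neighbors.

```python
import numpy as np

# Hypothetical station positions (arbitrary units) and "turbidity"
# ratings, 1 = best to 5 = worst. Purely illustrative numbers.
positions = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 0.0], [1.5, 2.0]])
ratings   = np.array([1.0,        4.0,        3.0,        5.0])

def blend_with_neighbors(i, positions, values, self_weight=0.5, radius=3.0):
    """Blend station i's own value with an inverse-distance-weighted
    average of its neighbors inside `radius` (toy smoothing, not the
    actual GISS/NOAA procedure)."""
    d = np.linalg.norm(positions - positions[i], axis=1)
    nearby = (d > 0) & (d <= radius)
    w = 1.0 / d[nearby]
    neighbor_avg = np.sum(w * values[nearby]) / np.sum(w)
    return self_weight * values[i] + (1.0 - self_weight) * neighbor_avg

# The pristine "1" station comes out at roughly 2.5 after blending.
print(round(blend_with_neighbors(0, positions, ratings), 2))
```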
UPDATE: Steve McIntyre concurs in a new post, writing:
Finally, when reference information from nearby stations was used, artifacts at neighbor stations tend to cause adjustment errors: the “bad neighbor” problem. In this case, after adjustment, climate signals became more similar at nearby stations even when the average bias over the whole network was not reduced.
Victor Venema says:
July 17, 2012 at 6:58 am
Victor, thanks for your comments. Nobody is saying that the inhomogeneities do not have a bias. What was said was that if you have a number of phenomena biasing the records (you point out several different ones above), you would expect the biases to cancel out, rather than reinforce. They may not do so, but that has to be the default assumption.
Thank you also for your fascinating and valuable study, but that is a huge oversimplification of the results. The work you report on analyzed a host of methods for homogenization, and a total of 15 such methods were compared. Some of these improved the quality of the data, and some did not. Your abstract states (emphasis mine):
So for you to now claim that “homogenization improves the quality of climate data” is a total misrepresentation of the results. According to your study, some types of homogenization improved the temperature data, most didn’t improve the precipitation data, and people often misused the homogenization software. That’s a long, long ways from your statement above that “homogenization improves the quality of climate data”. The best we can say from your study is that sometimes, some homogenization techniques improve the quality of some climate data, depending on the metric chosen to measure the improvement …
Again, thanks for your very interesting work on the question.
w.
Seems to me that a temp is a temp. Why mess with it? If one has to homogenize, or adjust, then the site should be moved to an area that does not need such. Airports need real ramp temps so aviators can calculate landing and takeoff distances.
Also, we recognize that there is a difference between airport UHI sites and remote wilderness-type stations that have not had an increase in typical infrastructure like asphalt, etc. All seem legitimate as individual sites. I just can’t understand the thinking and reasoning in making adjustments of any sort. We all know how to manipulate data for a cause, for a paper or a job, using statistical methods. It appears as unnecessary busy work. (Sounding cynical?)
As quoted from the rogerknights link where Steve McIntyre says: “In commentary on USHCN in 2007 and 2008, I observed the apparent tendency of the predecessor homogenization algorithm to spread warming from “bad” stations (in UHI sense) to “good” stations, thereby increasing the overall trend.” Pretty much says it in my view.
This needs proper peer review and crowd review. If correct then certain people deserve to go to jail. There can be no defence for such adjustments (homogenization).
When I read explanations for homogenization of data like “The data does need some kind of homogenization to correct for inaccurate or poorly situated instruments,” I automatically get suspicious of the whole project. How do you tell if a thermometer is inaccurate? If it is truly inaccurate, instead of just recording temperatures that are inconveniently lower than what the researcher wants, then the thermometer should be discarded and all the data it recorded discarded. And what are the criteria for a poorly placed instrument? If it truly doesn’t meet standard requirements for placement, then all its recorded data should be discarded. If eliminating truly faulty and poorly sited instruments leaves gaps, so be it. Unless an absolutely foolproof and validated method can be used to accurately “fill in the blanks”, only valid temps should be used and the problems with instruments and sites should be duly noted.
Jay Davis
Eyal Porat says:
July 17, 2012 at 3:51 am
Somehow this doesn’t surprise me.
I believe the other half is the UHI effect.
##############################
That would put us in the LIA. Look, if people want to twist and turn the numbers to make this century as cold as the LIA, then that’s a fun little game. But if you actually believe that the sun has anything to do with the climate and you believe in an LIA solar minimum, then it makes it rather hard to argue that:
1. The sun was the cause of the warming since the LIA.
2. It’s no warmer now than in the LIA.
But go ahead and knock yourself out with crazy arguments. Don’t expect to convince anyone.
JeffC says:
July 17, 2012 at 4:52 am
a station with data should never be homogenized … it doesn’t need to be … homogenization doesn’t reduce errors but simply averages them out over multiple stations … and why assume there are errors ? if there are then id them and toss them out otherwise assume the raw data is good … this assumption of errors is just an excuse to allow UHI to pollute nearby stations …
____________________________________
Agreed. Either the station is giving good data, in which case it should be left alone, or the data is questionable, in which case the data (with appropriate documentation of the reasons) is tossed.
The fact they tossed so many stations makes the current data set questionable. George E Smith (Chiefio) looked into the Great Dying of the Thermometers starting around HERE
Also see his: Thermometer Zombie Walk
Bob Tisdale looks at the Sea Surface data sets: http://bobtisdale.blogspot.com/2010/07/overview-of-sea-surface-temperature.html
The Cause of Global Warming by Vincent Gray, January 2001
A Pending American Temperaturegate By Edward Long, February 2010
Cumulative adjustments to the US Historical Climatological Network: http://cdiac.ornl.gov/epubs/ndp/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif
Arctic station Adjustments: http://wattsupwiththat.files.wordpress.com/2012/03/homewood_arhangel_before_after.png?w=640
See: http://wattsupwiththat.com/2012/03/19/crus-new-hadcrut4-hiding-the-decline-yet-again-2/
But I think the clincher is AJ Strata’s error analysis article coupled with Jo Nova’s article Australian Temperature Records, Shoddy, Inaccurate, Unreliable – SURPRISE!
There is no way you can distinguish the trend from the noise in the temperature record. Especially after it has been tampered with.
I am certainly glad this paper got written but it is not news to anyone who has looked at WUWT.
This paper coupled with the recent article By its Actions the IPCC admits its past reports were unreliable should be sent to every Congresscritter in the Federal and State government with the header GOOD NEWS, there is no global climate change crisis….
highflight56433 says:
July 17, 2012 at 10:46 am
Seems to me that a temp is a temp. Why mess with it? If one has to homogenize, or adjust, then the site should be moved to an area that does not need such.
###################
That’s not the problem.
Situation: We have a station named Mount Molehill. It is located at 3000 meters above sea level. It records nice cool temperatures from 1900 to 1980. Then in 1981 they decide to relocate the station to the base of Mount Molehill, 5 km away. Mount Molehill suddenly becomes much warmer.
But won’t they rename the station? Nope! They may very well keep the station name the same.
But won’t the latitude and longitude change? Nope. It depends entirely on the agency recording the position; until recently many only reported to 1/10 of a degree (10 km). So what you get, IF YOU ARE LUCKY, is a piece of metadata that says the altitude of the station changed in 1981.
Now, my friends, how do you handle such a record? A station at 3000 meters is moved to 0 meters and suddenly gets warmer? That’s some raw data, folks. That’s some unadjusted data.
Anybody want to argue that it should be used that way?
No wait, you all looked at Roy Spencer’s new temperature record, right? Did Anthony complain that Roy adjusted his data for differences in altitude? Nope. Weird how that works.
When you have stations that change altitude over their record, the data cannot be used as is. Roy knows this. Anthony knows this. You all know this.
[Snip. You may disagree with Anthony, but when you call him a liar that crosses the line. ~dbs, mod.]
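As a back-of-the-envelope check on why the Mount Molehill record cannot be used raw: assuming a typical environmental lapse rate of roughly 6.5 °C per kilometre (a round illustrative number, not anything from the paper), a 3000 m drop implies a spurious warm step of nearly 20 °C, dwarfing any century-scale trend. Real homogenization handles such moves by comparison with neighboring stations rather than a fixed lapse rate; this is only the rough arithmetic.

```python
# Rough arithmetic for the Mount Molehill example: the warm step a
# 3000 m -> 0 m relocation would stamp into the record even if the
# local climate never changed. The lapse rate is an assumed typical value.
LAPSE_RATE_C_PER_KM = 6.5

def apparent_step_from_move(old_alt_m, new_alt_m):
    """Approximate spurious temperature step (deg C) from an altitude change."""
    return (old_alt_m - new_alt_m) / 1000.0 * LAPSE_RATE_C_PER_KM

print(apparent_step_from_move(3000.0, 0.0))  # ~19.5 C of apparent "warming"
```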
John Finn
Since UAH satellite temperatures show an increase of ~0.4 deg over the past 30 years, ALL warming over the past century must have been since the 1970s – i.e. just about the time the CO2 effect would be expected to become distinguishable from natural variability.
This paper does nothing to debunk the CO2 effect. On the contrary, it suggests that TSI, Svensmark, PDO and other natural effects are negligible over time. Note: satellite readings are not contaminated by UHI.
John…….I’m not sure you meant to let that cat out of the bag did you? You are saying then that all this paper is suggesting is that the general assumptions of the surface records are 0.3 – 0.4 ‘higher’ than they should be? Fair enough…I actually agree with you. Which means then that the surface record should really match UAH at 0.4C since 1979. Fair enough, I agree with you.
Funny thing though, is we have those same surface records showing half a degree swings in the early and mid part of last century….as you say, without any CO2 input, so natural variation can swing 0.4C. Or…..if we junk half the range of surface records as suggested above….natural variability equals around 0.2C……..that leaves you some room for CO2 since the 1970s to give you another 0.2C.
hmmmm……….isn’t that pretty much what Lindzen etc have been saying all along…CO2 effect trivially true but essentially meaningless? Forgive me if I don’t get too worked up about 0.2C over 30+ years….
willis:
“of methods for homogenization, and a total of 15 such methods were compared. Some of these improved the quality of the data, and some did not. ”
Then of course it would make sense to check the report and see how PHA did? Because they are looking at GHCN v2 here, a product that isn’t used by anyone, a product that has been replaced by version 3. It would make sense to look at how PHA performed rather than SNHT which has known issues.
There is another aspect to temperature trends once explained to me by Steven Mosher when he was demolishing an error I had made in regard to analysing NASA/GISS gridded data. I’ve been thinking about it since, and it seems this would be an appropriate thread to bring it up in.
IIRC, Mr Mosher explained to me that GISS does an “in fill” of missing grid data provided that over some percentage (50% I think) is available for that grid cell for the time period in question. As an example, if a grid cell had temps of 10, nodata, and 11, GISS would, for those three time segments, “in fill” the nodata with the average of the other two, for a temp of 11.5. At the time, it explained how GISS could show a value in a grid cell truncated to a specific point in time when taking a different time segment would show that grid cell as being empty. But the overall method has bothered me ever since, and this thread reminded me of it.
If the explanation as I understand it is correct, then it can have no other effect on a global basis across the entire time series but to warm the present and cool the past. That the earth’s temperature has been warming since the LIA is reasonably well accepted. If that is the case, we have to consider that there is no more data being added to the “cold end” of the GISS record. We’ve got what we’ve got. But there IS data being added at the WARM end of the graph, which is in the present. Each year, this extends the time series of gridded data for which we have more than 50% of the gridded data over the entire time period. This in turn allows older data with missing grid cells to be “in filled” that would otherwise be blank. But because the “in fill” is predicated upon new data that is BY DEFINITION warmer than the old data, the linear trend that calculates the previously empty grid cell at the beginning of the temperature record must calculate an increasingly colder temperature for that cell. To illustrate:
1, 2, 3, 4, 5, 6 (year)
N, 10, N, N, 10, N (deg)
In this series, we have 6 points in time, with 4 missing values. The series as a whole is not a candidate for “in filling”. However, if we ran GISS’s reporting program to look at only points 2, 3, 4 and 5, we would meet the 50% threshold, and that grid cell, considered for those time periods ONLY, would be reported as 10, 10, 10, 10.
Now let’s add one more year of data, assuming newer years are warmer than older years:
1, 2, 3, 4, 5, 6, 7 (year)
N, 10, N, N, 10, N, 12 (deg)
We still cannot “in fill” the whole record because we have less than 50% of the data points. But what if we were to look at only the last four? GISS would “in fill” those by calculating a trend from the two data points that already exist, giving:
4, 5, 6, 7 (year)
9, 10, 11, 12 (deg)
Looks almost sensible, does it not? But wait! What if we looked only at years 2, 3, 4, 5 and applied the same technique? We’d get 10, 10, 10, 10! Year 4 would be a 10, not a 9! By adding one more year of data to the original 6, and looking only at the last 4, we can “cool” year 4 by one degree. Let’s expand the data series with some more years:
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 (year)
N, 10, N, N, 10, N, 12, 13, 14, 14, 14. (deg)
This is where I think the “system” as I understand it starts to fall down. If one looks at years 5 through 11, “in filling” a value of 11 deg for year 6 makes a certain amount of sense. What gets broken is that we now have enough data to in-fill the entire series. Looking at years 1 through 5 as raw data, one would most likely conclude that the first 5 years or so were “flat”. But in-filling the whole thing based on a linear trend, thanks to the recent years being added to the record, one would get something like:
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 (year)
9, 10, 9, 9, 10, 11, 12, 13, 14, 14, 14. (deg)
By all means, please correct me if my understanding of this whole snarled-up system of “in filling” is wrong. But if I do understand it correctly, I don’t see how temps that are higher today, being added each year to the temperature record, can have any other effect than to “cool the past”, as the number of grid cells with data grows in warmer periods of time and so increasingly biases the extrapolation to earlier time periods when we have less data. If we were adding data at the same rate to both ends of the scale, this would make some sense. But since we’re adding data at the WARM end (the present) only, cramming a linear trend through the data to “in fill” data at the “cold” end can only cool the past, and with no real justification for doing so.
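To make the illustration above concrete, here is a minimal sketch of the in-fill rule as described in the comment: fit a straight line through whatever points a window does have and fill the gaps, but only when at least half of the window is present. This reflects the commenter's description, not NASA GISS's actual code; the two calls reproduce the worked example above.

```python
import numpy as np

def infill_window(years, temps, min_fraction=0.5):
    """Fill missing values (NaN) in a window by fitting a straight line
    through the points that exist, but only if at least `min_fraction`
    of the window is present. Mirrors the rule described in the comment
    above, not the actual GISS algorithm."""
    years = np.asarray(years, dtype=float)
    temps = np.asarray(temps, dtype=float)
    have = ~np.isnan(temps)
    if have.mean() < min_fraction:
        return temps                      # not enough data: leave it alone
    slope, intercept = np.polyfit(years[have], temps[have], 1)
    filled = temps.copy()
    filled[~have] = slope * years[~have] + intercept
    return filled

N = np.nan
# Years 2-5 of the series above: in-fill gives 10, 10, 10, 10.
print(infill_window([2, 3, 4, 5], [10, N, N, 10]))
# Years 4-7 once the warmer year 7 exists: year 4 now comes out as 9.
print(infill_window([4, 5, 6, 7], [N, 10, N, 12]))
```

Run on the full eleven-year series, the same rule back-fills the early gaps from a line dominated by the warmer recent values, which, under this reading of the procedure, is the "cooling the past" effect being described.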
Interesting and relevant paper in the current issue of Nature Climate Change on temp trends over the past 2 millennia:
http://www.nature.com/nclimate/journal/vaop/ncurrent/pdf/nclimate1589.pdf
[Reply: Thank you, but that has already been discussed at WUWT here. -REP]
Willis Eschenbach says:
“Victor, thanks for your comments. Nobody is saying that the inhomogeneities do not have a bias. What was said was that if you have a number of phenomena biasing the records (you point out several different ones above), you would expect the biases to cancel out, rather than reinforce. They may not do so, but that has to be the default assumption.”
As explained in a bit more detail above, radiation errors in early instruments explain much of the effect and have a clear bias. With a small number of causes for inhomogeneities, and with these changes happening to most stations in a network, it would be very unlikely that they all cancel out.
Willis Eschenbach says:
“So for you to now claim that “homogenization improves the quality of climate data” is a total misrepresentation of the results. According to your study, some types of homogenization improved the temperature data, most didn’t improve the precipitation data, and people often misused the homogenization software.”
Thank you for the correction. Yes, the word “temperature” would have been more accurate than “climate”. Precipitation is a real problem, not only for homogenization but also for climate modeling. As the discussion here was about temperature, I had not thought of that.
I think the statement that homogenization improves temperature data is a fair one-sentence summary. Scientists know which algorithms are good, and these algorithms are used to homogenize the important data sets. For example, the method you care about most, the one used to homogenize the USHCN dataset, performed very well. Some more obscure or new methods produced problems, and some people homogenizing data for the first time made the data more inhomogeneous. In a scientific study you mention such details; in a normal conversation you typically do not. For the details you can read the open-access study.
ugh… should have said 10.5 in my initial example above, not 11.5
Anthony
“REPLY: Thank you for your opinion, reading your blog, clearly you wish to prevent opinion and discourse, unless its yours. See what Willis says below. See also what statistician Steve McIntyre has to say about it. Be sure of yourself before making personal accusations – Anthony”
I’m not sure what Steve McIntyre would say about a paper he hasn’t read. I’ve spoken to him about this on occasion and he wasn’t very interested in the paper or the question of late. Kenneth Fritsche (from CA and Lucia’s) is the most up-to-date blog commenter on this topic that I know. Also, I’m not sure that appealing to authority is the best argument.
The benchmarking study was a blind test, a contest of sorts between various homogenization techniques. It’s just the kind of test with synthetic data that a statistician like Steve would approve of. Basically, a true series of temperatures was created and then various forms of bias and error were added to stations. As I recall they ended up with 8 different versions of the world. The various teams then ran their algorithms on the corrupted data and were scored by their ability to come closest to the “truth” data. As Willis notes, some did better than others.
He fails to note the success of the PHA approach which is really the question at hand because PHA is used on GHCN products.
Opinions and bias about homogenization are a good place to start a conversation. In the end the question is settled by some good old-fashioned testing. Create some ground-truth data. Inject error and bias (various forms) into that ground-truth data and test whether a method can find and correct the error or not. Everybody has opinions, even opinions about who is the best person to appeal to. In the end, run the test.
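For the flavor of such a test, here is a heavily simplified sketch: build a known "truth" series, inject a known step bias, apply a deliberately naive correction, and score everything by its 100-year trend. The correction used here (matching the means across the single largest break) is a strawman of my own, not PHA or any method from the benchmarking study; real exercises used full algorithms, whole networks, and several corrupted "worlds".

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(100)

# 1. Ground truth: a century of annual anomalies with a modest trend.
truth = 0.007 * years + rng.normal(0.0, 0.2, size=100)

# 2. Inject a known inhomogeneity: a -0.5 C step at year 60 (e.g. a
#    station move), giving the "observed" corrupted series.
observed = truth.copy()
observed[60:] -= 0.5

# 3. A deliberately naive correction: find the split with the largest
#    mean difference and shift the later segment to match the earlier one.
def naive_break_correct(x, min_seg=10):
    k = max(range(min_seg, len(x) - min_seg),
            key=lambda k: abs(x[:k].mean() - x[k:].mean()))
    fixed = x.copy()
    fixed[k:] += x[:k].mean() - x[k:].mean()
    return fixed

corrected = naive_break_correct(observed)

# 4. Score each series by its linear 100-year trend.
def trend_per_century(x):
    return np.polyfit(years, x, 1)[0] * 100.0

for name, series in [("truth", truth), ("observed", observed),
                     ("corrected", corrected)]:
    print(f"{name:9s} trend: {trend_per_century(series):+.2f} C / century")

# Because mean-matching cannot tell the injected step apart from the
# genuine trend across the break, it only partially repairs the damage;
# separating the two is what the real algorithms compete on.
```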
If one averages several years at a time and is very careful to look at non-fudged data, northern hemisphere temperatures rose around 0.5 degrees Celsius from the 1900s through the 1930s, declined around 0.5 degrees Celsius from the 1940s through the 1960s, and rose around 0.5 degrees Celsius from the 1970s through the 1990s (having not risen any more by now, 2012, since then, i.e. not since the 1998 El Nino and the 1997 albedo and cloud cover shift).
As +0.5 – 0.5 + 0.5 = +0.5, that would be a net northern hemisphere temperature rise of around a half a degree Celsius over the 20th century.
Arctic temperatures rose a little more than the northern hemisphere average but in a similar pattern. Southern hemisphere temperatures and thus global average temperatures rose a little less.
So 0.4 degrees global net temperature rise over the century in the Steirou and Koutsoyiannis paper would fit, with the preceding corresponding to these graphs of non-fudged data:
http://earthobservatory.nasa.gov/Features/ArcticIce/Images/arctic_temp_trends_rt.gif
(with the arctic varying more than the global average but having a similar pattern)
and
http://wattsupwiththat.files.wordpress.com/2012/07/nclimate1589-f21.jpg?w=640&h=283
(from http://wattsupwiththat.com/2012/07/09/this-is-what-global-cooling-really-looks-like/ )
and, especially, for the average over the whole northern hemisphere, the original National Academy of Sciences data before Hansen fudged it:
http://stevengoddard.files.wordpress.com/2012/05/screenhunter_1137-may-12-16-36.jpg?w=640&h=317
(from and discussed more at http://stevengoddard.wordpress.com/2012/05/13/hansen-the-climate-chiropractor/ )
Global temperatures having close to a +0.4 degree rise 1900s->1930s, a -0.4 degree fall 1940s->1960s, and a +0.4 degree rise 1970s->2012, making recent years 0.4 degrees warmer than a bit more than a century ago, fits a 60-year ocean cycle (AMO+PDO) on top of solar/GCR activity change, as in http://www.appinsys.com/globalwarming/GW_Part6_SolarEvidence_files/image023.gif, plus shorter ocean oscillations, leaving very little room at all for manmade global warming. The net human effect (net with the cooling effects of aerosols) could be up to a few hundredths of a degree of warming, but there is no justification for ascribing even multiple tenths of a degree to it when the pattern predominantly fits natural trends (like the cooling of the 1940s-1960s, which was very major cooling in non-fudged data, the reason the global cooling scare existed before Hansen hid the decline, and which occurred during a period of continuous rise in human emissions but during a cooling period in the far more dominant natural influences). Again, this fits even how sea level rise was no greater (actually less) in the second half of the 20th century than in its first half, despite an order-of-magnitude rise in human emissions meanwhile, as http://www.agu.org/pubs/crossref/2007/2006GL028492.shtml implies.
John Finn says:
July 17, 2012 at 10:00 am
Since UAH satellite temperatures show an increase of ~0.4 deg over the past 30 years, ALL warming over the past century must have been since the 1970s
mikef2 says:
July 17, 2012 at 11:17 am
Funny thing though, is we have those same surface records showing half a degree swings in the early and mid part of last century….as you say, without any CO2 input, so natural variation can swing 0.4C.
I agree with Mike. They say a picture is worth a thousand words. Here is the ‘picture’.
http://www.woodfortrees.org/plot/hadcrut3gl/from:1900/plot/hadcrut3gl/from:1912.33/to:1942.33/trend/plot/hadcrut3gl/from:1982.25/to:2013/trend
That was a nice, straightforward example. Now…
Do we warm the past, or cool the present? By how much? Why?
What do you do about creeping UHI? How do you detect it?
I have seen examples here at WUWT and elsewhere where several decades’ worth of data are suddenly shifted by 0.5C. Exactly. For every month. Does this seem reasonable to you, or shouldn’t there be some variability in the corrections?
Perhaps it exists, but I’ve never seen any comments regarding adjusted data as to why it was adjusted. And they keep adjusting the past! You’d think once would be enough. And every re-adjustment of the past seems to make it colder. Perhaps my impression is wrong, but it seems to be widely held. A consensus, if you will ;-). Is there a genuine reason for this apparent adjustment trend?
Maybe Mosh missed this update added to this post:
Steve McIntyre concurs in a new post, writing:
My advice to Mr. Mosher is, “watch the skies”.
david
“You cannot make this statement unless you know to some degree of precision what the time constant is, which you do not. Further, even if you knew the time constant for one particular forcing, you must also know the time constants for all other forcings that are active to the point of still being significant, what their sign is, and how far along each is in terms of a total of 5 time constants. There are WAY too many factors all in place at the same time, we know the time constant of pretty much none of them, let alone which ones are at the beginning of the cycle and which ones are at the end.”
Sure one can make this statement without precise knowledge of the time constant. You need to do more reading on how these can be estimated from data. You don’t need precision and we don’t have precision, but the ECR is likely to be 1-2x the TCR. Fingers crossed, there is a really good paper on this; hoping it gets submitted by the end of the month. Until then you get to figure out the math on your own.
Nope Anthony, I didn’t miss that. I’m talking about the paper on testing homogenization.
See Victor’s reference.
Steven Mosher says:
“Then of course it would make sense to check the report and see how PHA did? Because they are looking at GHCN v2 here, ”
The pairwise homogenization algorithm used by NOAA to homogenize USHCN version 2 is called “USHCN main” in the article. It performed well. It has a very low False Alarm Rate (FAR). As there is always a trade-off between FAR and detection power, the algorithm could probably have been more accurate overall. Also, the pairwise algorithm has a fixed correction for every month of the year. Inhomogeneities can, however, also have an annual cycle. For example, in the case of a radiation error, the jump will be larger in summer than in winter. With monthly corrections USHCN would have performed better, especially as the size of the annual cycle of the inhomogeneities in the artificial data used in this study was found to be a little too large.
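A toy illustration of the annual-cycle point: the series below gets a radiation-type bias before a break that is larger in "summer" than in "winter" (made-up magnitudes and dates), and a single fixed correction leaves a seasonal residual that twelve monthly corrections largely remove. This is only a sketch of the idea, not the USHCN pairwise algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(240)                    # 20 years of monthly anomalies
season = np.sin(2 * np.pi * months / 12)   # crude seasonal cycle

truth = rng.normal(0.0, 0.1, size=240)     # no real climate signal at all
observed = truth.copy()
# Radiation-type bias before a screen change at month 120: larger in
# "summer" than in "winter" (illustrative magnitudes).
observed[:120] += 0.4 + 0.3 * season[:120]

# One fixed correction for the whole early segment ...
fixed = observed.copy()
fixed[:120] -= observed[:120].mean() - observed[120:].mean()

# ... versus one correction per calendar month.
monthly = observed.copy()
for m in range(12):
    early = (months < 120) & (months % 12 == m)
    late = (months >= 120) & (months % 12 == m)
    monthly[early] -= observed[early].mean() - observed[late].mean()

for name, series in [("fixed correction   ", fixed),
                     ("monthly corrections", monthly)]:
    rms = np.sqrt(np.mean((series - truth) ** 2))
    print(name, "RMS error vs truth:", round(float(rms), 3))
```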
Steven Mosher says:
July 17, 2012 at 11:17 am
No, it would make sense for you to check the Venema report and see how PHA did. Why? Because when I look at the report, I see no reference to PHA anywhere in it at all.
Also, your point about GHCN V2 is curious. It was superseded by V3, it is true … but it was used for years, it was claimed to be a valid method, and folks say (I haven’t checked it) that V3 results are not much different. So errors in V2 seem like they are relevant to the discussion.
Not only that, but the Venema paper didn’t analyze the GHCN methods (either V2 or V3) at all, there’s not one mention of GHCN in the paper. Go figure …
w.
This paper Anthony
“I’m not sure what Steve McIntyre would say about a paper he hasn’t read. I’ve spoken to him about this on occasion and he wasn’t very interested in the paper or the question of late.”
http://www.clim-past.net/8/89/2012/cp-8-89-2012.html
And as he notes in his post he’s not interested in looking into it.
REPLY: Thanks for clarifying what you said, always good to cite – Anthony
Steven Mosher;
Sure one can make this statement without precise knowledge of the time constant. You need to do more reading on how these can be estimated from data. You don’t need precision and we don’t have precision, but the ECR is likely to be 1-2x the TCR. Fingers crossed, there is a really good paper on this; hoping it gets submitted by the end of the month. Until then you get to figure out the math on your own.
>>>>>>>>>>>>
I’ll be very interested in the paper when it comes out. In the meantime, I submit to you that I am capable of calculating a time constant if, and ONLY if, I have sufficient data on ALL the processes involved and EACH of their time constants and EACH of their progress through 5 time constants. I think that’s pretty problematic. We’ve got dozens, perhaps hundreds or thousands of physical processes all going on at the same time, with many causing feedbacks both positive and negative to each other. Isolating ONE factor (forcing from CO2 doubling for example) from all the others requires that I know what all the others are, what their time constants are, when they started, and how they relate to each other from a feedback perspective. This makes the hunt for dark matter and the Higgs boson look like Grade 1 arithmetic. Not knowing what ALL the forcings are, what the time constant for EACH is, and WHEN each began makes the calculation of any single forcing from the data a fool’s errand. IMHO.
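For readers unfamiliar with the "5 time constants" shorthand in this exchange, here is a minimal sketch of a single first-order response, with purely illustrative numbers rather than an estimate of any real climate time constant. It shows why five time constants is treated as "essentially at equilibrium" and, loosely, why a transient response read early in the curve sits well below the equilibrium value; the difficulty both commenters point at is untangling many such overlapping responses of unknown time constant from noisy data.

```python
import numpy as np

def first_order_response(t, delta_eq, tau):
    """Fraction of an equilibrium change delta_eq realised by time t for
    a single first-order system with time constant tau."""
    return delta_eq * (1.0 - np.exp(-t / tau))

tau = 30.0       # illustrative time constant in years (assumption)
delta_eq = 1.0   # illustrative equilibrium response in degrees C

for n_tau in [1, 2, 3, 5]:
    realised = first_order_response(n_tau * tau, delta_eq, tau)
    print(f"after {n_tau} time constant(s) ({n_tau * tau:.0f} yr): "
          f"{realised / delta_eq:.1%} of equilibrium")
```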