This is a repost of two articles from John Graham-Cumming’s blog. I watched with interest earlier this month where he and a colleague identified what they thought to be a math error related to error calculation when applied to grid cells. It appears now through a journalistic backchannel that the Met Office is taking the issue seriously.

What I found most interesting is that while the error he found may lead to slightly less uncertainty, the magnitude of the uncertainty (especially in homogenization) is quite large in the context of the AGW signal being sought. John asks in his post: “If you see an error in our working please let us know!” I’m sure WUWT readers can weigh in. – Anthony
The station errors in CRUTEM3 and HadCRUT3 are incorrect
I’m told by a BBC journalist that the Met Office has said through their press office that the errors pointed out by Ilya Goz and me have been confirmed. The station errors are being incorrectly calculated (almost certainly because of a bug in the software), and the Met Office is rechecking all the error data.
I haven’t heard directly from the Met Office yet; apparently the Met Office is waiting to write to me when they have rechecked their entire dataset.
The outcome is likely to be a small reduction in the error bars surrounding the temperature trend. The trend itself should stay the same, but the uncertainty about the trend will be slightly less.
===============================================
Something odd in the CRUTEM3 station errors
Out of the blue I got a comment on my blog about CRUTEM3 station errors. The commenter wanted to know if I’d tried to verify them: I said I hadn’t since not all the underlying data for CRUTEM3 had been released. The commenter (who I now know to be someone called Ilya Goz) correctly pointed out that although a subset had been released, for some years and some locations on the globe that subset was in fact the entire set of data and so the errors could be checked.
Ilya went on to say that he was having a hard time reproducing the Met Office’s numbers. I encouraged him to write a blog post with an example. He did that (and it looks like he had to create a blog to do it). Sitting in the departures lounge at SFO I read through his blog post and Brohan et al.. Ilya’s reasoning seemed sound, his example was clear and I checked his underlying data against that given by the Met Office.
The trouble was Ilya’s numbers didn’t match the Met Office’s. And his numbers weren’t off by a constant factor or constant difference. They followed a similar pattern to the Met Office’s, but they were not correct. At first I assumed Ilya was wrong, and so I checked and double-checked his calculations. His calculations looked right; the Met Office numbers looked wrong.
Then I wrote out the mathematics from the Brohan et al. paper and looked for where the error could be. And I found the source. I quickly emailed Ilya and boarded the plane to dream of CRUTEM and HadCRUT as I tried to sleep upright.
Read the details at JGC’s blog: Something odd in the CRUTEM3 station errors
I’ve noticed that there hasn’t been one substantive response to Tamino’s critique. Not one. All you do is make ad hominems and say that he won’t reveal his name, but that doesn’t make the merits of his criticisms less valid. The fact that you even make this criticism proves you are operating in bad faith. Anyone who was operating in good faith would not only respond directly to the criticism, but also would refrain from making redirecting arguments like “Tamino won’t reveal his name.”
Can anyone–ANYONE–prove Tamino is wrong?
REPLY: Probably, just not in insta-time. You want it IMMEDIATELY. Science isn’t about insta-results nor should any be expected. I haven’t even heard back from E.M. Smith yet. Did Tammy do his whole thing in an afternoon while at his day job office? I think not. Get some patience or bugger off – A
@Brandon Sheffield
Thanks for sharing that your comments were “moderated out” of “Tamino”s blog.
How revealing that someone who runs a blog called “Open Mind” is too “Closed-minded” to post comments that disagree with or question his position. Not gonna find any honest search for truth there! A leopard cannot change its spots.
Can someone tell me if measurement and calibration errors are considered in these global averages since 1850?
i.e. how accurate was global temperature measurement in 1850???
“You haven’t released your code” is such a cop-out. Anyone who uses those words should immediately lose credibility. If you can’t answer Tamino’s claims, do research of your own.
[Sorry, but releasing one’s code is the essence of scientific transparency and an essential requirement for attaining credibility…. you evidently are just a warmist advocate… – The Night Watch]
Off topic but temperature related: http://news.bbc.co.uk/2/hi/asia-pacific/8535792.stm
Extreme cold in Mongolia has killed so much livestock that the United Nations is starting a programme to pay herders to clean and collect the carcasses. More than 2.7m livestock have already died and another 3m carcasses are expected by June. The UN Development Programme cash-for-work programme aims to produce income for herders whose livelihoods have disappeared due to the weather. Concern is high for the risk of disease posed by piles of rotting dead animals. Once melting of the snow starts, this poses the threat of the spread of diseases such as anthrax and salmonella, infection and soil pollution.
In addition to the Minnesotans for Global Warming, I suspect there are a great many Mongolians for Global Warming, too.
Zud phenomenon
Persistent snowfall has created a blanket of snow over the entire country, with 60% covered by 20-40cm (8-16in) of snow, the UNDP says. This year is particularly harsh because of a phenomenon called Zud, which occurs when severely cold winters of below -50C (-58F) are preceded by dry summers that preclude sufficient grazing.
Fodder supplies have run out, resulting in the loss of millions of livestock in a country where a third of the population rely on herding and agriculture, the UNDP said.
I read John Graham-Cumming’s blog post and I can’t really make sense of it. It is probably my maths skills that are lacking but why is it that he says:
“You’d expect the error to be smaller with more samples in a grid square, but because of the division by the number of stations it’s actually getting larger.”
Dividing by a bigger number yields a smaller number, doesn’t it? I hope he can clarify his post a little more.
The funniest thing about you folks complaining that Tamino is anonymous and won’t release his code, is that this comment at Tamino predicted your response exactly:
http://tamino.wordpress.com/2010/02/25/shame/#comment-39779
Try to be more original next time.
@JMurphy (09:27:58) :
Per Tamino: He shows that two sets of over averaged data match in one period of time then asserts this means they must match in another period of time (when one of them is missing). That is a logic failure.
There are differences in response to long-cycle events, such as the PDO for example, where areas can have similar behaviours in one part and opposite in another. Look, for example, at the “loopy jet stream” we have now, where being on one side or the other matters a great deal, yet a few years ago we had a flatter jet stream and the USA was more interchangeable. And no, I’m not saying that is THE answer, I’m saying that is AN example of the class of issue. I’ve seen these “contra areas” in several regional reports I’ve done. You are depending on luck and faith that they are balanced. They are not.
And averaging all of bucket A and all of bucket B simply hides too much. I’ve been down that path before and it’s a fruitless one. (For example, it hides that the Pacific Basin is not participating in ‘Global Warming’).
So take New Zealand. If you leave Campbell Island out of the entire set, you get no warming in the averages. The “tilt” comes from having that lone cold island in the data during the early years, then out in later years. That’s with the “raw” data. It is clear that that single station biases the New Zealand set.
Now you could average New Zealand in with some other place that has a colder later-year series and then say that you didn’t find anything, or you could average in a warmer station later and say you found a ‘trend’, or you can do all kinds of over-averaging that hides the fact that the basic individual station data for New Zealand is not warming significantly while the averaged set, with a cold-station dropout in the middle, has a warming bias. But that would still be an error of over-averaging. And you are depending on HOPE to get (or not get) any given trend. And HOPE is not a strategy.
Does CRUtemp(sub-n) or NCDC (whatever process) have a way of removing that inherent bias in the data? We don’t know because they have not released their code. (For GIStemp we know that it is not a perfect filter. I’ve benchmarked the code and a warming bias in station deletions leaks through as a warmer anomaly map. So for 1 out of 3 software sets at a minimum any bias in the input data leaks to the output). But that the base data is made biased is not in doubt. That’s what all those ‘average the data and look at the trends’ reports I made were all about. Measuring the measurements.
Some areas have no inherent warming to speak of, others have significant warming. And in many cases where there is warming, it only shows up as a “bolus” when thermometer drops happen. But to see it, you have to get finer grained than “average it all together”. (There is a reason a mix of all the soda flavors on a fountain is called a “suicide”… the result is “not good”.)
For what it’s worth, here is a (VERY preliminary) version of a dT/dt report for New Zealand. The dT/dt method differs from “First Difference” in that I do not reset a series of ‘anomalies’ to zero at a missing data item. I just carry the state forward until there IS a data item, and then all that “Delta T” shows up in that time period. This puts more variation in individual yearly values if there are many missing years, but has the virtue of preserving overall trend better. I emphasize that this is “young code” and not completely QA’d yet, so subject to revision. It also has a known tendency to a “thermometer start of lifetime” bias, in that if a thermometer-month first reports in a very cold year, it will report warming from that time forward due to that starting-time bias. I’ve not finished the code to remove that bias, but to the extent thermometers arrive over time, that bias is reduced.
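As a concrete sketch of the carry-forward idea described above — this is one reading of the description, not E.M. Smith’s actual code, and the numbers are invented — the difference from First Differences is just where the accumulated change lands:

```python
def carry_forward_deltas(series):
    """Sketch of the dT/dt idea described above (not E.M. Smith's code):
    at a missing value (None), carry the last valid reading forward
    instead of resetting, so when data reappears the whole accumulated
    "Delta T" shows up in that one period, preserving the overall trend."""
    deltas = []
    last = None
    for value in series:                 # None marks a missing year
        if value is None:
            deltas.append(0.0)           # no data: no change recorded this year
        elif last is None:
            deltas.append(0.0)           # first valid reading: the baseline
            last = value
        else:
            deltas.append(value - last)  # all change since the last valid year
            last = value
    return deltas

# A two-year gap: the 0.5 of warming across it lands in the year data
# resumes, and the summed deltas still recover the overall 0.75 of change.
print(carry_forward_deltas([10.0, 10.25, None, None, 10.75]))
# [0.0, 0.25, 0.0, 0.0, 0.5]
```

A First Difference scheme that reset at each gap would lose the 0.5 that accrued across the missing years; this variant trades lumpier yearly values for a preserved long-run trend, which matches the trade-off described in the comment.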
So here you can see that it’s possible to mitigate the warming bias introduced by that cold Campbell Island station if you do your anomalies “self to self” (but that is not how GIStemp does them). You can see that we had a very warm 1998 by about the same amount that we had a cold 1902, but in the end we are now just about back where we started (and not as warm as 1956 or 1916 in New Zealand).
New Zealand is not warming beyond what you would expect from UHI, Airport Tarmac (that warms without regard for landing numbers) and random wanderings (along with cyclical wanderings like the PDO, where we’ve just entered a cooling phase for the next 20 to 30 years.)
That second column, “dT”, is the “money quote”. That’s the cumulative average change of temperature from all thermometers over their lifetime to date. The column next to it is the change in that specific year.
In the “raw data simple average” of the New Zealand report (at http://chiefio.wordpress.com/2009/11/01/new-zealand-polynesian-polarphobia/) you will find about 1.5 C of warming bias “kick in” when that station is dropped from the series. (Look for the “temperature series” report in the link.) And just below that one is a simple average with Campbell Island missing from the whole set. It shows little to no warming.
So you see, a thermometer drop most certainly DOES bias New Zealand. And all it takes is an existence proof of ONE to show that the same bias can show up in any of the data from the planet. But we also can see that if you do your anomaly calculations right (i.e. not cold season Apples to warm season Oranges) you can suppress that Survivor Bias in the basic data series.
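The dropout effect being argued here can be sketched with a toy example (made-up numbers, not the real New Zealand or Campbell Island data): a flat warm station plus a flat cold station that stops reporting mid-series produces a spurious step in the simple average, while “self to self” anomalies stay flat.

```python
# Toy illustration (invented numbers) of station-dropout "Survivor Bias".
warm_station = [15.0, 15.0, 15.0, 15.0]   # flat: no warming at all
cold_station = [5.0, 5.0, None, None]     # cold station drops out mid-series

def simple_average(year):
    """Average whatever stations happen to report in a given year."""
    vals = [s[year] for s in (warm_station, cold_station) if s[year] is not None]
    return sum(vals) / len(vals)

averages = [simple_average(y) for y in range(4)]
print(averages)  # [10.0, 10.0, 15.0, 15.0] -- a spurious 5 C "warming" step

# "Self to self" anomalies: each station measured against its own baseline,
# so a dropout cannot masquerade as a trend.
def self_anomalies(station):
    base = station[0]
    return [None if v is None else v - base for v in station]

print(self_anomalies(warm_station))  # [0.0, 0.0, 0.0, 0.0] -- no trend
print(self_anomalies(cold_station))  # [0.0, 0.0, None, None]
```

Neither station warms at all, yet the naive average jumps 5 C the year the cold station vanishes; anomalies computed station-by-station suppress the artifact entirely.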
If you look at similar reports for other regions, you can find one warming (Canada) while another is cooling ( Africa). Averaging all them together does not increase your knowledge, it decreases it. And you are just hoping that all those averages of averages have some meaning. Dividing those “mixed fruit baskets” into kept and tossed and averaging those two sets of “mixed fruit baskets” to compare against each other is just hiding what you are looking for.
So I don’t see any reason to “duplicate” something that is not very useful to begin with.
I could say more on that point, but I, too, am contemplating a potential to publish so the exact way in which other folks have ‘missed it’ is not something I’m willing to share just yet.
FWIW, the original “average the raw data and look at it” that shows an introduced bias in the data was for the purpose of seeing what had to be done to “clean it up” and for seeing what magnitude of “issue” was facing GIStemp. It was NOT to say that was exactly what happened to the planet, but rather “that was what happened to the data”. GIStemp was measured and found wanting to the task of removing that bias from the data.
To the extent the data were more representative of reality, GIStemp would work better. If the data set were about 1/2 to 3/4 C less “biased” GIStemp would be usable (IMHO – based on measured bias of data in and what was done to it by GIStemp).
The basic problem is that GIStemp does anomalies as “Basket A at time A” to “Basket B at time B” and those baskets are very non-representative now. You have about 1500 stations (about 1000 after GIStemp drops a bunch) that must fill 8000 boxes. Many boxes have only one station in them. So in many cases you are comparing one thermometer in the present to a different thermometer in the past (and in some cases they are fictional and ‘filled in’ from somewhere else…) Under those circumstances any “station survivor bias” will be amplified, not diminished. And station drops damage the GIStemp product, not improve it. For that reason alone, you need more stations in GHCN, not less.
Unless, of course, you wish to stop using GIStemp (which would be fine with me…)
Does anyone believe that it is possible to know the global temperature to an accuracy of +/-0.4 Deg C in 1850?
I certainly don’t!
I have read Brohan et al, and it does not convince me!
BTW, I did send a copy of my original post question to E. M. Smith ( http://chiefio.wordpress.com/ ), but it must not have made it past his input filters. I sent it a couple of hours ago to this thread: “Canadian Concatenation Conundrum”, and there have been posts added since then, but not mine.
I can understand that he might not have an immediate answer, but I don’t see why my post should be held up for that. I thought that’s what the comments on a blog are for.
So, since you suggested (twice) that my question related to alternative analyses that contradict Tamino’s analysis regarding station dropout effects should be sent to Mr. Smith, would you consider asking him to respond or at least release my comment from quarantine?
Ross M (16:42:56) :
Likely it is because they were dividing by a fixed number of stations in the grid box instead of the actual number that were there, and the grid squares for which they had all of the data contained fewer than that fixed number of stations.
If you divide 100 by 30 you get 3.333; if you divide 100 by 25 you get 4, and last I looked 4 > 3.333, but I might be wrong.
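To make this speculation concrete, here is a toy calculation (purely illustrative; this is a guess at the class of bug, not the Met Office’s actual code): if n independent station errors are combined in quadrature and then divided by a fixed assumed station count instead of the actual one, the reported error grows with the number of stations instead of shrinking — the oddity Ross M quoted.

```python
import math

def grid_error(sigma, n_actual, n_divisor):
    """Combine n_actual station errors of size sigma in quadrature,
    then divide by n_divisor (which should be n_actual)."""
    return math.sqrt(n_actual * sigma**2) / n_divisor

sigma = 1.0
# Correct: divide by the actual station count, so the error of the mean
# shrinks like sigma / sqrt(n) as stations are added.
correct = [grid_error(sigma, n, n) for n in (1, 4, 9, 16)]

# Hypothetical bug: divide by a fixed assumed count (say 30) no matter
# how many stations actually reported, so the error GROWS with n.
buggy = [grid_error(sigma, n, 30) for n in (1, 4, 9, 16)]

print(correct)  # decreasing: 1.0, 0.5, 0.33..., 0.25
print(buggy)    # increasing: sqrt(n)/30 climbs as n climbs
```

Whatever the real defect was, this kind of fixed-divisor mistake is one simple way a per-grid-box error estimate can move in the wrong direction as station counts change.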
Mosher, you said, “Tamino will now not publish his paper. he’ll post a PDF or something.”
His latest comments are pre-emptively striking the odds of making it through peer-review…too “scientifically unimportant,” lol.
From my link above: “Determinate errors can be more serious than indeterminate errors for three reasons. (1) There is no sure method for discovering and identifying them just by looking at the experimental data. (2) Their effects can not be reduced by averaging repeated measurements. (3) A determinate error has the same size and sign for each measurement in a set of repeated measurements, so there is no opportunity for positive and negative errors to offset each other.”
This is why, when I have gained 5 lbs, and it shows up every morning, I subtract 5 lbs from my bathroom scale from then on. Then I average the lot together and can say with utmost confidence I haven’t gained a single pound. Do the math. It works.
PS, You say maths, I say math.
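The quoted point about determinate errors can be demonstrated numerically. This is a hedged sketch with invented numbers, not anyone’s real measurement data: averaging many readings beats the random (indeterminate) scatter down toward zero, but a constant offset survives untouched.

```python
import random

random.seed(42)
true_value = 20.0
bias = 0.5  # a determinate error: same size and sign on every reading

# 100,000 repeated measurements, each with the systematic offset plus
# random noise of standard deviation 1.0.
readings = [true_value + bias + random.gauss(0.0, 1.0) for _ in range(100_000)]
mean = sum(readings) / len(readings)

# The mean converges to true_value + bias (about 20.5), not true_value:
# no amount of averaging identical-sign errors makes them cancel.
print(round(mean, 2))
```

The random component shrinks like 1/sqrt(N), so with N = 100,000 it contributes only a few thousandths of a degree, while the 0.5 offset remains in full.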
E.M.Smith,
If there is no evidence of divergence prior to station “dropping”, how on earth could NCDC who, in your words, “is seriously complicit in data manipulation and fraud…by creating a strong bias toward warmer temperatures through a system that dramatically trimmed the number… of weather observation stations” know which stations to drop a priori?
The match between past temperatures doesn’t necessarily preclude divergence in future temperatures, it merely suggests its unlikely. It does, however, put a damper on your conspiracy theory.
By the way, Lucia has a new post on the matter that is worth checking out: http://rankexploits.com/musings/2010/effect-of-dropping-station-data
carrot eater (14:29:47) :
If people are finding that Tamino moderates heavily, I’ve found the same of EM Smith. So unless one of the two open up, the only cross-discussion can occur here.
Which is unfortunate, because the topic of this particular thread is something entirely unrelated, and interesting in its own right.
That’s one of the things I like about Slashdot: only one comment has ever been deleted, and that was due to a court order. Another is that it’s not only community-moderated; there is meta-moderation to moderate the moderations.
Michael Jankowski (17:22:18) :
“His latest comments are pre-emptively striking the odds of making it through peer-review…too “scientifically unimportant,” lol.”
It is fairly unimportant. It’d be a minor paper, as it is now. Maybe he could expand the analysis a bit to something that hadn’t really been shown before.
WAG (16:25:52) : Anyone who was operating in good faith would not only respond directly to the criticism,
Or: “has a life and limited time and a dozen commitments already in queue and can’t jump just because somebody else wants it.”
So I was at the hospital today (it was relatively minor, but still I was sitting there…) and I was at the hospital (last night? night before? it’s a blur … ) with an elderly relative in the emergency room. Again for a long time. (She recovered “fine”).
Somewhere in between those two I was picking up a “Bigendian” box so I can run the final STEP of GIStemp (which also means about 2 to 4 days of sys admin work to make the box ready… yet to be done. But it has Open Office already on it so I’ll be able to do more graphs faster.)
All of which involve being Away From Keyboard (as my “about” tab says will happen on my blog). So frankly, the first time I realized some folks had their panties in a bunch and were ranting was about 1 hour before I posted the New Zealand Existence Proof above. I still have not read more than about 1/2 dozen of the comments in this thread (and probably will need to do that tomorrow… maybe… if I get some more time.)
So anyone who wants to demand specific services on a specific schedule will just have to join the trolls at the back of the queue. It’s just not a life priority right now. (Or you can pay my billing rate. $100 / hour for commercial operations. $200 / hour for “Climate Stability Deniers”. Discounts available for bulk purchase or for Friends of Anthony. And move to the head of the priority queue.)
The good news is that I got the dT/dt ‘merged modification flag’ code written while waiting in the emergency room… and had some good “think time” about how best to handle the “start of time” bias in dT/dt. So as of now I’ve got a good “characterization of the data” and measured bias in the data from thermometer drops over time by location; and I’ve got a decent tool to assess the actual warming / cooling in a region. I’ve also got GIStemp benchmarked through STEP1 in fair detail and through STEP3 in rough form. I’m darned close to being able to do a proper end-to-end comparison of “what goes in” vs “what comes out” vs “what ought to come out”. And that is a bit more of a priority to me than how someone else has made The Usual Errors. [ over averaging, assuming things not in evidence, insufficient level of detail in investigation, “Hypothetical Cows” of all sorts, etc.]
Can anyone–ANYONE–prove Tamino is wrong?
No one needs to prove him / her / it wrong (do we know the gender of a Tamino?). They must prove they are right.
I’m “unimpressed” with the hand waving done. It’s over averaged and it depends on the faith that a trend at time A in bulk data continues into the period when the data is not in existence. At best it’s an inference, not a proof, of anything.
I’ve got a nice tidy little existence proof in New Zealand. It has the virtue that you can calculate it by hand if desired. (There are others, too, but one existence proof ought to be enough.) I’ve also got a benchmark of GIStemp using the whole USHCN set from 5/2007 to 11/2009. That is, you take the 136 USA thermometers in GHCN and add in the rest of the (missing) USA, and you get less warming. BTW, for the inevitable “no it doesn’t” per the USHCN.v2 addition point in 11/2009: USHCN.v2 is “warmer” than USHCN. There was a nice bit of magic wand usage there, where they took out USHCN in 2007 via letting it run obsolescent, let GHCN warm things, then added back in a pre-warmed USHCN.v2 so “nothing changes”. So you must compare Same to Same to see the bias. Not USHCN to USHCN.v2. But it is clear that the drop for the USA thermometers in GHCN from 1200+ IIRC to 136 did warm the USA data. Adding the USA back in cools the anomaly map. (There are also monthly changes that demonstrate warming of the data via thermometer drops, but that level of detail is a bit too much for this thread.)
Two existence proofs trump one hand waving any day.
Oh, and a free sidebar: Note that the large dropping of USA stations in GHCN does not happen because the data are not available. NCDC has the data in the USHCN.v2 data set. It happens by a choice (or implicit choice) not for lack of the NCDC not reporting the data to itself…
“REPLY: Probably, just not in insta-time. You want it IMMEDIATELY. Science isn’t about insta-results nor should any be expected. I haven’t even heard back from E.M. Smith yet. ”
My apologies for my ‘sloth’, but now you know “the rest of the story”… I’ll get to my email queue a bit later tonight and respond to the (undoubtedly dozens of emails…) as time permits.
“Did Tammy do his whole thing in an afternoon while at his day job office? I think not. Get some patience or bugger off – A ”
May I buy you the beverage of your choice when next we meet ? 😉
I couldn’t have said it better myself…
FWIW, my priorities are very simple:
Family first.
Make money to support family.
(i.e. clients schedules, if any.)
Friends and commitments to friends.
Make money to support interests. Including classic Mercedes repairs 8-{
(i.e. stock trading)
Interests such as moving forward my deconstruction of GIStemp and GHCN change analysis. This includes writing the code I want written to learn the things I think need to be learned and to complete commitments that I’ve made to others. (This is all done “Space Available” and Space-A is not deterministic. Unlike what AGW folks chant, I get no funding from Exxon nor any other company. Some tips in the tip jar are about it. And a donated computer from a friend.)
Loads of other stuff like house maintenance, pet care, shopping with the spouse, catching up on sleep, oh, and I had to get a flat fixed Tuesday(?) too…
Then, after all that:
“Demands” by folks who could easily be making such demands only to disrupt my time or my productivity (or just because they are too impatient to chew their food before swallowing).
So let me make it perfectly clear to folks in that last category: I’m the driver of my own schedule. I work on my time to my priorities (and those of clients and friends) and not on your time. Things that serve to deflect from the priorities I’ve listed go to the bit bucket. Sorry to disappoint you, but your ‘needs’ are just not very important.
Science isn’t about insta-results nor should any be expected.
That’s a bit histrionic. Anthony, you’ve been making assertions about the missing station data for quite some time. Nobody should make any assumptions – pro or con – about Tamino’s study until he actually publishes something substantive. Both sides need to crank back on the triumphalism until there’s more to go on.
“Demands” by folks who could easily be making such demands only to disrupt my time or my productivity
Phil Jones, is that you?
carrot eater (14:29:47) : If people are finding that Tamino moderates heavily, I’ve found the same of EM Smith.
There is one of me. If I had a team of moderators who could keep things orderly, I could leave it more open. As it is, for simple reasons of limited time I can’t let it turn into a free for all. Sorry, but you have plenty of other places to run wild.
I’ve also seen a pattern of what appears to be organized graffiti and attempts to hijack threads. So no, you don’t get carte blanche. Get over it.
The topics covered at ‘my place’ are what interest me, and the comments and commenters that get through are those that support a positive and productive environment. AND don’t thread hijack or worse, hijack my time. If you don’t like that, you can always go somewhere else. Please.
So unless one of the two open up, the only cross-discussion can occur here.
Nope. I have very limited time for “cross-discussion’. My time goes preferentially into my work plan. So I’m going to be ‘out of time budget’ on this thread in about, oh, another 2 minutes. Then you can “cross-discuss” with someone else.
Frankly, I find it a complete waste of my time to explain where other folks have gotten something wrong. I really don’t care. Intelligence is limited but stupidity knows no bounds. So it’s an infinite time sink. One I choose not to indulge. It’s a manager thing… And I’ve been one for a few decades. It’s not a habit that is going away.
Which is unfortunate, because the topic of this particular thread is something entirely unrelated, and interesting in its own right.
So why don’t you get back to the topic of the thread.
Frankly, I had half decided to ignore the Tamino thread hijack, but it looked like Anthony was expecting me to say something. Otherwise I’d have just ignored it and stayed on topic. (A place where I intend to be shortly).
Tamino is of no interest to me. I would much rather be comparing my dT/dt reports (after dealing with the ‘start of time’ bias) to GIStemp anomaly maps and getting a true end to end bias benchmark.
I’m close enough now that I can do the comparison of “data with bias as computed from ‘unadjusted’ GHCN”, compare it with dT/dt by continent, and make some very interesting observations about how much bias is in each subset as a measured value. Then compare that with GIStemp maps and see what bleeds through. Now that’s an interesting topic.
And about 2 weeks work elapsed time. Sigh.
Unfortunately, it will have to wait a day or two. Email queue calling.
So what you need to have as a “takeaway” is that I’m not interested in “cross-talk” as a matter of time management not as a matter of any agenda. If you don’t understand that, then I have a suggestion.
Get a Republican friend and a Democrat friend. BOTH of you try to call the Governor on the phone. That nobody gets through is because of time management, not because of a political orientation.
So I’m going to do what I’m going to do, and I’m not particularly interested in what the AGW folks think of it. And I’m negatively interested in “cross-discussion” that accomplishes nothing at the expense of very limited available time.
So with that, I’m going on to other topics.
E.M.Smith (16:52:31) :
“Per Tamino: He shows that two sets of over averaged data match in one period of time then asserts this means they must match in another period of time (when one of them is missing). That is a logic failure. ”
Slow down a bit, as this is quite a shift from the claim in the SPPI report. At that time, it was claimed that ‘systematically and purposefully’, certain stations were removed.
We have, from SPPI, “Calculating the average temperatures this way would ensure that the mean global surface temperature for each month and year would show a false-positive temperature anomaly – a bogus warming.”
This is saying that simply removing those stations, in itself, would ‘ensure’ a spurious warming, as the removed stations tended to be from ‘cooler’ locations. What Tamino has done shows that simply dropping those stations didn’t, in itself, ‘ensure’ any warming.
Yes, you could get an inaccurate read if the removed stations then would have started diverging from their grid-box neighbors, after the time of dropping. But to drop stations with that purpose in mind would require time travel. If the subset of dropped stations had been correlating with the other stations just fine up to that point, how would you know to drop that particular subset? Without time travel, you wouldn’t know.
So from Tamino’s results, we have no particular reason to think that dropping that subset of stations has in itself caused much more undersampling, where you miss local differences in trends because of a lack of stations. I suppose it’s possible, and some parts of Africa certainly do look undersampled. But we don’t know if the error due to undersampling would be warming or cooling; that’s what the error bars are for. Based on my reading of the SPPI report, this is a very different issue from that posed in the SPPI. It’s an issue that could be addressed by filling in data from SYNOPs, or the database that Dr. Spencer is now using. It sounds like NOAA is collecting more archived data for the next release of GHCN, as well.
By the way, you can see that nobody wants or wanted undersampling; see Peterson 1997, “Initial Selection of a GCOS Surface Network”. They wanted a nicely filled in map, including some high elevation stations where they were afraid a valley station would not correlate with the mountains.
sturat (16:58:48) : BTW, I did send a copy of my original post question to E. M. Smith http://chiefio.wordpress.com/ ,but it must not have made it past his input filters. I sent it a couple of hours ago to this thread: “Canadian Concatenation Conundrum” and there have been posts added since then, but not mine.
Well, I’ve been here, not moderating there… Yes, that “time management” issue… See the answer up thread here with two existence proofs.
BTW, some folks and some content goes “straight through” so some folks stuff ‘goes up’ even if I’m asleep… It’s not me actively doing anything in most cases.
So look up thread here and read what I’ve already written. And I’ll leave here and go look at your comment. I see it’s a duplicate of one here, so I’m not going to start a discussion of it under the Canada thread.
sturat
“I do agree with carrot eater, steven mosher, and others that it is unfortunate that these discussions can’t be held in a more civil manner.”
Steve Mosher is civil. But don’t be so quick to enable carrot eater’s dissembling. I’ve noticed his repeated comments on various alarmist sites, where he fully engages in calling skeptics “denialist,” “deniers” and similar terms when referring to commentators on WUWT and similar sites.
Is he being a chameleon? A hypocrite? Kissing up to closed-minded people like tamino?
You can decide for yourself. But I’ve got his number.
EM Smith
I am sympathetic regarding your recent trip to the hospital, etc. Life does seem to get in the way of life sometimes, doesn’t it?
But it seems you still had time to post several rambling posts that added little to the discussion.
I’m particularly intrigued by your accusation that Tamino is handwaving his arguments when all you have stated is that you have a couple of “tidy little existence proofs” that invalidate all of his work, if only we would trust you. And we shouldn’t bother asking you for details, since you are above question. Please, give us all a break.
Now, I can’t say myself whether Tamino’s analysis is absolutely correct, partially correct, or just downright wrong. But at least he and others are willing to discuss the analysis and examine the disagreements. All I’ve seen on your site and this one is “trust me”, “I’m too busy”, “I’m going to publish it someday”, “other people stole my data”, and …
What I have asked for I believe is a reasonable request. Show a competing analysis on a comparable set of data that contradicts Tamino’s conclusion. I know Tamino took a couple of weeks to produce his results and that he continues to expand and refine the work. (He has stated that he is open to incorporating RomanM’s comments into his analysis.) So, I don’t expect you or anyone else to be able to respond instantly. It would be civil to say that you are “interested” and would work on a response to be published (blogged?) in a couple of weeks. Assuming that it fits into your “time management” constraints.
That said, Mr. Watts, since it appears that your go-to guy, EM Smith, is not interested in responding (or allowing similar questions on his blog), where is the reproducible analysis that shows that Tamino is wrong?
Oh, please don’t drag in the “court of law” meme. It really has no applicability here. Besides, Mr. Stokes was convicted by a court of law.
EM Smith
Crossing posts. Wouldn’t you know it.
I just read your response on this site acknowledging my cross post to your site. I do (although I might not sound like it) understand about too many things to do and the lack of time. So, I see your point in not addressing my questions on your site.
I look forward to your and others’ responses on this site though.
Smokey
wrt carrot eater
A couple of phrases spring to mind
“A rose by any other name … will still have thorns”
“If the shoe fits, …”
I realize that either of the above comments could be construed as close to name calling, and if you or someone else takes offence, then so be it.
I’m just asking that arguments be presented with as complete and robust a technical, data-driven nature as possible. Continued assertions that are not backed up with real analysis do not help anybody.