The 2009 Weblog Awards – zilched

Kevin Aylward makes a surprising announcement. Guess I’ll have to settle for “nominee with a boatload of nominations”.

Update – The 2009 Weblog Awards are off

4 Jan 2010

It is with a great deal of regret that I must inform everyone that The 2009 Weblog Awards are canceled.

Unfortunately, the resources required to handle the load of voting (nearly 1,000,000 votes in 2008) could not be adequately provisioned. Even if the servers and bandwidth required appeared today, it would be at least a few weeks before everything could be ready for voting.

This has always been a very, very labor-intensive process, and in recent years, with our success, a very expensive competition to run. It’s hard to have unlimited resources provisioned for seven days of voting. In the past we’ve had to provision expensive servers for several months to prepare for the crush of voting.

We really tried to solve the resource problems with new ideas (such as cloud computing) this year, but ultimately we were not able to make things work to our satisfaction. I’m very proud of the voting platform we built for The Weblog Awards over the years, and the quality voting experience we’ve been able to deliver. The one thing I will not do is compromise that voting experience.

Rather than run a competition that had a high possibility of technically failing, the more prudent choice seemed to be to regroup and consider our alternatives for hosting these awards going forward.

Thank you to those who have supported The Weblog Awards over the years; we appreciate your support, and we are sorry we couldn’t make things work this year.

69 Comments
Håkan B
January 4, 2010 12:47 pm

DirkH (11:52:39)
You read too much into it. They simply underestimated the task and now they’re apologizing for it. It’s not as simple as renting more servers once you’ve run into the overload; if you are serious and have already failed to count a lot of votes, there’s no other way out. Those people behaved honestly, whether we like it or not, and we still know who the winner is!

Mark
January 4, 2010 1:04 pm

Smokey (12:02:39) :
anna v (11:50:15) :
“Somebody had given a link a while ago to a request to the organizers to exclude CA and WATTSUPWITHAT from the vote as not being ‘scientific’.”
“Yet they put Pharyngula – a hate filled atheist blog – into the “science” category.”
Some of us are agnostic or atheist. What is wrong with an atheist blog being put in the science category?

L Nettles
January 4, 2010 1:07 pm

Weblog Awards swamped by the urge to disintermediate. We want to have our say directly.

tallbloke
January 4, 2010 1:20 pm

And the winner would have been WUWT by a country mile.
It is more of a hybrid science/politics blog these days though, due to the climategate fallout.

wws
January 4, 2010 1:22 pm

There’s no need for anyone to worry about conspiracies – the people hosting them didn’t have the resources, for whatever reason. That happens in any number of small businesses every day; it may sound surprising, but too much success can be just as dangerous as too little. (You build an enterprise expecting a certain level of response, but the response far exceeds your expectations and you haven’t invested in the capability to handle it, leading to much client/customer grief.) Hope the people hosting them can figure out how to get them back off the ground.
Besides, although awards are fun, they don’t really mean anything in the real world. Nothing changes whether the award is there or not. Sure, it’s a fun thing to throw around at some of the AGW partisans, but that’s about it, and really it doesn’t even do much there.
So it’s regrettable, but not really all that big a deal, and no real reason to suspect they’re not playing straight. Screw-ups happen, and this looks like one of them.

January 4, 2010 1:23 pm

Kevin Aylward is one of the good guys, an original top conservative blogger who has contributed more to the success of the blogosphere than most know. If he couldn’t find a way to run the awards, it’s most likely due to financial and time considerations and concern over the legitimacy of the voting process. I know that tremendous volunteer hours went into the Weblog Awards, including auditing for automated voting.
I’d suggest an email to thank him, instead of the silly speculation about motives.

Steamboat McGoo
January 4, 2010 1:23 pm

Darn, and after several of us took the time to get RealClimate nominated as the Best Religious Blog – and got it done! When the polls closed RC was a clear top contender in the finalist noms.
That would have been priceless….

baldanders
January 4, 2010 1:34 pm

It does seem a bit odd that something like this would be too resource intensive to run, but we don’t know the details of their platform. I’ve also written database-intensive applications that handled millions of hits a day, but they did a lot more reading than writing, and I was able to cache things in a lot of cases. This is pure speculation on my part, but if they are having problems with too much computation it might be that their platform doesn’t handle a lot of simultaneous database writes very well. That can actually be pretty tricky: naive implementations can almost always be brought to their knees by too many simultaneous writes. This can be gotten around in any number of ways, but it can require a bit of cleverness. And many programmers these days have taken the (true, within limits) maxim “programmer time is expensive, machines are cheap” a bit too literally.
The bandwidth thing is also puzzling. Bandwidth is pretty cheap these days, and it seems like they should be able to get by serving very lightweight pages.
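A minimal sketch of one way around the simultaneous-write problem baldanders describes, assuming a plain votes table; the schema, function names, and batch size here are illustrative, not anything from the actual Weblog Awards platform. Request handlers only push onto a queue, and a single background writer commits in batches, so the database sees a few large transactions instead of thousands of tiny ones.

```python
import queue
import sqlite3
import threading

vote_queue = queue.Queue()

def record_vote(category, blog):
    """Called from request handlers; never touches the database directly."""
    vote_queue.put((category, blog))

def writer_loop(db_path="votes.db", batch_size=500):
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS votes (category TEXT, blog TEXT)")
    while True:
        batch = [vote_queue.get()]           # block until at least one vote arrives
        while len(batch) < batch_size:
            try:
                batch.append(vote_queue.get_nowait())
            except queue.Empty:
                break
        with conn:                           # one transaction per batch
            conn.executemany("INSERT INTO votes VALUES (?, ?)", batch)

threading.Thread(target=writer_loop, daemon=True).start()
record_vote("Best Science Blog", "Watts Up With That?")
```

The web tier stays responsive because record_vote only enqueues; the lone writer is the only thing touching the database.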

January 4, 2010 1:39 pm

Well, I will just give my vote here instead then.
Done.
This could become a long thread 🙂

tallbloke
January 4, 2010 1:46 pm

Jabba the Cat (12:46:38) :
Smokey (12:02:39) :
“Yet they put Pharyngula – a hate filled atheist blog – into the “science” category.”
Debunking creationists and religious nutters who want to teach children pseudoscience instead of biology does seem quite valid for a science-orientated blog. I don’t see any difference between that and the debunking of the new-age religion of AGW going on here and at other similar places.

The difference is between the hate of Pharyngula and the humour of WUWT.

paullm
January 4, 2010 1:46 pm

I just witnessed Joe Romm on Cavuto’s FOX show predict that 2010 has a better than 50/50 chance of being the hottest year on record. With diehard AGWers like JR, I predict WUWT’s popularity will continue to outpace any other SCIENCE BLOG as long as it follows its familiar style and substance.
Neil challenged JR persistently as Romm cited all those ‘scientific organizations’ that have been so successful and continue to predict dire AGW consequences.
Anyway, congrats Anthony! Your and the mods’ work continues to attract the best of guests and commenters. The science will always be there; here’s hoping WUWT will endure with it and its controversies.

Roger Knights
January 4, 2010 1:53 pm

JonesII (11:56:26) :
Blogs come from book-logs …

I’m pretty sure it’s a contraction of “web logs.”

SOYLENT GREEN
January 4, 2010 2:05 pm

McGoo and I and our accomplices take full responsibility.
http://tinyurl.com/yakeulq
Regrets to Steve, we all voted for him per your advocacy.

Bruce Cobb
January 4, 2010 2:09 pm

As I recall, last year the Pharyngula vote was simply the default anti-WUWT vote. Even with that blatant attempt to defeat WUWT, it still won.

E.M.Smith
Editor
January 4, 2010 2:19 pm

Robert Wille (11:47:08) : It’s pretty odd that lack of hardware is the reason behind not having the contest.
Depends entirely on the efficiency of the software involved. “Code bloat” can consume new hardware faster than Moore’s Law can provide it. (See Microsoft as an example: they assume hardware advances are there to provide more ‘features’ and faster coding time, not ‘computes’ for you. IMHO, based on observations of their entire history of development.)
Object-oriented code can suck a box down in no time flat. All that reusable, nested-module ease of programming comes at a cost…
Heck, even Linux is getting code bloat (though not as bad). I keep some old 5.x and 6.x releases lying around for “small hardware”. (I have a router running on a “junk” laptop with 8 MB of memory… but newer releases want things measured in hundreds of MB of memory.)
A relative who was a PhD at NASA had an interesting chart. It showed all the increase in productivity from improved hardware (double every 18 months) AND the increase from R&D into improved algorithms (in aeronautical engineering / modeling). The SW improvements were more important than the hardware…
An implied corollary to that is that poor SW can suck down all hardware advances and then some. Which is why your laptop of today running Vista is about as responsive as the old x486 box with the MS OS release of that era, even though the hardware would beat a low-end supercomputer of the era…
And that is why I’m happy with BSD or Linux on a 400 MHz AMD chip with 132 MB of memory… It does all I need and then some. (Though I am having a serious lust problem over a new Mac…)
So, it’s hard to say if hardware was really the limit, but my guess would be that in the “rapid/easy development with modern languages vs. cost” trade-off, they chose easier development (and that usually means code that wants lots of hardware…). A very rational choice, btw, if you have limited and volunteer staffing… You can cut staffing to 1/10 with the right tools. And people cost more than hardware (except in Linux Land, where people work for free 😉 )

paullm
January 4, 2010 2:23 pm

Mark (13:04:56) :
Some of us are agnostic or atheist. What is wrong with an atheist blog being put in the science category?
The prerequisite for a science blog should be a focus on science, not on religion/ideology, be it atheism, agnosticism, Christianity, Islam, Real Climatism, etc. A blog primarily focusing on religion/ideology should be categorized as such, regardless.
WUWT hardly ever even touches r/i, thankfully. Politics and science? As I’ve mentioned before, when have they ever been exclusive of each other? However, either can be delved into on its own, and science, tech, etc. are here at WUWT – the winner!

E.M.Smith
Editor
January 4, 2010 2:34 pm

Jabba the Cat (12:46:38) :
Smokey (12:02:39) : “Yet they put Pharyngula – a hate filled atheist blog – into the “science” category.”
Debunking creationists and religious nutters who want to teach children pseudoscience instead of biology does seem quite valid for a science-orientated blog.

Pardon, but I believe you will find that much science has been done by folks with a religious bent and that a reasonable case can be made that “atheism” is a religious belief, just a very specific one. I’ve frequently pointed out that Darwin was a religious person and felt his work displayed the way in which God worked. It is a false dichotomy to divide the two. (His first edition had some ‘thanks to God’ thing in it… need to find a copy…)
That said, I think I’m pushing the bounds of the “no religion discussion” guidance given here before by the moderators. So I’ll just close with this:
As an agnostic with atheistic tendencies I find enforced atheism under the guise of “science” to be an offensive act of imposition of a religion and not warranted by the material and substance of science.
I’ll put up a posting on my blog in a few hours so if folks want to thrash on this they can do so without cluttering up Anthony’s site with religious “stuff”.

E.M.Smith
Editor
January 4, 2010 2:46 pm

baldanders (13:34:17) : And many programmers these days have taken the (true, within limits) maxim “programmer time is expensive, machines are cheap” a bit too literally.
All you said is exactly right. Any database process is usually disk I/O limited, not compute limited. Getting folks to buy into a hot RAID array is often not easy; they are conditioned to think CPU. Yet the old IBM mainframes that were doing millions of credit transactions a day were often in the 10 MIPS range (less than some calculators and palmtops today). But man, did they have I/O channels to die for…
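As a rough illustration of that disk-versus-CPU point (a toy measurement under assumed conditions, not anything from the awards platform): compare CPU time with wall-clock time for a run of per-row commits. If wall time dwarfs CPU time, the process is waiting on the disk, and more ‘computes’ won’t help.

```python
import sqlite3
import time

conn = sqlite3.connect("iotest.db")
conn.execute("CREATE TABLE IF NOT EXISTS votes (n INTEGER)")

cpu0, wall0 = time.process_time(), time.perf_counter()
for i in range(2000):
    with conn:                      # commit each row: forces a flush to disk
        conn.execute("INSERT INTO votes VALUES (?)", (i,))
cpu1, wall1 = time.process_time(), time.perf_counter()

print(f"CPU time : {cpu1 - cpu0:.2f}s")
print(f"Wall time: {wall1 - wall0:.2f}s  (the gap is time spent waiting on the disk)")
```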

The bandwidth thing is also puzzling. Bandwidth is pretty cheap these days, and it seems like they should be able to get by serving very lightweight pages.

Lead time, lead time, lead time…
A telco often asks for 90 days lead time. If you pull teeth with a hammer you can get it down to 30 days. For ‘this week’ or ‘now’ you are SOL unless a very big account.
Oh, and if your router has a T1 interface card and you buy a T3 from the telco, that can be another month or three. If you have a slot to put it in, and don’t need a new 5 figure chassis…
So once you’ve sized and provisioned, if you get over run, you are ‘in the do’ for months. For a time critical event, you just drop packets and die.

baldanders
January 4, 2010 2:49 pm

E.M. Smith: I’m laughing a bit about the 400 MHz chip and the 132 megs of memory. I actually did most of the development and initial stress testing of a website that did indeed get upwards of 5 million database-driven page views a day at its peak on a PII 400 (Intel, not AMD) with 192 megs of RAM running Debian. I think I bought that machine in early ’98. I used it as my main dev machine until Feb ’05. Part of my rationale was that if the app could handle stress on that machine then it could certainly handle stress on a more modern box. But I eventually gave it up because it was so different from the RAM-heavy quad-core Xeons we were using as servers that I could no longer draw any significant inferences from its performance. I still kind of miss that machine; I certainly got my money’s worth from it.
Anyway, it’s very hard to believe that what these guys are doing _couldn’t_ be achieved on a cheap server. But it’s very easy to believe that they built an inefficient platform as a matter of expediency and weren’t able to retool fast enough. This happens all the time. It happened to me once, badly. I was head of tech for a startup, and the head guys said, basically, “limit what you spend preparing for contingencies” and “get the site up fast.” So I built it knowing that there was a limit to the traffic it could take. Then we went from 1,000 page views a day to over 5 million a day in less than two hours (which must rank as one of the fastest ramp-ups of traffic in web history, perhaps the fastest).
I solved the problems in less than 24 hours, but I was a full-time employee with a lot of experience dealing with high traffic, and I had a budget to work with and no mandate to break even in the short term (once we got that many hits that fast, people were begging to get in and offering money as their calling card). Anyway, there’s no reason to assume a conspiracy here. It’s really easy to fail to recognize that your site is going to fall down under load, and I imagine it’s hard to fix with something as amateur-hour as this poll. Better to cancel it than to compromise it.

E.M.Smith
Editor
January 4, 2010 3:21 pm

baldanders (14:49:25) :
E.M. Smith: I’m laughing a bit about the 400 MHz chip and the 132 megs of memory

Then I think you will really appreciate that it is my “GIStemp box” too. Yup, it is the hardware I ported GIStemp onto. Runs fine, too.
I wanted to be “period correct” in my hardware and to have the esthetics match the F77 code base 😉
Yes, I’m fond of “re-enactors” and have been to Civil War events…
The box started life as an x486 but I did a motherboard swap about a decade ago?… Kept the 5 inch floppy (the real old Really Floppy kind) but have no idea where to get media (as if I wanted to use it…).
I get a fond chuckle every time I run a GIStemp run or GHCN analysis just looking at it 😉
Super computer? Super Computer? We don’t need no steeenking SUPER Computer!!
FWIW, there is some later F90 code in GIStemp that looks to me like it is influenced in style by the Cray compiler (though it might be Sun Workstations, they agree on a lot of style points). But the idea of running this stuff on a generic white box was just too precious to pass up!
Oh, and I forgot to mention before:
At a prior employer, we were I/O bound on a Linux router/email/everything server appliance and traced it back to the network I/O stack. We got about a 10x improvement from tuning that code for volume… (rather like the NetBSD code base). I doubt the “out of the box” stuff has changed that way, though. So here is one specific example where getting the same performance would take at least 10 boxes in parallel… or you could tune a very low-level bit of software… (or swap to BSD)
It was not CPU, nor memory, nor network bandwidth nor even disk I/O. It was the ethernet tcp/ip I/O stack.
The stuff folks never want to think about…
To know what to fix, you must measure and benchmark.
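A minimal sketch of that measure-first idea, using Python’s built-in profiler; handle_request and its helpers are hypothetical stand-ins, not code from any real voting platform.

```python
import cProfile
import pstats
import time

def parse_ballot():     time.sleep(0.001)   # stand-ins for real work
def check_duplicates(): time.sleep(0.010)
def write_vote():       time.sleep(0.030)   # the step that turns out to dominate

def handle_request():
    parse_ballot()
    check_duplicates()
    write_vote()

profiler = cProfile.Profile()
profiler.enable()
for _ in range(50):
    handle_request()
profiler.disable()

# Show the slowest calls by cumulative time; tune those first.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```

The point is only that the profile, not intuition, tells you whether the bottleneck is CPU, disk, or the network stack.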

baldanders
January 4, 2010 3:25 pm

E.M.: sorry to double post but between moderation and composition time I had not seen your response.
You’re right that many apps are I/O bound or (more specifically) disk bound. That’s part of why apps that do a lot of writing can be so much pricier. If all your app does is read, and if you can hold your whole db in memory, you’re golden. And machines these days have so much RAM that surprisingly large datasets can be held in memory.
But let’s note that all web apps are potentially I/O bound. You have to have enough throughput to serve the pages, after all ;). My major focus for the last couple of years has been on reducing the amount of overhead required for high-load web apps. That might seem kind of perverse: after all, machines are cheaper than my time. But when you multiply by all of the machines serving dynamic content on the web, my time starts to look pretty cheap.
You don’t have to have a relationship with a telco to provision the kind of bandwidth these guys are talking about. You instead buy it from someone who has already provisioned it. I’ve built sites that had many times their bandwidth requirements, and I’ve been able to just pay people to give me the bandwidth. And it’s not expensive. If they are not able to pay for the bandwidth with the level of traffic they are talking about, then they are not good at business.
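For the read-heavy case described above, a minimal sketch of caching standings in memory so repeated reads rarely touch the disk; the votes table and the standings functions are illustrative assumptions, not an API from the Weblog Awards site.

```python
import sqlite3
import time
from functools import lru_cache

conn = sqlite3.connect("votes.db")
conn.execute("CREATE TABLE IF NOT EXISTS votes (category TEXT, blog TEXT)")

@lru_cache(maxsize=128)
def _standings(category, minute):
    """Hit the database at most once per category per minute."""
    cur = conn.execute(
        "SELECT blog, COUNT(*) FROM votes WHERE category = ? GROUP BY blog",
        (category,),
    )
    return cur.fetchall()

def standings(category):
    # Bucket time into minutes so cached results expire naturally.
    return _standings(category, int(time.time() // 60))

print(standings("Best Science Blog"))
```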

January 4, 2010 3:29 pm

tallbloke (13:46:11) :
“The difference is between the hate of Pharyngula and the humour of WUWT.”
It all hinges on where you find humour, however, I will leave this tangent at this point as I don’t wish to go off topic on this thread.

Håkan B
January 4, 2010 3:56 pm

Folks, think of it: if it had been the Team running into this problem, they’d have pushed up Michael Mann to show you tree rings that by some magic could show the counts while the system was overloaded. These people behaved differently!

deech56
January 4, 2010 4:00 pm

tallbloke (13:46:11) :
“The difference is between the hate of Pharyngula and the humour of WUWT.”
I do find WUWT to be quite humorous.

Dr.T G Watkins(Wales)
January 4, 2010 4:02 pm

EIGHT HUNDRED THOUSAND HITS IN SEVEN(?) DAYS.
Who needs an award ?
Still, the MSM ignore you and the team, but the dam will break and there will be floods of tears.