A Few Tips for OSCON Attendees

If you’re attending the 10th Annual Open Source Convention, I’ve compiled just a few tips for you on this, “day 0” of the event:

  • Don’t check bags. Everything is slower when you check bags, and if you’re packing more than three shirts, you’re crazy: if history is any indicator, you’re going to be bombarded with shirts over the course of the week. One maximum-size (22″ x 14″ x 9″) suitcase and a bookbag with a laptop pocket is all I brought, and I’m confident I’ll have all I need. I’ll report back if things change :)
  • Request a room away from the ice machine. They can be loud. This year my room is the last room at the end of a long hallway. Ahhhhhhhh….
  • Don’t bring toiletries of any kind: you can’t bring much of that on board anyway, and I’d rather avoid the hassle altogether and buy what I need at my destination. Skip the hotel store, though – there’s a dollar store about 2 blocks from the Lloyd Center Doubletree Hotel (on the back end of the Lloyd Center mall), and it probably has everything you’ll need. If not, walk another block north to the Safeway, and you can get just about anything, though I didn’t find any travel-sized stuff.
  • Show up to registration early: I’m leaving shortly for registration. Registration moves pretty quickly even if you go on Monday morning, but on Sunday night (from 5-7pm) there’s a nice, jovial, laid-back mood around the registration areas.
  • When you’re in Portland, know that you’re in an area that is something of a mecca for beer. Even if you don’t like beer, I urge you to join friends and at least have a look at the beers available. You’re in an area where even the hotel bar has an ok beer selection. Saying you don’t like beer is like saying you don’t like food. If “beer” to you means Coors Light (or similar), you have no idea what beer is – but that’s ok, because you’re now in a place that can grant you a PhD in beer snobbery in the span of a weekend. Really. Take advantage of it!! (a hint: many people who “don’t like beer” really just don’t like the bitterness that comes from hops. Ask a bartender for a sample of their finest wheat beer. I’ll bet you’ll be hooked).
  • Don’t stay in your hotel room if you can help it. Engage. Look at the whiteboard that is probably in the registration area as I type this. Find the conference web site, irc channels, wikis, and everything else that you can. 75% of the value of coming to OSCON is finding and meeting people you’ll be in contact with well after you leave. It’s a commercial conference, yes — but it’s a community atmosphere.
  • Plan your day. You can try to plan everything you’re going to attend before you get here, but it probably won’t work very well, because you’ll inevitably hear someone talking about something else and decide to attend that instead. What might work better is if you try to plan the night before — but not after the parties — probably sometime between the last session of the day and dinner. At least have an idea what you’re doing the next day, because parsing the program on-the-fly is, imho, difficult, especially when ten people you know walk by and say hi and stuff.
  • Try to plan lunch in the city. This can be a little difficult, but you can hop on the light rail for free as soon as the conference breaks for lunch, and be downtown in no time. Last time I attended, I only made it out for two lunches downtown, and I’m kind of a foodie, so I would’ve liked to sample more of the local fare. Try to keep away from the chains (you can get that at home) and be adventurous!!

A Quick Look at ElementTree (and a bit about ‘sar’)

I’m working on a new project that will be open sourced if I can ever get it to be generically useful. It’s called “sarviz”, and it’s a visualization tool for output from the “sar” UNIX system reporting utility. I know tools like this exist, but please read on, as I’m looking to do something a bit different from what I’ve seen.

A quick, simple explanation of sar

System administrators typically run sar as a cron job, and each day sar will generate a report that lists the values of various system counters for a specified time interval throughout the day. So you end up with a text file that lists, for example, the cpu iowait value every 10 minutes throughout the day. There are maybe a dozen different categories of counters enabled by default, and more that aren’t (like disk-related counters). Anyway, you wind up with a text file that looks something like this:

23:30:01          CPU     %user     %nice   %system   %iowait    %steal     %idle
23:40:02          all      0.32      0.00      0.32      6.57      0.49     92.29
23:40:02            0      0.32      0.00      0.32      6.57      0.49     92.29
23:50:01          all      0.74      0.00      0.82      7.14      0.55     90.76
23:50:01            0      0.74      0.00      0.82      7.14      0.55     90.76
Average:          all      0.82      0.00      0.72     13.54      0.78     84.14
Average:            0      0.82      0.00      0.72     13.54      0.78     84.14

This is just a small part of one section of the file (this box has only one cpu, which is why the ‘all’ and ‘0’ numbers are the same, btw). The whole file on one server, running with default configurations, is 4000 lines long.

There’s a ton of great information in here, but… it all looks like the above. There’s no graphical output to be had. This is bad, because it would be nice to use this (historical) monitoring output for things like capacity planning, problem tracking, etc. You would, of course, want to couple this type of monitoring with something else that’ll do real-time monitoring, alerts, dependencies, escalation, etc.

So I want to write an application that’ll generate graphs of all of this stuff. Furthermore, I thought it would be cool to do something like what planetplanet does, which is to say that I want sarviz to run as a cron job, parse all of this stuff, and generate static html files, with an index.html that’ll make it really easy to browse this information either by host, by date, by resource… whatever. Later on I can add features to actually do even more useful stuff like longer-term trending of resource usage (by aggregating across various ‘sar’ output files), and more.

Sar is not alone

Sar comes with some friends, and it turns out they can be extremely useful. The best one for my purposes here is called ‘sadf’, and it is used to basically format the sar output to make it more useful for programmatic processing. It can output the information in CSV format, or make it ready for insertion into a relational database, but what I’m currently using for sarviz (and it’s early, so this could change) is the XML output capability. With XML output, I won’t have to deal with parsing out column headers, scanning an entire file for information from a single sar run, dealing with the blank lines sar uses by default to make it easier to read on a console, etc. So with sadf I can get output that looks like this:

<timestamp date="2008-06-15" time="07:10:01" interval="600">
<processes per="second" proc="0.93"/>
<context-switch per="second" cswch="221.50"/>
<cpu-load>
<cpu number="all" user="1.77" nice="0.00" system="0.56" iowait="0.04" steal="0.08" idle="97.55"/>
<cpu number="0" user="1.77" nice="0.00" system="0.56" iowait="0.04" steal="0.08" idle="97.55"/>
</cpu-load> ...

This is a bit nicer to deal with, and I was excited to use Python’s (now built-in) ElementTree module to do something from scratch, after having only dealt with it somewhat abstracted away inside the Python tools for the GData API (which I used to write a command line client for Google Spreadsheets, for example).

Doing Simple Things with ElementTree

Well, as it turns out, I had kind of a hard time getting started doing what I thought were simple things with ElementTree, so I want to post a few examples of how I did them so that I and others have something to refer to online.

The first thing to know about ElementTree is that there are Element objects and ElementTree objects. An ElementTree object is made up of a hierarchical collection of Element objects, and the Element objects are the things you can actually get the attributes from that you’re likely to want. For whatever reason, I was a little confused starting out, because I wanted to get an ElementTree object and then ask that object to “scan the tree and give me all of the ‘time’ attributes of the ‘timestamp’ elements in the tree.” You might be able to do this with a one-liner, but I never found a document that said how.

So here’s how to load in an XML file, parse it, and return all of the timestamp elements in that tree (or, rather, this is how I did it, which seems reasonable):

strudel:sa jonesy$ python
Python 2.5.1 (r251:54863, Jan 17 2008, 19:35:17)
[GCC 4.0.1 (Apple Inc. build 5465)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from xml.etree import ElementTree as ET
>>> tree = ET.parse("sa15.xml")
>>> for ts in tree.findall("host/statistics/timestamp"):
...        isotime = ts.attrib["date"]+"T"+ts.attrib["time"]
...        print isotime


So, I imported the ElementTree module, fed my xml file to a method called “parse()”, and that gives me an ElementTree object. In that tree, I then ask for the timestamp elements which are under the root element at “host/statistics/timestamp”. You can then see that I create an ISO8601-formatted timestamp by asking for the “date” and “time” attributes of the timestamp element, and put a “T” between them. I would’ve used something like “T”.join, but there are other attributes in that element, and I only needed two, so I took the easy way out here instead of creating a list first and then doing the join on the list.
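For what it’s worth, the “scan the tree” one-liner I was looking for turns out to be a findall() call inside a list comprehension. Here’s a self-contained sketch; the XML string below is a trimmed, hypothetical stand-in for real sadf output, so the structure (but not the data) matches my file:

```python
from xml.etree import ElementTree as ET

# Trimmed, hypothetical stand-in for sadf XML output
xml_data = """<sysstat>
  <host nodename="strudel">
    <statistics>
      <timestamp date="2008-06-15" time="07:10:01" interval="600"/>
      <timestamp date="2008-06-15" time="07:20:01" interval="600"/>
    </statistics>
  </host>
</sysstat>"""

# fromstring() parses a string and returns the root Element directly
root = ET.fromstring(xml_data)

# The one-liner: every timestamp as an ISO8601 string
times = [ts.attrib["date"] + "T" + ts.attrib["time"]
         for ts in root.findall("host/statistics/timestamp")]
print(times)
```

Note that fromstring() hands back the root Element rather than an ElementTree, but findall() works the same on both.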

Of course, my real interest in the timestamps isn’t to print them, but to get the statistics for each sar run (represented by a timestamp, since sar records statistics for regular time intervals). So now let’s grab the 1-, 5-, and 15-minute load averages according to sar. I want all of this printed on one line along with the timestamp, because this output is going to be graphed using Timeplot, and that’s how Timeplot wants the data. Here goes:

>>> for ts in tree.findall("host/statistics/timestamp"):
...        isotime = ts.attrib["date"] + "T" + ts.attrib["time"]
...        for q in ts.findall("queue"):
...             qstat = [isotime, q.attrib["ldavg-1"], q.attrib["ldavg-5"], q.attrib["ldavg-15"]]
...             print ",".join(qstat)


The thing to note here, in case it escaped your eyeball, is that the second call to ‘findall’ takes a path relative to the ‘ts’ element rather than the ‘tree’ object.

This data is ready for Timeplot, and now it’s just a matter of somehow generating the files with the appropriate HTML and JavaScript in them to present the information. I have absolutely no clue how to easily use dynamic variables from Python to generate static HTML and JavaScript, so what I have in that area of my code is not something I want to share, out of sheer embarrassment. If someone has done that, let me know. PlanetPlanet does not output JavaScript, best I can tell, but it does output HTML, so I’ll be checking that part of the code out (probably uses BeautifulSoup, I guess?). Input on that is hereby solicited!
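Since people will probably ask what I’ve tried: the least embarrassing stdlib-only approach I know of is string.Template. A minimal sketch (the page skeleton, div id, and variable names here are made up for illustration, not my actual code):

```python
from string import Template

# Made-up skeleton for a Timeplot-style page; $title and $data are
# placeholders that substitute() fills in.
page = Template("""<html>
<head><title>$title</title></head>
<body><div id="my-timeplot">$data</div></body>
</html>""")

# Rows shaped like the join() output above: timestamp,ldavg-1,ldavg-5,ldavg-15
rows = ["2008-06-15T07:10:01,0.12,0.10,0.08",
        "2008-06-15T07:20:01,0.30,0.15,0.09"]

html = page.substitute(title="Load averages", data="\n".join(rows))
print(html)
```

From there it’s just a matter of writing html out to a static file from the cron job. Crude, but no dependencies.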

Show Me Your Python SysAdmin One-Liners!

Ah, the lazyweb. Today, I’m putting together content for a class I’m teaching on basic Linux administration, but during my meeting with a group of trainees to determine the scope of the course, they requested that I completely skip any coverage of “perl -e” one-liners, and show them the Python equivalents. Of course, I found this page, which has a few, but I figured I’d put out the call for more, just to get a good collection of ideas, and a higher-level idea of how people are using Python for system administration for ‘quick-n-dirty’ jobs. If I get a bunch of interesting ones, I’ll collect them all somewhere for easy reference (or add them to the wiki linked above?), so link this callout wherever pythonistas can be found.

Oddly enough, my experience with Python has me going in the completely opposite direction: I don’t write as many one-liners as I did with perl. If it’s not obvious to me how to do something with sed, awk, grep, find, xargs, and the “regular” tools, I write a Python script. I’ve tried remembering some things I used nasty Perl one-liners for, but I guess they were sufficiently nasty that I’ve forgotten them.
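To seed the collection, here are the sorts of quick-n-dirty jobs I have in mind, written out as plain Python rather than crammed onto one line (on the command line you’d wrap each in python -c '...'; the sample data here stands in for sys.stdin):

```python
# Sum a column of numbers (a perl -ne staple)
lines = ["10\n", "32\n"]  # stands in for sys.stdin
total = sum(float(line) for line in lines)
print(total)

# Pull matching lines out of a log (a grep-ish filter)
loglines = ["ok\n", "ERROR disk full\n", "ok\n"]
errors = [line for line in loglines if "ERROR" in line]
print("".join(errors))
```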

By the way, if you’re a sysadmin who writes their tools using Python, do consider giving a talk at this year’s PyWorks conference in November!

Ugh – Syndication Gone Bad

My apologies to my friends at Planet Sysadmin!! I posted a personal post, did not categorize it as a “systems” post, and it wound up on planet sysadmin anyway. I’m really sorry about that. I *would* try to keep posts like this from being syndicated at *all*, but I also belong to local technology user groups, and those folks actually have some interest in posts like this (as the comments will show). I would guess that the solution is for PlanetSysadmin to subscribe only to the systems category feed, but if there’s something I can do on my end, I’m happy to oblige.

“High-end” Wegman’s is Cheaper!

I am shocked (shocked!) to report that, after doing some comparison shopping today, it turns out that the Wegman’s in my area is *cheaper* for *MANY* things than my local Stop & Shop! WTF?!

Today, my wife and I took our Stop & Shop receipt from our shopping trip two days ago over to Wegman’s, because I was starting to get really annoyed at Stop & Shop’s knack for discontinuing stuff I regularly buy, and because I had a theory that shopping at Wegman’s might actually be no more expensive than shopping at Stop & Shop.

In a sense, I was wrong. It isn’t just “no more expensive”, it’s actually quite a bit cheaper for everyday items that we buy all the time! Here are a few examples:

  • Baby red potatoes: We buy them in a little plastic container at Stop & Shop – 1.5lb for $3.99. Wegman’s sells them loose for $1.89/lb, so our 1.5 lbs would cost over $1 less at Wegman’s! I don’t honestly recall seeing baby reds loose at Stop & Shop. I’ll look next time I go.
  • Kraft Singles at Stop & Shop were $3.69. The same size at Wegman’s was $2.79.
  • Milk at both places was the exact same price.
  • I buy the Tropicana Orange/Tangerine juice. At Stop & Shop it’s $3.49. At Wegman’s it’s $2.79.
  • Welch’s Grape Juice at Stop & Shop was $3.59. Wegman’s had it for $3.19.
  • Plums and Peaches were each $.40/lb cheaper at Wegman’s, and were notably better looking as well, not to mention the fact that the selection of produce at Wegman’s kicks Stop & Shop’s ass. Corn looked really expensive to me at 5 for a buck, but we haven’t bought any yet this year. Last year corn at Stop & Shop was 10 for a buck.
  • Thomas’s English Muffins were $3.99 at Stop & Shop, and $3.19 at Wegman’s.
  • Are you seeing a pattern? We’re not talking about saving 10 or 15 cents here. We’re talking about upwards of $.50 for lots of stuff. Shocked I tell you!!

So, of all of the things we checked, I think there was one thing that was more expensive: white turnips. It’s so rare that we buy them that we discounted it, but my wife did mention that they looked much better, and were much bigger than the ones at Stop & Shop.

Also note that we didn’t compare meat prices. We’ve been doing a pretty good job of cutting our meat consumption, really, and we have a lot of local farms that sell meat as well – there are lots and lots of meat options in the area we live in. I am an avid griller and smoker, and I read books like “How to Cook Meat” in my spare time. I’m no meat scientist, but I talk to butchers, and they are typically really happy to see someone under the age of 50 who actually knows what pork shoulder is and how to cook osso buco. Wegman’s, I can tell you, has superior meat, but the prices for most of it are pretty high. I go there if I’m entertaining company or if I can’t get the cut I want at Stop & Shop. In general, Stop & Shop’s meat section is nothing fantastic compared to anywhere else. They don’t have a very big selection, though they do have some discount prices for some things, like huge packages of chicken thighs and stuff.

While I stay on top of meat, my wife stays on top of produce. She bought lemon basil at a farmer’s market this morning and the lady said “most people don’t even know what this is”. Same with fiddlehead ferns, which we love (if only they were in season longer).

I’ll have to follow up on this in the future. There were some things (like dog food) that I *believe* were significantly cheaper at Wegman’s (like $2 cheaper), but my memory could be faulty. The other nice thing about Wegman’s is that they carry most, if not all, of the things we used to buy at Stop & Shop until they discontinued them.

Useful stuff – 2008 – first half

Having a Google account is sometimes useful in ways you hadn’t planned for. For example, at a few different employers I’ve been at, I’ve had to prepare for reviews by providing a list of accomplishments to my supervisor. One decent tool for generating this list is email, though it can take some time. Another useful tool is the Web History feature of your Google account.

Though this isn’t necessarily indicative of everything I’ve accomplished in the first half of 2008 per se, it’s definitely indicative of the types of things I’ve generally been into so far this year, and it’s interesting to look back. What does your Web History say?

  • Gearman – this is used by some rather large web sites, notably Digg. It reminds me a little of having Torque and Maui, but geared toward more general-purpose applications. In fact, it was never clear to me that PBS/Maui couldn’t actually do this, but I didn’t get far enough into Gearman to really say that authoritatively.
  • How SimpleDB Differs from a Relational Database – Links off to some very useful takes on the “cloud” databases, which are truly fascinating creatures, but have a vastly different data management philosophy from the relational model we’re all used to.
  • Reblog – I found this in the footer of someone’s blog post. It’s kinda neat, but to be honest, I think you can do similar stuff using the Flock browser.
  • Google Finance APIs and Tools – did I ever mention that I had a Series 7 & 63 license two months after my 20th birthday? I love anything that I can think for very long periods of time about, where there’s lots and lots and LOTS of data to play with, where you can make correlations and answer questions nobody even thought to ask. Of course, soon after finding this page I found the actual Google Finance page, which answers an awful lot of potential questions. The stock screener is actually what I was looking to write myself, but with the data freely available, I’m sure it won’t be long before I find something else fun to do with it. I’m not a fan of Google’s “Feeds” model, but I’ve dealt with it before, and will do it again if it means getting at this data.
  • Bitpusher – it was recommended to me as an alternative to traditional dedicated server hosting. Worth a look.
  • S3 Firefox Organizer – This is a firefox plugin that provides an interface that looks a lot like an FTP GUI or something, but allows you to move files to and from “buckets” in Amazon’s S3 service.
  • Boto – A python library for writing programs that interact with the various Amazon Web Services. It’s not particularly well-documented, and it has a few quirks, but it is useful.
  • OmniGraffle – A Visio replacement for Apple OS X. I like it a lot better than Visio, actually. It has tons of contributed templates. You shouldn’t have any trouble making the switch. A little pricey, but I plunked down the cash, and have not been disappointed.
  • The Python Queue Module according to Doug – Doug Hellmann’s Python Module of the Week (PyMOTW) should be published in dead tree form some day. I happen to have some code that could make better use of queuing if it were a) written in Python, and b) used the Queue module. I was a little put off by the fact that every single tutorial I found on this module assumed you wanted to use threading, which I actually don’t, because I’m not smart enough…. though the last person I told that to said something to the effect of “the fact that you believe that means you’re smart enough”. Heh.
  • MySQL GROUP modifiers – turns out this isn’t what I needed for the problem I was trying to solve, but the “WITH ROLLUP” feature was new to me at the time I found it, and it’s kinda cool.
  • WordPress “Subscribe to Comments” plugin – Baron suggested that it would be good to have this, and I had honestly not even thought about it. But looking around, this is the only plugin of its kind that I found, and it’s only tested up to WP 2.3x, and I’m on 2.5x. This is precisely why I hate plugins (as an end user, anyway. Loghetti supports plugins) ;-)
  • Lifeblogging – I had occasion to go back and flip through some of the volumes of journals I’ve kept since age 12, wondering if it might be time to digitize those in some form. I might digitize them, but they will *not* be public I don’t think. Way too embarrassing.
  • ldapmodrdn – for a buddy who hasn’t yet found all of the openldap command line tools. You can’t use ‘ldapmodify’ (to my knowledge) to *rename* an entry.
  • Django graphs – I haven’t yet tried this, because I’m still trying to learn Django in what little spare time I have, but it looks like there’s at least some effort towards this out there in the community. I have yet to see a newspaper that doesn’t have graphs *somewhere* (finance, sports, weather…), so I’m surprised Django doesn’t have something like this built-in.
  • URL Decode UDF for MySQL – I’ve used this. It works really well.
  • Erlang – hey, I’m game for anything. If I weren’t, I’d still be writing all of my code in Perl.
  • The difference between %iowait in sar and %util in iostat - I use both tools, and wanted the clarification because I was writing some graphing code in Python (using Timeplot, which rocks, by the way), and stumbled upon the question. Google to the rescue!
  • OSCON ’08 – I’m going. Are you going? I’m also going to the Oregon Brewers Festival on the last day of OSCON, as I did in ’06. Wonderful!
  • Explosion at one of my hosting providers – didn’t affect me, but… wow!
  • hypertable – *sigh* someday…when there’s time…
  • Small-scale hydro power – Yeah, I’m kind of a DIYer at heart. I do some woodworking, all my own plumbing, painting, flooring, I brew my own beer, I cook, I collect rain in big barrels, power sprinklers using pool runoff to give my lawn a jumpstart in spring… that kind of stuff. One day I noticed water coming out of a downspout fast enough to leap over one of my rain barrels and thought there must be some way to harness that power. Sadly, there really isn’t, so I did some research. It’s non-trivial.
  • You bet your garden – I also do my own gardening and related experiments.
  • RightScale Demo – WATCH YOUR VOLUME – a screencast showing off RightScale’s features. Impressive considering the work it would take me, a lone admin, to set something like this up. The learning curve involved in effectively/efficiently managing/scaling/monitoring/troubleshooting EC2 is non-trivial.
  • Homebrew Kegerator – Maybe if this startup is bought out I can actually afford this thing to put my homebrewed beer in. The 30-year-old spare fridge in the basement is getting a little… gamey.
  • The pound proxy daemon – I use this. It works well enough, but I’ve crashed it under load, too. I’ve also had at least one hosting provider misconfigure it on my behalf, and I had to go and tell them how to fix it :-/
  • Droid Sans Mono – a fantastic coding font. Installing this font is in my post-install routine for all of my desktops.
  • Generator tricks for systems programmers – David Beazley has made available a lot of Python source code and presentation slides from what I imagine was a great talk (if you’re a systems guy, which I am).
  • The Wide Finder Saga – I found this just as I was writing Loghetti. There are still some things in Mr. Lundh’s code that I haven’t implemented, but it was a fantastic lesson.
  • Using gnu sort for IP addresses – I’ve used sort in a lot of different ways over the years… but not for IP addresses. This is a nice hack for pulling this off with sort, but it doesn’t scale very well when you have millions of them, due to the sort utility’s ‘divide and conquer’ method of sorting.
  • Writing an Hadoop/MapReduce Program in Python – this got me over the hump.
  • Notes on using EC2/S3 – This got me over some other small humps
  • BeautifulSoup – found while searching for the canonical way to screen scrape with Python. I’d done it a million times in Perl, and you can do it with httplib and regex and stuff in Python if you want, but this way is at least a million times nicer.
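Incidentally, the IP-address sort mentioned above is one of those places where dropping into Python beats wrestling with gnu sort’s field options: socket.inet_aton() packs a dotted quad into four big-endian bytes, so the packed values make a perfect sort key.

```python
import socket

ips = ["10.0.0.2", "9.1.1.1", "10.0.0.10"]

# inet_aton() packs each dotted quad into 4 big-endian bytes, so the
# packed values compare in numeric (not lexicographic) IP order
ips.sort(key=socket.inet_aton)
print(ips)
```

A plain string sort would have put 10.0.0.10 before 10.0.0.2; this one gets it right.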

Well, that’s a decent enough summary I guess. As you can see, I’ve been doing a good bit of Python scripting. Most of my code these days is written in Python instead of Perl, in part because I was given the choice, and in part because Python fits my brain and makes me want to write more code, to push myself more. I’ve also been dealing with things involving “cloud” computing and “scalability” — like Hadoop, and EC2/S3. I haven’t done as much testing of the Google utility computing services, but I’ve used their various APIs for some things.

So what’s in your history?

Do I Even Care About the iPhone 3G?

Steve Jobs is one of the best presenters you could ever hope to see. He’s great at tapping into that part of your brain that makes you just want whatever it is he’s holding. But this time, it was a little different.

You see, I already have an iPhone. I bought one in February. I didn’t buy the very first model that came out because it was lacking some stuff that was really important to me – most notably, it only had Apple apps, which was severely limiting, and IMAP support was limited to Yahoo! accounts, which was absurd. With those two obstacles out of the way, I found it useful enough to spend my employer’s money on, but not my own. In the end, it was a business decision, and I still think an iPhone is a better deal than a Blackberry hands down. Especially the new 3G, which has addressed some “enterprise” concerns. Thing is, I don’t care about any of that.

I want some really really really simple things that I haven’t heard anything about, and I want one thing that is perhaps slightly harder but essential.

The slightly-harder-but-essential thing is voice commands. I can hardly believe that we’re on the third generation of a phone without having voice commands. You can get a $30 Nokia made in 2002 that has voice commands for crying out loud. Without voice commands, it’s unclear to me how this phone is useful in a hands-free environment at all. Have I missed a feature somewhere? Is there something I can plug my iPhone into while I’m driving that will parse the voice commands and do the right thing with the iPhone? I know that if you have an Acura TL the car itself parses the voice commands, but I don’t know if there’s some generic thing that *doesn’t* cost $40k that’ll do the same basic job? Anyone?

The other stuff consists mainly of small application features:

  • The ability to bookmark or otherwise somehow save “Directions” in the Maps application. This way, if I’m driving, following the directions in Maps, and need to search for a gas station or coffee shop, I don’t then have to go back and punch in the information again to get my directions back.
  • Why the heck doesn’t mail let you read in landscape mode?!?!?!
  • I’d *REALLY* like to be able to send and receive photos in text messages. I don’t use it often, but when you need it, you need it.
  • The ‘.com’ shortcut should pretty much *always* be visible on the keyboard.
  • Make email alerts a per-account setting instead of only alerting for the default or for all accounts or whatever it is the iphone does now. Let me treat email accounts like phone contacts and assign different alert settings for each account just like I set different ring tones for different contacts.
  • Let me bookmark phone numbers so I can just hit a button on my home screen to dial them (in the absence of voice commands).
  • Make the bluetooth support do some neat trick that’ll make it be actually worth turning on.
  • I do a lot of system administration, and I’d love a usable, locally-installed ssh client that I don’t have to perform surgery to install. I don’t want to hack my phone, really. I also refuse to use a web interface to access an ssh client. If you’re doing that, stop right now, and go change every password you have.

On the development front, it would actually be really nice if they supported maybe *one* parsed scripting language for iPhone development. Even if they did like AppEngine and provided a somewhat stripped version of Python it would be something I could use. But that’s a rant for another day. :)

Cloud computing hype overload

I’ve been working with what I used to call “utility computing” tools for about 6-9 months. However, for about the past 2 months, I’ve been seeing the term “cloud computing” all over the place, and there is so much buzz surrounding it that it’s reaching that magical point best described using Alan Greenspan’s words: “Irrational Exuberance”.

When Alan Greenspan used those words to describe the attitudes of investors toward the markets, what he was basically saying was that there were people who didn’t really know what they were doing, putting more money than they ought, into things they knew relatively little about. Further, he was saying that the decisions people were making with regards to where to put their money were a) bad, or at least b) not based on sound reasoning, or the ‘facts on the ground’.

This, I think, is where we are at with “cloud computing”. The blog post that put me over the edge is this one, for the record. I read Sean’s writings often enough, but this one strikes me as being a little off, a little sensationalistic, not based in reality, and a little misleading.

Maybe he just didn’t put enough qualifiers in there. His post might make more sense if he limited its scope and provided more facts, but I guess it’s just an opinion piece so he decided not to go that route, and that’s his prerogative I guess.

By limiting the scope, I mean he should’ve realized that there are millions of web sites currently scaling quite nicely without the use of cloud computing. In addition, some of the new ones that are having issues are also not using cloud computing, and when they hit bumps in the road, they make it through, and the great thing is that they also share their stories, and those stories indicate that a cloud (or, the current cloud offerings) wouldn’t have helped much (there’s lots of other evidence of that too). What would’ve helped is if they had paid more attention to:

  • monitoring
  • initial infrastructure design
  • their own app code and app design
These aren’t issues that cloud computing takes away. What’s more, cloud computing is something of a moving target, many of the solutions aren’t as mature as you’d want them to be if you’re betting the house on them (EC2 only recently got “elastic IPs” and persistent storage is still not there, AppEngine only supports Python and has some rather severe limitations on functionality of your app), and they introduce a potentially large learning curve both in terms of how the individual services work, as well as how the heck to make your app fit into the cloud solution of your choosing. Think SimpleDB scales? Well, it does, but it’s also not a relational database, and doesn’t guarantee…. much of anything, including data integrity. You can’t interface with it using the drivers, interfaces, and language you’re used to using, either, because it’s not just a mysql wrapper or something – it’s a new beast entirely. Enjoy!
And all of this is without mentioning that some people have absolutely no choice but to scale without the help of the cloud, because corporate policy, common sense, or other forces mean their data can’t pass through non-corporate-owned machines and/or networks. Sean also omits any mention of the cost factor, which is often a huge driver in getting startups to use these services, but may not really make the move “worth it” in some cases.
In short, all I’m really saying is that it’s disingenuous to claim the future of web computing is “the cloud” because “only the cloud can scale”. That’s just silly. Non-cloud infrastructures can scale just fine, depending on the balance between the demands of the application and the funds available. The future of web computing will probably involve shared, utility computing architectures, but it doesn’t depend on cloud computing.

This is how I want all project web sites to look…

My brain has a set of rules that software project websites get tested against. Each time a project site fails to comply with a rule, I get ever-so-slightly more annoyed, and ever-so-slightly less likely to use the software in question (and if alternatives exist, maybe not so “slightly”).

I thought I’d list these rules because I suspect others are like me: we’re extremely busy, work too many hours, and are involved with too many projects to spend hours trying to figure out what some piece of code someone mentioned once in IRC actually does.

But first, know that this site actually complies with just about every single rule there is, so it’s a great template to work from if your site needs brushing up. 

  • First and foremost, tell me, right away, what this thing does, the problem it solves, and (at a high level) the approach taken to solve the problem. 
  • Tell me the language it’s written in. If it’s open source, and it’s written in a language I hack in, *and* it solves a problem I need solved, maybe I can help out, or be encouraged that if something flakes, I can fix it, or at least speak the developer’s language if I have to describe the issue to the folks upstream. 
  • Tell me what OS is required, and preferably what OS/version is tested with. 
  • Give me a full list of dependencies with links to go get them, or give me a link to “Dependencies”, or to an install document that lists them. 
  • Tell me the current version, and the date it was released. Beta versions and dates are nice too. If there is a timed release schedule, tell me that. 
  • Keep the information up-to-date. I shouldn’t have to wonder if your software is going to work under OS X 10.5 or RHEL 5, or if your plugin will work under the latest version of Drupal/Django/Moodle/MySQL/Joomla/Firefox…
  • BONUS: a very simple architectural drawing that shows me exactly what components make up the whole. The one for CouchDB is as good as any I’ve ever seen (assuming it’s accurate). 
  • BONUS: if screenshots are applicable, use them. They communicate a million times more information using a million times less real estate and bandwidth. They can communicate things you didn’t even know you were communicating. Of course, that could be good or bad, but it keeps you honest, and customers like that :-) 
For kicks, here are a few things I see sometimes on project web sites that I wish they *wouldn’t* do: 
  • DON’T require me to understand how something like Trac or some other tool works in order to get at the information about your software project. Navigation should not assume I’m a developer, it should assume I’m a prospective user who will leave if they can’t read the menu. If you want to use a project management tool to do your work, more power to you, but as a prospective customer, it’s none of my business — don’t drag me into your personal hell! I just want the software! 
  • DON’T be satisfied with the Sourceforge page as your project’s “homepage”. The problem with doing that is twofold: first, Sourceforge kinda sucks, and occasionally becomes unusable. Second, it doesn’t provide a simple way for you to give me information, nor a simple way for me to find it even if you produce said information using their tools. Also, it’s bad form. If you haven’t committed to the project enough to give it a proper site, well… 
  • DON’T put some kind of “Coming Soon” page with a bunch of information with *NO DATE*, because I’m going to go ahead and assume that this thing is vaporware, and that the “coming soon” post is 3 years old. Nothing in this world is more annoying than time-sensitive information being plastered on a web site with no date. 
  • DO NOT — I repeat — DO NOT force me to download a 20MB tarball to get at the documentation. That’s not how things work. I get to see what I’m downloading *before* I download it. You’ll save me some time, and save yourself some bandwidth, and you’ll have more accurate statistics about how many people download and use your software, because the numbers won’t be skewed by folks who were forced to download the package to get at the documentation. 
All of that said, I probably won’t use CouchDB, even though I love their project’s site. JavaScript makes my brain explode, so mixing it with something like a database, which to me is the digital embodiment of sanity itself, is… insane. But if you’re someone who can deal with this concoction, I encourage you to check out CouchDB; at the very least, you can figure out whether it might be a fit for you without a single click away from their home page. That just rocks.