Category Archives: Scripting

What I’ve Been Up To

Historically, I post fairly regularly on this blog, but I haven’t been lately. It’s not for lack of anything to write about, but rather a lack of time to devote to blogging. I want to post at greater length about some of the stuff I’ve been doing, and I have several draft posts, but I wanted to list what I’ve been up to for two reasons:

  1. I use my blog as a sort of informal record of what I accomplished over the course of the year, hurdles I ran into, etc. I also sometimes use it to start a dialog about something, or to ‘think out loud’ about ideas. So it’s partially for my own reference.
  2. Someone might actually be interested in something I’m doing and want to pitch in, fork a repo, point me at an existing tool I’m reinventing, give me advice, or actually make use of something I’ve done or something I’ve learned and can share.

PyCon 2013

I’m participating for the third year in the Program Committee, and anywhere else I can help out (time permitting) with the organization of PyCon. It’s historically been a fantastic and really rewarding experience that I highly (and sometimes loudly) recommend to anyone who will listen. Some day I want to post at greater length about actual instances where being really engaged in the community has been a win in real, practical ways. For now, you’ll have to take my word for it. It’s awesome.

I also hope to submit a talk for PyCon 2013. Anyone considering doing this should know a couple of things about it:

  1. Even though I participate in the Program Committee, which is a completely volunteer committee that takes on the somewhat grueling process of selecting the talks, tutorials, and poster sessions, it’s pretty much unrelated to my chances of having my talk accepted. In other words, submitting a talk is as daunting for me as it is for anyone. Maybe more so.
  2. Giving a talk was a really rewarding experience, and I recommend that anyone give it a shot.

I just published a really long post about submitting a talk to PyCon. It’s full of completely unsolicited advice and subjective opinions about the do’s and don’ts of talk submission, based on my experiences both as a submitter of proposals and as a member of the Program Committee.

Python Cookbook

Dave Beazley and I are really rolling with the next edition of the Python Cookbook, which will cover Python 3 *only*. We had some initial drama with it, but the good news is that Dave and I have shared a common vision for the book since just about day one. That vision hasn’t changed, and O’Reilly hasn’t forced our hand to change it, so the book should be a really good reflection of it when it’s actually released. I should note, however, that the next edition will represent a pretty dramatic departure from the form and function of previous editions. I’m excited for everyone to see it, but that’s going to have to wait for a bit. It’s still too early to talk about an exact release date – I won’t know that for sure until the fall – but I would expect it to be out at PyCon 2013.

PyRabbit

I’ve blogged a bit about pyrabbit before: it’s a Python client for talking to RabbitMQ’s RESTful HTTP Management API. So, it’s not for building applications that do message passing with AMQP — it’d be more for monitoring, polling queue depths, etc., or if you wanted to build your own version of the browser-based management interface to RabbitMQ.
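
For a feel for the kind of monitoring code pyrabbit is aimed at, here’s a minimal sketch. `queues_over_threshold` is a made-up helper name for this post, and it assumes RabbitMQ’s management plugin is enabled; the dicts it iterates over are the queue objects the management API returns, which include ‘name’ and ‘messages’ fields.

```python
def queues_over_threshold(client, vhost, limit):
    """Return the names of queues in `vhost` holding more than `limit` messages.

    `client` can be a pyrabbit.api.Client; all this sketch actually needs
    is a get_queues(vhost) method returning the management API's queue dicts.
    """
    return [q['name']
            for q in client.get_queues(vhost)
            if q.get('messages', 0) > limit]

# Against a live broker (host and credentials here are placeholders):
#   from pyrabbit.api import Client
#   client = Client('localhost:55672', 'guest', 'guest')
#   print(queues_over_threshold(client, '/', 1000))
```

This is exactly the monitoring/polling use case described above, as opposed to AMQP message passing, which pyrabbit doesn’t do.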

Pyrabbit is actually being used. Last I looked, kombu was actually using it, and if memory serves, kombu is used in Celery, so pyrabbit is probably installed on more machines than I’m aware of at this point. I also created a little command shell program called bunnyq that will let you poke at RabbitMQ remotely without having to write any code. You can create & destroy most resources, fire off a message, retrieve messages, etc. It’s rudimentary, but it’s fine for quick, simple tests or to validate your understanding of the system given certain binding types, etc.

I have a branch wherein I port the unit tests for Pyrabbit to use a bit of a different approach, but I also need to flesh out more parts of the API and test it on more versions of RabbitMQ. If you use Pyrabbit, you should know that I also accept pull requests if they come with tests.

Stealth Mode

Well, ‘stealth’ is a strong word. I actually don’t believe much in stealth mode, so if you want to know, just ask me in person. Anyway, between 2008 and 2012 I was involved in startups (both bought out, by the way! East Coast FTW!) that were very product-driven and very focused on execution. I was lucky enough to answer directly to the CEO of one of those companies (AddThis) and directly to the CTO of the other (myYearbook, now meetme.com), which gave me a lot of access and insight into the mechanics, process, and thinking behind how a product actually comes to be. It turns out I really love certain aspects of it that aren’t even necessarily technical. I find the execution phase really exciting, and the rollout phase almost overwhelmingly so.

I’ve found myself now with an idea that is really small and simple, but just won’t go away. It’s kind of gnawing at me, and the more I think about it, the more I think that, given what I’ve learned about product development, business-side product metrics, and turning stories into an execution plan, on top of my experience with software development, architecting for scalability, cloud services, and tools for building distributed systems, I could actually do this. It’s small and simple enough for me to get a prototype working on my own, and awesome enough to be an actual, viable product. So I’m doing it. I’m doing it too slowly, but I’m doing it.

By the way, the one thing I completely suck at is front end design/development. I can do it, but if I could bring on a technical co-founder of my own choosing, that person would be a front end developer who has pretty solid design chops. If you know someone, or are someone, get in touch – I’m @bkjones on Twitter, and bkjones at gmail. I’m jonesy on freenode. I’m not hard to find :)

In and Out of Love w/ NoSQL

I’ve recently added Riak to my toolbelt, next to CouchDB, MongoDB, and Redis (primarily). I originally thought Riak would be a good fit for a project I’m working on, but I’ve grown more uncomfortable with that notion as time has passed. The fact of the matter is that my data has relationships, and it turns out that relational databases are actually a really good fit in terms of the built-in feature set. The only place they really stink is on the operations side, but it also turns out that I have, like, several years of experience doing exactly that! Where I finally lost patience with NoSQL for this project was a huge contradiction that I never hear anyone talk about. You know the one: the NoSQL crowd screams about how flexible everything is and how it really fits in with the “agile” mindset, and then another doc in the same wiki strongly drives home the message that if you aren’t 100% sure what the needs of your app are, you should really get a grasp on that up front.

Uhh, excuse me, but if I’m iterating quickly on an app, testing in production, iterating on what works, failing fast, and designing in the direction of, and in direct response to, my customers, HOW THE HELL DO I KNOW WHAT MY APP’S NEEDS ARE?

So, what I’m starting with is what I know for sure: my data has relationships. I’m experienced enough with NoSQL solutions to understand my options for modeling them & enforcing the relationships, but when the relational aspect of the data is pretty much always staring you in the face and isn’t limited to a small subset of operations that rely on the relationships, it seems like a no-brainer to just use the relational database and spend my time writing code to implement actual features. If I find some aspect of the data that can benefit from a NoSQL solution later, well, then I’ll use it later!
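
To make the point concrete with the tiniest possible example (the schema and data here are invented, with sqlite3 standing in for whatever relational database you’d actually use): the relationship is declared once, and then a JOIN does the work, with no hand-rolled reference handling in application code.

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY,
                        user_id INTEGER REFERENCES users(id),
                        title TEXT);
""")
conn.execute("INSERT INTO users VALUES (1, 'jonesy')")
conn.execute("INSERT INTO posts VALUES (1, 1, 'What I''ve Been Up To')")

# The relationship is one JOIN away; the database enforces and
# traverses it, instead of the application modeling it by hand.
row = conn.execute("""
    SELECT users.name, posts.title
    FROM posts JOIN users ON posts.user_id = users.id
""").fetchone()
print(row)
```

In a NoSQL store, that same lookup would mean either denormalizing the user into every post or doing the join yourself in code.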

Unit Testing Patterns

Most who know me know I’m kind of “into” unit testing. It’s almost like a craft unto itself, and one that I rather enjoy. I recently started a new job at AWeber Communications, where I’m working on a next-generation awesome platform to the stars 2.0 ++, and it’s all agile, TDD, kanban, and all that. It’s pretty cool. What I found in the unit tests they had when I got there were two main things:

First, the project used bare asserts, and used dingus in “mock it all!” mode. Combined, this led to tests that were mostly effective, but not very communicative in the event of failure, and they were somewhat difficult to reason about when you read them.

Second, they had a pretty cool pattern for structuring and naming the tests that gave *running* them and viewing the output a more behavioral feel, which I thought was pretty cool and looked vaguely familiar. Later I realized it was familiar because it was similar to the very early “Introducing Behavioral Driven Development” post I saw a long time ago but never did anything with. If memory serves, that early introduction did not involve a BDD framework like the ones popping up all over github over the past few years. It mostly relied on naming to convey the meaning, used standard tools “behind the curtain”, and was pretty effective.

So long story short, those tests have mostly been ported to use the mock module, and inherit from unittest2.TestCase (so, no more bare asserts). The failure output is much more useful, and I think the pattern that’s evolving around the tests now is unfinished but starting to look pretty cool! In the process, I also created a repository for unittest helpers that currently only contains a context manager that you feed a list of things to patch, and it’ll automatically patch and then unpatch things after the code under test is run. It has helped me start to think about building tests in a more consistent fashion, which means reading them is more predictable too, and hopefully we spend less time debugging them and less time introducing new developers to how they work.
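
The helper in that repository boils down to something like the following from-memory sketch (`patched` is just my name for it here; the real helper may differ in details):

```python
from contextlib import contextmanager
from unittest import mock  # at the time, this was the standalone `mock` package


@contextmanager
def patched(*targets):
    """Patch every dotted name in `targets`, yield the mocks in order,
    and guarantee everything gets unpatched after the code under test runs."""
    patchers = [mock.patch(target) for target in targets]
    mocks = [p.start() for p in patchers]
    try:
        yield mocks
    finally:
        for p in patchers:
            p.stop()

# Usage in a test (names under patch are whatever your code under test uses):
#   with patched('myapp.db.connect', 'myapp.cache.get') as (db, cache):
#       ...exercise the code under test...
```

The win is consistency: every test patches and cleans up the same way, so reading one test tells you how they all work.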

PyCon Talk Proposals: All You Need to Know And More

Writing a talk proposal needn’t be a stressful undertaking. There are two huge factors that seem to stress people out the most about submitting a proposal, and we’re going to obliterate those right now, so here they are:

  1. It’s not always obvious how a particular section of a proposal is evaluated, so it’s not always clear how much/little should be in a given section, how detailed/not detailed it should be, etc.
  2. The evaluation and selection process is a mystery.

What Do I Put Here?

Don’t fret. Here, in detail, are all of the parts of the proposal submission form, and some insight into what’s expected to be there, and how it’s used.

Title

Pick your title with care. While the title may not be cause for the Program Committee to throw out your proposal, you should consider the marketing aspect of speaking at a conference. For example, there are plenty of conference-goers who, in a mad dash to figure out the talk they’ll attend next, will simply skip a title that requires manual parsing on their part.

So, a couple of DOs and DON’Ts:

  • DOs

    • DO ensure that your title targets the appropriate audience for your talk
    • DO keep the title as short and simple as possible
    • DO ensure that the title accurately reflects what the talk is about
  • DON’Ts

    • DON’T have a vague title
    • DON’T omit key words that are crucial to understanding who the talk is for
    • DON’T create a title that’s too long or wordy

So, in short, keep the title short and clear. There’s no hard and fast rule regarding length, of course, but consider how few best-selling book titles have more than, say, 7 words. It’s happened, but it’s more the exception than the rule. If you feel you need a very long title to meet the title’s goals, describe your proposal to some friends or coworkers, and when they say “So… how to do ‘x’ with ‘y’, basically, right?”, that’s actually your title. Be gracious and thank them.

Within your concise title, you should absolutely make certain to target your audience, if your audience is a very specific subset of the overall attendee list. For example, if you’re writing a proposal about testing Django applications, and the whole talk revolves around Django’s built-in testing facilities, your title pretty much has to say “Django” in it, for two reasons:

  1. If I’m a Django user, I really want to make sure I’ve discovered all of the Django talks before I decide what I’m doing, and if your title doesn’t say “Django” in it, lots of other ones do. A reasonable person will expect to see a key word like that in the title.
  2. If I’m *not* a Django user and show up to your “Unit Testing Web Applications” talk, only to discover 10 minutes into a 25-minute talk that I’ll get nothing out of it, I’m going to really be peeved.

Finally, unless you totally and completely know what you’re doing, DO NOT use a clever play-on-words title with not a single technology-related word in the whole thing. There are several issues with doing this, but probably the most obvious ones are:

  1. You’re not your audience: just because you get the reference and think it’s a total no-brainer, it’s almost guaranteed that 95% of the attendees will not get it when quickly glancing over a list of 100 talk titles.
  2. You’re basically forcing the reader to read the abstract to see if they’ve even heard of the technology your talk is about. Don’t waste their time!

Category

The ‘Category’ form element is a drop-down list of high-level categories like ‘Concurrency’, ‘Mobile’, and ‘Gaming’. There are lots of categories. You may pick one. So if you have an awesome talk about how you use a standard WSGI app in a distributed deployment to get better numbers from your HPC system without the fancy interconnects, you might wonder whether to put your talk into the ‘HPC’ category or the ‘Web Frameworks’ category.

In cases such as this, it can be helpful to focus on the audience to help guide your decision. What do you think your audience would look like for this talk? Well, of course there are web framework authors and users who will absolutely be interested in the talk, but there isn’t a lot of gain for them in a talk like this, is there? I mean, what are the chances that someone in the audience has always dreamed of writing web frameworks for the HPC market? On the other hand, what are the chances that an HPC administrator, developer, or site manager would love to cut costs, ease deployment, reduce maintenance overhead, etc., of their HPC system by using a totally standard non-commercial web framework? There are probably valid arguments to be made for putting it in the ‘Web Frameworks’ category, but I can’t think of any. I’d put it in the ‘HPC’ category.

One more thing to consider is the other talks at the conference, or talks that could be at the conference, or talks from past conferences. Look at last year’s program. Where does your talk fit in that mix of talks? What talks would your talk have competed with? Is there a talk from last year that is similar in scope to your proposal? What category was it listed in?

Audience Level

There’s a ton of gray area in selecting your target audience level. I’ve never liked the sort of arbitrary “Novice means less than X years experience” formulas, so I’ll do my best to lay out some rules of thumb, but ultimately, what you consider ‘Novice’, and how advanced you think your material is, is up to you. Your choices are:

  • Novice:
    • Has used but not created a decorator and context manager.
    • Has possibly never used anything in the itertools, collections, operator, and/or functools modules
    • Has used but never had any issues with the datetime, email, or urllib modules.
    • Has seen list comprehensions, but takes some time to properly parse them
  • Intermediate:
    • Has created at least a decorator, and possibly a context manager.
    • Has recently had use for the operator module (and used it) and has accepted itertools as their savior.
    • Has had a big problem with at least one of: datetime, email, or urllib.
    • There’s only a slight chance they’ve ever created or knowingly made use of metaclasses.
    • Has potentially never had to use the socket module directly for anything other than hostname lookups.
    • Can write a (non-nested) list comprehension, and does so somewhat regularly.
  • Advanced:
    • Has created both a decorator and context manager using both functions and classes.
    • Has written their own daemonization module using Stevens as their reference.
    • Has been required to understand the implications of metaclasses and/or abstract base classes in a system.
    • May be philosophically opposed to list comprehensions, metaclasses, and abstract base classes
    • Has subclassed at least one of the built-in container types

Still not sure where your talk belongs? Well, hopefully you’re torn between only two of the user categories, in which case, I say “aim high”, for a few reasons:

  1. It’s generally easier to trim a talk to target a less experienced audience than you were expecting than to grow to accommodate a more experienced audience than you were expecting.
  2. Speaking purely anecdotally and with zero statistics, and from memory, there are lots more complaints about talks being more advanced than their chosen audience level than the reverse.

The Program Committee uses the audience level to ensure that, within any given topic space, there’s a good selection of talks for all levels of attendees. In cases where a talk might otherwise be tossed for being too similar to (and not better than) another, targeting a different audience level could potentially save the day.

Extreme?

Talk slots are relatively short. Your idea for a talk is awesome, but way too long. What if you could give that talk to an audience that doesn’t need the whole 15-minute introductory part of the talk? What if, when your time started ticking down, you immediately jumped into the meat of the topic? That’s what Extreme talks are for.

I’d recommend checking the ‘Extreme’ box on the submission form only if your talk *could potentially* be an Extreme talk. Why? Two reasons:

  1. The number of Extreme slots is limited, and
  2. If your talk is not accepted into an ‘Extreme’ slot, it may still be accepted as a regular talk.

Duration

There are 30-minute or 45-minute slots, or you can choose ‘No Preference’. I recommend modeling your proposal around the notion that it could be in either time slot: your ability to be flexible helps the Program Committee to be flexible as well. If your talk competes with another in the process and the only difference of any use that anyone can find is that your talk has a hard, 45-minute slot requirement, you probably have a good chance of losing that battle.

If you’d like to have a 45-minute slot, then it might help you out to build your outline for a 30-minute talk first, and then go back and add bullet points to it that are clearly marked “(If 45min slot)” or something. Alternatively, you can create the outline based on a 45-minute slot, and just use the ‘Additional Notes’ section of the form to explain how you’d alter the talk if the committee requested you do the talk in 30 minutes.

Description

This is the description that, if your talk is accepted, people will be reading in the conference program. It needs to:

  1. Be compelling
  2. Make a promise
  3. Be 400 characters or less

Being compelling can seem very difficult, depending on your topic space. It might help to consider that you only need to be compelling to your target audience. So, while a talk called “Writing Unit Tests” is probably not compelling to the already-testing contingent at the conference, it might be totally compelling to those who aren’t yet testing but want to start. Meanwhile, a talk called “Setting Up a local Automated TDD Environment in Python 3 With Zero External Dependencies” is probably pretty compelling to the already-testing crowd and not so compelling to those who aren’t yet writing tests.

Making a promise to the reader means that you’re setting an expectation in their mind that you’ll make good on some deliverable at some point in the talk. Some key phrases to use in your description to call out this promise might be “By the end of this talk, you’ll have…”, or “If you want to have a totally solid grasp of…”. The key that both of those phrases have in common is that they both imply that you’re about to tell them what they can expect to get out of the talk. It answers a question in every conference-goer’s mind when reading talk descriptions, which is “What’s in it for me?”. If you don’t answer that question in the description, it may be harder for people to guess what’s in it for them, and frankly they won’t spend a lot of time trying!

Abstract

The form expects a detailed description of the talk, along with an outline describing its flow. That said, it’s not expected that the talk you outline in August will be precisely the talk you deliver the following March. However, if your talk is accepted, the outline will be made public online (it will not be printed in the conference program), so you’ll want to stick to it as closely as possible.

The abstract section will be used by the program committee to answer various questions about the talk, possibly including (but certainly not limited to):

  • Whether the talk’s title and description actually describe the talk as detailed in the abstract. Will attendees get pretty much what they expect if they only read the title and description of the talk?
  • Whether the talk appears to target the correct audience level. If you’re targeting a novice audience, your abstract should not go into topics that are beyond that audience level.
  • Whether the scope of the talk is realistic given the time constraints. If you asked for a 30-minute slot, your abstract should not make the committee think that it would be impossible to cover all of the material even given a whole hour. It’s not uncommon to be a little off in this regard, but being really far off could be an indicator that the proposer may not have thought this through very well.
  • Whether the talk is organized and has a logical flow that incorporates any known essential, obvious topics that should be touched on.

Additional Notes

This is a free-form text field where you can pretty much talk directly to the Program Committee to let them know in your own way why you think your talk is awesome, how you envision it coming off, and how you see the audience benefitting from it and finding value in attending the talk.

Although there are no hard requirements for this section of the submission form, you should absolutely, positively include any of the following that you can:

  • If your talk is about a new software project, a link to the project’s homepage, repository, and any other relevant articles, interviews, testimonials, etc., about the project.
  • Links to any online slides or videos from any presentations you’ve given previously.
  • Comments discussing how you’d handle moving from a 45-minute to 30-minute slot, or from an Extreme slot to a regular slot, etc. In general, it helps the committee to know you’ve thought about contingencies in your proposal.

Great, so… How do I do this?

If you’ve never written a proposal before, and you’re not sure what you want to talk about, don’t have a crystal clear vision for a talk, have trouble narrowing the scope of your idea, and don’t know exactly where to start, I have a few ideas that might help you get the proposal creation juices flowing:

  • Write down some bullet points in a plain text file that are titles or one-line summaries of talks you’d like to see. Forget about whether you’re even willing or able to actually produce these talks – the idea is to start moving things from your brain onto a page. When you’ve got 5-10 of these points, reflect:
    • Could you deliver any of these yourself?
    • Could you apply an idea contained in a point to a topic you’re more familiar with?
    • Is there a topic related to any of the points that touches on things you know well?
    • Do any of these points jog your memory and make you think of projects you’ve worked on in the past that might be a source for a talk idea?
  • Do an informal audit of what you’ve done over the past year.
    • Were there problems you faced that there’s no good solution for?
    • Did you grow in some way that was really important, and could you help others to learn those lessons, and learn why those lessons are important?
    • Did you make use of a new technology?
    • Did you change how you do your job? Your development workflow? Your project lifecycle? Automation? Task management?
  • Go through the talks on pyvideo.org. It’s such an enormous, and enormously valuable trove. You could just scan the titles and see if something comes up. If that doesn’t work, click on a few, but don’t watch the talk: skip to the Q&A at the end. Buried in the Q&A are always these gems that are only tangentially related, and it is not uncommon to hear a speaker respond with “…but that’s another whole talk…”. They’re sometimes right.

Make it happen!

Sending Alerts With Graphite Graphs From Nagios

Disclaimer

The way I’m doing this relies on a feature I wrote for Graphite that was only recently merged to trunk, so at time of writing that feature isn’t in a stable release. Hopefully it’ll be in 0.9.10. Until then, you can at least test this setup using Graphite’s trunk version.

Oh yeah, the new feature is the ability to send graph images (not links) via email. I surfaced this feature in Graphite through the graph menus that pop up when you click on a graph in Graphite, but implemented it such that it’s pretty easy to call from a script (which I also wrote – you’ll see if you read the post).

Also, note that I assume you already know Nagios, how to install new command scripts, and all that. It’s really easy to figure this stuff out in Nagios, and it’s well-documented elsewhere, so I don’t cover anything here but the configuration of this new feature.

The Idea

I’m not a huge fan of Nagios, to be honest. As far as I know, nobody really is. We all just use it because it’s there, and the alternatives are either overkill, unstable, too complex, or just don’t provide much value for all the extra overhead that comes with them (whether that’s config overhead, administrative overhead, processing overhead, or whatever depends on the specific alternative you’re looking at). So… Nagios it is.

One thing that *is* pretty nice about Nagios is that configuration is really dead simple. Another thing is that you can do pretty much whatever you want with it, and write code in any language you want to get things done. We’ll take advantage of these two features to actually do a couple of things:

  • Monitor a metric by polling Graphite for it directly
  • Tell Nagios to fire off a script that’ll go get the graph for the problematic metric, and send email with the graph embedded in it to the configured contacts.
  • Record the alert itself as an event back in Graphite, so we can overlay those events on the corresponding metric graph and verify that alerts are going out when they should, that outgoing alerts are hitting your phone without delay, etc.

The Candy

Just to be clear, we’re going to set things up so you can get alert messages from Nagios with the relevant Graphite graph embedded right in the email.

You’ll also be able to track those alert events in Graphite, rendered as vertical lines overlaid on the corresponding metric’s graph.

Defining Contacts

In production, it’s possible that the proper contacts and contact groups already exist. For testing (and maybe production) you might find that you want to limit who receives graphite graphs in email notifications. To test things out, I defined:

  • A new contact template that’s configured specifically to receive the graphite graphs. Without this, no graphs.
  • A new contact that uses the template
  • A new contact group containing said contact.

For testing, you can create a test contact in templates.cfg:

define contact{
        name                            graphite-contact 
        service_notification_period     24x7            
        host_notification_period        24x7 
        service_notification_options    w,u,c,r,f,s 
        host_notification_options       d,u,r,f,s  
        service_notification_commands   notify-svcgraph-by-email
        host_notification_commands      notify-host-by-email
        register                        0
        }

You’ll notice a few things here:

  • This is not a contact, only a template.
  • Any contact defined using this template will be notified of service issues with the command ‘notify-svcgraph-by-email’, which we’ll define in a moment.

In contacts.cfg, you can now define an individual contact that uses the graphite-contact template we just assembled:

define contact{
        contact_name    graphiteuser
        use             graphite-contact 
        alias           Graphite User
        email           someone@example.com 
        }

Of course, you’ll want to change the ‘email’ attribute here, even for testing.

Once done, you also want to have a contact group set up that contains this new ‘graphiteuser’, so that you can add users to the group to expand the testing, or evolve things into production. This is also done in contacts.cfg:

define contactgroup{
        contactgroup_name       graphiteadmins
        alias                   Graphite Administrators
        members                 graphiteuser
        }

Defining a Service

Also for testing, you can set up a test service. This is necessary to bypass the default settings, which try not to bombard contacts with an email for every single aberrant check. Since the end result of this test is to see an email, we want an email for every check where the values are out of bounds in any way. In templates.cfg, put this:

define service{
    name                        test-service
    use                         generic-service
    passive_checks_enabled      0
    contact_groups              graphiteadmins
    check_interval              20
    retry_interval              2
    notification_options        w,u,c,r,f
    notification_interval       30
    first_notification_delay    0
    flap_detection_enabled      1
    max_check_attempts          2
    register                    0
    }

Again, the key point here is to ensure that no notifications are ever silenced, deferred, or delayed by Nagios in any way, for any reason. You probably don’t want this in production. The other point is that when you set up an alert for a service that uses ‘test-service’ in its definition, the alerts will go to our previously defined ‘graphiteadmins’.

To make use of this service, I’ve defined a service in ‘localhost.cfg’ that will require further explanation, but first let’s just look at the definition:

define service{
        use                             test-service
        host_name                       localhost
        service_description             Some Important Metric
        check_command                   check_graphite_data!24!36
        notifications_enabled           1
        _GRAPHURL                       "http://your.graphite.host/render?target=some.metric.path"
        }

There are two new things we need to understand when looking at this definition:

  • What is ‘check_graphite_data’?
  • What is ‘_GRAPHURL’?

These questions are answered in the following section.

In addition, you should know that the value for _GRAPHURL is intended to come straight from the Graphite dashboard. Go to your dashboard, pick a graph of a single metric, grab the URL for the graph, and paste it in (and double-quote it).

Defining the ‘check_graphite_data’ Command

This command relies on a small script written by the folks at Etsy, which can be found on github: https://github.com/etsy/nagios_tools/blob/master/check_graphite_data

Here’s the commands.cfg definition for the command:

# 'check_graphite_data' command definition
define command{
        command_name    check_graphite_data
        command_line    $USER1$/check_graphite_data -u $_SERVICEGRAPHURL$ -w $ARG1$ -c $ARG2$
        }

The ‘command_line’ attribute calls the check_graphite_data script we got on github earlier. The ‘-u’ flag takes a URL, and here it uses the custom object variable ‘_GRAPHURL’ from our service definition. You can see more about custom object variables here: http://nagios.sourceforge.net/docs/3_0/customobjectvars.html - the short story is that, since we defined _GRAPHURL in a service definition, it gets prefixed with ‘SERVICE’, and the underscore in ‘_GRAPHURL’ moves to the front, giving you ‘$_SERVICEGRAPHURL$’. More on how that works at the link provided.

The ‘-w’ and ‘-c’ flags to check_graphite_data are ‘warning’ and ‘critical’ thresholds, respectively, and they correlate to the positions of the service definition’s ‘check_command’ arguments (so check_graphite_data!24!36 maps to ‘check_graphite_data -u <url> -w 24 -c 36’).
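Under the hood, the plugin’s logic is straightforward. Here’s a rough sketch of the idea in Python; this is my illustration, not Etsy’s actual script, and the raw-data format assumed here (‘target,start,end,step|v1,v2,…’) is what Graphite returns when you tack ‘&rawData=true’ onto a render URL:

```python
# Sketch of the threshold logic inside a check_graphite_data-style plugin.
# Illustrative only -- see the Etsy github link above for the real thing.

def average_from_raw(raw):
    """Parse one rawData series and return the average of its numeric values."""
    header, _, values = raw.partition('|')
    nums = [float(v) for v in values.split(',') if v not in ('', 'None')]
    return sum(nums) / len(nums)

def nagios_state(avg, warn, crit):
    """Map an average against the -w/-c thresholds to a Nagios exit code."""
    if avg >= crit:
        return 2  # CRITICAL
    if avg >= warn:
        return 1  # WARNING
    return 0      # OK

raw = "some.metric,1300000000,1300000600,60|10.0,20.0,None,30.0"
avg = average_from_raw(raw)
print(avg, nagios_state(avg, warn=24, crit=36))  # 20.0 0
```

The real script fetches the URL and exits with that state code so Nagios can pick it up; the parsing and comparison above are the whole trick.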

Defining the ‘notify-svcgraph-by-email’ Command

This command relies on a script that I wrote in Python called ‘sendgraph.py’, which also lives in github: https://gist.github.com/1902478

The script does two things:

  • It emails the graph that corresponds to the metric being checked by Nagios, and
  • It pings back to Graphite to record the alert itself as an event. That means that if you define a graph for, say, ‘Apache Load’ and use this script to alert on that metric, you can overlay the alert events on top of the ‘Apache Load’ graph and verify that alerts are going out when you expect. It’s also a good test that you’re actually receiving the alerts this script tries to send, and that they’re not being dropped or seriously delayed.
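That second bullet works through Graphite’s events feature (part of why the ‘trunk’ requirement at the end of this post matters). Here’s a hedged sketch of what recording an alert as an event might look like; the ‘/events/’ endpoint path and the payload field names are my assumptions about the API, not code lifted from sendgraph.py:

```python
import json
import urllib.request

def alert_event_payload(service, state):
    """Build a Graphite event body for a Nagios alert (illustrative field names)."""
    return {
        "what": "nagios alert: %s" % service,
        "tags": "nagios %s" % state.lower(),
        "data": state,
    }

def post_event(graphite_url, payload):
    """POST the event to Graphite's events endpoint."""
    req = urllib.request.Request(
        graphite_url.rstrip('/') + '/events/',
        data=json.dumps(payload).encode('utf-8'),
        headers={'Content-Type': 'application/json'},
    )
    return urllib.request.urlopen(req)

payload = alert_event_payload("Some Important Metric", "CRITICAL")
print(payload["tags"])  # nagios critical
```

Once events like these exist, overlaying them on a graph is just a matter of adding an events() target to the graph URL.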

To make use of the script in Nagios, let’s define the command that actually sends the alert:

define command{
    command_name    notify-svcgraph-by-email
    command_line    /path/to/sendgraph.py -u "$_SERVICEGRAPHURL$" -t $CONTACTEMAIL$ -n "$SERVICEDESC$" -s $SERVICESTATE$
    }

A couple of quick notes:

  • Notice that you need to double-quote any variables in the ‘command_line’ that might contain spaces.
  • For a definition of the command line flags, see sendgraph.py’s –help output.
  • Just to close the loop, note that notify-svcgraph-by-email is the ‘service_notification_commands’ value in our initial contact template (the very first listing in this post).

Fire It Up

Fire up your Nagios daemon to take it for a spin. For testing, make sure you set the check_graphite_data thresholds to numbers that are pretty much guaranteed to trigger an alert when Graphite is polled. Hope this helps! If you have questions, first make sure you’re using Graphite’s ‘trunk’ branch, and not 0.9.9, and then give me a shout in the comments.

The Python User Group in Princeton (PUG-IP): 6 months in

In May, 2011, I started putting out feelers on Twitter and elsewhere to see if there might be some interest in having a Python user group that was not in Philadelphia or New York City. A single tweet resulted in 5 positive responses, which I took as a success, given the time-sensitivity of Twitter, my “reach” on Twitter (which I assume is far smaller than the entire target audience for that tweet), etc.

Happy with the responses I received, I still wanted to take a baby step in getting the group started. Rather than set up a web site that I’d then have to maintain, a mailing list server, etc., I went to the cloud. I started a group on meetup.com, and started looking for places to hold our first meeting.

Meetup.com

Meetup.com, I’m convinced, gives you an enormous value if you’re looking to start a user group Right Now, Today™. For $12/mo., you get a place where you can announce future meetups, hold discussions, collect RSVPs so you have a head count for food or space or whatever, and vendors can also easily jump in to provide sponsorship or ‘perks’ in the form of discounts on services to user group members and the like. It’s a lot for a little, and it’s worked well enough. If we had to stick with it for another year, I’d have no real issue with that.

Google Groups

I set up a mailing list using Google Groups about 2-3 months ago now. I only waited so long because I thought meetup.com’s discussion forum might work for a while. After a few meetings, though, I noticed that there were always about five more people in attendance than had RSVP’d on meetup.com. Some people just aren’t going to be bothered with having yet another account on yet another web site I guess. If that’s the case, then I have two choices (maybe more, but these jumped to mind): force the issue by constantly trumpeting meetup.com’s service, or go where everyone already was. Most people have a Google account, and understand its services. Also, since the group is made up of technical people, they mostly like the passive nature of a mailing list as opposed to web forums.

If you’re setting up a group, I’d say that setting up a group on meetup.com and simultaneously setting up a Google group mailing list is the way to go if you want to get a fairly complete set of services for very little money and about an hour’s worth of time.

Meeting Space

Meeting space can come from a lot of different places, but I had a bit of trouble settling on a place at first. Princeton University is an awesome place with a ton of fantastic spots to meet, but if you’re not living on campus (almost no students are group members, btw), parking can be troublesome, and Princeton University is famous for having little or no signage, including building names, so finding where to go even if you did find parking can be problematic. So, for now, the University is out.

The only sponsor I had that was willing to provide space was my employer, but we’re nowhere near Princeton, and don’t really have the space. Getting a sponsor for space can be a bit difficult when your group doesn’t exist yet, in part because none of them have engaged with you or your group until the first meeting, when the attendees, who all work for potential sponsors, show up.

I started looking at the web site for the Princeton Public Library. I’ve been involved in the local Linux user group for several years, and they use free meeting space made available by the public library in Lawrenceville, which borders Princeton. I wondered if the Princeton Public Library did this as well, but they don’t. In fact, meeting space there can get pretty expensive, since they charge separately for the space and for A/V equipment like projectors (or they did when I started the group – I believe it’s still the case).

I believe I tweeted my disappointment about the cost of meeting at the Princeton Public Library, and did a callout on Twitter for space sponsors and other ideas about meeting space in or near Princeton. The Princeton Public Library got in touch through their @PrincetonPL Twitter account, and we were able to work out a really awesome deal where they became a sponsor, and agreed to host our group for 6 months, free of charge. Awesome!

Now, six months in, we either had to come to some other agreement with the library, or move on to a new space. After six months, it’s way easier to find space, or sponsors who might provide space, but I felt if we could find some way to continue the relationship with the library, it’d be best not to relocate the group. We wound up finding a deal that does good things for the group, the library, the local Python user community, and the evangelism of the Python language….

Knowledge for Space

Our group got a few volunteers together to commit to providing a 5-week training course to the public, held at the Princeton Public Library. Adding public offerings like this adds value to the library, attracts potential new members (they’re a member-supported library, not a state/municipality-funded one), etc. In exchange for providing this service to the library, the library provides us with free meeting space, including the A/V equipment.

If you don’t happen to have a public library that offers courses, seminars, etc., to the general public, you might be able to cut a similar deal with a local community college, or even high school. If you know of a corporation locally that uses Python or some other technology the group can speak or train people in, you might be able to trade training for meeting space in their offices. Training is a valued perk to the employees of most corporations.

How To Get Talks (or “How we stopped caring about getting talks”)

Whether you’re running a publishing outfit, a training event, or user group, getting people to deliver content is a challenge. Some people don’t think they have any business talking to what they perceive as a roomful of geniuses about anything. Some just aren’t comfortable talking in front of audiences, but are otherwise convinced of their own genius. Our group is trying to attack this issue in various ways, and so far it seems to be working well enough, though more ideas are welcome!

Basically, the group isn’t necessarily locked into traditions like “Thou shalt provide a speaker, who shalt bequeath upon us wisdom of the ages”. Once you’ve decided as a group that having cookie-cutter meetings isn’t necessary, you start to think of all sorts of things you could all be doing together.

Below are some ideas, some in the works, some in planning, that I hope help other would-be group starters to get the ball rolling, and keep it in motion!

Projects For the Group, By the Group

Some members of PUG-IP are working together on building the pugip.org website, which is housed in a GitHub repository under the ‘pugip’ GitHub organization. This one project will inevitably result in all kinds of home-grown presentations & events within the group. As new ideas come up and new features are implemented, people will give lightning talks about their implementation, or we’ll do a group peer review of the code, or we’ll have speakers give talks about third-party technologies we might use (so, we might have two speakers each give a 30-minute talk about two different NoSQL solutions, for example. We’ve already had a great overview of about 10 different Python micro-frameworks), etc.

We may also decide to break up into pairs, and then sprint together on a set of features, or a particularly large feature, or something like that.

As of now, we’ve made enough decisions as a group to get the ball rolling. If there’s any interest I can blog about the setup that allows the group to easily share, review, and test code, provide live demos of their work, etc. The tl;dr version is we use GitHub and free heroku accounts, but new ideas come into play all the time. Just today I was wondering if we could, as a group, make use of the cloud9 IDE (http://cloud9ide.com).

The website is a great idea, but other group projects are likely to come up.

Community Outreach

PUG-IP’s first official community outreach project will be the training we provide through the Princeton Public Library. A few of us will collaborate on delivering the training, but the rest of the group will be involved in providing feedback on various aspects of the material, etc., so it’s a ‘whole group’ project, really. On top of increasing interactivity among the group members, outreach is also a great way to grow and diversify the group, and perhaps gain sponsorships as well!

There’s another area group called LUG-IP (a Linux user group) that also does some community outreach through a hardware SIG (special interest group), certification training sessions, and participating in local computing events and conferences. I’d like to see PUG-IP do this, too, maybe in collaboration with the LUG (they’re a good and passionate group of technologists).

Community outreach can also mean teaming up with various other technology groups, and one event I’m really looking forward to is a RedSnake meeting to be held next February. A RedSnake meeting is a combined meeting between PhillyPUG (the Philadelphia Python User Group) and Philly.rb (the Philadelphia Ruby Group). As a member of PhillyPUG I participated in last year’s RedSnake meeting, and it was a fantastic success. Probably 70+ people in attendance (here’s a pic at the end – some had already left by the time someone snapped this), and perhaps 10 or so lightning talks given by members of both organizations. We tried to do a ‘matching’ talk agenda at the meeting, so if someone on the Ruby side did a testing talk, we followed that with a Python testing talk, etc. It was a ton of fun, and the audience was amazing.

Socials

Socials don’t have to be dedicated events, per se. For example, PUG-IP has a sort of mini-social after every single meetup. We’re lucky to have our meetings located about a block away from a brewpub, so after each meeting, perhaps half of us make it over for a couple of beers and some great conversations. After a few of these socials, I started noticing that more talk proposals started to spring up.

Of course, socials can also be dedicated events. Maybe some day PUG-IP will…. I dunno… go bowling? Or maybe we’ll go as a group to see the next big geeky movie that comes out. Maybe we’ll have some kind of all-inclusive, bring-the-kids BBQ next summer. Who knows?

As a sort of sideshow event to the main LUG meetings, LUG-IP has a regularly-scheduled ‘coffee klatch’. Some of the members meet up one Sunday per month at (if memory serves) 8-11AM at a local Panera for coffee, pastries, and geekery. It’s completely informal, but it’s a good time.

Why Not Having Talks Will Help You Get Talks

I have a theory that is perhaps half-proven through my experiences with technology user groups: increasing engagement among and between the members of the group in a way that doesn’t shine a huge floodlight on a single individual (like a talk would) eventually breaks down whatever fears or resistance there is to proposing and giving a talk. Sometimes it’s just a comfort level thing, and working on projects, or having a beer, or sprinting on code, etc. — together — turns a “talking in front of strangers” experience into more of a “sharing with my buddies” one.

I hope that’s true, anyway. It seems to be. :)

Thanks For Reading

I hope someone finds this useful. It’s early on in the life of PUG-IP, but I thought it would be valuable to get these ideas out into the ether early and often before they slip from my brain. Good luck with your groups!

pyrabbit Makes Testing and Managing RabbitMQ Easy

I have a lot of hobby projects, and as a result getting any one of them to a state where I wouldn’t be completely embarrassed to share it takes forever. I started working on pyrabbit around May or June of this year, and I’m happy to say that, while it’ll never be totally ‘done’ (it is software, after all), it’s now in a state where I’m not embarrassed to say I wrote it.

What is it?

It’s a Python module to help make managing and testing RabbitMQ servers easy. RabbitMQ has, for some time, made available a RESTful interface for programmatically performing all of the operations you would otherwise perform using their browser-based management interface.

So, pyrabbit lets you write code to manipulate resources like vhosts & exchanges, publish and get messages, set permissions, and get information on the running state of the broker instance. Note that it’s *not* suitable for writing AMQP consumer or producer applications; for that you want an *AMQP* module like pika.

PyRabbit is tested with Python versions 2.6-3.2. The testing is automated using tox. In fact, PyRabbit was a project I started in part because I wanted to play with tox.
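For the curious, the tox side of that is tiny. A sketch of what such a tox.ini might look like (pyrabbit’s actual file, and its test runner, may differ):

```ini
[tox]
envlist = py26,py27,py31,py32

[testenv]
; placeholder test runner -- substitute whatever the project actually uses
deps = nose
commands = nosetests
```

Run ‘tox’ in the project root and it builds a virtualenv per interpreter in envlist and runs the test suite in each one.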

Here’s the example, ripped from the documentation (which is ripped right from my own terminal session):

>>> from pyrabbit.api import Client
>>> cl = Client('localhost:55672', 'guest', 'guest')
>>> cl.is_alive()
True
>>> cl.create_vhost('example_vhost')
True
>>> [i['name'] for i in cl.get_all_vhosts()]
[u'/', u'diabolica', u'example_vhost', u'testvhost']
>>> cl.get_vhost_names()
[u'/', u'diabolica', u'example_vhost', u'testvhost']
>>> cl.set_vhost_permissions('example_vhost', 'guest', '.*', '.*', '.*')
True
>>> cl.create_exchange('example_vhost', 'example_exchange', 'direct')
True
>>> cl.get_exchange('example_vhost', 'example_exchange')
{u'name': u'example_exchange', u'durable': True, u'vhost': u'example_vhost', u'internal': False, u'arguments': {}, u'type': u'direct', u'auto_delete': False}
>>> cl.create_queue('example_queue', 'example_vhost')
True
>>> cl.create_binding('example_vhost', 'example_exchange', 'example_queue', 'my.rtkey')
True
>>> cl.publish('example_vhost', 'example_exchange', 'my.rtkey', 'example message payload')
True
>>> cl.get_messages('example_vhost', 'example_queue')
[{u'payload': u'example message payload', u'exchange': u'example_exchange', u'routing_key': u'my.rtkey', u'payload_bytes': 23, u'message_count': 2, u'payload_encoding': u'string', u'redelivered': False, u'properties': []}]
>>> cl.delete_vhost('example_vhost')
True
>>> [i['name'] for i in cl.get_all_vhosts()]
[u'/', u'diabolica', u'testvhost']

Hopefully you’ll agree that this is simple enough to use in a Python interpreter to get information and do things with RabbitMQ ‘on the fly’.

How Can I Get It?

Well, there’s already a package on PyPI called ‘pyrabbit’, and it’s not mine. It’s some planning-stage project that has no actual software associated with it. I’m not sure when the project was created, but the PyPI page has a broken home page link, and what looks like a broken RST-formatted doc section. I’ve already pinged someone to see if it’s possible to take over the name, because I can’t think of a cool name to change it to.

Until that issue is cleared up, you can get downloadable packages or clone/fork the code at the pyrabbit github page (see the ‘Tags’ section for downloads), and the documentation is hosted on the (awesome) ReadTheDocs.org site.

Thoughts on Python and Python Cookbook Recipes to Whet Your Appetite

Dave Beazley and I are, at this point, waist-deep into producing Python Cookbook, 3rd Edition. We haven’t really taken the approach of going chapter by chapter, in order. Rather, we’ve hopped around to tackle chapters one or the other finds interesting or in line with what either of us happens to be working with a lot currently.

For me, it’s testing (chapter 8, for those following along with the 2nd edition), and for Dave, well, I secretly think Dave touches every aspect of Python at least every two weeks whether he needs to or not. He’s just diabolical that way. He’s working on processes and threads at the moment, though (chapter 9 as luck would have it).

In both chapters (also a complete coincidence), we’ve decided to toss every scrap of content and start from scratch.

Why on Earth Would You Do That?

Consider this: when the last edition (2nd ed) of the Python Cookbook was released, it went up to Python 2.4. Here’s a woefully incomplete list of the superamazing awesomeness that didn’t even exist when the 2nd Edition was released:

  • Modules:
    • ElementTree
    • ctypes
    • sqlite3
    • functools
    • cProfile
    • spwd
    • uuid
    • hashlib
    • wsgiref
    • json
    • multiprocessing
    • fractions
    • plistlib
    • argparse
    • importlib
    • sysconfig
  • Other Stuff
    • The ‘with’ statement and context managers*
    • The ‘any’ and ‘all’ built-in functions
    • collections.defaultdict
    • advanced string formatting (the ‘format()’ method)
    • class decorators
    • collections.OrderedDict
    • collections.Counter
    • collections.namedtuple()
    • the ability to send data *into* a generator (yield as an expression)
    • heapq.merge()
    • itertools.combinations
    • itertools.permutations
    • operator.methodcaller()

* The ‘with’ statement arrived in 2.5, where it required a __future__ import; it became available by default in 2.6.

Again, woefully incomplete, and that’s only the stuff that’s in the 2.x version! I don’t even mention 3.x-only things like concurrent.futures. From this list alone, though, you can probably discern that the way we think about solving problems in Python, and what our code looks like these days, has been fundamentally altered in comparison to the 2.4 days.

To give a little more perspective: Python core development moved from CVS to Subversion well after the 2nd edition of the book hit the shelves. They’re now on Mercurial. We skipped the entire Subversion era of Python development.

The addition of any() and all() to the language by itself made at least 3-4 recipes in chapter 1 (strings) into one-liners. I had to throw at least one recipe away because people just don’t need three recipes on how to use any() and all(). The idea that you have a chapter covering processes and threads without a multiprocessing module is just weird to think about these days. The with statement, context managers, class decorators, and enhanced generators have fundamentally changed how we think about certain operations.
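To make that concrete, here’s the flavor of recipe that any() collapses into a one-liner. This is my illustration, not a recipe from the book:

```python
lines = ["# comment", "", "value = 1"]

# Pre-2.5 style: an explicit flag-and-loop to ask "does any line hold real code?"
has_code = False
for line in lines:
    if line.strip() and not line.startswith("#"):
        has_code = True
        break

# With any(), the whole recipe collapses to a single expression:
has_code = any(line.strip() and not line.startswith("#") for line in lines)
print(has_code)  # True
```

Entire recipes in the 2nd edition exist only to show the first form; the second form barely merits a sentence now.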

Also something to consider: I haven’t mentioned a single third-party module! Mock, tox, and nosetests all support Python 3. At least Mock and tox didn’t exist in the old days (I don’t know about nose off-hand). Virtualenv and pip didn’t exist (both also support Python 3). So, not only has our code changed, but how we code, test, deploy, and generally do our jobs with Python has also changed.

Event-based frameworks aside from Twisted are not covered in the 2nd edition if they existed at all, and Twisted does not support Python 3.

WSGI, and all it brought with it, did not exist to my knowledge in the 2.4 days.

We need a Mindset List for Python programmers!

So, What’s Your Point

My point is that I suspect some people have been put off submitting Python 3 recipes because they don’t program in Python 3. If you’re one of them, you should know that there’s a lot of ground to cover between the 2nd and 3rd editions of the book. If you have a recipe that happens to be written in Python 2.6, using features of the language that didn’t exist in Python 2.4, submit it. You don’t even have to port it to Python 3 if you don’t want to, don’t have the time, or aren’t interested.

Are You Desperate for Recipes or Something?

Well, not really. I mean, if you all want to wait around while Dave and I crank out recipe after recipe, the book will still kick ass, but it’ll take longer, and the book’s world view will be pretty much limited to how Dave and I see things. I think everyone loses if that’s the case. Having been an editor of a couple of different technical publications, I can say that my two favorite things about tech magazines are A) The timeliness of the articles (if Python Magazine were still around, we would’ve covered tox by now), and B) The broad perspective it offers by harvesting the wisdom and experiences of a vast sea of craftspeople.

What Other Areas Are In Need?

Network programming and system administration. For whatever reason, the 2nd edition’s view of system administration is stuff like checking your Windows sound system and spawning an editor from a script. I guess you can argue that these are tasks for a sysadmin, but it’s just not the meat of what sysadmins do for a living. I’ll admit to being frustrated by this because I spent some time searching for Python 3-compatible modules for SNMP and LDAP and came up dry. But there’s still all of that sar data sitting around that nobody ever seems to use; it’s amazing, and it’s easy to parse with Python. Terminal logging scripts would be good, too.
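To illustrate the sar point, here’s a quick sketch. The column layout assumed here is the typical ‘sar -u’ text output, which varies a bit across sysstat versions, so treat this as the shape of a recipe rather than the recipe itself:

```python
# Parse 'sar -u' style CPU utilization output with nothing but the stdlib.
SAMPLE = """\
12:00:01 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
12:10:01 AM     all      4.21      0.00      1.12      0.30      0.00     94.37
12:20:01 AM     all     12.84      0.00      3.95      0.88      0.00     82.33
"""

def parse_sar_u(text):
    """Return one dict per sample row, keyed by the numeric column names."""
    lines = [line.split() for line in text.strip().splitlines()]
    header = lines[0][3:]  # the numeric columns: %user, %nice, ...
    rows = []
    for fields in lines[1:]:
        row = dict(zip(header, (float(v) for v in fields[3:])))
        row["time"] = " ".join(fields[:2])
        row["cpu"] = fields[2]
        rows.append(row)
    return rows

rows = parse_sar_u(SAMPLE)
print(rows[1]["%user"])  # 12.84
```

In real use you’d feed it the output of ‘sar -u’ via subprocess or a saved sadf dump; from there, graphing or alerting on the numbers is trivial.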

Web programming and fat client GUIs also need some love. The GUI recipes that don’t use tkinter mostly use wxPython, which isn’t Python 3-compatible. Web programming is CGI in the 2nd edition, along with RSS feed aggregation, Nevow, etc. I’d love to see someone write a stdlib-only recipe for posting an image to a web server, and then maybe more than one recipe on how to easily implement a server that accepts them.
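For what it’s worth, the client half of that image-posting recipe is nearly trivial with nothing but the stdlib. A sketch (the URL is made up, and this sends raw bytes rather than a multipart form, which is the simplest variant):

```python
import urllib.request

def build_upload(url, image_bytes, content_type="image/png"):
    """Build a plain POST of raw image bytes to a web server (no multipart)."""
    return urllib.request.Request(
        url,
        data=image_bytes,
        headers={"Content-Type": content_type},
    )

req = build_upload("http://example.com/upload", b"\x89PNG\r\n...not a real image...")
print(req.get_method())  # POST
# urllib.request.urlopen(req) would actually send it
```

The server half, and a multipart-aware version, are the recipes I’d actually love to see submitted.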

Obviously, any recipes that solve a problem that others are likely to have that use any of the aforementioned modules & stuff that didn’t exist in the last edition would really rock.

How Do I Submit?

  1. Post the code and an explanation of the problem it solves somewhere on the internet, or send it (or a link to it) via email to PythonCookbook@oreilly.com or to @bkjones on Twitter.
  2. That’s it.

We’ll take care of the rest. “The rest” is basically us pinging O’Reilly, who will contact you to sign something that says it’s cool if we use your code in the book. You’ll be listed in the credits for that recipe, following the same pattern as previous editions. If it goes in relatively untouched, you’ll be the only name in the credits (also following the pattern of previous editions).

What Makes a Good Recipe?

A perfect recipe that is almost sure to make it into the cookbook would ideally meet most of the criteria set out in my earlier blog post on that very topic. Keep in mind that the ability to illustrate a language feature in code takes precedence over the eloquence of any surrounding prose.

What If…

I sort of doubt this will come up, but if we’ve already covered whatever is in your recipe, we’ll weigh that out based on the merits of the recipes. I want to say we’ll give new authors an edge in the decision, but for an authoritative work, a meritocracy seems the only valid methodology.

If you think you’re not a good writer, then write the code, and a 2-line description of the problem it solves, and a 2-line description of how it works. We’ll flesh out the text if need be.

If you just can’t think of a good recipe, grep your code tree(s) for just the import statements, and look for ideas by answering questions on Stackoverflow or the various mailing lists.

If you think whatever you’re doing with the language isn’t very cool, then stop thinking that a cookbook is about being cool. It’s about being practical, and showing programmers possibly less senior than yourself an approach to a problem that isn’t completely insane or covered in warts, even if the problem is relatively simple.

Slides, an App, a Meetup, and More On the Way

I’ve been busy. Seriously. Here’s a short dump of what I’ve been up to with links and stuff. Hopefully it’ll do until I can get back to my regular blogging routine.

PICC ’11 Slides Posted

I gave a Python talk at PICC ’11. If you were there, then you have a suboptimal version of the slides, both because I caught a few bugs, and also because they’re in a flattened, lifeless PDF file, which sort of mangles anything even slightly fancy. I’m not sure how much value you’ll get out of these because my presentation slides tend to present code that I then explain, and you won’t have the explanation, but people are asking, so here they are in all their glory. Enjoy!

I Made a Webapp Designed To Fail

No really, I did. WebStatusCodes is the product of necessity. I’m writing a Python module that provides an easy way for people to talk to a web API. I test my code, and for some of the tests I want to make sure my code reacts properly to certain HTTP errors (or in some cases, to *any* HTTP status code that’s not 200). In unit tests this isn’t hard, but when you’re starting to test the network layers and beyond, you need something on the network to provide the errors. That’s what WebStatusCodes does. It’s also a simple-but-handy reference for HTTP status codes, though it is incomplete (418 I’m a teapot is not supported). Still, worth checking out.

Interesting to note, this is my first AppEngine application, and I believe it took me 20 minutes to download the SDK, get something working, and get it deployed. It was like one of those ‘build a blog in 15 minutes’ moments. The speed at which you can create things on AppEngine is empowering, though I’d be slow to consider it for anything much more complex.
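The core idea behind WebStatusCodes is small enough to sketch as a plain WSGI app. This is a toy reimplementation of mine, not the actual source: whatever status code you put in the path is the status code you get back.

```python
from http.client import responses  # standard reason phrases, keyed by code

def app(environ, start_response):
    """A WSGI app that returns whatever HTTP status code appears in the path."""
    try:
        code = int(environ.get("PATH_INFO", "/").strip("/"))
    except ValueError:
        code = 200
    status = "%d %s" % (code, responses.get(code, "Unknown"))
    start_response(status, [("Content-Type", "text/plain")])
    return [status.encode("utf-8")]

# Exercise it in-process; no network required:
sent = []
body = app({"PATH_INFO": "/503"}, lambda status, headers: sent.append(status))
print(sent[0])  # 503 Service Unavailable

# To serve it for real:
#   from wsgiref.simple_server import make_server
#   make_server("", 8000, app).serve_forever()
```

Point your HTTP client’s error-handling tests at /404, /500, and so on, and you have a network layer that fails on demand.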

Systems and Devops People, Hack With Me!

I like systems-land, and a while back I was stuck writing some reporting code, which I really don’t like, so I started a side project to see just how much cool stuff I could do using the /proc filesystem and nothing but pure Python. I didn’t get too far because the reporting project ended and I jumped back into all kinds of other goodness, but there’s a github project called pyproc that’s just a single file with a few functions in it right now, and I’d like to see it grow, so fork it and send me pull requests. If you know Linux systems pretty well but are relatively new to Python, I’ll lend you a hand where I can, though time will be a little limited until the book is done (see further down).
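To give a flavor of what pyproc is after, here’s an illustrative function in the same spirit (my sketch, not code from the repo):

```python
# Turn /proc/meminfo-style "Key:   value kB" lines into a dict of ints (kB).
def parse_meminfo(text):
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            info[key] = int(fields[0])
    return info

SAMPLE = """\
MemTotal:        2048000 kB
MemFree:          512000 kB
Cached:           256000 kB
"""

mem = parse_meminfo(SAMPLE)
print(mem["MemFree"])  # 512000

# On a real Linux box you'd feed it the file itself:
#   mem = parse_meminfo(open("/proc/meminfo").read())
```

Nearly everything under /proc yields to this kind of split-and-dict treatment, which is what makes a pure-Python systems toolkit plausible in the first place.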

The other projects I’m working on are sort of in pursuit of larger fish in the Devops waters, too, so be sure to check out the other projects I mention later in this post, and follow me on github.

Python Meetup Group in Princeton NJ

I started a Meetup group for Pythonistas that probably work in NYC or PA, but live in NJ. I work in PA, and before this group existed, the closest group was in Philly, an hour from home. I put my feelers out on Twitter, found some interest, put up a quick Meetup site, and we had 13 people at the first meetup (more than had RSVP’d). It’s a great group of folks, but more is always better, so check it out if you’re in the area. We hold meetings at the beautiful Princeton Public Library (who found us on twitter and now sponsors the group!), which is just a block or so from Triumph, the local microbrewery. I’m hoping to have a post-meeting impromptu happy hour there at some point.

Python Cookbook Progress

The Python Cookbook continues its march toward production. Lots of work has been done, lots of lessons have been learned, lots of teeth have been gnashed. The book is gonna rock, though. I had the great pleasure of porting all of the existing recipes that are likely to be kept over to Python 3. Great fun. It’s really amazing to see just how it happens that a 20-line recipe is completely obviated by the addition of a single, simple language feature. It’s happened in almost every chapter I’ve looked at so far.

If you have a recipe, or stumble upon a good example of some language feature, module, or other useful tidbit, whether it runs in Python 3 or not, let me know (see ‘Contact Me’). The book is 100% Python 3, but I’ve gotten fairly adept at porting things over by now :) Send me your links, your code, or whatever. If we use the recipe, the author will be credited in the book, of course.

PyRabbit is Coming

In the next few days I’ll be releasing a Python module on github that will let you easily work with RabbitMQ servers using that product’s HTTP management API. It’s not nearly complete, which is why I’m releasing it. It does some cool stuff already, but I need another helper or two to add new features and help do some research into how RabbitMQ broker configuration affects JSON responses from the API. Follow me on github if you want to be the first to know when I get it released. You probably also want to follow myYearbook on github since that’s where I work, and I might release it through the myYearbook github organization (where we also release lots of other cool open source stuff).

Python Asynchronous AMQP Consumer Module

I’m also about 1/3 of the way through a project that lets you write AMQP consumers using the same basic model as you’d write a Tornado application: write your handler, import the server, link the two (like, one line of code), and call consume(). In fact, it uses the Tornado IOLoop, as well as Pika, an asynchronous AMQP module in Python (maintained by none other than my boss and myYearbook CTO, @crad) which happens to support the Tornado IOLoop directly.

Book Review: Python Standard Library by Example

Quick Facts:

  • Author: Doug Hellmann
  • Pages: 1344
  • Publisher: Addison-Wesley (Developer’s Library)
  • ETA: June 5, 2011
  • Amazon link: http://www.amazon.com/Python-Standard-Library-Example-Developers/dp/0321767349/ref=sr_1_1?ie=UTF8&qid=1307109464&sr=1-1-spell

What this book says it does:

From the book’s description:

This book is a collection of essays and example programs demonstrating how to use more than 100 modules from Python standard library. It goes beyond the documentation available on python.org to show real programs using the modules and demonstrating how you can use them in your daily programming tasks.

What this book actually does:

This book actually kinda rocks, in part because of its unique take on documenting the Python standard library. The standard library documentation is actually a pretty good high-level reference, and this book doesn’t seek to duplicate what’s there. Instead, it specifically seeks out places in the existing documentation that are underdocumented or undocumented, don’t have clear enough examples, or for whatever reason just don’t provide the value to the end user that they should. Even as good as the standard library documentation is, Doug easily cranked out 1000+ pages of invaluable information that has given me much greater insight into the standard library modules I use on a regular basis (and plenty that I don’t).

How it works

The book is laid out simply, by module. Using the multiprocessing module? It’s right there in the Table of Contents. It’s as easy to navigate as the standard library docs, and the index, it could be argued, is an improvement over docs.python.org’s search behavior.

When you get to the module you’re looking for, you’ll primarily see code. There is enough English text to explain what the code actually does, but the main illustrative tool in this book is the code. This is not an easy thing to accomplish, but Doug provides a very nice and balanced presentation of the real meaty parts of your favorite standard library modules.

What’s Great About it

First, it provides both depth and breadth, it’s easy to find whatever you’re looking for, and if it’s not needed (usually because it’s well-covered in the standard library docs) it’s not there.

Second, the book is written by an authoritative, knowledgeable, experienced, and prolific Python developer. While he’s a creative thinker, his work is balanced by a healthy dose of pragmatism and grounded in best practices. Contrived as the examples might get at times, you won’t typically find code written by Doug that would garner sideways glances from experienced Python developers.

Third, it’s not a rehashing of the docs. In fact it skips coverage of things that are well-documented in the docs. Yes, the book does contain simple introductory material for each module to give the uninitiated some context, but that’s different from a book that takes existing docs and just moves the letters around. Doug does a great job of getting you into the good stuff without much fluff.

Fourth, there’s almost zero fluff. I’d love to see a statistical breakdown of the number of lines of code vs. text in this book. And the good part isn’t just that he’s put so much code in the book; it’s that he presents the code alongside the text in a way that ensures readers don’t get lost.

Fifth, this wasn’t a rush job. Doug has been writing this content in the form of his Python Module of the Week blog series for a few years now. Most of the work was editing, finessing, updating, and testing (and retesting) the code, not developing the content from scratch. So what’s there in the book is not just a braindump from Doug’s brain: it’s had the benefit of peer review and feedback from the blog, email, etc., and that adds a ton of value to the final product in my eyes.

What’s Not Great About it

I insist on including bad things about everything I review, because nothing is perfect, and the more people talk about things they don’t like, the more makers start to listen and make things better.

To be honest, the only thing I found lacking in this book is the index. This should not shock anyone who is a tech bibliophile. Most indexes, at least on tech books, are pretty bad (ironic since tech books often serve as references, which makes the index pretty crucial). Also, consider that this review is based on a review copy of the book, so it’s possible that the final version will have a totally awesome index.

Ah, one other thing (which also might be due to this being a review copy): there are no indentation markers. Since Python defines scope using whitespace, the lack of indentation markers in any medium with page breaks can lead to confusion when a code sample crosses a page boundary. The alternative to having the markers is to ensure that code samples don’t cross page boundaries. It’s not possible for me to know whether they’ll do one or both of these before the final printing.

The Final Word

My petty complaints about the index and indentation markers are not only trivial, but they both may be fixed in the final printing. I saw nothing so bad that I wouldn’t highly recommend this book, and I’ve seen tons and tons of stuff that would make me highly recommend this book. I’m using this book myself on a fairly regular basis, and it’s an effective, easy-to-use tool that makes a great companion reference to the standard library.

Buy it.

‘Grokking Python’ Going to PICC Conference!

In conjunction with my involvement as co-author of the upcoming Python Cookbook, 3rd Ed. (not yet released), a tutorial at this year’s PyCon in Atlanta, an internal (and ongoing) lunchtime seminar series entitled ‘Snakes On a Plate’, and other recent Python-related projects, I’ve also been refining and revising what I can now call a completely awesome 3-hour introduction to the Python programming language.

If you’re a sysadmin, operations engineer, devops engineer, or just want to get your hands dirty with Python, I can’t think of a better, more cost-effective way to do it than to attend the ‘Grokking Python’ tutorial at this year’s PICC conference, which is being held in New Brunswick, NJ, April 29-30.

While I do plan for the tutorial to run through the basics, I also assume attendees have programmed in some other language before. In addition, I firmly believe that, properly presented, Python is a very simple language to get to know and understand. That being the case, the most basic elements of the language (control statements, loops, etc.) will be covered in the first hour (and the materials will be available for later reference).

Once we’re through that, it’s head first into what admin/ops engineers do for a living. Python was developed by a systems programmer for systems programming. As such, support for a huge swath of admin tasks (and far, far beyond) is baked into the language, and enormous tomes have been written covering third party tools and modules to do anything else you can possibly imagine.

We’re going to look at some of the more ho-hum parts of scripting, like accepting input from users, command line options and arguments, and file handling, but before it’s over we’re going to have a look at the basics of email, networking, multiprocessing, threading, coroutines, SSH, and more.
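To give a taste of those ho-hum parts, here’s the kind of option-and-argument handling we’ll cover, using the standard library’s argparse module (the flags and filenames are just examples, not tutorial materials):

```python
import argparse


def build_parser():
    """Build a parser for a toy line-counting script."""
    p = argparse.ArgumentParser(description="toy line counter")
    p.add_argument("files", nargs="*", help="files to process")
    p.add_argument("-v", "--verbose", action="store_true",
                   help="chatty output")
    return p


# Parse a canned argv instead of sys.argv so the example is self-contained.
args = build_parser().parse_args(["-v", "a.txt", "b.txt"])
```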

We’re also going to cover use of the Python interactive shell, which will not only help speed your mastery of the language and its standard library, but also holds promise as a sysadmin tool in its own right.

The blowing of minds is a goal of the tutorial. Bring a laptop, and bring some bandages ;-)

Lessons Learned Porting Dateutil to Python 3

The dateutil module is a very popular third-party (pure) Python module that makes it easier (and in some cases, possible) to perform more advanced manipulations on dates and date ranges than simply using some combination of Python’s ‘included batteries’ like the datetime, time and calendar modules.

Dateutil does fuzzy date matching, Easter calculations in the past and future, relative time delta calculations, time zone manipulation, and lots more, all in one nicely bundled package.

I decided to port dateutil to Python 3.

Why?

For those who haven’t been following along at home, David Beazley and I are working on the upcoming Python Cookbook 3rd Edition, which will contain only Python 3 recipes. Python 2 will probably only get any real treatment when we talk about porting code.

When I went back to the 2nd edition of the book to figure out what modules are used heavily that might not be compatible with Python 3, dateutil stuck out. It’s probably in half or more of the recipes in the ‘Time and Money’ chapter in the 2nd Edition. I decided to give it a look.

How Long Did it Take?

Less than one work day. Seriously. It was probably 4-5 hours in total, including looking at documentation and getting to know dateutil. I downloaded it, I ran 2to3 on it without letting 2to3 do the in-place edits, scanned the output for anything that looked ominous (there were a couple of things that looked a lot worse than they turned out to be), and once satisfied that it wasn’t going to do things that were dumb, I let ‘er rip: I ran 2to3 and told it to go ahead and change the files (2to3 makes backup copies of all edited files by default, by the way).
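To give a flavor of what those diffs contain, here’s a typical 2to3 transformation shown as the Python 3 result (the dict contents are invented for illustration):

```python
# Two changes 2to3 makes constantly: print statements become print()
# calls, and dict.iteritems()/itervalues() become items()/values(),
# which now return views instead of lists. A view usually "just works",
# but code that expected a real list (indexing, .sort()) needs an
# explicit list() or sorted() call -- exactly the kind of diff that
# looks worse at first glance than it turns out to be.
d = {"tzname": "EST", "offset": "-0500"}
pairs = sorted(d.items())  # the Python 2 source read: d.iteritems()
```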

What Was the Hardest Part?

Well, there were a few unit tests that used the base64 module to decode some time zone file data into a StringIO object before passing the file-like object to the code under test (I believe the code under test was the relativedelta module). Inside there, the file-like StringIO object is subjected to a bunch of struct.unpack() calls, and there are a couple of plain strings that get routed elsewhere.

The issue with this is that there are NO methods inside the base64 module that return strings anymore, which makes creating the StringIO object more challenging. All base64 methods return Python bytes objects. So, I replaced the StringIO object with a BytesIO object, all of the struct.unpack() calls “just worked”, and the strings that were actually needed as strings in the code had a ‘.decode()’ appended to them to convert the bytes back to strings. All was well with the world.

What Made it Easier?

Two things, smaller one first:

First, Python’s built-in modules for date handling haven’t been flipped around much, and dateutil doesn’t have any dependencies outside the standard library (hm, maybe that’s two things right there). The namespaces for the date manipulation modules are identical to Python 2, and I believe that for the most part all of the methods act the same way. There might be some under-the-hood changes where things return memoryview objects or iterators instead of lists, but in this and other porting projects involving dates, that stuff has been pretty much a non-event.
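For instance, this snippet is spelled identically, and behaves identically, under Python 2 and Python 3:

```python
from datetime import datetime, timedelta

# The datetime namespace and arithmetic are unchanged between
# Python 2 and 3: February 2011 has 28 days, so Feb 1 + 30 days
# lands on March 3.
later = datetime(2011, 2, 1) + timedelta(days=30)
```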

But the A #1 biggest thing that made this whole thing take less than a day instead of more than a week? Tests.

Dateutil landed on my hard drive with 478 tests (the main module has about 3600 lines of actual code, and the tests by themselves are roughly 4000 lines of code). As a result, I didn’t have to manually check all kinds of functionality or write my own tests. I was able to port the tests fairly easily with just a couple of glitches (like the aforementioned base64 issue). From there I felt confident that the tests were testing the code properly.

In the past couple of days since I completed the ‘project’, I ported some of the dateutil recipes from the 2nd edition of the book to Python 3, just for some extra assurance. I ported 5 recipes in under an hour. They all worked.

Had You Ported Stuff Before?

Well, to be honest most of my Python 3 experience (pre-book, that is) is with writing new code. To gain a broader exposure to Python 3, I’ve also done lots of little code golf-type labs, impromptu REPL-based testing at work for things I’m doing there, etc. I have ported a couple of other small projects, and I have had to solve a couple of issues, but it’s not like I’ve ever ported something the size of Django or ReportLab or something.

The Best Part?

I had never seen dateutil in my life.

I had read about it (I owned the Python Cookbook 2nd Edition since its initial release, after all), but I’d never been a user of the project.

The Lessons?

  1. This is totally doable. Stop listening to the fear-inducing rantings of naysayers. Don’t let them hold you back. The pink ponies are in front of you, not behind you.
  2. There are, in fact, parts of Python that remain almost unchanged in Python 3. I would imagine that even Django will find swaths of code that “just work” in Python 3. I’ll be interested to see metrics about that (dear Django: keep metrics on your porting project!).
  3. Making a separation between text and data in the language is actually a good thing, and in the places where it bytes you (couldn’t resist, sorry), it will likely make sense if you have a fundamental understanding of why text and data aren’t the same thing. I predict that, in 2012, most will view complainers about this change the same way we view whitespace haters today.
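That separation, in a nutshell:

```python
# Python 3 draws a hard line between text (str) and data (bytes).
# Crossing the line always goes through an explicit encode/decode
# step, rather than an implicit, silently-guessed conversion.
text = "café"
data = text.encode("utf-8")  # text -> data: explicit

assert isinstance(data, bytes)
assert data.decode("utf-8") == text  # data -> text: also explicit
```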

“I Can’t Port Because…”

If you’re still skeptical, or you have questions, or you’re trying and having real problems, Dave and I would both love for *you* to come to our tutorial at PyCon. Or just come to PyCon so we can hack in the hallway on it. I’ve ported, or am in the process of porting, 3 modules to Python 3. Dave has single-handedly ported something like 3-5 modules to Python 3 in the past 6 weeks or so. He’s diabolical.

I’d love to help you out, and if it turns out I can’t, I’d love to learn more about the issue so we can shine a light on it for the rest of the community. Is it a simple matter of documentation? Is it a bug? Is it something more serious? Let’s figure it out and leverage the amazing pool of talent at PyCon to both learn about the issue and hopefully get to a solution.