Feedback and Boredom Result in 35% Performance Boost for Loghetti

Well, I got some feedback on my last post, and I had some time on my hands tonight, and Python is pretty darn easy to use.

As a result, loghetti is making great strides in becoming a faster log filter. To test the performance in light of the changes I’ve made, I’m asking loghetti to filter only on HTTP response code, and to return only a count of matching lines. I chose the response code because I happen to know it causes loghetti to skip a lot of processing that once was done per-line on every run, but is now done lazily, on an as-requested basis. There’s no reason to go parsing out dates and query strings (two costly operations when you’re dealing with large log files) if the user just wants to query on response codes.

Put another way: “Hey, I only want response codes, so why should I have to wait around while you process dates and query strings?”
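Here’s the shape of what “lazy” means here, as a minimal sketch (this isn’t the actual loghetti code; the class and attribute names are made up for illustration):

import datetime

class LazyLogLine(object):
    def __init__(self, ip, raw_date, request, code):
        # Cheap fields are stored as-is and can be compared directly.
        self.ip = ip
        self.code = code
        # Expensive fields stay raw until somebody asks for them.
        self._raw_date = raw_date
        self._request = request
        self._date = None
        self._query = None

    @property
    def date(self):
        # Only pay for strptime if a filter actually needs the date.
        if self._date is None:
            self._date = datetime.datetime.strptime(
                self._raw_date, '%d/%b/%Y:%H:%M:%S')
        return self._date

    @property
    def query_string(self):
        # Likewise, only pick apart the query string when it's requested.
        if self._query is None:
            url = self._request.split(' ')[1]  # e.g. "GET /p?a=b HTTP/1.0"
            self._query = url.partition('?')[2]
        return self._query

A run that filters only on code never touches .date or .query_string, so those parsing costs vanish for that query.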

So, here’s where I was when this little solo-sprint started:

strudel:loghetti jonesy$ time ./loghetti.py --code=404 --count 1MM.log
Matching lines: 10096


real 5m52.103s
user 5m35.196s
sys 0m3.214s

Almost 6 minutes to process one million lines. For reference, that “1MM.log” file is 246MB in size.

Here’s where I wound up as of about 5 minutes ago:


strudel:loghetti jonesy$ time ./loghetti.py --code=404 --count 1MM.log
Matching lines: 10096


real 3m53.350s
user 3m50.498s
sys 0m1.641s

Hey, looky there! I even got the same result back. Nice!

Ok, so it’s not what you’d call a ‘speed demon’, especially on larger log files. But testing with a 25MB, 100,000-line log using the same arguments took 25 seconds, and at that point it’s at least usable. I’m actually going to be using it for offline processing and reporting, on a machine larger than my first-generation Intel MacBook Pro, and for that type of thing this works just fine. Besides, it’s easier to run this than to sit around thinking about regular expressions and shell scripts all day.

I’m still not pleased with the performance, especially for simple cases like the one I tested with. I just ran a quick ‘grep | wc -l’ on the file to get the exact same results, and it finished in about half a second! Sure, I don’t mind trading off *some* performance for the flexibility this gives me, but I still think it can be better.

For now, though, I think I might rather add s’more features, like comparison operators other than “=”, or support for ranges of dates and times.

  • Toneby

    “grep | wc -l” isn’t the optimal way; use “grep -c” instead and you should get even shorter times, since there are no extra processes and pipes involved. Great that you’re getting your log checker more optimized. I know it sucks waiting around for results when investigating large logfiles; I fairly often have to do it with multi-gigabyte files.
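    For instance (the pattern here is just a stand-in; matching the literal string ‘ 404 ’ isn’t quite the same as matching the response-code field):

    grep ' 404 ' 1MM.log | wc -l   # spawns grep and wc, joined by a pipe
    grep -c ' 404 ' 1MM.log        # one process; grep counts the matches itself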

  • http://www.doughellmann.com/ Doug Hellmann

    This article should offer some other tips for boosting performance: http://effbot.org/zone/wide-finder.htm

  • http://kentsjohnson.com Kent Johnson

    Did you post the revised code?

    In _ApacheLogFileGenerator.Generator:
    – hoist the lookup of self.r.match out of the loop
    – Create the ApacheLogLine with
    log_line = ApacheLogLine(*m.groups())
    to cut out a bunch of method calls.

    BTW you could simplify this code by making Generator be ApacheLogFile.__iter__; the nested class and delegation from __iter__ are not needed.
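    Putting those together, it would look roughly like this (I’m guessing at the regex and the surrounding class, so treat it as a sketch rather than your actual code):

    import re

    # Simplified stand-in for the real log-line regex.
    LINE_RE = re.compile(r'(\S+) (\S+) (\S+) \[([^\]]+)\] "([^"]*)" (\d+) (\S+)')

    class ApacheLogLine(object):
        # Stand-in for the real class: one *args constructor instead of
        # many separate per-field method calls.
        def __init__(self, *fields):
            self.fields = fields

    class ApacheLogFile(object):
        def __init__(self, filename):
            self.log_file = open(filename)
            self.r = LINE_RE

        def __iter__(self):
            # Generator is now __iter__ itself: no nested class, no delegation.
            match = self.r.match  # hoist the attribute lookup out of the loop
            for line in self.log_file:
                m = match(line)
                if m:
                    yield ApacheLogLine(*m.groups())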

    BTW for your grep test did you use the same regex or did you just grep 404?

  • http://www.protocolostomy.com m0j0

    Doug — Thanks – that’s a great article, as it talks about these optimizations in the context of *exactly* what I’m doing. Thanks again!

    Kent — I haven’t posted the revised code. I was hoping that the apachelogs author would make his project public so I could just pass on improvements and have them accepted or not instead of maintaining a separate version distributed with loghetti. In the meantime, I was hoping to pull most of my custom code *out* of that module so that there aren’t duplicate modules flying around. I’ll just put it somewhere in the Filter class, which is what I started doing to implement the ‘lazy’ features.

    Thanks for the tips above – they mirror some thoughts I’m having after reading the article linked by Doug. There’s lots of great stuff in there. Gimme another week, and I’ll post the revised code.

  • http://kentsjohnson.com Kent Johnson

    A smarter regex makes a huge improvement. The current regex has to do a lot of backtracking because of all the .* matches. Try this one:
    re.compile(r'(\d+\.\d+\.\d+\.\d+) ([^ ]*) ([^ ]*) \[([^ ]*) [^\]]*\] "([^"]*)" (\d+) ([^ ]*) "([^"]*)" "([^"]*)"')
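
    For example, against a made-up log line:

    import re

    pattern = re.compile(
        r'(\d+\.\d+\.\d+\.\d+) ([^ ]*) ([^ ]*) \[([^ ]*) [^\]]*\] '
        r'"([^"]*)" (\d+) ([^ ]*) "([^"]*)" "([^"]*)"')

    line = ('192.168.1.10 - - [10/Oct/2008:13:55:36 -0700] '
            '"GET /index.html?foo=bar HTTP/1.1" 404 2326 '
            '"http://example.com/start.html" "Mozilla/5.0"')

    m = pattern.match(line)
    print(m.group(6))  # prints the response code, '404'; no backtracking blowup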

    I have a few smaller improvements, too. Drop me an email and I will send them to you; it is a bit much for a comment.