The promise of Drizzle

I got to actually speak to Brian Aker for maybe a total of 5 minutes after his micro-presentation about Drizzle, which took place at the Sun booth at OSCON 2008. I was a bit nervous about asking my questions out loud, because the things I had wondered about were things I really hadn’t seen much discussion of out on the intarweb. I’m happy to report that, if Brian Aker is to be considered any kind of authority (hint: he is), my ideas are not completely ridiculous, so maybe I’ll start talking a bit more about them.

UPDATE: lest anyone get the wrong idea, Brian Aker did, in fact, state that views are not on the short list of priority items for Drizzle, but he did say that views are one of the features he finds most useful, and that they’d probably be higher on any future priority list than, say, stored procedures. So, take my notes below about views with a grain of salt. It’s not necessarily “coming”.

My three ideas were these:

  1. Materialized views: my experience with views in MySQL is that they just plain old don’t scale well compared to other database systems I’ve used. I used Sybase in 2000 and views scaled better for me then than MySQL views do now, and I’m using them in mostly the same way (which is to say that I’m not using them to do evil things – I’m using them in the way most of the database community agrees they should be used). In the past, I thought materialized views were “nice to have”, but now that I’m working with much larger data sets, without a need for my reporting to be 100% real-time, materialized views would be great. To be honest, I win either way with Drizzle in all likelihood, as Brian Aker has proclaimed that views in Drizzle will not look like views in MySQL. He confirmed that materialized views would be a great thing to have a closer look at, and I was happy with that reply.
  2. “Query Fragments”: I didn’t know they were called query fragments. What I explained to Brian Aker was that I wanted to harvest subsets of cached result sets. So, for example, if I do a date range query (I do that a lot), and the result set is cached, and then my app does another query which is identical except that the date range is a subset of the one in the cached result set, I’d like to grab that data from the cache. My actual question to Brian was “can this be built in such a way that it would be a reliable, trustworthy result coming from some middle-tier component?” And that’s when he told me about query fragments. He said that this idea was not at all crazy, and was also worthy of further discussion.
  3. Vertical data-padding. Well… that’s what I’m calling it. Here’s the logic: I have lots of temporal data. Tons of it. My queries are largely things like “for this user, show me all of the foo’s bar’d — group by day — where day is between these two days”. MySQL is good at these kinds of queries (assuming you index well and, basically, that you’ve read “High Performance MySQL” a couple of times… per edition), but there’s something missing. When I get the result set back, I don’t get a record for any day on which no foo’s were bar’d. This is perfectly within the realm of reasonable behavior, but I’ve never known MySQL to let things like being reasonable get in the way of helping its users. My suggestion was that MySQL, in order to do the comparisons for the “WHERE” clause, *MUST* already know what dates fall within the range in, say, a “BETWEEN” statement. It would then seem logical to have some way to tell MySQL “if one of those dates has no value, return a NULL” (or 0, or an empty string, or something). I don’t know the real name for this proposed feature, but I call it “vertical data-padding” because you’re padding columnar data, which, in my mind, is visualized vertically. By the same token, I’d call something like a “GROUP_CONCAT” horizontal data-padding. I explained to Brian that one way I’ve seen this handled is to have a lookup table of static dates that gets joined to the main data table (there’s a sketch of this right after the list). You do a left outer join with the date lookup table on the left, and you get back a row for every date whether there’s data in the right-hand table or not. This works when ‘n’ is small (like everything else), but it’s horrendous when you have, say, 10 million rows to deal with. Then you’re in what I call the “No Bueno Zone”. Brian seemed interested in the problem, and I’ll be discussing it with him further when his life settles down a bit (he’s been at OSCON, and he’s still settling in at Sun).
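
To make that concrete, here’s a rough sketch of the date lookup table workaround I described to Brian. None of this is Drizzle-specific, and the table and column names are made up purely for illustration:

```sql
-- Hypothetical calendar table, pre-populated with one row per day.
CREATE TABLE calendar (
  cal_date DATE NOT NULL PRIMARY KEY
);

-- Hypothetical table holding the "foo's bar'd" events.
CREATE TABLE activity (
  user_id       INT NOT NULL,
  activity_date DATE NOT NULL,
  bars          INT NOT NULL,
  KEY idx_user_date (user_id, activity_date)
);

-- "For this user, show me all of the foo's bar'd, grouped by day, where day
-- is between these two days." With the calendar table on the left of the
-- outer join, days with no activity still come back, just padded with 0.
SELECT c.cal_date,
       COALESCE(SUM(a.bars), 0) AS bars_that_day
FROM calendar AS c
LEFT OUTER JOIN activity AS a
  ON a.activity_date = c.cal_date
 AND a.user_id = 42
WHERE c.cal_date BETWEEN '2008-07-01' AND '2008-07-31'
GROUP BY c.cal_date;
```

One gotcha: the user_id filter has to live in the join’s ON clause. Move it into the WHERE clause and the padding disappears again, because the filter throws away the NULL-extended rows.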

I want to thank Brian Aker for his enthusiastic attitude toward helping others, and for all of the work he pours into all of this stuff. I also want to say that, for me, Drizzle is really exciting, not just because the feature set is more or less cherry-picked to map onto what I do for a living, but also because it represents an opportunity to get ideas in the door before a lot of legacy cruft makes it impossible to implement these somewhat idealistic features without rocking the boat for the millions of users already running it in production.

  • http://www.sheeri.com Sheeri

    Brian,

    Your idea of “vertical data-padding” is interesting. I think the solution to your dilemma might also be the solution to a “find whether ranges overlap” dilemma as well… i.e., if you have two fields with “start date” and “end date” and you want to find whether that range overlaps with another date range…

  • http://farmdev.com/ Kumar McMillan

    I’ve found that a better alternative to materialized views is to maintain “fact” tables with post-insert/update triggers of some kind. The fact table would contain mainly foreign keys to all the tables you need data from, with an index on each foreign key column (aka a star schema, because the ERD looks like a star). This avoids the duplicated data you get from materializing a view, it’s self-maintaining, and you get all the speed boosts you would normally get by materializing a view, if you set everything up right.

    http://publib.boulder.ibm.com/infocenter/rbhelp/v6r3/index.jsp?topic=/com.ibm.redbrick.doc6.3/wag/wag32.htm
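
    A rough sketch of what I mean, with made-up table and column names, might look something like this:

    ```sql
    -- Hypothetical source table that the application writes to.
    CREATE TABLE events (
      user_id INT NOT NULL,
      date_id INT NOT NULL,
      item_id INT NOT NULL,
      qty     INT NOT NULL,
      payload VARCHAR(255)
    );

    -- Fact table: mostly foreign keys (each one indexed) plus the measures
    -- you report on; the dimension tables hang off the *_id columns.
    CREATE TABLE fact_events (
      user_id INT NOT NULL,
      date_id INT NOT NULL,
      item_id INT NOT NULL,
      qty     INT NOT NULL,
      KEY idx_user (user_id),
      KEY idx_date (date_id),
      KEY idx_item (item_id)
    );

    -- A post-insert trigger keeps the fact table current as rows arrive,
    -- instead of periodically re-materializing a view.
    CREATE TRIGGER trg_events_ai AFTER INSERT ON events
    FOR EACH ROW
      INSERT INTO fact_events (user_id, date_id, item_id, qty)
      VALUES (NEW.user_id, NEW.date_id, NEW.item_id, NEW.qty);
    ```

    Reporting queries then hit fact_events (plus whatever dimension tables they need) rather than a view over the source tables.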

  • m0j0

    @kumar

    I’m familiar with star schemas, having spent some time in the data warehousing world. If you are in a position to go this route, then it would probably be the one I’d recommend. For my needs, doing this brings up a few issues:

    1. A star schema typically involves either doing joins or running multiple queries to get at all of the requisite data. That’s usually not a problem, except that joins against tables with several million rows don’t perform well in MySQL. While many will say that denormalization is what all the cool kids are doing, I am *not* a fan of it, and I blame MySQL for simply not performing well with huge amounts of data without all kinds of wizardry.

    2. I avoid triggers like the plague, because keeping up with the gotchas regarding triggers and replication is not something I want to spend time doing, or remembering to spend time doing.

    3. Post-update/insert triggers move some problems around instead of solving them if you have a “write mostly” database environment, which is what I have.

    As an aside: I think I have a good understanding of your proposed solution, and I’ve seen similar things deployed as long ago as 2001 (so it’s a pretty proven methodology), but with huge data we’re back to “everything works great and performs well when ‘n’ is small”. Even setting that aside, I’m not sure I understand why you say this is necessarily “better”, even with smaller data, and I’m not sure I understand your point about views being materialized with duplicate data. Could you elaborate on those points?

  • http://optimmysql.blogspot.com Parvesh Garg

    Just last week I was also asked about “query fragments” in one of my sessions on MySQL. I think it’s a good idea if used (and/or implemented) with caution.

    For me, a policy along the lines of “no query having more than 5-10% (or a configurable percentage) of a table’s rows should be cached” would work.


  • http://www.declan.com/frank Frank Flynn

    One problem with “vertical data-padding” is the scale – if I’m studying genetics in a fruit fly population, hours might be more appropriate; if I’m studying geology, perhaps centuries. Should MySQL support every possible scale?
    You could create a table of dates at the granularity you were interested in (in your case it sounded like days) and outer join to it. This gives you several other important advantages.

    I wrote a short tutorial on this at
    http://www.declan.com/frank/projects/dates/index.php
    look about two-thirds of the way down to see the specific application for your date-padding issue.

  • m0j0

    @Frank

    The scale doesn’t really matter in terms of the code involved. You should be free to vertically pad temporal data using whatever granularity is supported by the engine. So, in other words, if MySQL supports date comparison based on day, week, month, year, or century (I’m not aware of century support, but hacks can be had), you should be able to vertically pad that data, because in order to do a comparison, MySQL already has to know what values are valid within the range.

    As for the join, it’s discussed in the post, along with the issues that arise from it (the most obvious one in my case is, ironically, “scale”-related).
