family.append(daughter)

So, I took some time off starting last Wednesday. My daughter (and first-born child) was born on Thursday, May 24. It’s a completely overwhelming experience. Oh sure, I was like a lot of other guys, thinking I’d pass out at the sight of a baby being born, thinking it would be really cool to have a kid and all that stuff. Turns out that it’s impossible to predict how you’ll react to the birth of your child. Whatever you say you’re going to do – you’re not going to do that. Even if you *try*. It just won’t happen.

I’m someone who is known to be, in an extreme sort of way, “even keeled”. That’s a nice way of saying that I don’t show a lot of emotion in visual or dramatic ways. I’ve always been able to get my feelings across using words – either verbal or written – so I guess I developed that instead of the more outward visible signs of emotion. But when my daughter was born? Not so much. I was a blithering idiot, sobbing with happiness, excitement, and all kinds of other emotions that I don’t even know how to describe.

So anyway, I’ll get back to blogging now – just wanted to explain the hiatus :-D

Python: “Which of these variables is None?”

So I’m still getting past the neophyte stage with Python. The good news is that every time I need to apply some programming principle to something I’m doing in Python, it’s really very easy. Heck, I practically fell accidentally into OO programming on one of my Python projects, and everything else has been a dream so far as well.

One problem I was having with one of my projects was kind of an academic issue: I wanted a very concise way to tell what the status of two variables was. My code needed to do different things based on each possible scenario. After tossing it about in my brain for a while, I decided to take the easy way out and write an “if” block to handle this. Here’s the “if” block:


if a == '' and b == '':
    indicator = 0
elif a != '' and b == '':
    indicator = -1
elif a == '' and b != '':
    indicator = 1
else:
    indicator = 2

This does the job, but it’s long, and not the most elegant of possible solutions. Some people online started getting into some rather obfuscated solutions to the issue, and when people started asking about bit-shifting (which, for the record, Python does support via the << and >> operators), I decided to just leave it and move on. After all, someone else is going to have to read my code at some point!

Then I spoke to a fellow member of my local LUG and Python enthusiast Dr. Jon Fox, and he tossed out this solution that is so cool, and so readable, and so elegant, that it just made me want to slash myself in half with a bread knife. Here it is:

whoisNone = lambda a, b: (a is None) + 2 * (b is None)

The above can, of course, be written as a function that takes any two variables you care to pass to it. The one-liner returns 0 if neither argument is None, 1 if the first is None but the second is not, 2 if the first is not but the second is, and 3 if both are None.

This is precisely what I was looking for. Thanks Dr. Fox!
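Written out as a regular named function (the name here is my own choice, not from Dr. Fox), the same trick looks like this:

```python
def who_is_none(a, b):
    """Return 0 if neither argument is None, 1 if only the first is,
    2 if only the second is, and 3 if both are None."""
    # Booleans are just 0/1 in arithmetic, so this encodes both
    # checks into a single integer.
    return (a is None) + 2 * (b is None)
```

Calling it speaks for itself: `who_is_none(None, 'x')` gives 1, and `who_is_none(None, None)` gives 3.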


Regular Expressions with Python’s “re” Module

If you’re moving over from PHP, Perl, Ruby or something similar, don’t be intimidated by all the Python regular expression documentation. It doesn’t really have to be complicated or even all that much different from Perl (though it can be, if you want to go there).

Here’s a search and replace I ripped out of a Perl script for use in the Python script that replaces it. It ensures that any MAC address fed to it has two digits in every field. So, for example, this would change “0:c:e:fe:d0:ae” to “00:0c:0e:fe:d0:ae”. This is good if you need to insert the value into a PostgreSQL column of type ‘macaddr’, or you just want to be consistent.

Perl: $macaddr =~ s/\b([0-9a-f])\b/0\1/ig

Python: macaddr = re.sub(r'(?i)\b([0-9a-f])\b', r'0\1', macaddr)

There are a few differences when moving to Python. First, there’s only one assignment operator in Python (to my knowledge – comment to correct me if I’m wrong) – so we’re calling a function instead of using “=~”. That’s fine with me. Fewer cryptic symbols are better.

Second, part of calling a function also means that the operation is explicit: we’re doing substitution using the “sub” method. There’s no “s/” like there is in Perl.

Third, there’s also no “/ig” in Python like at the end of the Perl example. The “i” means “ignore case”, and in Python, that indication (the “(?i)”) goes next to the pattern in question instead of at the end of the line. That’s easier for my brain to parse. I like to read what I’m doing in my native language (English), and if you think in that context, then reading regexes in Perl is kinda like reading in German, not English.

Finally, calling a function also means that the pattern and the string you want to apply it to are separate arguments to the function, instead of things delimited by more “/” characters. In fact, in Python, the only slashes that appear at all are inside the regular expression itself; none of the actual language syntax contains a slash.

Though there are lots of differences in just this one very simple example, I’ll also note that the actual regex syntax itself (the part inside the quotes in the Python example) is not different at all, except for the addition of the “ignore case” operator “(?i)” in the Python version!
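To make the Python version self-contained, here it is as a small runnable function (the function name is my own):

```python
import re

def pad_mac(macaddr):
    """Zero-pad any single-digit field in a MAC address,
    e.g. "0:c:e:fe:d0:ae" becomes "00:0c:0e:fe:d0:ae"."""
    # \b...\b matches a lone hex digit between word boundaries
    # (i.e., a one-character field); (?i) makes it case-insensitive.
    return re.sub(r'(?i)\b([0-9a-f])\b', r'0\1', macaddr)
```

Fields that already have two digits are untouched, so it’s safe to run over addresses that are already well-formed.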


Using TRUNCATE to empty a PostgreSQL database

This is not something that’s any big hack or secret, but emptying a database of all data without dropping the structure along with it is one of those tasks that I do just often enough in my development work to be annoying. If you ask me, there should just be a big ol’ “EMPTY” statement you can apply to an entire database.

You *can* empty a table in PostgreSQL using an unqualified DELETE statement, by the way – but it takes longer because it does a full scan of each table. TRUNCATE just nukes everything – and if you feed it the ‘CASCADE’ keyword, it’ll nuke everything in its path as well. This is nice, because I have a bunch of tables in my database, but I know that there is a relatively small collection of tables that everything else links to, so I can pass about 10 table names to TRUNCATE, and the ‘CASCADE’ keyword will wipe out about two dozen tables.
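For example (the table names here are made up for illustration), truncating a few parent tables and letting CASCADE take care of everything that references them:

```sql
-- Empties the named tables plus every table with a foreign key
-- pointing at them; the table structure itself is left intact.
TRUNCATE TABLE customers, orders, inventory CASCADE;
```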

While I love writing code that creates stuff, writing code to do demolition is somehow amazingly satisfying as well.


Can Technology Kill the NAR?

The NAR is the National Association of Realtors. They’re the main lobbying interest for pavement-pounding brick-and-mortar real estate agents. Of course, this is problematic for web-based real estate outfits like Redfin, because the NAR has the required influence to get legislation passed that can make life as a web-based real estate sales organization difficult, if not impossible.

NAR, Technology, and Legislation

The question is, at what point is the NAR going to step on its own toes? Does the NAR really believe that technology will play no part in the future of real estate? Well, of course not! In fact, the NAR is the keeper of the Multiple Listing Service (MLS), which (when it became available) was a major technological advancement that greatly aided real estate agents in sharing data with other realtors. It made seller agents more productive because it provided a means of sharing new listings with an audience that basically encompassed the entire realty world. Since seller agents and buyer agents split commissions, it was now much easier to make your entire living based solely on getting listings. It made buyer agents more productive by enabling them to search listings on behalf of interested buyers, and to stay up to date on new listings.

There’s also the realtor.com website, which is an interface to the MLS and (according to the site itself) “the official site of the National Association of Realtors”. So if it’s not about doing without technology, then it must be about ownership of the data.

Of course! After all, the NAR is really only able to justify 6% commissions if it is the sole keeper of the *inventory* of things for sale, *and* it can influence legislation as it applies to how real estate transactions take place. For example, they’ve had some success in making it illegal for brokers to offer rebates. This makes things very hard for Redfin, specifically, because a part of their model refunds a part (actually, most) of the commission it splits with a third party agent back to their client.

What if there were no realtors? (aka Real Estate as Travel industry)

Of course, the NAR can only (currently) control legislation regarding real estate transactions that involve registered NAR realtors, so going forward, if there’s a compelling service that becomes a de facto standard marketplace for real estate (or at least, some subset of “real estate” proper), it would seem that the NAR would have two choices: find a way to justify their existence by representing a larger portion of those involved in the transactions (like the buyers and sellers themselves), or find a way to pass legislation that *requires* that realtors be involved in every transaction. Sounds impossible, but we’ve seen some pretty wacky legislation in the past, haven’t we?

I don’t really think they’ll pass the legislation needed to guarantee work for pavement-pounding realtors. I also don’t think the NAR is a breeding ground for the kind of progressive, independent thought required to take a new direction. In all likelihood, what you’re looking at when you look at the NAR and the real estate industry in general is somewhat similar to the travel industry in 1998. It’s an organization and an industry that doesn’t even know what’s about to happen. It’s an industry that believes, like the travel industry did, that “real estate is all local”. It’s an industry that, just like the travel industry, keeps its technology largely to itself. It’s an industry, just like travel, that used technology to empower agents, not customers. It’s an industry, like travel, that has largely failed to recognize that the emergence of technology and the web (unbeknownst to them) didn’t just make advancements *possible* – it *necessitated* a change in how they interfaced with customers, suppliers, and each other.

The young whippersnappers, with their new-fangled whirligigs, are going to change the real estate market, both for their own benefit and the benefit of the customers.

The Customer Service Myth

And if you’re a realtor, you can save all the happy horse crap about customer service. I’ve heard it all before, believe me. I’ve purchased a couple of houses myself, and grew up in the business besides. I’ve had lots of friends and family who have worked in various parts of the industry, and while I understand that customer service is *supposed* to be the lifeblood of the industry, the reality is quite different. Customer service, and all of the things that an agent does for either side of the transaction, are largely “feelgood” services that are, at best, smoke and mirrors – and that assumes the realtor’s intent is good. In many cases, it’s just an outright scam.

How many realtors are “certified staging experts”? Ever look into all of those accreditations and certifications that started to magically appear something like 10 years ago? They’re all invented, and issued, by the very companies whose agents hold them. No conflict of interest there, eh?

And what about research? When’s the last time you went to a showing for a house only to find that it failed to meet most if not all of the criteria you set out when you first spoke to your realtor? Realtors *rarely* pre-screen houses by actually *going* to the houses anymore.

On the seller side, how do you think the realtor comes up with a price to sell your house at? They come to you with “comps”, which a lot of times amounts to houses that are similar to yours only in geographic location. In many markets, a couple of blocks’ difference is significant enough to render them useless. Real estate agents are not real estate appraisers, and are not trained in the various appraisal methods practiced by appraisers. What’s more, they don’t really care about any deeper notion of “value”. What they really want to do, more times than not, is “price to move”. They want to price your house so that it looks really attractive compared to the rest of the market, so it will sell more quickly, so they can collect their commission more quickly.

“But doesn’t that mean a smaller payday for them?” Well, if we’re talking about the transaction in a vacuum, yeah, it does. But if a realtor can sell 8 houses per month by pricing them below the market, they’ll make far more money than if they sold 4 houses priced at or slightly above the market.

You can actually do your own comps if you take a little time to understand those things that are relevant to pricing your house and comparing it to another house. It’s not rocket science, but there are some quirks that you need to know about. Get to know those, and the rest of the research can be done using online tools, including realtor.com.

Going it alone

In the end, there are lots of online tools to do just about anything you want. You can research a community, research a school system, research the housing market, even answer questions like “are there any registered sex offenders in this area?” all online. Just about 100% of this information can be had for free.

Online virtual tours are commonplace. You can now see satellite images of the house, and the entire neighborhood, where you’re interested in buying. All of the tools to research any aspect of the house and neighborhood are available. The only missing piece is an organized way to bring the buyer and seller together to settle on terms and complete the transaction.

Well, it *was* missing. Redfin and other online tools like it are working to close the gap. Check out the story on TechCrunch, which links to a great 60 Minutes piece about all of this.

By the way – I don’t think realtors are going to disappear. Just like travel agents, there will always be some 80-year-olds who haven’t figured out the internet and need some local presence to get things done, but that kinda ties the lifespan of these places to the lifespan of… really old people.

Good luck with that.


Freebase: Your database is ready!

This is going to be really frickin’ cool. There’s just no other way to put it. Maybe I’m a little too much of a data geek, because I can’t seem to sit still since receiving my email letting me know that Freebase is now in alpha, and the account I requested months ago can now be activated. I logged in and immediately started poking around. I’ve been doing that for about 48 hours straight now.

What is ‘Freebase’?

Well, the short answer is that Freebase is a public domain relational database maintained by the community. If this sounds like Wikipedia, don’t get too attached to that comparison. It’s true that Wikipedia is also maintained by its users, but that’s where the similarities end. You see, while Wikipedia stores information in a way that makes it attractive and easy for humans to find things, Freebase provides the kind of structure and relational characteristics that make it useful to application developers (programmatic access). It provides a relational database, which is typically used by programs, instead of an encyclopedia, which is used by people.

If you’re a DBA, your first thought might be that these are people who are trying to take your job. Not so. This is in no way suitable for internal, private, corporate, proprietary data. In fact, I don’t believe it’s even allowed. What it *is* good for is applications that can make use of publicly available and/or publicly maintained data. For example, a sample application called “Concierge” allows users to browse restaurants in their area by first telling the application the area they live in, and then the type of restaurant they’re looking for. The data about the restaurants is all stored in the publicly maintained Freebase database, and Concierge also provides an interface for users to add new restaurants, which adds the data back to Freebase.
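To give a flavor of that programmatic access: Freebase is queried with MQL, a JSON query-by-example language where null marks the fields you want filled in. A rough sketch – the type and property paths below are illustrative guesses, not necessarily the real schema:

```json
[{
  "type": "/food/beer",
  "name": null,
  "brewery": null
}]
```

A query like this would, in principle, return the name and brewery of every beer in the database.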

Me as a working example of a typical Freebase geek

I myself am fleshing out the Freebase “beer”, “beer style”, and “beer style category” types, in the hopes that I can provide a web interface that allows beer enthusiasts and brewers alike to know more about beer, and helps them to develop recipes for their own beer. I’m using the BJCP (Beer Judge Certification Program) published beer style guidelines to flesh things out, and then other people can come in and start associating beers with styles and breweries and all kinds of other characteristics. Having the beer style definitions held in a public place means that, as the style guidelines evolve, people who care about such things are free to make updates to the data in Freebase. This means the data used by my application is always up to date, and I never have to push out updates to users just to update their local copy of this data.

It may also mean that I can afford to make the application available to large numbers of people for free, since I won’t have to find a hosting plan that lets me house however many gigabytes of data this thing grows to. I can eventually work in all of the properties of different hop varieties, different grain characteristics, yeast attenuation rates, water profiles of different beer brewing regions, etc. Further, I don’t have to be an expert on every facet of beer, because people who care about, say, yeast attenuation are probably going to populate that data for me anyway, whether it’s specifically for my application, or one of their own.

The Future of Freebase (Pre-“Total World Domination”)

Aside from simple publicly maintained data, I think there are other implications of this model. For example, with all kinds of applications using Freebase as the back end database, and considering that users can have their own “private domain” data type definitions, it makes sense for applications to use the users’ Freebase credentials to maintain application preferences data using that domain. This would seem to make Freebase a contender for a de facto standard OpenID or CAS portal.

If the model is Earth-shaking, it stands in contrast to the current state of the actual user interface. Oh, I’m getting by just fine, but that’s more in spite of the interface than because of it. Some of the ajax-y features hurt as often as they help, navigation needs a little improvement, and there’s no way to add massive amounts of data quickly that I’ve found yet without writing code.

Further, it’s still unclear to me how they plan to foster cooperation between people who want to relate different data types. For example, I quite naturally want to list, for each beer, the brewery that makes the beer. Someone else has already created a “brewery” type, and the community has done a darn good job at fleshing out the data for that type. However, when you go and look at a brewery definition, there is no listing of the beers produced by that brewery. Freebase certainly supports the idea of a “reciprocal link” that would cause beers to show up under “brewery” entries as people add beer definitions and fill in the “brewery” property of the beer. However, there are no clear rules on how to get this reciprocal link to happen if you’re not the creator of all of the types involved.

What’s more, I’m not the only person who has created a “beer” type. Which one should the “brewery” type administrator link to? Well, this wouldn’t be as big a problem if I were allowed to add a property to an existing beer type! Then I wouldn’t have to create a competing type at all! Currently, this is not allowed. I cannot go redefining the properties associated with a type that’s maintained by someone else. As a result, in order to support properties of beer that brewers and enthusiasts care about, I have to strike out on my own and hope that in the long run, my “beer” type becomes “the” beer type.

This should really all be opened up, and people should be allowed to add properties that are submitted for approval by the type administrator. Reciprocal links should be put in a “pending” state, or maybe even a “probationary” state. These are features that would encourage more interaction between users who care about the same data, and foster a community around the data that community cares about.

I’m sure there are plenty of other things to think about as well. For example, will Freebase let me upload the Briess malt profiles, published by one of the biggest maltsters in the US? Briess may have a problem with that – but how will Freebase know without receiving a cease and desist from some friendly neighborhood lawyers? Then there are technical and financial details. Presumably, they’ll either charge applications that use Freebase for commercial gain, or they’ll have to charge for some higher service level in order to guarantee that data will be available for applications to use.

This is not a simple service. I’ll say this though: I wish I could buy stock in Freebase, if only to cash out when they are inevitably purchased by Google.


A quick overview of common grammatical mistakes

Part of what I do for a living is write. I’ve co-authored a book, written a rather large number of technical articles, and I’ve also done professional editing, tech review, and manuscript review for magazines, newspapers, and publishing houses. Also, my wife is an English teacher. In short, though I make my fair share of mistakes, I have some clue what I’m talking about with regards to grammar. So, here are what I think are by far the most common mistakes I see people make, both in formal (articles and such) and informal (IRC and such) writing:

  • there, their, and they’re - “there” is a place. “their” is possessive. “they’re”, as implied by the fact that it’s a contraction, is two words: “they are”.
  • its and it’s – Never use “it’s” unless you want to say “it is”. Any other time, use “its”. At no time should you be thinking that “it’s” is somehow possessive. Some people think “it’s” can also mean “it has”. You’re free to make your own call on that. I would personally avoid that usage – and I do, regularly. The goal of language *usage* is not to raise eyebrows. The goal should be to use language to communicate ideas, and the *ideas* should raise the eyebrows. ;-)
  • your and you’re – again, only use “you’re” when you mean “you are”. Period, end of story.
  • kernel and kernal – kernal is just not a word. It’s kernel, not kernal. Kernal is never right.
  • too and to – “too” should only be used in a place where “as well” would also fit. I would generally chalk this up to typos, but I’ve just seen it misused too consistently. Nobody misses that last “o” *every* time!
  • ‘s - There are lots of places where this is useful, but it should never be used to make something a plural. To make something a plural, you either add an “s” or an “es” – but you never add ‘s to make something plural to my knowledge. It’s used in contractions, and possessives. Not plurals.

So really, that’s it – oh! By the way – “thats” is not a word. That’s always written “that’s”. I can’t think of an instance where “thats” is useful, but I’m not all-knowing either. It would be awkward, at best.

Anyway, there really aren’t a whole lot of them, but if everyone on the internet just paid attention to *just* those things – it would be a vast improvement.

Thumbs up for Synergy

I have heard a couple of people mention this tool on IRC and mailing lists, but I never made time to try it myself for some reason. That is, until my buddy and coworker Steve gave me a quick demo of its functionality, and told me that it was brainless to install and get running. Once I saw what it could do, I ran back to my office and had it running, securely, over an SSH tunnel, in no time!

Synergy lets me sit my laptop on my desk next to my workstation, and use one keyboard and mouse with both of them. So, if I’m researching some Linux issue and I happen to find the answer, but I’m on my Mac laptop, no problem! I can copy a command line example on the Mac, drag my mouse over to my Linux workstation, paste the command line, and be on my way! Yes, it’s cross-platform. It even works on Windows. The only issue I’ve had (which Steve *didn’t* have) is screen locking. It worked on Steve’s setup but not mine. I haven’t done any troubleshooting on this because there are plenty of pretty obvious workarounds.
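For the curious, the SSH-tunnel part is just ordinary local port forwarding. A sketch, assuming Synergy’s default port of 24800 and hostnames that are obviously made up:

```sh
# On the client machine: forward local port 24800 to the Synergy
# server over SSH (-f: go to background, -N: no remote command).
ssh -f -N -L 24800:localhost:24800 user@synergy-server

# Then point the Synergy client at the local end of the tunnel.
synergyc localhost
```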

Give this a shot if you haven’t already. Very cool!


The Future of IT and IT Policy

For those who didn’t already know, I work in academia – specifically, in the Computer Science Dept. at Princeton University. Every week I attend the “IT Policy Lunch”, which is a gathering of anyone on campus who is concerned with IT Policy. It’s hosted by Ed Felten, who heads up research into IT Policy. You may already be familiar with some of the results of Ed’s work.

So this week’s lunch discussion topic was “Futurism”. About 20-30 people from around the department and the campus got together and spitballed ideas for what we think will be major IT Policy debates, technology advances, etc., over the next 5, 10, or 20 years. It was a lot of fun, and I really enjoy observing what technology-oriented people think about, and expect of, the non-technology-oriented masses. Often I think we fall into the habit of assuming that the masses will somehow care about things like being injected with RFID tracking chips.

Anyway, below is a quick rundown of some of the ideas that were being tossed around. I couldn’t remember how to explain all of the things that were written on the board, so I’ve also included a picture of the board itself :-D Enjoy

[Photo: IMG_1066.JPG – the whiteboard from the discussion]

5 years:

  • big carriers like Comcast and Verizon will be relegated to being solely a delivery mechanism, as most digital content will be delivered over the internet.
  • cyber-estate issues: what happens to your data when you die? Are licenses for media, data, software, etc. inheritable?
  • one person only half-jokingly predicted a Google antitrust suit within 5 years.
  • a suggestion about how taxes might come into play with regards to the internet and internet purchases leads to another discussion about how the “tax the ‘net” model could be used as leverage, or to scare people into compromising toward a tiered internet model.
  • As more digital devices come out, and more technology fits in the palm of our hands, not only is battery life (and, more generally, energy) an issue, but recycling is going to become one as well.
  • Speaking of recycling, data retention/deletion policies are predicted to spark more and hotter debates in the coming few years as more court cases rely upon subpoenas for digital communications or other digitally-stored data.
  • Game addiction will become an important issue that requires attention.
  • client side applications will continue to move to internet-based delivery, a la Flickr for photo management instead of iPhoto, or Google Docs for word processing instead of Word.
  • digital surveillance, privacy, and all kinds of “Big Brother” type predictions were prevalent, painting a bit of a gloomy five year outlook. Sadly, this outlook did not seem to improve in the 10 and 20 year predictions.
  • identity theft and other types of digital fraud will become more prevalent and bigger parts of IT Policy and legislative debates.

10 years:

  • genomic privacy will begin to be an issue as DNA begins to be used for more day-to-day identification, especially in the healthcare field.
  • we’ll begin to see more lawsuits along the lines of “my cell phone gave me cancer” as more and more technology penetrates a greater and greater percentage of the global population.
  • the cost of music falls to zero, and movies follow shortly behind. Social implications are briefly discussed. It is mentioned, for example, that movie theaters will cease to exist – but movies, it is countered, are also a social activity. Whether this says something about how technology affects our social interactions isn’t really touched on, but there is mixed opinion about the end result.
  • ICANN will not exist, and if it does, it will look nothing like what it looks like now.
  • e-paper will finally become widely used. There are a couple of comments about how this might affect things like IRS tax forms, and banking/checking.

20 years:

  • My own prediction was that there would be internet access for most of Africa in 20 years. I was saddened to think it would take this long when I said it. I was more saddened by the fact that the idea drew one of the most pessimistic reactions of the entire discussion. :-/ The idea of ‘access as human right’ is briefly discussed.
  • My other prediction for 20 years out got a bit of a better reaction – I said that in 20 years, there will be a large number of people in Congress who have some clue about how technology works.
  • Someone said we might need a “.earth” TLD to differentiate Earth-bound entities from those of other planets.
  • Ok, so most of my predictions were for 20 years. I suggested that by then, nanotech will make it possible for us to power devices we might be carrying around then using our clothing or ‘solar satchels’ that collect power from the sun.
  • human genomes will be the de facto standard in identification.
  • quantum computing will be a practical reality.
  • offloading things you currently memorize into some storage device, from your brain, will be a reality.


Not feelin’ the Joost

Joost is getting ready to launch out of beta and unleash itself unto the masses. I participated in the beta, and I have to say that there are some good things that I think will result from Joost, but I think ultimately Joost will fall from favor within 12 months of its public launch.

First, the good: I would personally love to see a cross platform portal that makes my computer, whether it’s running Mac, Linux, or some other OS, into a television. To me, it just makes sense that someone who is *not* a cable or phone company design the interface for viewing, browsing, and managing media. I mean, have you *seen* the interfaces for doing this stuff from either Comcast or Verizon’s FiOS TV? They’re both horrible interfaces, barely a step up from an old green-screen, which is, I’m sure, what the workers of those companies are using to perform more tasks than they’d like to admit.

The Joost interface isn’t *wonderful* mind you. It’s chock full of mystery meat navigation elements, but even that is better than Verizon or Comcast, because Joost at least attempts to empower the user by putting lots of data and features close at hand.

In addition, I’d like to see the likes of Verizon and Comcast lose some of the control they have over the *content*. Sure, they’ll probably always have something to do with its delivery – most of the available mediums for delivering content are owned by either Comcast or Verizon – but neither company is particularly good at giving us anything other than the digital media version of the McDonald’s value menu. At least McDonald’s gives you the option of choosing a la carte (and being unceremoniously ripped off – still – at least it’s an *option*).

Now the bad: Joost’s image quality is complete crap. I’m not a digital media guru who can tell you all there is to know about the origin of the pixel or anything, but I know this: NBC, ABC, and CBS all have browser-based media players, and every one of them kicks Joost’s *ass* where picture quality is concerned. No contest. Further – it’s not unusual to see Joost video/audio fall *way* out of sync. Seriously, talking strictly from a media quality perspective, Joost isn’t a whole lot different from having a fat client interface to YouTube. One thing that absolutely amazed me is that Joost’s picture quality actually *stays* bad even if you exit full screen mode! It’s like they’ve done work to *ensure* that the picture stays bad no matter what size the viewing window is!

I wonder if Joost is doing this on purpose during the beta so they can later justify charging us some outrageous amount of money to get some upgraded version of Joost “Now with acceptable resolution!” or something.

So, I’m glad that Joost has set a bare minimum benchmark that other competitors know they have to beat. I just wish they would’ve set the bar a bit higher.
