OmniFocus and the Way of the (Support) Ninja

Linda asked lil' ol' me to provide the second post in our ongoing OmniFocus: What We've Learned So Far series. Whether she will come to regret this, only time will tell. Without further ado, the post beginneth thusly:

Okay, I'm long-winded; sue me. Before I can really tell you how well the OmniFocus test process is going, though, I feel like I have to supply a bit of background on how we've handled this process for previous projects. It went a little something like this:

  1. Development team produces build of application OmniFoo they're mostly happy with.
  2. Marketing Weasel writes press release; webmaster (née Web Editor) updates site with brand-new OmniFoo Beta page. We push new version of website.
  3. VersionTracker, etc. pick up on new build.
  4. Support Ninjas are crushed under a big pile of electronic mail for the next three months. All that is heard from them is a soft but desperate honking noise. Think baby penguin at the bottom of a deep ice crevasse.
This has a couple of negative effects. First of all, when you're buried in raw oscillating electrons up to your neck, it's really hard on the skin; not at all like that 'drink from the glowy pool of water' scene in Tron would have you believe.

More seriously: we get plenty of eyeballs on the new application, which is a good thing. Unfortunately, we also get all those eyeballs on the new app at the exact same time. So that forehead-smackingly obvious broken thing (which, of course, we failed to catch in the gajillions of hours we spent staring at the app before we pushed it out) gets reported 200 times.

Now, the first report of an issue? Good. The tenth? Gold: now we know this isn't random gremlin activity; there is something here we need to figure out. That holds up until, oh, somewhere between 30 and 50 reports. Beyond that, it's a problem we know about and know we need to fix, and the additional utility of each report drops off fast. The time it takes the Ninjas to process them doesn't, however.

The result: stressed-out Ninjas, frustrated engineers (who aren't getting reports on problems in the newest builds, because we're still working through launch-day reports), and folks with the test builds wading through issues we haven't fixed because we're still sorting and writing up the old ones.

In short, it works, but it's painful for everyone involved. So this time, we did something better. The process this time:
  1. A couple of months before we're ready to start testing, let folks know we'll be ready to start testing in a couple of months. Set up an email list for them to join if they'd like to participate.
  2. Produce build of application we're ready to start testing.
  3. Make build available to some of the folks on said list.
  4. Fix the problems they find, including forehead-smackers.
  5. Return to step 3, above.
Advantages: many. We get feedback in manageable quantities. Testers get fixes that bear some resemblance to their reports. Support Ninjas get fewer ulcers. World shares cola beverage, sings in perfect harmony.

What do we need to do differently next time? To begin with, we need to give customers a way to help us prioritize their mail, by at the very least sorting it into “bug report”, “feature request”, and “oh god, where is my download login” buckets (more on what I mean by that at the bottom of this post). The other thing? If we choose to implement any more apps based on the current 800-pound gorilla of personal productivity methodologies, I'm just going to start hiring and never, ever stop. ;-)

Which, of course, provides me with a perfect opportunity to point interested parties over to our Want Work? page, newly updated as of yesterday.
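P.S. Since I mentioned those buckets: the rough idea is just to let the sender tag their own mail and route on the tag, with anything untagged falling through to a human. Here's a minimal sketch in Python of what I mean. Every name in it is hypothetical; this is emphatically not our actual mail setup, just the shape of the idea.

    # Purely illustrative: route incoming feedback mail into buckets based
    # on a tag the customer puts in the subject line, e.g. "[bug] ...".
    # All bucket names and the triage() helper are made up for this sketch.
    import re

    BUCKETS = {
        "bug": "bug report",
        "feature": "feature request",
        "login": "oh god, where is my download login",
    }

    def triage(subject):
        """Return the bucket for a message; anything untagged (or tagged
        with something we don't recognize) goes to a catch-all queue for
        a Ninja to sort by hand."""
        match = re.match(r"\[(\w+)\]", subject.strip())
        if match:
            return BUCKETS.get(match.group(1).lower(), "general")
        return "general"

    print(triage("[bug] OmniFoo eats my to-do list"))  # -> bug report
    print(triage("HELP, WHERE IS MY LOGIN"))           # -> general

Even something this dumb would let the forehead-smackers pile up in one queue while the "where's my login" mail gets answered on day one, instead of everything landing in a single undifferentiated heap.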