Whoa, $100,000 in under five days!

by Ken Case on November 21, 2007

I just wanted to write a quick note to thank you all for your support!  In less than five days, we've already received over $100,000 in preorders for OmniFocus, making this one of our strongest product launches of all time.

We're still hard at work (as you can see if you've been following along with the multiple beta updates we're pushing out each day)—but it's inspiring to see that the time and resources our team has invested in this over the last 16 months have earned your vote of confidence.

Thank you!


OmniFocus public beta/introductory pricing

by Linda Sharps on November 16, 2007

BIG NEWS OVER HERE PEOPLE. After over 500 sneaky peek releases, which so many of you have been kind enough to give us feedback on, we are finally drawing the OmniFocus early release cycle to a close, with a bright and shiny final release date in mind: January 8, 2008.

As Ken wrote in his message to the OmniFocus mailing list, “We could probably go on indefinitely in this state:  you continue to give us lots of great ideas for ways in which we could improve the software further, and it's hard to resist implementing a good idea when we hear it.”

For REAL. This whole process has taken a lot longer than we had initially guessed, partially because of all the amazing feedback we received along the way. Oh, the spirited conversations that OmniFocus has sparked as we've tweaked the way the application works, and that's just here in the office. I won't get into details, but take it from me: you do not want to use the term “bucket” around here for a while—lest you trigger a frothy-mouthed debate, liberal use of the Caps Lock key, and eventual frantic emailing back-and-forth of walrus images.

Anyway, we've decided not only to commit to a final ship date, but also to offer you a special deal. From today until January 8, you can pre-purchase OmniFocus at its introductory rate of $39.95. Once the final version ships, OmniFocus will sell for $79.95—so buy now, and save 50%.

But Omni, you might be thinking. What will I actually get if I buy it now? This sounds like one of those BS marketing schemes where if I buy in the next hour I'll also get a set of steak knives.


Ha ha! Come on, you know us better than that! We would never give you steak knives, because then you might use them to stab us.


If you buy now, you'll get a license that will work in the final version of the software. We'll send you an email when the final version ships, so you'll know exactly when it becomes available.


In the meantime, you can continue to use the beta version, which we're opening up to the general public today. The betas will still expire (so you're encouraged to download newer versions rather than be stuck with bugs we may have already fixed), but you can easily set your OmniFocus preferences to automatically grab the most recent builds.


Also, if you are an OmniOutliner Professional license owner, you get an additional 25% discount on top of the current introductory pricing. Quantity, educational, and family discounts are all available on our online store.


Thank you for helping us make OmniFocus such a great piece of software, and thank you for your patience during our development cycle.


And now, the relevant links:


OmniFocus and the Way of the (Support) Ninja

by Brian on October 2, 2007

Linda asked lil' ol' me to provide the second post in our ongoing OmniFocus: What We've Learned So Far series. Whether she will come to regret this, only time can tell. Without further ado, the post beginneth thusly:

Okay, I'm long-winded; sue me. Before I can really tell you how well the OmniFocus test process is going, though, I feel like I have to supply a bit of background on how we've handled this process for previous projects. It went a little something like this:

  1. Development team produces build of application OmniFoo they're mostly happy with.
  2. Marketing Weasel writes press release; webmaster (née Web Editor) updates site with brand-new OmniFoo Beta page. We push a new version of the website.
  3. VersionTracker, etc. pick up on new build.
  4. Support Ninjas are crushed under a big pile of electronic mail for the next three months. All that is heard from them is a soft but desperate honking noise. Think baby penguin at the bottom of a deep ice-crevasse.
This has a couple of negative effects. First of all, when you're buried in raw oscillating electrons up to your neck, it's really hard on the skin; not at all like that 'drink from the glowy pool of water' scene in Tron would have you believe.

More seriously: we get plenty of eyeballs on the new application, which is a good thing. Unfortunately, we also get all those eyeballs on the new app at the exact same time. So the thing that's forehead-smackingly obviously broken (which, of course, we failed to catch in the gajillions of hours we spent staring at the app before we pushed it out) gets reported 200 times.

Now, the first report of an issue? Good. The tenth? Gold: then we know that this isn't random gremlin activity; there is something here we need to figure out. This holds up until, oh, somewhere between 30 and 50 reports. Beyond that, it's a problem we know about and know we need to fix, but from there on out the additional utility of the reports drops off fast. The time it takes the Ninjas to process them doesn't, however.

Result: stressed-out ninjas, frustrated engineers (because they're not getting reports of problems in the newest builds; we're still looking at launch-day reports), and folks with the test builds wading through issues we haven't fixed because we're still sorting and writing up reports.

In short, it works, but it's painful for everyone involved. So this time, we did something better. The process this time:
  1. A couple months before we're ready to start testing, let folks know we'll be ready to start testing in a couple months. Set up an email list to join if they'd like to participate.
  2. Produce build of application we're ready to start testing.
  3. Make build available to some of the folks on said list.
  4. Fix the problems they find, including forehead-smackers.
  5. Return to step 3, above.
Advantages: many. We get feedback in manageable quantities. Testers get fixes that bear some resemblance to their reports. Support ninjas get fewer ulcers. World shares cola beverage, sings in perfect harmony.

What do we need to do differently next time? To begin with, we need to give customers the ability to help us prioritize their mail, by at the very least sorting it into “bug report”, “feature request”, and “oh god, where is my download login” buckets. The other thing? If we choose to implement any more apps based on the current 800-pound gorilla of personal productivity methodologies, I'm just going to start hiring and never, ever stop. ;-)

Which, of course, provides me with a perfect opportunity to point interested parties over to our Want Work? page, newly updated as of yesterday.


OmniFocus: What We've Learned So Far (Engineering)

by Linda Sharps on September 17, 2007

Today's post is the first in an ongoing series I'm calling OmniFocus: What We've Learned So Far (or OF: WWLSF, if you prefer acronyms). As we move slowly but steadily toward a feature freeze and public beta, I thought it would be interesting to get some input from various people here at Omni on things that have gone well, as well as things that have sucked (er, challenges we didn't anticipate): basically, the ups and downs behind building a piece of commercial software.

We're going to start out in the technical arena, so I apologize if code-talk makes you yawn so hard you accidentally drool a little. Here is Omni's engineering perspective on an important lesson learned during OmniFocus's development process, which can be boiled down to: we ♥ CoreData, but not as a primary file format. 

With more on this subject, here is Tim Wood, hater of Aeron chairs, terror of the Unreal Tournament battlefield, and OmniFocus team lead:


There are many things that are great about CoreData, but using CoreData as a user-visible file format was really painful. Since inception, our xcdatamodel file has had 92 revisions, with most of those exposed to several thousand people via our automated builds. Most of these changes aren't things that users would notice; we often add or remove precalculated summaries, denormalize data or generally change the underlying CoreData representation to make our app easier to implement and tune. Yet, with CoreData, the SQLite mapping would be busted beyond hope by adding or removing a column.

Manually building code to migrate between model versions is really not an option. If CoreData had a Rails-like migration facility where columns could be added and removed trivially via SQL ALTER statements, it might be feasible, but it still wouldn't be good. CoreData explicitly disclaims any support for direct access to the various stores, so it isn't a public file format and hinders our users from easy access to their data. In practical terms, we all know that a liberal application of the letter 'Z' will get you most of the way to accessing your data. Still, this isn't ideal.
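To make the wish concrete, here's a minimal sketch (in Python, for brevity) of the kind of Rails-style migration runner the paragraph above describes: trivial column additions via SQL ALTER statements, tracked by a version table. This is not a CoreData facility; the `ZTASK` table and column names are invented for illustration.

```python
import sqlite3

# Ordered list of (version, ALTER statement) pairs -- the "migrations".
# Column names follow CoreData's Z-prefix convention, purely as flavor.
MIGRATIONS = [
    (1, "ALTER TABLE ZTASK ADD COLUMN ZSUMMARY TEXT"),
    (2, "ALTER TABLE ZTASK ADD COLUMN ZESTIMATE INTEGER"),
]

def migrate(conn):
    """Apply any migrations newer than the store's recorded version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version, statement in MIGRATIONS:
        if version > current:
            conn.execute(statement)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ZTASK (Z_PK INTEGER PRIMARY KEY, ZNAME TEXT)")
migrate(conn)  # applies both migrations
migrate(conn)  # idempotent: nothing left to apply
columns = [row[1] for row in conn.execute("PRAGMA table_info(ZTASK)")]
print(columns)  # → ['Z_PK', 'ZNAME', 'ZSUMMARY', 'ZESTIMATE']
```

Each build of the app could carry the full migration list and bring any older store forward; the version table is what makes re-running the migrations safe.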

What CoreData is great for is building an optimized cache of your data, fetching against it and then binding it to your interface.

A couple of other key observations: we already needed a public file format for Export (we chose a custom XML grammar, but that's merely a detail). And using a variant of the public file format for the pasteboard format is a great way to avoid writing and testing more code (as is using your pasteboard archive/unarchive code to implement your AppleScript 'duplicate' support…)

Given that, I tweaked our XML archiving to support writing a set of CoreData inserts, updates and deletes as a transaction. We can then write out a small fragment of our content in a new gzipped XML file inside our document wrapper. The structure of our XML transactions is very simple; a key feature is that we can trivially merge a big batch of transactions into a single XML document that contains only the final set of objects as inserts.
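The merge step above can be sketched as a fold over the log: replay every transaction's inserts, updates, and deletes against a dictionary, then emit whatever survives as a single batch of inserts. The op shapes below are illustrative, not OmniFocus's actual XML schema.

```python
def coalesce(transactions):
    """Collapse an ordered list of transactions into one all-inserts batch."""
    objects = {}
    for txn in transactions:
        for op in txn:
            kind, obj_id = op["op"], op["id"]
            if kind == "insert":
                objects[obj_id] = dict(op["attrs"])
            elif kind == "update":
                objects[obj_id].update(op["attrs"])
            elif kind == "delete":
                objects.pop(obj_id, None)
    # The final state is expressed purely as inserts.
    return [{"op": "insert", "id": i, "attrs": a} for i, a in sorted(objects.items())]

log = [
    [{"op": "insert", "id": "t1", "attrs": {"name": "Buy milk"}}],
    [{"op": "update", "id": "t1", "attrs": {"flagged": True}},
     {"op": "insert", "id": "t2", "attrs": {"name": "Call Ken"}}],
    [{"op": "delete", "id": "t2"}],
]
merged = coalesce(log)
print(merged)
# → [{'op': 'insert', 'id': 't1', 'attrs': {'name': 'Buy milk', 'flagged': True}}]
```

Note that `t2`, inserted and later deleted, vanishes entirely from the coalesced document, which is exactly why compaction keeps the log small.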

On startup, OmniFocus scans the transaction log in the user's document and builds a cache validation dictionary that contains:

• Version of Mac OS X

• CoreData's version

• SVN revision of the application

• The last transaction identifier

We then open the CoreData SQLite persistent store and peek at its metadata. If it isn't an exact match, we close the persistent store and rebuild the entire thing by importing our coalesced transaction log, exactly the same way we would import one of our backup files.
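The validation check amounts to an exact comparison of two small dictionaries. Here's a sketch in Python; the fingerprint fields mirror the four items in the list above, but the concrete values (CoreData version string, SVN revision) are invented for illustration.

```python
def fingerprint(os_version, coredata_version, svn_revision, last_txn):
    """Build the cache-validation dictionary for the current environment."""
    return {
        "os": os_version,
        "coredata": coredata_version,
        "svn": svn_revision,
        "last_transaction": last_txn,
    }

def cache_is_valid(store_metadata, expected):
    # Anything short of an exact match means the SQLite cache is
    # discarded and rebuilt from the coalesced transaction log.
    return store_metadata == expected

expected = fingerprint("10.5.1", "251", "r92410", "txn-0042")
stale = fingerprint("10.4.11", "251", "r92410", "txn-0042")  # OS changed
print(cache_is_valid(stale, expected))     # → False: rebuild the cache
print(cache_is_valid(expected, expected))  # → True: reuse the cache
```

Because the authoritative data lives in the XML log, treating any mismatch as "throw the cache away" is cheap and safe; the worst case is a rebuild, never data loss.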

There are many extra implementation details (locking, catching the insert/update/delete notification, undo/redo vs. AppleScript, two-phase commit between the XML and SQLite, ...), but we are really happy with the central approach.

Some of the fun things this gives us:

• You can run the same build of the application on 10.4 and 10.5, switching regularly, without worrying that CoreData is going to ignite your SQLite store.

• You can run multiple builds of OmniFocus on the same data and not lose anything (though a major file format upgrade, if there ever is one, may need more work).

• If we do screw up one of our automated builds and break the cache-updating code, the user's data doesn't get touched, and everything is fine again on the next build.

• Until the transaction log is compacted, we actually have the full record of edits, so we could hypothetically implement persistent undo, allowing the user to roll back to yesterday's version…

• ... or calculate the changes they've made since some point in time.

The last point is really interesting and I'm hoping to make good use of that in the future for things like computer-to-computer synchronization (no, I'm not promising anything)!
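The "changes since some point in time" idea falls out of the log structure almost for free: because the uncompacted log keeps every transaction in order, the answer is just a replay of the log's tail. A minimal sketch, with illustrative transaction and op shapes:

```python
def changes_since(log, since_txn_id):
    """Return the ids of every object touched after the given transaction.

    log: ordered list of (txn_id, [ops]) pairs; each op names one object.
    """
    past_marker = False
    touched = set()
    for txn_id, ops in log:
        if past_marker:
            touched.update(op["id"] for op in ops)
        if txn_id == since_txn_id:
            past_marker = True
    return touched

log = [
    ("txn-1", [{"op": "insert", "id": "task-a"}]),
    ("txn-2", [{"op": "update", "id": "task-a"},
               {"op": "insert", "id": "task-b"}]),
    ("txn-3", [{"op": "delete", "id": "task-b"}]),
]
print(sorted(changes_since(log, "txn-1")))  # → ['task-a', 'task-b']
```

A sync engine could use exactly this set to decide which objects need to be exchanged with another machine, which is presumably why the last point looked so interesting.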