Wednesday, April 23, 2003
So there are three MacOS X APIs that I'm currently looking at to start hooking into EvilToaster.
All three of them have similar requirements, and have to be hooked into some of the same things.
Most people don't realize that Speech Recognition has been built into the MacOS since even before System 7.5 (about 10 years ago!). Speech can be used in the Finder and in any scriptable application to issue commands like "Open my Documents folder", though for dictation you need a commercial speech engine like [iListen] or [ViaVoice]. From what I have seen and heard, iListen is the better product.
At any rate, few applications directly support the Apple Speech Recognition API. Why? Because it's always been somewhat difficult for developers to work with, and unfortunately that hasn't changed with the introduction of MacOS X. While there are a lot of new speech capabilities, the API open to developers is Carbon and essentially unchanged from the old API that no one used. It's really unfortunate, because it means that, at least for now, Evil Toaster won't be supporting speech recognition. It would have been nice to be able to say "Read me my new mail" and have it do just that, but Apple has managed to make supporting speech more work than I can take on yet: the design of their API is such that you really have to redesign a lot of your code to access speech, which I suppose is why not even Apple's own apps use it. Speech should really be transparent, even just a layer on top of Accessibility, but it doesn't seem to be a priority for Apple.
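To give a sense of what "a lot of your code" means, here's a rough sketch of the minimum setup the Carbon Speech Recognition Manager asks for. The call names are from Apple's SpeechRecognition.h, but this is an illustrative outline rather than tested code, and the "read me my new mail" phrase is just my example:

```c
// Sketch: minimum plumbing for the Carbon Speech Recognition Manager.
// Illustrative only -- error paths and object cleanup are abbreviated.
#include <Carbon/Carbon.h>
#include <string.h>

static pascal OSErr HandleSpeechDone(const AppleEvent *event,
                                     AppleEvent *reply, long refCon) {
    // Recognition results never come back from a function call; they
    // arrive later as a kAESpeechDone Apple event. This asynchronous,
    // event-driven shape is what forces the redesign of existing code.
    return noErr;
}

static OSErr SetUpSpeech(void) {
    SRRecognitionSystem system;
    SRRecognizer        recognizer;
    SRLanguageModel     model;
    const char         *cmd = "read me my new mail";
    OSErr               err;

    err = SROpenRecognitionSystem(&system, kSRDefaultRecognitionSystemID);
    if (err != noErr) return err;

    err = SRNewRecognizer(system, &recognizer, kSRDefaultSpeechSource);
    if (err != noErr) return err;

    // Every phrase you want recognized has to be built into a
    // language model up front -- there's no free-form dictation here.
    err = SRNewLanguageModel(system, &model, "<commands>", 10);
    if (err == noErr)
        err = SRAddText(model, cmd, strlen(cmd), 0);
    if (err == noErr)
        err = SRSetLanguageModel(recognizer, model);

    // Register for the Apple event that delivers recognition results.
    AEInstallEventHandler(kAESpeechSuite, kAESpeechDone,
                          NewAEEventHandlerUPP(HandleSpeechDone), 0, false);

    if (err == noErr)
        err = SRStartListening(recognizer);
    return err;
}
```

Even this does nothing useful yet: the real work happens in the Apple event handler, inside whatever event loop your app already has, which is exactly where the retrofitting pain comes from.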
Accessibility under MacOS X is great; there's a good article on it [here].
[ 4/23/2003 09:18:00 PM ]