>Dasher is a zooming predictive text entry system, designed for situations where keyboard input is impractical (for instance, accessibility or PDAs). It is usable with highly limited amounts of physical input while still allowing high rates of text entry.
Ada referred me to this mind-bending prototype:
D@sher Prototype - An adaptive, hierarchical radial menu.
>( http://www.inference.org.uk/dasher ) - a really neat way to "dive" through a menu hierarchy, or through recursively nested options (to build words, letter by letter, swiftly). D@sher takes Dasher and gives it a twist, making slightly better use of screen real estate.
>It also "learns" your typical usage, making more frequently selected options larger than their sibling options. This makes it faster each time you use it.
One important property of Dasher is that you can pre-train it on a corpus of typical text, and dynamically train it while you use it. It learns the patterns of letters and words you use often, and those become bigger and bigger targets that string together so you can select them even more quickly!
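To make the sizing idea concrete, here is a minimal sketch of how an adaptive letter model could allocate target sizes. This is a toy illustration, not Dasher's actual implementation (Dasher uses a more sophisticated PPM language model); the class and method names are invented for this example.

```python
from collections import defaultdict

class AdaptiveLetterModel:
    """Toy model: sizes each next-letter target in proportion to how
    often that letter has followed the current one. (Hypothetical
    sketch; real Dasher uses a PPM language model.)"""

    def __init__(self, alphabet="abcdefghijklmnopqrstuvwxyz "):
        self.alphabet = alphabet
        # Counts start at 1 so every letter always stays selectable.
        self.counts = defaultdict(lambda: defaultdict(lambda: 1))

    def train(self, text):
        """Pre-train on a corpus (also called after each selection)."""
        for prev, nxt in zip(text, text[1:]):
            self.counts[prev][nxt] += 1

    def target_sizes(self, prev):
        """Fraction of the selection area given to each next letter."""
        row = self.counts[prev]
        total = sum(row[c] for c in self.alphabet)
        return {c: row[c] / total for c in self.alphabet}

model = AdaptiveLetterModel()
model.train("the quick brown fox jumps over the lazy dog " * 50)
sizes = model.target_sizes("t")
```

After training, `sizes["h"]` dominates because "h" so often follows "t", which is exactly why frequent continuations become big, easy-to-hit targets.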
Ada Majorek has it configured to toggle between English and her native language, so she can switch between writing email to her family abroad and to her co-workers at Google.
Now think of what you could do with a version of Dasher integrated with a programmer's IDE: one that knew the syntax of the programming language you're using, as well as the names of all the variables and functions in scope, plus how often they're used!
I have a long-term, pie-in-the-sky "grand plan" about developing a JavaScript-based programmable accessibility system I call "aQuery", like "jQuery" for accessibility. It would be a great way to deeply integrate Dasher with different input devices and applications across platforms, and make them accessible to people with limited motion, as well as to users of VR, AR, and mobile devices.
Dasher is really impressive. I really like the idea of bringing it into VR, and maybe it can be taken even further. If you turn the exploration from a 2D graph of Y-axis options traversed in the X direction into a 3D graph of X/Y options traversed in the Z direction, and combine that with the eye tracking in newer VR headsets, you should be able to get a decent improvement in accuracy and an increase in speed.