Coding without a mouse? Easy, just use any decent text editor. But coding without a mouse or a keyboard? That's where things get interesting.

About three years ago, I set up Dragon NaturallySpeaking on my home computer to give my hands a chance to heal from RSI (repetitive strain injury) on the weekends. I mostly used it for web browsing and occasionally writing emails. It was a great relief; finally I didn't have to feel guilty for using my computer on weekends. But it was also very limited, especially when it came to coding. I'm a professional software engineer, and I liked my job, but I also wanted to work on side projects. I tried one of the leading Dragon extensions to help with this, VoiceCode, but after a few hours I decided it was too frustrating and slow; the software just wasn't there yet.

A couple years later I watched a great talk by Tavis Rudd, Using Python to Code by Voice. I was put to shame. Not only did he manage to do it, he did it professionally! He managed to cure his RSI completely over several months, but he never stopped using his voice: he was even faster using both voice and keyboard than keyboard alone. I decided to give it another try, using the Dragonfly library that Tavis recommended. Unlike VoiceCode, which is a complete solution, Dragonfly is just an improved Python API to Dragon. It doesn't come with many built-in commands, but it's extremely easy to extend. With a powerful Python API to Dragon in hand, I started experimenting with different commands and grammars to use in Emacs and elsewhere. For the first time, I started to see that it was possible to become productive. This was good timing, because my RSI was only getting worse.

About six months ago, I decided it was time to start using my voice environment at work. At first I was very slow. It took a lot of mental effort to do the most basic things, and I definitely had moments where I questioned whether I would ever become efficient. But with each week I got faster and faster. And something else changed: I started having fun! Adding new features felt like building my future; getting faster felt like gaining freedom. I felt like Bruce Wayne building his Batcave! I'm not Batman yet, but I think it's time I start to share what I've learned to help others do the same. I could just publish my code, but that wouldn't reveal all the little lessons I learned along the way that you'll need to know to build your own custom environment. I'm also hoping to hear from others, so please post your own ideas in the comments and tell me what I could do better!

A hands-free coding environment has a lot of moving parts, which can be overwhelming at first. This post will teach you how to set up the basic voice recognition environment. I also use eye tracking, but I'll cover that in a separate post.

To begin with, install Dragon NaturallySpeaking, the voice recognition engine. Sadly, it's only available for Windows, so you'll have to do Linux development using a virtual machine or remote access (see my post for advice). I recommend Windows 7, because Dragon NaturallySpeaking still has a lot of bugs in Windows 8.1 (another post for that). I recommend Dragon 12 over 13: Dragon 13 only supports Select-and-Say in particular apps, which is a huge limitation. I used Dragon 13 for several months before downgrading thanks to readers' advice in the comments, and I don't miss a single feature. Any edition is fine; I use the Premium edition. I also recommend investing in a good microphone; the usual recommendation is the Sennheiser ME3. It's not cheap ($200), but it matters a lot: Dragon is pretty frustrating, so you want to do everything you can to minimize that.

Next, install NatLink, an extension to Dragon that makes it possible to add custom Python commands. Follow the instructions here. If everything works, you'll see a window titled "Messages from NatLink" pop up after starting Dragon. It's common to run into problems installing NatLink, so read the instructions carefully. For your first installation, I highly recommend using their prepackaged version of Python to avoid trouble.

Finally, install Dragonfly, a cleaner Python interface to NatLink. The prepackaged binaries are several years out of date, so I recommend cloning the git repository and running python build_develop.py to install it. It's just a Python library, so if the installation worked you should now be able to import dragonfly from Python.

To get started with Dragonfly, I recommend looking at some example modules. You can check out the original repository of examples or the modules mentioned in the docs. For voice coding purposes, you'll want to familiarize yourself with the multiedit module. Just drop a module into your NatLink MacroSystem directory, turn your microphone off and on, and NatLink will attempt to load it. If it's not working, check the messages window to see if there are any error messages.
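
To make this concrete, here's a minimal module you could use to verify that everything is working (the command names are just my own illustration). Save it as something like _test.py in the MacroSystem directory:

    from dragonfly import Grammar, Key, MappingRule, Text

    class TestRule(MappingRule):
        # Say "test dragonfly" to type a short message, or
        # "save file" to press Ctrl+S in the active window.
        mapping = {
            "test dragonfly": Text("Dragonfly is working!"),
            "save file": Key("c-s"),
        }

    grammar = Grammar("dragonfly test")
    grammar.add_rule(TestRule())
    grammar.load()

    def unload():
        # NatLink calls this when the module is unloaded or reloaded.
        global grammar
        if grammar:
            grammar.unload()
        grammar = None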

Of course, this is just the beginning. The interesting part is extending the Dragonfly modules and writing your own to support a full-featured voice coding environment. I'll cover that in future posts!

You can do a lot using just your voice, but there are still a few times you'll find yourself reaching for a mouse. It's often for the silliest little things, like clicking in the empty space within a webpage to change the keyboard context. If you're serious about not using your hands, you can use an eye tracker to eliminate these last few cases. This post will teach you how to get started. Make sure you've read my introductory post on voice coding, since we'll be building upon that.

Eye trackers used to cost several thousand dollars, but now you can grab a cheap one for less than a couple hundred bucks. I use the Tobii EyeX dev kit, which retails right now for $139. Its major competitor is the $100 EyeTribe dev kit, a Kickstarter-funded project. I haven't played around with that yet, but I'd love to hear in the comments if you have.

The basic idea behind eye tracker interaction is that you look somewhere on the screen and then use some other method to click or "activate" the item you're looking at. It's generally too distracting to have the pointer follow wherever you're looking, so usually a keypress is used instead of a click. For our purposes, of course, we'll want to use a voice command.

The tricky part is integrating it with Dragonfly. It really ought to be easy, except that right now there's an outstanding bug where Tobii's software does not listen for virtual keypresses. There's a thread in their forums complaining about this, but it sounds like it won't be fixed until the consumer version, which doesn't have a release date yet. The workaround is surprisingly elaborate, but the good news is I've already done the heavy lifting. The basic idea is that we call into their C API from Python. The raw API is far more complicated than we need, so I wrote a simple wrapper DLL with a few basic functions to connect to the eye tracker, get position data, and activate the current gaze point. You can get the source code and binary distribution of the wrapper from my GitHub repository. Python makes it a breeze to call into a DLL.
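
For illustration, here's roughly what the Python side looks like using the standard ctypes library (the DLL name and function signatures here are made up; see the repository for the real interface):

    import ctypes

    # Load the wrapper DLL (hypothetical name; use the path to the
    # binary from my GitHub repository).
    eyex = ctypes.CDLL("TobiiEyeXWrapper.dll")
    eyex.get_gaze_point.restype = ctypes.c_bool

    def get_gaze_point():
        # Returns the current gaze point in screen coordinates, or None.
        x = ctypes.c_double()
        y = ctypes.c_double()
        if eyex.get_gaze_point(ctypes.byref(x), ctypes.byref(y)):
            return (x.value, y.value)
        return None

    def move_to_gaze():
        # A helper you can bind to a Dragonfly command: move the mouse
        # pointer to wherever you are looking.
        point = get_gaze_point()
        if point:
            ctypes.windll.user32.SetCursorPos(int(point[0]), int(point[1]))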

It's useful to have separate commands for moving the pointer and clicking, because the eye tracker's accuracy isn't always perfect. Foot pedals are another alternative to voice commands: I often use a voice command to move the mouse to my gaze point, then use my foot to click. I recommend the Omnipedal Quad. Pedals are also great for scrolling, which is pretty awkward with dictation.

There's a lot more you can do with a tighter integration with the eye tracking API. The major shortcoming of my simple approach is that it doesn't work well with small click targets. The full API lets the application describe all the click targets, so the one closest to your gaze is picked automatically. Of course, this usually requires access to the application source code (or at least an extension), so it's less generic and harder to get up and running. Please post in the comments if you come up with something!

To be an efficient hands-free coder, you'll need to learn how to move the cursor around a file quickly. There are two challenges. First, since you can't use a mouse, you can't just click on the location you want to move to. You can try using an eye tracker for this, but the precision isn't quite high enough. Second, with a keyboard you can hold a movement key and release it when you reach your location, but this doesn't translate well to voice control, which has too much latency (although you might try measuring the latency and adjusting for it).

Beginner's note: to get started, check out the multiedit grammar for Dragonfly. It gives you commands to start with and a nice framework for repeating commands quickly.

One approach to movement is to search within the file, for example using Emacs incremental search. This works well if you need to jump somewhere offscreen. The trouble with using it for all onscreen movement is that recognition accuracy isn't always perfect, and the identifier you're searching for might be repeated several times. This would work better if there were an Emacs extension that numbered incremental search results so it's easy to jump to a particular one. Let me know in the comments if you make this!

Let's break the problem down. Every location onscreen is at a particular line and column, so if we can navigate to each of these quickly, we can jump anywhere quickly.

Let's start with jumping to a particular line. Any decent editor can show you line numbers and let you jump to a specific line. But try editing a file with several hundred lines, and you'll find that this is pretty clunky. There are a couple of ways to improve on this: you can show line numbers relative to your current position, or you can show the numbers modulo some value. For example, if you never have more than 100 lines on screen at once, you could just show the last two digits of the absolute line number. Personally, I prefer relative line numbers, since they work well with relative motion commands such as "up ten" or "down five". Also, the number of syllables scales nicely with the amount of movement. The main advantage of modulo numbering is that you can chain successive commands together easily (e.g. "select between line X and line Y"). If you use Emacs and go with the relative line numbers approach, check out the package linum-relative.

Next, we have to jump to a particular column. This is a bit trickier, because it is awkward to number every column in an editor. I suppose you could write the numbers vertically; please post in the comments if you come up with something to do this. In the meantime, I use a few different approaches, and the key advantage we can exploit is that you generally want to jump to the boundaries of symbols and words. This means that relative motion commands that move by a symbol or word often work well unless the line is long. In Emacs, familiarize yourself with subword mode; I bind commands for moving across an entire symbol and across a single subword. When the line is long, I use a different strategy: I name the character at the beginning or end of the symbol I want to jump to, and use a custom command to jump there. Searching for a character instead of a full word greatly improves recognition accuracy, especially when dealing with unusual words. See the Emacs lisp for this at the bottom of the post. Note that the Emacs extension Ace Jump Mode works similarly, but I prefer my approach because I don't have to wait for an overlay to appear before issuing my command. Voice dictation latency is high enough that it's almost always an advantage to accomplish everything in a single command.

I also use a few more commands for quick movement to frequently visited places. I use the directions north, south, west, and east to move to the top/bottom of a file and left/right within a line. I use the mark ring in Emacs to jump to previous locations, and registers to save locations and jump back to them quickly later. And of course I use page up and down to scroll through a file, although I prefer to use foot pedals for this.

Finally, as promised, the idea behind my code for jumping within a line:
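
On the grammar side, it looks roughly like this (a simplified sketch: the spoken alphabet is truncated, and the key sequences for the custom Emacs jump commands are made up):

    from dragonfly import Choice, Grammar, Key, MappingRule

    # Letters are spoken as words for recognition accuracy
    # (illustrative subset of the full alphabet).
    letter_map = {
        "alpha": "a",
        "bravo": "b",
        "charlie": "c",
        "delta": "d",
    }

    class JumpRule(MappingRule):
        # "before charlie" jumps to just before the next "c" on the
        # current line; "after charlie" jumps just past it. The real
        # work happens in custom Emacs commands, bound here to the
        # (made-up) key sequences C-c , and C-c .
        mapping = {
            "before <char>": Key("c-c, comma, %(char)s"),
            "after <char>": Key("c-c, dot, %(char)s"),
        }
        extras = [Choice("char", letter_map)]

    grammar = Grammar("jump")
    grammar.add_rule(JumpRule())
    grammar.load()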

One of the best ways to get started writing Dragonfly macros is to set up web browsing by voice. Thanks to the extensibility of modern browsers, this works surprisingly well. Note that Dragon does have built-in support for web browsing, although I find it doesn't work very well: the extension tends to cause pages to hang, and it requires that you speak the link you want to click on, which introduces ambiguities and doesn't work well for all clickable elements. And of course, it's not very customizable. I do recommend you try it first to see if it works for you, and to think about what you would like to improve in your custom version.

To begin with, you'll want to decide between Firefox and Chrome. Both support the extensions you'll need, so it's really a matter of personal preference. Firefox is probably the easiest to get started with, although I prefer Chrome.

First, you need to install an extension that labels the clickable elements on the page, so you can speak a label to click on an element. I recommend Mouseless Browsing for Firefox and Vimium for Chrome. If you are using Firefox, try out this sample Dragonfly module. If you are using Vimium, you will need to bind one command to label the clickable elements, and another to actually click a particular element. Since you'll be using these a lot, make them as terse as possible: I use "links" for the former, and I simply speak the number for the latter. To make it even faster, I only allow one-syllable numbers to be used in labels. I also recommend binding the Vimium shortcuts that let you quickly open a bookmark.
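
Here's a sketch of those bindings (it assumes Vimium's default "f" shortcut for labeling links, and that Vimium is configured to use digits as its link hint characters):

    from dragonfly import (AppContext, Grammar, IntegerRef, Key,
                           MappingRule, Text)

    class BrowseRule(MappingRule):
        # "links" shows Vimium's labels; speaking a number then types
        # that label, which clicks the corresponding element.
        mapping = {
            "links": Key("f"),
            "<n>": Text("%(n)d"),
        }
        extras = [IntegerRef("n", 1, 100)]

    grammar = Grammar("vimium", context=AppContext(executable="chrome"))
    grammar.add_rule(BrowseRule())
    grammar.load()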

Next, you'll want to enumerate tabs so you can quickly jump between them. I think the Firefox extension already does this, and I created one for Chrome, Tab Namer. My extension also extracts the hostname and appends it to the tab name, so you can easily use it to define contexts in Dragonfly (e.g. to bind keyboard shortcuts for specific sites). The last extension I rely on is Keyboard Shortcuts to Reorder Tabs; the Linux version of Chrome binds these shortcuts by default, but not the Windows version.

As always, if there are extensions you find useful, please post them in the comments! When you are ready for even faster browsing, check out my post on custom web commands with WebDriver.

When I first started using Dragon, I was bummed to be restricted to Windows. Fortunately, there are lots of ways to work around this limitation and use it with whatever operating system you want. I'll cover the method I use and describe some alternatives.

My home setup is the simplest, because I cheat and just use Cygwin to provide a UNIX-like environment. This lets me launch GUI Emacs to write code and edit my macro files (which have to live on Windows). I highly recommend starting with this approach, since it gives you many of the niceties of a Linux environment while still benefiting from the solid built-in support Dragon and Dragonfly have for Windows, such as contexts, task switching, and window management.

For task switching in particular, though, I don't use the built-in Dragon commands. Instead, since I almost always run the same set of apps, I pin each of these to the taskbar and bind spoken labels to the "Windows key + number" shortcuts to quickly jump to any app. This gives me full control over the naming of the apps, and is generally less error-prone than the built-in "switch to" command.
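
Here's a sketch of those bindings (the spoken labels and taskbar positions are of course yours to choose):

    from dragonfly import Choice, Grammar, Key, MappingRule

    class TaskRule(MappingRule):
        # Each app is pinned to the taskbar, so Windows+<position>
        # focuses it (or launches it if it isn't running).
        mapping = {"jump <app>": Key("w-%(app)s")}
        extras = [Choice("app", {
            "browser": "1",
            "editor": "2",
            "terminal": "3",
        })]

    grammar = Grammar("task switching")
    grammar.add_rule(TaskRule())
    grammar.load()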

My work setup is necessarily more complicated, because the software I develop has to build and run on a special company-specific installation of Ubuntu. To do Linux development, I rely on NX, a remote desktop solution. The huge advantage of NX is that it supports "rootless" mode, which makes the remote Ubuntu windows look like native Windows applications, each with a named, pinnable taskbar button. This works great with Dragonfly contexts and the task switching solution described earlier.

This approach also works well if you want to run a Linux distro on Windows via virtualization. I've done this before with VirtualBox with good results. VirtualBox provides its own rootless mode, but when I tried it about a year ago it didn't work nearly as well as using NX on my local machine. The trick to setting this up is to configure port forwarding within VirtualBox for the NX port, and then connect to localhost from Windows. If you are installing NX yourself, the process is a bit complicated due to recent changes that have limited the free feature set in version 4 (the latest version). I recommend using an open source NX server, such as FreeNX, and version 3 of the official NX Client for Windows. You can also try open source NX clients, but make sure to choose one that supports rootless mode.

The final component of my setup is PuTTY, for SSH port tunneling. As I will describe in another post, I run an HTTP server within Dragonfly so that I can send contextual information to Dragon from other apps, such as Emacs. Thanks to reverse port tunneling, I can expose this HTTP server running on Windows to my Linux machine in a completely secure way. I also use standard port forwarding when I want to expose servers running on my Linux machine as if they were running on my Windows machine. There are plenty of guides available online that explain how to set this up.

The solution I've described uses Windows as the host and Linux as the guest. There are ways to swap those if you wish, although I think it's a bit of an uphill battle because Dragon is so closely integrated with Windows. One solution that is gaining popularity is Aenea. I haven't tried it myself, but if you think I'm missing out, please let me know in the comments!

I've been a little hesitant to publish a complete repository of all my Dragonfly commands, because I think the journey that got me there is more useful than the raw code. If you just read the code, you'll miss out on why I made certain decisions, you won't know about all the stuff I tried and deleted, and you won't know how I actually use all the commands in combination. That said, I do think it is a helpful supplement to this blog, so I decided to go ahead and make it available on GitHub. You can find it here, or linked from the navigation sidebar on every page. While I'm laying down disclaimers, I should also mention that the code is a work in progress and isn't as clean and modular as I would like, but I decided it was better to just get the code out there and improve it later. If you make improvements, please send me pull requests!

Once you've gotten used to the basic commands and extensions to control Google Chrome, you may start to hunger for a faster way to control websites you use frequently. Some sites have keyboard shortcuts you can bind easily, but others don't. This post will describe how to set up commands to control any webpage.

Selenium WebDriver provides a powerful but simple API for controlling webpages. It is used primarily for automated testing, but with a few tweaks it will work perfectly for our needs. Start by installing the Python bindings and ChromeDriver.

By default, WebDriver creates a new instance of Chrome with a special profile to run any WebDriver commands. This is great for sandboxed web testing, but not so great for controlling your existing Chrome sessions with all your custom extensions. Fortunately, you can configure Chrome to set up a debugging server by opening your Chrome shortcut's properties and adding --remote-debugging-port=9222 to the end of the Target field, after the final quote. To start the server, first quit out of Chrome completely (Ctrl-Shift-Q), then reopen it with your custom shortcut. Note that closing all open windows does not fully quit Chrome, and you may need to repeat this procedure if you restart your computer and Chrome starts automatically.
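
Here's a sketch of the functions I mean (the helper names are my own; the key is the debuggerAddress option, which attaches to the running Chrome instead of launching a new one):

    from selenium import webdriver

    driver = None

    def create_driver():
        # Attach WebDriver to the already-running Chrome via its
        # debugging port, instead of starting a fresh instance.
        global driver
        options = webdriver.ChromeOptions()
        options.add_experimental_option("debuggerAddress", "127.0.0.1:9222")
        driver = webdriver.Chrome(chrome_options=options)

    def test_driver():
        # Sanity check: navigate the attached browser somewhere.
        driver.get("http://www.google.com")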

Then go ahead and bind these functions to voice commands and try them out. If all goes well, the test_driver function will navigate Chrome to the Google homepage. But there's a problem you may have noticed if you had multiple tabs open: it does not necessarily operate on the currently active tab. WebDriver does provide a way to change the tab it operates on, but it doesn't provide a way to find out which tab is active. Fortunately, we can query the Chrome debugger API directly from Python and get this information.
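
A sketch of the idea (this reuses the driver from the previous snippet; treating the first page listed by the debugger as the active tab is a heuristic):

    import json
    import urllib2

    def switch_to_active_tab():
        # Ask the debugging server which pages are open, take the first
        # one as the active tab, and point WebDriver at the window
        # handle whose URL matches it.
        tabs = json.load(urllib2.urlopen("http://127.0.0.1:9222/json"))
        active_url = next(tab["url"] for tab in tabs
                          if tab["type"] == "page")
        for handle in driver.window_handles:
            driver.switch_to_window(handle)
            if driver.current_url == active_url:
                return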

This technique doesn't work perfectly when multiple windows are open, but it works most of the time. If you have a more robust solution, please let me know in the comments!

Navigating to Google isn't terribly exciting, so let's add something more useful. WebDriver provides several ways of finding an element on a webpage, so you can create a shortcut for nearly any button or link on any webpage.
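
For example, a generic action like the following can back many site-specific voice commands (the Gmail XPath is hypothetical):

    def click_element_by_xpath(xpath):
        # Find an element on the current page and click it. For
        # instance, bind a "compose" command in a Gmail context to
        # click_element_by_xpath('//div[text()="COMPOSE"]').
        switch_to_active_tab()
        driver.find_element_by_xpath(xpath).click()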

Of course, this is just the start of what you can do with WebDriver. You can also execute sequences of commands, even waiting for particular elements to appear in the page. Please post your favorite commands in the comments! If you'd like to see how I integrate this with my voice commands, please check out my GitHub repository.

Hi, I'm James, and I created handsfreecoding.org to share techniques and software that allow me to code and enjoy my computer without using my hands. There are a lot of powerful tools and libraries out there, but it can be overwhelming to learn how to put all the pieces together. I've been at it for over two years and I'm still discovering new ideas, so I hope this blog will be useful to newcomers and experts alike. To read more about my story, check out my first post, Adventures in Hands-Free Coding. If you want to send me private feedback, please fill out the following form. If you want to share public feedback that others can comment on, please fill out the form at the bottom of this page.

For a site titled Hands-Free Coding, I haven't written much about how to actually write the code. It turns out this is easier than you might expect. Before reading this post, please familiarize yourself with my getting started guide and how to move around a file quickly.

There are two basic approaches to dictating code: using custom grammars such as Dragonfly, or using VoiceCode (not to be confused with VoiceCode.io for Mac, which I just discovered and haven't used yet). VoiceCode is much more powerful out of the box, but it is also harder to extend and more restrictive in terms of programming language and environment. You might say that VoiceCode is Eclipse, and Dragonfly is Emacs. You could also consider Vocola for your custom grammars; it is more concise but not quite as flexible, because you can't execute arbitrary Python. Since I prefer Dragonfly, I'll cover that approach. The multiedit module is a good place to start. Beyond that, here are the techniques that make dictating code practical for me:

1) I use prefix commands to dictate identifiers in any style (see the sketch after this list). For example, I can say "score test word" to print "test_word", or "camel test word" to print "testWord".

2) I use short made-up words to dictate common symbols. For example, I can print "()" with "leap reap". I made these words up over time, but if I were starting fresh I would probably use a standard language such as ShortTalk.

3) I use templates in my text editor to quickly generate boilerplate syntax, such as the skeleton of a for loop. In particular, since I use Emacs, I use the yasnippet package.

4) I rely on automatic formatting in my text editor to keep the code neat.

5) Most importantly, I structure my grammar so that I can dictate continuously, instead of having to stop and start after every keyword or symbol. This is the hardest part of my setup, because there are many trade-offs between supporting continuous commands and keeping performance high, which I will cover in another post.
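
As promised above, here's a sketch of how the prefix commands from the first technique can be implemented (simplified relative to what I actually use):

    from dragonfly import Dictation, Function, Grammar, MappingRule, Text

    def format_score(text):
        # "score test word" -> "test_word"
        Text("_".join(str(text).lower().split())).execute()

    def format_camel(text):
        # "camel test word" -> "testWord"
        words = str(text).lower().split()
        Text(words[0] + "".join(w.capitalize() for w in words[1:])).execute()

    class FormatRule(MappingRule):
        mapping = {
            "score <text>": Function(format_score),
            "camel <text>": Function(format_camel),
        }
        extras = [Dictation("text")]

    grammar = Grammar("formatting")
    grammar.add_rule(FormatRule())
    grammar.load()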

If you follow these basic techniques, the biggest remaining problem is misrecognized words. You can avoid this in your own code by preferring easily recognized identifiers, but it's much harder when working with someone else's code or library. I find that the best way to combat this is Dragon's built-in Vocabulary Editor. The moment you find yourself spelling out a word, stop right away and add that word to the Vocabulary Editor. If the problem is a variable or function with multiple misrecognized words, add the whole phrase to your vocabulary. For example, if you regularly use a class named SimDataManager, add "sim data manager" to your vocabulary, and then you can type it in any style using the prefix commands.

I have also experimented with a fancier solution to this problem, where I dynamically add words from nearby code into my Dragonfly grammar. Unfortunately, I haven't found a way to seamlessly integrate this into my vocabulary without incurring a significant performance penalty, so I only call upon this dynamic grammar explicitly. It's not quite as powerful as you might expect, and most of the time I rely on the built-in vocabulary. It's better than nothing, though, so I will cover it in a later post.

That covers the basics, but much of the challenge of writing code is editing code you (or someone else) have already written. I'll save that for later posts!

As you build on your grammars over time, you start to run into all kinds of problems. Commands get confused with each other, latency increases, and your grammars become giant disorganized blobs. This is particularly challenging with Dragonfly, which gives you the power to repeat commands in a single utterance but leaves it up to you to structure your grammar accordingly. In this post I'll discuss the techniques I use to wrangle my grammars. If you are a beginner and don't mind pausing after each command, you can stop reading now and use the following simple patterns: flat grammars, one grammar per file, one file per application.

The hard part is designing a grammar that works with commands that can be repeated within a single utterance. We'll take the multiedit grammar as a starting point. As you add commands, one of the first problems you'll run into is recognition errors, particularly with any command that allows raw dictation. The trouble is that this grammar allows raw dictation to be mixed with any other commands, so if your raw dictation contains any of those command words, it will be recognized as a command instead of dictation. The simplest way around this is to move infrequently used commands into a separate (repeated) grammar that doesn't contain any raw dictation. The downside of this approach is that you have to remember to pause between commands of these two different classes.

I take a slightly different approach: I allow a sequence of commands from the larger group to be immediately followed by a sequence of commands from the smaller group, which includes dictation. Hence there is no mixing between these command groups, but they can be spoken in a single utterance. It does impose some constraints on command ordering, but this isn't a big problem, because frequently you want to pause after several arbitrary dictations anyway to make sure the output is correct.

Of course, this only reduces the severity of the problem. As long as you have some command words mixed with arbitrary dictation, you will occasionally want to use those command words within the dictation. To handle these edge cases, I use an escape word which forces the dictation command to be the last in the sequence, preventing the following words from being recognized as a successive command.
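
Here's a stripped-down sketch of this structure (the command contents are placeholders; the shape of RepeatRule is what matters):

    from dragonfly import (CompoundRule, Dictation, Grammar, Key,
                           MappingRule, Repetition, RuleRef, Text)

    class CommandRule(MappingRule):
        # The larger group: freely repeatable commands, no raw dictation.
        exported = False
        mapping = {
            "up": Key("up"),
            "down": Key("down"),
        }

    class DictationRule(MappingRule):
        # The smaller group: commands that embed raw dictation.
        exported = False
        mapping = {"say <text>": Text("%(text)s")}
        extras = [Dictation("text")]

    class RepeatRule(CompoundRule):
        # A sequence from the larger group, optionally followed by a
        # sequence from the smaller group, all in one utterance.
        spec = "<commands> [<dictation_commands>]"
        extras = [
            Repetition(RuleRef(CommandRule()), min=1, max=10,
                       name="commands"),
            Repetition(RuleRef(DictationRule()), min=1, max=5,
                       name="dictation_commands"),
        ]

        def _process_recognition(self, node, extras):
            actions = extras["commands"] + extras.get("dictation_commands", [])
            for action in actions:
                action.execute()

    grammar = Grammar("repeating example")
    grammar.add_rule(RepeatRule())
    grammar.load()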

As long as you keep your commands pretty simple, you shouldn't have to worry too much about performance. It becomes a problem when you add specialized commands that contain repetition, such as a command to quickly speak a sequence of numbers or letters. Every repeated element multiplies its component size by the number of allowed repetitions, and if you allow nested repetition you get O(n^2) growth, which will quickly slow things down. I avoid nested repetition entirely, but I don't force these commands into a separate utterance. Instead, between my two top-level repeated elements, I allow zero or one instance of any of my specialized repeated elements.

Another challenge is making commands context-sensitive, so that (for example) Emacs-specific commands are only active in Emacs. The trouble is that Dragonfly contexts can only be applied to an exported rule or grammar, so it's not easy to apply them to specific components of a repeated element. It turns out that this isn't just a Dragonfly limitation; it's inherent to the way Dragonfly grammars map to NatLink grammars and the limitations imposed on those. Fortunately, we can work around it: we create complete exported rules for every configuration we want to support, and associate them with contexts that ensure mutual exclusivity. This isn't exactly cheap, but it is scalable if we assume hierarchical contexts. Creating complete exported rules requires that we refactor RepeatRule to use a proper constructor instead of setting values statically. We also have to give each rule a different name, to make Dragonfly happy.

There are lots of other ways you could organize a grammar, and I'm sure I will make more improvements over time. Please add comments if you have ideas!

Dragonfly is so powerful that it's easy to forget that Dragon does some things well out of the box. To maximize your efficiency, it's important to know when it's not worth reinventing the wheel. In this post, I'll describe when I prefer to use built-in Dragon functionality.

Dragon's strength is natural language dictation and editing. It allows you to speak continuously for any length of time, and it's very convenient to correct words and move the cursor by referring to previously dictated text. But as soon as a Dragonfly command is uttered in an unsupported text field, Dragon will no longer let you make corrections. So when dictating English text, I generally try to avoid using Dragonfly commands. If I want to dictate variable or function names from code, I just use "cap", "no caps", and "no space" to help. This way I can keep dictating without any pauses. I will occasionally switch to Dragonfly commands when making small corrections, but I try to avoid it. When possible, I use the built-in "correct ..." commands, and when that is not sufficient I use "go back" after making an edit to quickly jump back to where I was.

Whether dictating text or code, I prefer to add misrecognized words directly to the Dragon vocabulary instead of putting them into my custom grammar. This makes it easy to train the words, and I have designed my grammar so that it works in harmony with the Dragon vocabulary. For instance, if I have a common class name "SimDataManager", I add the phrase "sim data manager" to my vocabulary, and then I can easily style it with underscores or camel case using my Dragonfly commands, without having to treat it as a special case. Similarly, if I have a frequently misrecognized command that I don't want to change, I train it using Dragon with "train word", taking care not to add it to the vocabulary, which would create ambiguity between command and dictation.

I recommend familiarizing yourself with the Dragon Command Browser ("open Command Browser") so you know what is available. Note that the "Show all" button is helpful for expanding the ellipses in commands. Some of my favorite short commands are "display text" to open the dictation box, "edit all" to select all and open the dictation box, and "edit words" to open the Vocabulary Editor.

When do you prefer built-in Dragon functionality to custom commands? Please post in the comments!

P.S. I would like to acknowledge the help of Mark, one of my readers and commenters, who influenced me on this subject and pointed me to some of the most useful built-in commands.

I recently came across PCByVoice SpeechStart+, a small but interesting extension to Dragon that adds some nice functionality that would be hard or impossible to implement with Dragonfly. It costs $40 (after a 15-day trial), so I'll help you decide whether it's worth the money.

Its flagship feature overlays labels on the clickable elements of the active window, so you can click nearly anything by voice. For example, this works with all the buttons and file icons in Explorer, everything in the taskbar and system tray, and the bookmark buttons in Chrome. It supports three different styles of overlays; I prefer the circles. Note that in general I recommend avoiding applications that don't have good keyboard shortcuts, but it's inevitable that you'll have to use them occasionally.

You can turn the microphone on and off using only your voice. This is much less sensitive to spurious awakenings than the sleep mode built into Dragon. You can even require a second confirmation when turning the microphone on, but I haven't needed that.

You can also start (and restart) Dragon using only your voice. Paired with the above feature, this means you can start Dragon and turn on the microphone by voice even if Dragon is not configured as a startup program. This feature seems to work well even when Dragon is hung. My only minor gripe is that it is apparently implemented by killing the Dragon process, which bypasses any prompts to save the user profile and perform maintenance. On the upside, this makes restarts very fast, and I haven't had any trouble with it leaving Dragon in a bad state.

There are a few other features that are also convenient, such as window placement and maximization commands that are a bit faster and more reliable than the Dragonfly counterparts I've worked with. In general, this extension doesn't try to do too much, and it "just works". I have only two significant complaints: the commands cannot be remapped (fortunately I don't have any conflicts), and I find that Dragon hangs when quitting unless I use the SpeechStart+ command to quit out of both SpeechStart+ and Dragon. For better or worse, this means I always use SpeechStart+ to quit and no longer regularly perform profile maintenance. I haven't noticed any downsides to this, but presumably Dragon recommends this cleanup for a reason.

Overall, I highly recommend this extension if you are a power user of Dragon and want to eliminate that sense of defeat when you occasionally have to reach for the mouse. You can download the trial and purchase it from KnowBrainer if you are in the USA, or PCByVoice if you are anywhere else. As always, if you have experience with SpeechStart+ or similar products, please share it in the comments!

When I find myself writing or editing something sufficiently long, I like to have full support for Select-and-Say. I used to use "open dictation box", since that's the obvious choice, until I discovered that using Notepad is much faster. It's kind of unbelievable that forking a process and starting a third-party editor could be faster than the feature Nuance designed for this precise purpose. But amazingly it is, by a factor of about 2X: by my measurements, the dictation box starts up in about two seconds, whereas Notepad starts in one, and transferring text out of the dictation box takes about a second versus half a second for Notepad. The clipboard also gives you an easy way to transfer the text out, which works perfectly well with Emacs. Also, unlike the dictation box, Notepad doesn't prevent you from interacting with the app you already have open (e.g. to visit another tab in Chrome). And if you accidentally mess up and paste the text in the wrong place, it's still in your clipboard, so you can easily recover it.

Of course, this same technique would work with another editor; it just needs to start up quickly and have full Select-and-Say support. Ideally, I would use an editor with unlimited undo history. And if the editor can start in the background, this could be even faster by avoiding the process fork. If anyone knows a good candidate, please post in the comments!

Like many an Emacs user, I am enamored with Org-Mode. Every great coding session begins with organizing your thoughts, and Org-Mode is an excellent tool for the job. If you're tracking New Year's resolutions, it's great for that too. Since Org-Mode already has an excellent compact guide, I'll focus on my voice bindings and finish with a bonus section on how I like to structure my personal to-do lists.

Org-Mode comes with a bunch of built-in commands that you will want to make use of. But if you blindly map the commonly used keyboard commands to voice commands, it will be slow to work with. The first problem is that several of the commands are designed to be chained together rapidly. This is easy enough to work around; we just need to consider what the most likely combos are and create specific commands for them.
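
For example (a sketch using Org-Mode's default keyboard bindings; the spoken forms are made up):

    from dragonfly import Grammar, Key, MappingRule

    class OrgRule(MappingRule):
        # M-RET inserts a heading and M-S-RET inserts a TODO, so each
        # combo first presses End to avoid splitting the current line.
        mapping = {
            "new heading": Key("end, a-enter"),
            "new todo": Key("end, as-enter"),
            # Assumes fast TODO selection is configured with "d" -> DONE.
            "mark done": Key("c-c, c-t, d"),
        }

    grammar = Grammar("org")
    grammar.add_rule(OrgRule())
    grammar.load()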

The second problem is that some of the commands cycle through several options and rely on visual feedback to select the right one. This doesn't play well with the slow feedback loop of voice input. Fortunately, some of the commands let you specify prefixes to jump to a specific option; unfortunately, for others, Org-Mode doesn't provide a prefix to specify a particular result. I looked briefly into writing my own command for this, but the implementation would have to be tied to Org-Mode internals, so I decided it wasn't worth the effort. The "org (West|white)" command runs some custom lisp which goes to the beginning of the line, skipping past stuff like asterisks and TODOs. I'm sure that hard-core Org-Mode users would want a lot more bindings, but this is enough for my purposes right now, and it ought to be enough for a beginner to get started. If I'm missing out on some awesome functionality, please let me know in the comments!

Once you get familiar with Org-Mode, you will want to use it for everything. I use it to track my personal to-do list, which is full of everything from boring chores to exciting new things I want to try. In the past I've had trouble prioritizing stuff in this list, because it's hard to compare something necessary and mundane with something I'm excited about. What I realized a few months ago is that I don't have to. At different times of the week I have different motivations: sometimes I just want to relax, sometimes I want to take care of business, and other times I want to do something good for the world. So I simply structure my list around my motivations and situations:

Time sensitive: I need to do this soon.
Practical: I need to do this at some point.
Fun: When I just want to have a good time.
Outside: When I feel like getting out of the house.
Chill: When I just want to relax.
Good: When I want to do good for the world.
Intellectual: When I want to learn something new.
Professional: When I want to build my resume.
Health: When I want to improve my health and well-being.
Social: When I feel like spending time with others.

I also have categories for close friends, containing stuff I want to do with them or tell them about. With this kind of organization, I find it's much easier to prioritize within a category. If I'm having trouble categorizing something, that's often a sign that I'm not really motivated to do it at all and it doesn't belong in the list to begin with. Later, when I'm looking for stuff to do, I can easily focus on what I feel like doing and not get distracted.

What do you like to use Org-Mode for? Let me know in the comments!

Not related to coding, but hands-free coders need to have some fun too. It turns out at least one of Blizzard's games is entirely playable by voice: I don't even use a custom grammar, and it works very well. If you have a good experience, you can thank Blizzard on this thread I started. Hopefully I didn't just set the voice coding community back by a few months! If you know of other games that play well with hands-free control, please post in the comments.

Tobii has released a new consumer eye tracker, the Tobii Eye Tracker 4C, for $150. Although I haven't found eye tracking to be nearly as helpful as speech recognition, it is handy for those occasional situations where you just want to click a button or change context and you don't have any command to do so (see my earlier post for details). I have been pretty happy with the Tobii EyeX, but it isn't perfect, so I was excited to try out this new device.

It arrived just a few days after I ordered it, and came with a complimentary USB extension cable. This is an essential addition if you want to use the eye tracker with a desktop, because the built-in cable is very short (although that's nice when using it with a laptop, where it reduces the bulk considerably).

The first thing you will notice after setting it up is that the lights are much less bright than the EyeX's. I got used to the bright red lights, but I'm happy to not have those blaring at me all the time anymore. The new lights operate primarily at infrared wavelengths, although you will still see a red glow.

The main new feature that Tobii is touting is a dedicated chip that performs processing on-device. This is supposed to reduce host CPU load, and it makes it possible to use USB 2.0 instead of 3.0. Even though I have USB 3.0 ports, I'm happy to see this change, because I have found USB 3.0 to be less reliable, occasionally requiring driver updates to work; it also doesn't work with standard USB hubs. As for the CPU load reduction, in theory this helps on a laptop by reducing fan spinning and power drain, although in practice I haven't noticed much change (including when looking at CPU usage in the task manager, where it hovers at around 1-2%).

Of course, the most important question is whether it actually works better than the old eye tracker. Indeed, it has improved in a couple of ways: higher precision and a larger tracking area. After performing calibration, the precision is noticeably better (approximately 2-3X, with about 5 mm diameter noise), although it still exhibits consistent error (as much as 2 cm) towards the periphery. It has always been a mystery to me why this bias cannot be fixed by calibration, because it is so consistent. For this reason, I would still recommend sticking with a screen of up to 24 inches, even though the requirements now technically allow up to 27 or 30 inches depending on aspect ratio. The other improvement is the tracking area. I use a motorized sit/stand desk, and I used to find that the change in my head position when standing was enough to go out of the tracker's range. That is no longer a problem, indicating a major improvement.

Overall I'm happy with the new device, enough that I ordered one for both home and work. My biggest complaint at this point remains the error when looking towards the sides of the screen, which seems like it ought to be fixable in software. Tobii does update the software pretty regularly, so hopefully they will get to this at some point!

I learned about a couple of very exciting developments this week in open source speech recognition, both coming from Mozilla. The first is that a year and a half ago, Mozilla quietly started working on an open source, TensorFlow-based implementation of DeepSpeech, a state-of-the-art deep-learning-based speech recognition system designed by Baidu and described in detail in their research paper. Currently, Mozilla's implementation requires users to train their own speech models, a resource-intensive process that requires expensive closed-source speech data to get a good model. But that brings me to Mozilla's more recent announcement: Project Common Voice. Their goal is to crowd-source the collection of 10,000 hours of speech data and open source it all. Once this is done, DeepSpeech can be used to train a high-quality open source recognition engine that can easily be distributed and used by anyone!

This is a Big Deal for hands-free coding. For years I have increasingly felt that the bottleneck in my hands-free system is that I can't do anything beneath the limited API that Dragon offers. I can't hook into the pure dictation and editing system, I can't improve the built-in UIs for text editing or training words and phrases, I'm limited to getting results from complete utterances after a pause, and I can't improve Dragon's OS-level integration or port it to Linux. If an open source speech recognition engine becomes available that can compete with Dragon in latency and quality, all of this becomes possible. To accelerate progress towards this new world of end-to-end open source hands-free coding, I encourage everyone to contribute their voice to Project Common Voice and share Mozilla's blog post through social media.