I was recently in a discussion with a friend about the challenges facing folks like me who spend a LOT of time at a keyboard. Many of my friends in the industry have been diagnosed with varying degrees of repetitive strain injury (RSI), and I’ve seen first-hand how debilitating these conditions can be. Everyday tasks, especially writing code, can range from slightly painful to downright impossible depending on the severity of the condition.
I remarked that it would be really nice if modern software development tools were more “voice-aware”. My friend sent me a link to a YouTube video from a couple of years ago of a presentation by Tavis Rudd, showing how he changed his daily workflow and toolset so that he could do roughly 60% of his daily tasks with his voice.
While he has a LOT of work invested in customizations and in training himself on his new workflow, the number of things he can do just by speaking to his computer is pretty amazing. He mentions that after a few months his RSI symptoms were completely gone. That, to me, is proof that the hours he spent training and customizing his system to “work” for his situation were well spent.
With the recent rise in popularity of Amazon Alexa and Google’s release of Google Home, I can only hope that in the near future this kind of voice-driven workflow becomes easier for a wider range of developers to adopt.