Designers and engineers working in the computer-aided drafting (CAD) and computer-aided engineering (CAE) domains routinely interact with specialized software featuring three-dimensional (3D) work environments. These professionals must manipulate virtual objects or components within the 3D work environment, but they typically do so with traditional interaction devices, such as the mouse and keyboard, that are better suited to 2D tasks. Current CAD and CAE software is designed to accommodate this older interaction technology, but that accommodation comes at the cost of efficiency in the virtual workspace. A new class of affordable interaction devices with the characteristics and specifications of high-end virtual reality (VR) interaction devices is now available to consumers. These commodity VR interaction devices track the position and orientation of a user's hands through space to control aspects of desktop software in ways that are impossible with the traditional mouse and keyboard. They can be integrated with CAD or CAE software to allow gestural control of objects throughout a 3D work environment.
To evaluate the feasibility of gestural control for 3D work environments, a commercially available commodity VR interaction device was selected and integrated with specific 3D software. Gestures to control aspects of the software were developed and organized into a taxonomy. Select gestures were integrated with the software and evaluated against traditional interaction methods using the Natural GOMS (Goals, Operators, Methods, and Selection rules) Language (NGOMSL) method. The evaluation results show that gestural interaction is efficient for object manipulation tasks, but a traditional keyboard or mouse is more efficient for basic tool selection tasks. Estimated learning times for each input method indicate that gestural control takes about 30 seconds longer to learn than traditional interaction methods.
In the classic Mac OS desktop shown on the left side of the image below, dragging a folder icon somewhere is a relatively straightforward operation: mouse translation is directly mapped to on-screen icon translation. In contrast, the CAD-style environment shown on the right is more ambiguous. If a user tries the same action as before and translates the red cube with a mouse motion, how should the software interpret that 2D action for a 3D object in 3D space? It turns out that software often gets this wrong, which hinders work and frustrates users.
Software companies understand this problem and include solutions within their software that constrain manipulation to only one or two dimensions at a time. This makes users accurate, but it also slows down their workflow.
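The constraint approach described above can be sketched in a few lines. This is a minimal illustration, not code from any particular CAD package: it assumes a simple orthographic view where screen x/y align with world x/y, and it projects the 2D mouse delta onto the chosen axis so that any off-axis motion is discarded. The function name and setup are hypothetical.

```python
import math

def constrained_translate(position, mouse_delta, axis):
    """Translate a 3D position along a single constraint axis.

    Illustrative sketch: assumes an orthographic view with screen
    x/y aligned to world x/y, so the axis's screen direction is
    just its first two components.
    """
    # Normalize the constraint axis.
    n = math.sqrt(sum(a * a for a in axis))
    ax = [a / n for a in axis]
    # Screen-space direction of the axis under our simple view.
    sx, sy = ax[0], ax[1]
    sn = math.hypot(sx, sy)
    if sn == 0:
        return list(position)  # axis points straight into the screen
    # Project the 2D mouse delta onto the axis's screen direction.
    distance = (mouse_delta[0] * sx + mouse_delta[1] * sy) / sn
    return [p + distance * a for p, a in zip(position, ax)]

# A diagonal drag constrained to the X axis moves the object only in X:
new_pos = constrained_translate([0.0, 0.0, 0.0],
                                mouse_delta=[5.0, 3.0],
                                axis=[1.0, 0.0, 0.0])
# → [5.0, 0.0, 0.0]: the off-axis portion of the drag is discarded
```

This is exactly why the constraint helps accuracy: the ambiguous third dimension is simply removed from the mapping, at the cost of requiring the user to switch constraint axes repeatedly.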
Nowadays, we have many interaction options that empower us in ways the standard mouse and keyboard duo cannot. There are many VR devices on the market, with plenty more on the way. Some, like the highly publicized Oculus Rift HMD, are used for visualization, while others, like Microsoft's Kinect, provide novel interaction through 3D spatial tracking. This new technology is available, affordable, and relatively simple to integrate with software. With all of it at hand, I argue that we can improve our interaction with desktop computers for certain tasks. My master's thesis explores this topic and provides a "best practices" guide for people looking to implement VR at their own business.
My thesis discusses the history of PC interaction and the software solutions to the dimensional mismatch between the 2D mouse and 3D software. I go on to explore the user's needs with respect to gestural interaction, answering questions like "how do you create gestures that are memorable, salient, and fatigue-free?". An entire gesture taxonomy is developed to assist designers, and an example of gestural implementation is shown. Finally, the new gestural interaction is compared to traditional mouse interaction through NGOMSL analysis.
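GOMS-family analyses like NGOMSL compare interaction methods by decomposing each task into primitive operators and summing their durations. The sketch below uses commonly cited keystroke-level operator averages as illustrative constants; they are assumptions for demonstration, not values taken from the thesis.

```python
# Illustrative keystroke-level operator times in seconds.
# These are commonly cited averages (assumed here, not thesis data).
OPERATOR_TIMES = {
    "K": 0.28,  # press a key or mouse button
    "P": 1.10,  # point at a target with the mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation for the next step
}

def execution_time(operators):
    """Estimate task execution time by summing primitive operator times."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# Example decomposition of a tool-selection task with the mouse:
# think, home to the mouse, point at the tool icon, click.
mouse_select = execution_time(["M", "H", "P", "K"])
# → 3.13 seconds under these assumed operator values
```

A gestural method would be decomposed the same way, with new operators (and measured durations) for raising the hand and performing the gesture; comparing the two sums is what lets the analysis say which method is faster for which task.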
The result: gestures take a bit longer to learn than traditional interaction, but there is a real payoff once they are learned. For object manipulation tasks, users can work much faster with 3D interaction methods than with the 2D mouse.