Evaluation of a commodity VR interaction device for gestural object manipulation in a three dimensional work environment
Master’s thesis – 2014
Designers and engineers working in the computer-aided drafting (CAD) and computer-aided engineering (CAE) domains routinely interact with specialized software featuring three-dimensional (3D) work environments. These professionals must manipulate virtual objects or components within the 3D work environment, yet they typically do so with traditional interaction devices built on dated technology better suited to 2D tasks. Current CAD and CAE software is designed to accommodate this dated interaction technology, but that accommodation comes at the cost of efficiency in the virtual workspace. A new class of affordable interaction devices with the characteristics and specifications of high-end virtual reality (VR) interaction devices is now available to consumers. These commodity VR interaction devices track the position and orientation of a user's hands in space, enabling control of desktop software in ways that are impossible with the traditional keyboard-and-mouse pair. They can be integrated with CAD or CAE software to allow gestural control of objects throughout a 3D work environment.
To evaluate the feasibility of gestural control for 3D work environments, a commercially available commodity VR interaction device was selected and integrated with specific 3D software. Gestures to control aspects of the software were developed and organized into a taxonomy. Select gestures were integrated with the software and evaluated against traditional interaction methods using the Natural Goals, Operators, Methods, and Selection rules Language (NGOMSL) concept. The evaluation results show that gestural interaction is efficient for object manipulation tasks, but a traditional keyboard or mouse is more efficient for basic tool selection tasks. Estimated learning times for each input method indicate that gestural control takes about 30 seconds longer to learn than traditional interaction methods.
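NGOMSL-style learning-time estimates scale with the number of method statements a user must learn. A minimal sketch of that comparison is below; the 17 s/statement parameter follows common NGOMSL guidance, and the statement counts are purely illustrative, not the thesis's actual data.

```python
# Hypothetical NGOMSL-style learning-time comparison.
# SECONDS_PER_STATEMENT is a commonly cited NGOMSL parameter; the
# statement counts below are illustrative assumptions only.

SECONDS_PER_STATEMENT = 17.0

def learning_time(num_statements: int, overhead_s: float = 0.0) -> float:
    """Estimated time to learn a method with the given statement count."""
    return overhead_s + SECONDS_PER_STATEMENT * num_statements

# Assumed statement counts for two competing interaction methods.
mouse_keyboard = learning_time(num_statements=10)
gestural = learning_time(num_statements=12)  # a few extra gesture statements

print(f"Gestural overhead: {gestural - mouse_keyboard:.0f} s")
```

Under these assumed counts, the gestural method costs roughly half a minute of extra learning time, on the order of the ~30-second difference the evaluation reports.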
An Application of Conceptual Design and Multidisciplinary Analysis Transitioning to Detailed Design Stages
AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Aviation Forum 2015
Paper on the use of 3D collaborative design software for the design of large-scale vehicles. More details soon.
Fusing Self-Reported and Sensor Data from Mixed-Reality Training
Military and industrial use of smaller, more accurate sensors is allowing increasing amounts of data to be acquired at diminishing cost during training. Traditional human-subject testing often collects qualitative data from participants through self-reported questionnaires. This qualitative information is valuable but often insufficient on its own for assessing training outcomes. Quantitative information such as motion-tracking data, communication frequency, and heart rate can supply the missing pieces. Successfully fusing and analyzing these qualitative and quantitative sources is necessary for collaborative, mixed-reality, and augmented-reality training to reach its full potential. The challenge is determining a reliable framework for combining these multiple types of data.
Methods were developed to analyze data acquired during a formal user study assessing augmented reality as a delivery mechanism for digital work instructions. A between-subjects experiment compared a desktop computer, a mobile tablet, and a mobile tablet with augmented reality as delivery methods for these instructions. Study participants were asked to complete a multi-step technical assembly. Participants' head position and orientation were tracked with an infrared tracking system. User interaction, in the form of interface button presses, was recorded and time-stamped at each step of the assembly. A trained observer took notes on task performance through a set of camera views covering the work area. Finally, each participant completed pre- and post-surveys involving self-reported evaluation.
Combining the quantitative and qualitative data revealed trends, such as which tasks were most difficult on each device, that would have been impossible to determine from self-reporting alone. This paper describes the methods developed to fuse the qualitative data with the quantified measurements recorded during the study.
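One simple form of this fusion is joining time-stamped button-press events (quantitative) with per-step observer notes and self-reported ratings (qualitative) on a shared step identifier. The sketch below is a minimal illustration of that idea; the step names, timings, and rating scale are invented, not the study's data.

```python
# Hypothetical sketch of fusing time-stamped interface events with
# per-step observer notes and self-reported ratings. All data shown
# here is illustrative.

def step_durations(events):
    """Compute seconds spent on each assembly step.

    events: list of (timestamp_s, step_name) ordered by time; each button
    press marks the start of a step, and the final entry marks completion.
    """
    durations = {}
    for (t0, step), (t1, _) in zip(events, events[1:]):
        durations[step] = durations.get(step, 0.0) + (t1 - t0)
    return durations

def fuse(durations, notes, ratings):
    """Join quantitative durations with qualitative notes and ratings by step."""
    return {
        step: {
            "duration_s": durations.get(step),
            "observer_note": notes.get(step, ""),
            "self_reported_difficulty": ratings.get(step),
        }
        for step in durations
    }

events = [(0.0, "step1"), (42.5, "step2"), (130.0, "done")]
durations = step_durations(events)
fused = fuse(durations,
             notes={"step2": "participant re-read instructions twice"},
             ratings={"step1": 2, "step2": 4})
hardest = max(durations, key=durations.get)
print(hardest)  # step2: longest duration, matching the observer note
```

Once fused this way, the per-step records can be compared across the three delivery devices to surface which steps were hardest on each.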
Comparing Training Performance with Vibrotactile Hit Alerts vs Audio Alerts
Live, virtual, and constructive training that integrates dismounted warfighter training with convoy training, pilot training, and other systems has been demonstrated to reduce training time, and studies have shown that a high level of immersion and the illusion of presence in a VR environment contribute to this success. However, current force-on-force training simulators lack one major quality that is needed to impart this strong sense of presence for warfighters: the consequence of getting shot.
Simulated return-fire systems have been developed for military, police, and entertainment purposes. Some use projectiles, but that approach is usually limited to a shoot-house configuration rather than outdoor use. Other methods use on-body electrodes to deliver electric shock, or tactors that physically strike the body using solenoids or pneumatics. These systems face the challenges of either poor body localization (when only a small number of tactors is used) or tethering (when a real-time connection to electricity or compressed air is needed to power the tactors).
In this paper, a tactile vest containing commercially available vibrating pagers is evaluated. These pagers deliver a focused alert to a warfighter, indicating the bodily location of the shot and the appropriate direction for return fire; they are also cost-effective and easily replaceable. In the evaluation, participants completed a simple training mission while receiving either vibrotactile feedback or spoken auditory alerts of virtual sniper hits and direction of fire. Results showed that a tactile vest made from commercial off-the-shelf pagers performed well as an indicator of fire and could be a viable option for integration with future LVC training, especially given its low cost. Results also suggested that there may be strong individual differences in people's ability to process vibrotactile versus auditory feedback while cognitively loaded.
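The core mapping such a vest needs is from a shot's direction of fire to the nearest tactor on the body. A minimal sketch is below; the 8-tactor ring layout and the "tactor 0 faces forward" convention are assumptions for illustration, not the paper's actual configuration.

```python
# Hypothetical sketch: choose which pager in a ring of tactors to fire
# for a given bearing of incoming fire. The 8-tactor evenly spaced
# layout and indexing convention are assumptions.

NUM_TACTORS = 8  # evenly spaced around the torso, index 0 at the front

def tactor_for_direction(bearing_deg: float) -> int:
    """Map a shot bearing (degrees clockwise from the wearer's front)
    to the index of the nearest tactor."""
    sector = 360.0 / NUM_TACTORS
    return int(round((bearing_deg % 360.0) / sector)) % NUM_TACTORS

print(tactor_for_direction(0))    # directly ahead -> tactor 0
print(tactor_for_direction(180))  # from behind -> tactor 4
print(tactor_for_direction(95))   # from the right -> tactor 2
```

A vest controller would pulse the returned tactor index, giving the wearer both the hit location and a cue for the direction of return fire.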