Experimental Assessment of Human Input Modalities Under Different XR Tasks for the HoloLens 2
Abstract
Recently, Human–Computer Interaction (HCI) has been revolutionized by Extended Reality (XR) technologies, which overlay interactive virtual content onto the physical world. Yet developers still struggle to choose the most effective input modality for a given XR task, whether hand gestures, controller buttons, voice commands, or gaze-and-dwell. This thesis proposes the hypothesis that certain modality–task pairings systematically enhance both user performance and user experience. We developed a Unity-based XR application for the HoloLens 2 that implements six representative tasks (System Control, Instantiation, Selection, Transformation, 3D Modeling, and Text Input) and enables interaction via four modalities: (1) Hand Gestures, (2) Joystick, (3) Speech Recognition, and (4) Gaze–Dwell. In a within-subjects study with 33 participants, we measured objective completion times, subjective workload (NASA-TLX), and perceived usability (SUS). Results revealed that joystick input consistently produced the fastest, most reliable performance and the highest usability ratings, while hand tracking proved the least efficient and most frustrating. Workload ratings clustered into low-demand (System Control, Instantiation, Selection) and high-demand (Transformation, 3D Modeling, Text Input) tiers. These findings establish evidence-based guidelines for selecting input methods that optimize efficiency, reduce workload, and maximize usability in future XR applications.
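The abstract describes a task × modality design in which each experimental condition pairs one of the six tasks with one of the four input methods. A minimal C# sketch of how such a setup might be wired in Unity is shown below; the IInputModality interface, ModalityManager class, and all member names are hypothetical illustrations of the general pattern, not the thesis's actual implementation.

using System.Collections.Generic;
using UnityEngine;

// Hypothetical abstraction over the four input modalities under study.
// Each implementation would translate raw HoloLens 2 input (articulated
// hands, joystick, speech, or head gaze with dwell) into a common
// confirmation event.
public interface IInputModality
{
    string Name { get; }
    void Enable();
    void Disable();
    // Fired when the modality confirms a target, e.g. a pinch gesture,
    // a button press, a recognized voice keyword, or a completed dwell.
    event System.Action<GameObject> TargetConfirmed;
}

// Illustrative manager that keeps exactly one modality active per
// experimental condition, so each task block is completed with a
// single input method.
public class ModalityManager : MonoBehaviour
{
    private readonly Dictionary<string, IInputModality> modalities =
        new Dictionary<string, IInputModality>();
    private IInputModality active;

    public void Register(IInputModality modality) =>
        modalities[modality.Name] = modality;

    public void Activate(string name)
    {
        if (active != null)
        {
            active.TargetConfirmed -= OnTargetConfirmed;
            active.Disable();
        }
        active = modalities[name];
        active.TargetConfirmed += OnTargetConfirmed;
        active.Enable();
        Debug.Log($"Condition started with modality: {name}");
    }

    private void OnTargetConfirmed(GameObject target)
    {
        // Timestamping here would support the objective
        // completion-time measurements reported in the study.
        Debug.Log($"{active.Name} confirmed {target.name} at {Time.time:F2}s");
    }
}

Under this pattern, switching conditions between task blocks reduces to a single Activate call, keeping the task logic independent of whichever of the four modalities is driving it.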
Keywords
Extended Reality, Augmented Reality, Mixed Reality, User Interaction, Human–Computer Interaction, User Experience, Input Modalities