2015_12_I/ITSEC - Extending Intelligent Tutoring Beyond the Desktop to the Psychomotor Domain: A survey of smart glass technologies
Today, Intelligent Tutoring Systems (ITSs) are generally authored to support desktop training applications, with the most common domains involving cognitive problem-solving tasks (e.g., mathematics and physics). In recent years, implementations of game-based tutors built on the Generalized Intelligent Framework for Tutoring (GIFT), an open-source tutoring architecture, have provided tailored, militarily relevant training experiences in desktop applications (e.g., Virtual Battlespace and Virtual Medic). However, these game-based desktop tutors have been limited to adaptive training for cognitive tasks (e.g., problem solving and decision-making), whereas the military requires adaptive training that extends beyond the desktop and is compatible with the physical nature of many tasks performed by soldiers. This paper examines how commercial smart glass technologies could be adapted to support tailored, computer-guided instruction in the psychomotor domain for military training in-the-wild, i.e., at locations where no formal training infrastructure is present. We evaluated the usability and system features of ten commercial smart glasses: Atheer One, CastAR, Epson Moverio BT-200, GlassUp, Google Glass, LaForge Icis, Laster See-Through, Meta Space Glasses, Optinvent ORA-S, and Vuzix M-100. Smart glasses were selected as the focus of this study over handheld mobile devices because they promote a hands-free experience during training tasks in which the hands are needed to accomplish the task (e.g., climbing and maneuvering over uneven terrain). Each set of smart glasses was evaluated not against the others, but with respect to its capability to support adaptive instruction in-the-wild and at the learner's point-of-need.
We examined a wide range of smart glass features and capabilities and evaluated their compatibility with a representative military task, land navigation, to answer the question: What system design features (e.g., usability and interaction) are needed to support adaptive training for this individual psychomotor task beyond desktop applications, so that it can be taught anywhere (in-the-wild)?