Document Type

Conference Paper

Publication Date

2020

DOI

10.1109/SMC42975.2020.9283015

Pages

2714-2721

Conference Name

2020 IEEE International Conference on Systems, Man, and Cybernetics, 11-14 Oct. 2020, Toronto, ON, Canada

Abstract

Visual 'point-and-click' interaction devices such as the mouse and touchpad are tangible input modalities that are essential for sighted users to conveniently interact with computer applications. In contrast, blind users are unable to leverage these visual input modalities and are thus limited to interacting with computers through a sequentially narrating screen-reader assistive technology coupled to the keyboard. As a consequence, blind users generally require significantly more time and effort to perform even simple application tasks (e.g., applying a style to text in a word processor) using only the keyboard, compared to their sighted peers who can effortlessly accomplish the same tasks with a point-and-click mouse. This paper explores the idea of repurposing visual input modalities for non-visual interaction so that blind users too can reap the benefits of simple and efficient access from these modalities. Specifically, with word-processing applications as the representative case study, we designed and developed NVMouse as a concrete manifestation of this repurposing idea, in which the spatially distributed word-processor controls are mapped to a virtual hierarchical 'Feature Menu' that is easily traversable non-visually using simple scroll and click input actions. Furthermore, NVMouse enhances the efficiency of accessing frequently used application commands by leveraging a data-driven prediction model that determines which commands the user is most likely to access next, given the current 'local' screen-reader context in the document. A user study with 14 blind participants comparing keyboard-based screen readers with NVMouse showed that the latter significantly reduced both the task-completion times and user effort (i.e., number of user actions) for different word-processing activities.
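To make the hierarchical 'Feature Menu' idea concrete, below is a minimal, hypothetical sketch of how word-processor controls could be arranged in a tree and traversed non-visually with only scroll (next/previous sibling) and click (descend or invoke) actions. All class and function names here are illustrative assumptions for exposition; they are not taken from the paper's NVMouse implementation, and the command-prediction model is omitted.

```python
# Hypothetical sketch of a hierarchical feature menu driven by scroll/click
# input, as described in the abstract. Names are illustrative, not NVMouse's.

from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class MenuNode:
    """One menu entry: a submenu (has children) or a leaf command (has action)."""
    label: str
    children: list["MenuNode"] = field(default_factory=list)
    action: Optional[Callable[[], None]] = None  # set only on leaf commands


class FeatureMenu:
    """Cursor over the menu tree, advanced by scroll and click events."""

    def __init__(self, root: MenuNode):
        self.path = [root]  # chain of ancestors down to the current level
        self.index = 0      # position among the current level's siblings

    @property
    def current(self) -> MenuNode:
        return self.path[-1].children[self.index]

    def scroll(self, delta: int) -> str:
        """Move to the next/previous sibling; a screen reader would narrate it."""
        siblings = self.path[-1].children
        self.index = (self.index + delta) % len(siblings)
        return self.current.label

    def click(self) -> str:
        """Descend into a submenu, or invoke the command at a leaf."""
        node = self.current
        if node.children:
            self.path.append(node)
            self.index = 0
            return f"opened {node.label}"
        if node.action:
            node.action()
        return f"executed {node.label}"

    def back(self) -> str:
        """Return to the parent level (e.g., bound to a secondary button)."""
        if len(self.path) > 1:
            self.path.pop()
            self.index = 0
        return self.path[-1].label


# Usage: a toy fragment of a word-processor control hierarchy.
root = MenuNode("root", [
    MenuNode("Format", [
        MenuNode("Bold", action=lambda: print("bold applied")),
        MenuNode("Italic", action=lambda: print("italic applied")),
    ]),
    MenuNode("Insert", [MenuNode("Table", action=lambda: print("table inserted"))]),
])

menu = FeatureMenu(root)
print(menu.scroll(+1))  # -> Insert
print(menu.scroll(-1))  # -> Format
print(menu.click())     # -> opened Format
print(menu.click())     # -> executed Bold (prints "bold applied")
```

Under this assumed design, every control is reachable through a fixed scroll/click vocabulary, which is what lets a non-visual user replace spatial point-and-click targeting with sequential traversal; the paper's prediction model would additionally reorder or surface likely commands based on the local screen-reader context.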

Comments

© 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Publisher's version available at: https://dx.doi.org/10.1109/SMC42975.2020.9283015

Original Publication Citation

Lee, H.-N., Ashok, V., & Ramakrishnan, I. V. (2020). Repurposing visual input modalities for blind users: A case study of word processors. 2020 IEEE International Conference on Systems, Man, and Cybernetics, 11-14 Oct. 2020, Toronto, ON, Canada, pp. 2714-2721. https://dx.doi.org/10.1109/SMC42975.2020.9283015
