Date of Award

Spring 2020

Document Type

Thesis

Degree Name

Master of Science (MS)

Department

Psychology

Committee Director

James P. Bliss

Committee Member

Mark Scerbo

Committee Member

Konstantin Cigularov

Abstract

Automation is pervasive across all task domains, but its adoption poses unique challenges within the intelligence, surveillance, and reconnaissance (ISR) domain. When users are unable to establish optimal levels of trust in the automation, task accuracy, speed, and automation usage suffer (Chung & Wark, 2016). Degraded visual environments (DVEs) are a particular problem in ISR; however, their specific effects on trust and task performance are still open to investigation (Narayanaswami, Gandhe, & Mehra, 2010). Research suggests that transparency of automation is necessary for users to accurately calibrate trust levels (Lyons et al., 2017). Chen et al. (2014) proposed three levels of transparency, with varying amounts of information provided to the user at each level. Transparency may reduce the negative effects of DVEs on trust and performance, but the optimal level of transparency has not been established (Nicolau & McKnight, 2006). The current study investigated the effects of varying levels of transparency and image haze on task performance and user trust in automation. A new model predicting trust from attention was also proposed. A secondary aim was to investigate the usefulness of task shedding and accuracy as measures of trust. A group of 48 undergraduates attempted to identify explosive emplacement activity within a series of full motion video (FMV) clips, aided by an automated analyst. The experimental setup was intended to replicate Level 5 automation (Sheridan & Verplank, 1978). Reliability of the automated analyst was primed to participants as 78% historical accuracy. For each clip, participants could shed their decision to the automated analyst. Higher transparency of automation predicted significantly higher accuracy, whereas hazy visual stimuli predicted significantly lower accuracy and a 2.24 times greater likelihood of task shedding. Trust significantly predicted accuracy, but not task shedding. Participants were fastest in the medium transparency condition. The proposed model of attention was not supported; however, participants' scanning behavior differed significantly between hazy and zero-haze conditions. The study was limited by task complexity arising from efforts to replicate real-world conditions, which confused some participants. Results suggested that transparency of automation is critical and should include purpose, process, performance, reason, algorithm, and environment information. Additional research is needed to explain task shedding behavior and to investigate the relationships among degraded visual environments, transparency of automation, and trust in automation.

Rights

In Copyright. URI: http://rightsstatements.org/vocab/InC/1.0/ This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).

DOI

10.25777/7sba-p044

ISBN

9798607333386

ORCID

0000-0003-1164-906X
