Date of Award
Fall 2023
Document Type
Dissertation
Degree Name
Doctor of Philosophy (PhD)
Department
Psychology
Program/Concentration
Human Factors Psychology
Committee Director
Jing Chen
Committee Director
Mark Scerbo
Committee Member
Jeremiah Still
Committee Member
Hong Yang
Abstract
This three-experiment study examined how trust in automation is influenced by the initial framing of information presented before interaction, and how subsequent active calibration methods can further repair or dampen trust after an automation error. As more drivers begin to use automated driving systems (ADSs) for the first time, their initial understanding of the system can shape their trust, potentially leading to trust miscalibration. Prior studies have investigated how trust develops through interaction with an automated system, but few have integrated swift trust and framing to calibrate trust before interaction or examined additional active calibration methods after an error. We conducted three experiments using multiple drives with an ADS to test manipulations of users’ initial trust calibration, how resilient that manipulated trust would be to an automation error, and whether trust could be repaired or further dampened after the error. Three initial framing methods were employed before the drives: Positive/Promotion, Control, and Negative/Dampening. The second experiment introduced an error during one of the drives, and the third experiment applied a positive, control, or negative active trust calibration strategy after the error. Positive/Promotion framing increased trust in the first experiment, and that increased trust remained resilient across post-error drives in the second experiment. However, the third experiment showed mixed results and did not demonstrate an effect of active trust calibration after the error. Overall, the study showed that framing information strongly affects trust during drivers’ initial interactions with an ADS, that certain active calibration methods may not be effective for every individual, and that designers and researchers should consider these effects carefully to avoid encouraging overtrust or undertrust.
Rights
In Copyright. URI: http://rightsstatements.org/vocab/InC/1.0/ This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).
DOI
10.25777/qtn2-k597
ISBN
9798381448382
Recommended Citation
Mishler, Scott A.
"Framing Automation Trust: How Initial Information About Automated Driving Systems Influences Swift Trust in Automation and Trust Repair for Human Automation Collaboration"
(2023). Doctor of Philosophy (PhD), Dissertation, Psychology, Old Dominion University. DOI: 10.25777/qtn2-k597
https://digitalcommons.odu.edu/psychology_etds/417
ORCID
0000-0001-9104-1710
Included in
Automotive Engineering Commons, Human Factors Psychology Commons, Transportation Commons