Master of Science (MS)
Bryan E. Porter
The current study examined the effect of automation transparency on user trust and blame during forced moral outcomes. Participants read through moral scenarios in which an autonomous vehicle did or did not convey information about its decision prior to making a utilitarian or non-utilitarian decision. Participants also provided moral acceptance ratings for autonomous vehicles and humans when making identical moral decisions.
It was expected that trust would be highest for utilitarian outcomes and blame would be highest for non-utilitarian outcomes. When the vehicle provided information about its decision, trust and blame were expected to increase. Results showed that moral outcome and transparency did not influence trust independently; rather, the two factors interacted. Specifically, trust was highest for non-transparent non-utilitarian outcomes and lowest for non-transparent utilitarian outcomes. Blame was not influenced by transparency, moral outcome, or their interaction. Interestingly, acceptance was higher for autonomous vehicles than for humans making the same utilitarian decision, though no differences were found for non-utilitarian outcomes.
This research highlights the importance of distinguishing active from passive harm and suggests that the type of automation transparency conveyed to an operator may be inappropriate in the presence of actively harmful moral outcomes. Theoretical insights into how ethical decisions are evaluated when different agents (human or autonomous) are responsible for active or passive moral decisions are discussed.
Hatfield, Nathan A.
"The Effects of Automation Transparency and Ethical Outcomes on User Trust and Blame Towards Fully Autonomous Vehicles"
(2018). Master of Science (MS) thesis, Psychology, Old Dominion University. DOI: 10.25777/hnh1-cq36