Date of Award

Winter 2018

Document Type


Degree Name

Master of Science (MS)



Committee Director

Bryan E. Porter

Committee Member

Yusuke Yamani

Committee Member

Jeremiah Still


The current study examined the effect of automation transparency on user trust and blame during forced moral outcomes. Participants read through moral scenarios in which an autonomous vehicle did or did not convey information about its decision prior to making a utilitarian or non-utilitarian decision. Participants also provided moral acceptance ratings for autonomous vehicles and humans when making identical moral decisions.

It was expected that trust would be highest for utilitarian outcomes and blame would be highest for non-utilitarian outcomes. When the vehicle provided information about its decision, trust and blame were both expected to increase. Results showed that moral outcome and transparency did not influence trust independently; rather, the two factors interacted. Specifically, trust was highest for non-transparent non-utilitarian outcomes and lowest for non-transparent utilitarian outcomes. Blame was not influenced by transparency, moral outcome, or their interaction. Interestingly, acceptance was higher for autonomous vehicles than for humans making the same utilitarian decision, though no differences were found for non-utilitarian outcomes.

This research highlights the distinction between active and passive harm and suggests that the type of automation transparency conveyed to an operator may be inappropriate in the presence of actively harmful moral outcomes. Theoretical insights are discussed concerning how ethical decisions are evaluated when different agents (human or autonomous) are responsible for active or passive moral decisions.


In Copyright. URI: This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).