Computer Ethics - Philosophical Enquiry (CEPE) Proceedings
Conference Section
AI Ethics III
Publication Date
May 29, 2019
Document Type
Paper
DOI
10.25884/6q27-6t77
Author ORCiD
0000-0003-4991-0817
Abstract
Human supremacy is the widely held view that human interests ought to be privileged over other interests as a matter of public policy. Posthumanism is an historical and cultural situation characterized by a critical reevaluation of anthropocentrist theory and practice. This paper draws on Rosi Braidotti’s critical posthumanism and the critique of ideal theory in Charles Mills and Serene Khader to address the use of human supremacist rhetoric in AI ethics and policy discussions, particularly in the work of Joanna Bryson. This analysis identifies a set of risks posed by human supremacist policy in a posthuman context, specifically risks involving the classification of agents by type.
Custom Citation
Estrada, D. (2019). Human supremacy as posthuman risk. In D. Wittkower (Ed.), 2019 Computer Ethics - Philosophical Enquiry (CEPE) Proceedings (26 pp.). doi:10.25884/6q27-6t77. Retrieved from https://digitalcommons.odu.edu/cepe_proceedings/vol2019/iss1/13