Algorithm Ethics I
In 2017, Amazon discontinued an attempt to develop a hiring algorithm intended to streamline its hiring processes, due to apparent gender discrimination. The algorithm, trained on over a decade’s worth of resumes submitted to Amazon, learned to penalize applications that contained references to women, indicated graduation from all-women’s colleges, or otherwise suggested that an applicant was not male. Amazon’s algorithm took up the history of Amazon’s applicant pool and integrated it into its present “problematic situation” for the purposes of future action. Consequently, Amazon declared the project a failure: even after attempting to edit the algorithm to ensure neutrality toward terms like “women,” Amazon executives were not convinced that it would refrain from biased sorting of applicants. While the incident was held up as yet another instance of bias derailing an application of machine learning, this paper contends that the “failure,” viewed phenomenologically and pragmatically, could be articulated as a success. If we view the algorithm’s bias as making present that which is habitual, or that which fades into the social background, such failures become valuable tools for evaluating current social and cultural practices. Rather than treating biased algorithms as “failures,” then, it may be more productive to view algorithmic bias as demonstrative of a social or cultural organization that gives rise to bias. Biased algorithms thereby function as modes of diagnosing the ways in which inequalities are institutionalized and replicated within organizations. They are, for John Dewey, forms of technology, insofar as technology refers to the methods of inquiry into problematic situations that serve to make clear the organization of our society.
This paper argues that we should take seriously the results of biased algorithms, not as the projected completion of action, but as processes of inquiry that indicate the ways in which our society is organized to replicate inequality.
Flowers, J. C. (2019). Rethinking algorithmic bias through phenomenology and pragmatism. In D. Wittkower (Ed.), 2019 Computer Ethics - Philosophical Enquiry (CEPE) Proceedings (27 pp.). doi: 10.25884/mh5z-fb89. Retrieved from https://digitalcommons.odu.edu/cepe_proceedings/vol2019/iss1/14