Computer Ethics - Philosophical Enquiry (CEPE) Proceedings

Publication Date

5-29-2019

Document Type

Paper

DOI

10.25884/xkzx-4c75

Author ORCiD

0000-0003-4411-6577

Abstract

Trust is defined as a belief of a human H (the 'trustor') about the ability of an agent A (the 'trustee') to perform future action(s). We adopt dispositionalism and internalism about trust: H trusts A iff A has certain internal dispositions as competences. The dispositional competences of A are high-level metacognitive requirements, in line with a naturalized virtue epistemology (Sosa, Carter). We advance a Bayesian model of two metacognitive components: (i) confidence in the decision and (ii) model uncertainty. To trust A, H requires A to be self-assertive about its confidence and able to self-correct its own models. On the Bayesian approach, trust can be applied not only to humans but also to artificial agents (e.g., machine learning algorithms). We explain the advantage of metacognitive trust over mainstream approaches and how it relates to virtue epistemology. The metacognitive ethics of trust is briefly discussed.
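
As a rough illustration of the two metacognitive quantities named in the abstract (not the paper's actual model), the following minimal Python sketch shows a Beta-Bernoulli agent that reports (i) a confidence in its next decision and (ii) a measure of model uncertainty, and that "self-corrects" by Bayesian updating on new evidence. All names here (BayesianAgent, confidence, model_uncertainty) are hypothetical and chosen only for illustration.

    class BayesianAgent:
        def __init__(self, alpha=1.0, beta=1.0):
            # Beta(alpha, beta) prior over the probability of success
            self.alpha, self.beta = alpha, beta

        def update(self, outcome):
            # Self-correction: revise the model after observing 1 (success) or 0 (failure)
            self.alpha += outcome
            self.beta += 1 - outcome

        def confidence(self):
            # (i) Confidence in the decision: posterior predictive probability of success
            return self.alpha / (self.alpha + self.beta)

        def model_uncertainty(self):
            # (ii) Model uncertainty: posterior variance of the success rate
            a, b = self.alpha, self.beta
            return a * b / ((a + b) ** 2 * (a + b + 1))

    agent = BayesianAgent()
    for outcome in [1, 1, 0, 1, 1]:
        agent.update(outcome)
    print(agent.confidence(), agent.model_uncertainty())

On this toy reading, a trustor could demand that the agent's reported confidence track its evidence and that its model uncertainty shrink as it self-corrects; the paper's own formulation should be consulted for the actual requirements.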

Custom Citation

Muntean, I. (2019). A metacognitive approach to trust and a case study: Artificial agency. In D. Wittkower (Ed.), 2019 Computer Ethics - Philosophical Enquiry (CEPE) Proceedings (14 pp.). doi:10.25884/xkzx-4c75. Retrieved from https://digitalcommons.odu.edu/cepe_proceedings/vol2019/iss1/17
