Date of Award
Summer 2024
Document Type
Dissertation
Degree Name
Doctor of Philosophy (PhD)
Department
Engineering Management & Systems Engineering
Program/Concentration
Engineering Management and Systems Engineering
Committee Director
Andrew J. Collins
Committee Member
Steve Cotter
Committee Member
David Selover
Abstract
A wide array of techniques within explainable artificial intelligence (XAI) have been developed to measure the importance of features in machine learning models. A notable portion of these methods draws upon principles of cooperative game theory (CGT), with the Shapley value emerging as the most widely used solution concept. Despite the rising prominence of the Shapley value, other promising solutions from cooperative game theory, such as the Nucleolus, the Banzhaf power index, the Shapley-Shubik power index, and solutions to conflicting claims problems, have received comparatively little attention despite their significant potential. In this dissertation, multiple XAI methods based on these other CGT solutions are proposed. These methods were applied in both linear regression and classification scenarios, on datasets with independent features as well as datasets exhibiting multicollinearity. Prior work evaluated XAI methods by examining the sensitivity of explanations through permutation tests or by measuring the accuracy of explanations. However, these approaches do not address the uncertainty or consistency associated with the resulting feature importance values. In this dissertation, a weighted Shannon entropy-based permutation relative importance evaluation (PRIME) metric is proposed to assess how consistently feature importance methods identify the relevant features. This metric integrates the established methods of permutation tests and weighted Shannon entropy to conduct the evaluation. The novelty of this dissertation lies in (1) demonstrating the applicability of numerous CGT solutions to measuring feature importance values, (2) showing the effectiveness of these techniques using the PRIME metric, and (3) employing these methods to investigate input data that can be used for an agent-based model.
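To make the CGT framing concrete, the sketch below computes exact Shapley and Banzhaf feature importance values for a small feature set, taking the characteristic function v(S) to be the R-squared of an ordinary least squares fit on the feature subset S. This is a minimal illustration of the general approach, not the dissertation's specific methods; the choice of R-squared as the coalition value is an assumption for demonstration only.

```python
from itertools import combinations
from math import factorial
import numpy as np

def r2_of_subset(X, y, subset):
    """Characteristic function v(S): R^2 of an OLS fit using features in S."""
    if not subset:
        return 0.0
    Xs = np.column_stack([np.ones(len(y)), X[:, sorted(subset)]])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot

def shapley_and_banzhaf(X, y):
    """Exact Shapley and Banzhaf values over all 2^n coalitions (small n only)."""
    n = X.shape[1]
    shap = np.zeros(n)
    banzhaf = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                gain = r2_of_subset(X, y, set(S) | {i}) - r2_of_subset(X, y, set(S))
                # Shapley: weight the marginal gain by the orderings where S precedes i
                shap[i] += factorial(k) * factorial(n - k - 1) / factorial(n) * gain
                # Banzhaf: average the marginal gain uniformly over all coalitions
                banzhaf[i] += gain / 2 ** (n - 1)
    return shap, banzhaf
```

The two indices differ only in how they weight coalitions: Shapley weights enforce efficiency (the values sum to the full model's R-squared), while Banzhaf weights all coalitions equally and need not sum to the grand coalition's value.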
The results show that the Shapley-Shubik power index, Banzhaf power index, and conflicting claims-based feature importance methods offer advantages over Shapley value-based methods due to their unique properties when explaining feature importance values. The findings also demonstrate that PRIME can effectively evaluate feature importance methods.
Rights
In Copyright. URI: http://rightsstatements.org/vocab/InC/1.0/ This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).
DOI
10.25777/xy0g-x273
ISBN
9798384453710
Recommended Citation
Grigoryan, Gayane. "Explainable Artificial Intelligence: Methods and Evaluation" (2024). Doctor of Philosophy (PhD), Dissertation, Engineering Management & Systems Engineering, Old Dominion University, DOI: 10.25777/xy0g-x273
https://digitalcommons.odu.edu/emse_etds/234
ORCID
0000-0002-8567-9643