Abstract
As artificial intelligence continues to evolve rapidly with emerging innovations, mass-scale digitization risks being disrupted by unfair algorithms trained on historically biased data. Given rising concerns over algorithmic bias, detecting bias is an essential step toward mitigating it and implementing algorithms that promote inclusive representation. As artificial intelligence becomes ubiquitous, improving model robustness has never been more crucial. This paper examines the pervasiveness of artificial intelligence and the bias that can result from it, examples of AI bias affecting different groups, and a potential framework and mitigation strategies to improve AI fairness and remove bias from modeling techniques.
Faculty Advisor/Mentor
Kazi Islam
Document Type
Paper
Disciplines
Artificial Intelligence and Robotics | Information Security | Theory and Algorithms
DOI
10.25776/8ktt-kk62
Publication Date
2022
Mitigation of Algorithmic Bias to Improve AI Fairness