ORCID

0000-0003-4798-9154 (Buskirk)

Document Type

Conference Paper

Publication Date

2025

Publication Title

Proceedings of Machine Learning Research

Volume

294

Pages

187-211

Conference Name

4th European Workshop on Algorithmic Fairness, EWAF 2025, June 30-July 2, 2025, Eindhoven, Netherlands

Abstract

Data-driven decisions, often based on predictions from machine learning (ML) models, are becoming ubiquitous. For these decisions to be just, the underlying ML models must be fair, i.e., work equally well for all parts of the population, such as groups defined by gender or age. What are the logical next steps if, however, a trained model is accurate but not fair? How can we guide the whole data pipeline so that we avoid training unfair models on inadequate data, recognizing possible sources of unfairness early on? How can the concepts of data-based sources of unfairness that exist in the fair ML literature be organized, perhaps in a way that yields new insight? In this paper, we explore two total error frameworks from the social sciences, Total Survey Error and its generalization Total Data Quality, to help elucidate issues related to fairness and trace its antecedents. The goal of this thought piece is to acquaint the fair ML community with these two frameworks, discussing errors of measurement and errors of representation through their organized structure. We illustrate how they may be useful, both practically and conceptually.

Rights

© 2025 Copyright held by the owner/authors.

This paper is published under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC-BY-NC-ND 4.0) License.

Original Publication Citation

Schenk, P. O., Kern, C., & Buskirk, T. D. (2025). Fares on fairness: Using a total error framework to examine the role of measurement and representation in training data on model fairness and bias. Proceedings of Machine Learning Research, 294, 187-211. https://proceedings.mlr.press/v294/schenk25a.html
