FairPipes: Data Mutation Pipelines for Machine Learning Fairness

Research output: Contribution to a scientific event (unpublished) · Article · Peer-reviewed


Abstract

Machine Learning (ML) models are ubiquitous in decision-making applications that impact citizens' lives: credit attribution, crime recidivism prediction, etc. Beyond seeking high performance and generalization ability, it is essential to ensure that ML models do not discriminate against citizens based on their age, gender, or race. To this end, researchers have developed various fairness assessment techniques, comprising fairness metrics and mitigation approaches, notably at the model level.
However, the sensitivity of ML models to fairness-related data perturbations has been less explored. This paper presents mutation-based pipelines that emulate fairness variations in the data once the model is deployed. FairPipes implements mutation operators that shuffle sensitive attributes, add new values, or alter their distribution. We evaluated FairPipes with seven ML models over three datasets. Our results highlight different fairness-sensitivity behaviors across models, ranging from the most sensitive (perceptrons) to the least sensitive (support vector machines). We also examine the role of model optimization in fairness performance, which varies across models. FairPipes automates fairness testing at deployment time, informing researchers and practitioners about the evolution of their ML models' fairness sensitivity.
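To make the idea of a data-mutation operator concrete, here is a minimal sketch of one of the operator families the abstract describes: shuffling a sensitive attribute to break its association with the other features and the label. The function name, signature, and toy data are illustrative assumptions, not the actual FairPipes API.

```python
import numpy as np
import pandas as pd

def shuffle_sensitive(df, column, fraction=1.0, seed=0):
    """Illustrative mutation operator (not the FairPipes API):
    randomly permute the values of a sensitive attribute for a
    chosen fraction of rows. The marginal distribution of the
    attribute is preserved, but its link to other columns is broken."""
    rng = np.random.default_rng(seed)
    out = df.copy()
    # pick the subset of rows to mutate
    idx = rng.choice(df.index, size=int(len(df) * fraction), replace=False)
    # permute the sensitive values among those rows
    out.loc[idx, column] = rng.permutation(out.loc[idx, column].to_numpy())
    return out

# toy example with a hypothetical sensitive attribute "gender"
data = pd.DataFrame({"gender": ["F", "F", "M", "M", "F", "M"],
                     "income": [1, 0, 1, 1, 0, 1]})
mutated = shuffle_sensitive(data, "gender", fraction=0.5, seed=42)
```

A fairness-testing pipeline in this spirit would compare a model's fairness metrics on `data` and `mutated` to gauge its sensitivity to such perturbations.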
Original language: English
Publication status: Published - 2024
