Ethical Adversaries: Towards Mitigating Unfairness with Adversarial Machine Learning

Pieter Delobelle, Paul Temple, Gilles Perrouin, Benoît Frénay, Patrick Heymans, Bettina Berendt

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



Machine learning is being integrated into a growing number of critical systems with far-reaching impacts on society. Due to this widespread use, unexpected behaviour and unfair decision processes are coming under increasing scrutiny. Individuals, as well as organisations, notice, test, and criticize unfair results to hold model designers and deployers accountable. We offer a framework that assists these groups in mitigating unfair representations stemming from the training datasets. Our framework relies on two inter-operating adversaries to improve fairness. First, a model is trained with the goal of preventing an adversary from inferring the values of protected attributes while limiting utility losses. This first step optimizes the model's parameters for fairness. Second, the framework leverages evasion attacks from adversarial machine learning to generate new examples that will be misclassified. These new examples are then used to retrain and improve the model in the first step. These two steps are applied iteratively until a significant improvement in fairness is obtained. We evaluated our framework on well-studied datasets in the fairness literature -- including COMPAS -- where it can surpass other approaches concerning demographic parity, equality of opportunity, and also the model's utility. We also illustrate our findings on the subtle difficulties of mitigating unfairness and highlight how our framework can assist model designers.
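The two-step loop described above can be sketched in code. This is a minimal, self-contained illustration only, not the authors' implementation: it assumes a linear model on synthetic data, uses gradient reversal for the fairness adversary and an FGSM-style perturbation for the evasion attack, and all names, sizes, and hyperparameters (`lam`, `eps`, the three outer rounds) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# Synthetic data: the first features carry the protected attribute s,
# the remaining features carry the task label y.
n, d, k = 400, 5, 3
s = rng.integers(0, 2, n).astype(float)      # protected attribute
y = rng.integers(0, 2, n).astype(float)      # task label
x = rng.normal(size=(n, d))
x[:, :2] += 1.5 * s[:, None]
x[:, 2:] += 2.0 * y[:, None]
x0, y0 = x.copy(), y.copy()                  # clean data for evaluation

W = rng.normal(scale=0.1, size=(d, k))       # shared representation
u = rng.normal(scale=0.1, size=k)            # task head
v = rng.normal(scale=0.1, size=k)            # adversary head (guesses s)
lr, lam, eps = 0.05, 1.0, 0.3

def train(x, y, s, iters=500):
    """Step 1: fairness-aware training with a gradient-reversal adversary."""
    global W, u, v
    for _ in range(iters):
        h = x @ W
        g_y = (sigmoid(h @ u) - y) / len(y)  # task-loss gradient at the logit
        g_s = (sigmoid(h @ v) - s) / len(y)  # adversary-loss gradient
        u -= lr * (h.T @ g_y)
        v -= lr * (h.T @ g_s)                # the adversary learns normally
        # Gradient reversal: W minimises the task loss but *maximises* the
        # adversary's loss, hiding s from the shared representation.
        W -= lr * (np.outer(x.T @ g_y, u) - lam * np.outer(x.T @ g_s, v))

# Outer loop: train (step 1), run an evasion attack (step 2), and
# retrain on the examples the attack made the model misclassify.
for _ in range(3):
    train(x, y, s)
    direction = W @ u                        # effective linear task weights
    grad_x = np.outer(sigmoid(x @ direction) - y, direction)
    x_adv = x + eps * np.sign(grad_x)        # FGSM-style evasion examples
    mis = (sigmoid(x_adv @ direction) > 0.5) != (y > 0.5)
    x = np.vstack([x, x_adv[mis]])
    y = np.concatenate([y, y[mis]])
    s = np.concatenate([s, s[mis]])

# Sanity metric: task accuracy on the clean data should remain useful
# even though the representation is trained to hide s.
acc = np.mean((sigmoid(x0 @ W @ u) > 0.5) == (y0 > 0.5))
```

In this sketch the inner adversary shapes the representation (fairness), while the outer evasion attack supplies hard examples that are folded back into the training set, mirroring the iterative interplay of the two adversaries in the abstract.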
Original language: English
Title: 1st workshop on Bias and Fairness in AI, co-located with ECMLPKDD 2020
Number of pages: 15
Publication status: Published - 14 May 2020
Event: Bias and Fairness in AI (BIAS 2020)
Duration: 18 Sept 2020 → 18 Sept 2020

Workshop

Workshop: Bias and Fairness in AI (BIAS 2020)
Abbreviated title: BIAS 2020

