Conditional Contrastive Learning for Improving Fairness in Self-Supervised Learning

Created by MG96

External, Public, cs.LG

Statistics

Citations: 12
References: 70
Authors

Martin Q. Ma, Yao-Hung Hubert Tsai, Paul Pu Liang, Han Zhao, Kun Zhang, Ruslan Salakhutdinov, Louis-Philippe Morency
Project Resources

Name: ArXiv Paper, Type: Paper, Source: arXiv
Name: Semantic Scholar Paper, Type: Paper, Source: Semantic Scholar
Abstract

Contrastive self-supervised learning (SSL) learns an embedding space that maps similar data pairs closer together and dissimilar data pairs farther apart. Despite its success, one issue has been overlooked: the fairness of the representations learned by contrastive SSL. Without mitigation, contrastive SSL techniques can incorporate sensitive information such as gender or race and cause potentially unfair predictions on downstream tasks. In this paper, we propose a Conditional Contrastive Learning (CCL) approach to improve the fairness of contrastive SSL methods. Our approach samples positive and negative pairs from distributions conditioned on the sensitive attribute; empirically, this means sampling positive and negative pairs of the same gender or the same race. We show that our approach provably maximizes the conditional mutual information between the learned representations of the positive pairs and reduces the effect of the sensitive attribute by taking it as the conditioning variable. On seven fairness and vision datasets, we empirically demonstrate that the proposed approach achieves state-of-the-art downstream performance compared to unsupervised baselines and significantly improves the fairness of contrastive SSL models on multiple fairness metrics.
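The conditional sampling described in the abstract can be sketched as an in-batch variant of the InfoNCE loss in which each anchor's positives and negatives are restricted to samples sharing its sensitive attribute. The sketch below is illustrative only, not the authors' implementation: the function name conditional_info_nce, the temperature value, and the use of an in-batch mask (in place of explicit sampling from conditional distributions) are all assumptions made for the example.

import torch
import torch.nn.functional as F

def conditional_info_nce(z1, z2, attrs, temperature=0.1):
    # Hypothetical sketch of conditional contrastive sampling, not the paper's code.
    # z1, z2: (N, d) embeddings of two augmented views of the same N samples.
    # attrs:  (N,) integer codes of the sensitive attribute (e.g., gender or race).
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    # Similarity of every view-1 anchor to every view-2 candidate.
    logits = z1 @ z2.t() / temperature
    # Candidate j is admissible for anchor i only if they share the attribute;
    # the diagonal (the true positive) always satisfies this.
    same_attr = attrs.unsqueeze(0) == attrs.unsqueeze(1)
    logits = logits.masked_fill(~same_attr, float('-inf'))
    # The positive for anchor i is the i-th view-2 embedding.
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Example usage with random data; attrs could encode, e.g., a binary gender label.
z1, z2 = torch.randn(64, 128), torch.randn(64, 128)
attrs = torch.randint(0, 2, (64,))
loss = conditional_info_nce(z1, z2, attrs)

Because every negative shares the anchor's sensitive attribute, the attribute itself can no longer help the model separate positives from negatives, which matches the abstract's intuition that conditioning on the sensitive attribute removes its effect while the objective maximizes conditional mutual information between representations of positive pairs.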

Note:

No note available for this project.

Contact:

No contact available for this project.
