Contrastive Credibility Propagation for Reliable Semi-Supervised Learning

arXiv cs.LG

Statistics

Citations: 0
References: 72
Authors

Brody Kutt, Pralay Ramteke, Xavier Mignot, Pamela Toman, Nandini Ramanan, Sujit Rokka Chhetri, Shan Huang, Min Du, William Hewlett
Project Resources

ArXiv Paper (Paper): arXiv
Semantic Scholar Paper: Semantic Scholar
Abstract

Producing labels for unlabeled data is error-prone, making semi-supervised learning (SSL) troublesome. Often, little is known about when and why an algorithm fails to outperform a supervised baseline. Using benchmark datasets, we craft five common real-world SSL data scenarios: few-label, open-set, noisy-label, and class distribution imbalance/misalignment in the labeled and unlabeled sets. We propose a novel algorithm called Contrastive Credibility Propagation (CCP) for deep SSL via iterative transductive pseudo-label refinement. CCP unifies semi-supervised learning and noisy label learning for the goal of reliably outperforming a supervised baseline in any data scenario. Compared to prior methods which focus on a subset of scenarios, CCP uniquely outperforms the supervised baseline in all scenarios, supporting practitioners when the qualities of labeled or unlabeled data are unknown.
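The abstract describes CCP as deep SSL via iterative transductive pseudo-label refinement. The paper's actual algorithm is not reproduced on this page, so the sketch below shows only the generic self-training loop that this family of methods builds on: repeatedly predict labels for unlabeled points and promote only confident predictions into the labeled set. The 1-nearest-neighbor scorer, the confidence formula, and the threshold are illustrative assumptions, not CCP itself.

```python
# Generic self-training sketch (NOT the CCP algorithm from the paper).
# The nearest-neighbor model, confidence score, and threshold are
# illustrative assumptions chosen to keep the example self-contained.

def nn_predict(labeled, x):
    """Predict by the nearest labeled point; return (label, confidence)."""
    xs, y = min(labeled, key=lambda p: abs(p[0] - x))
    conf = 1.0 / (1.0 + abs(xs - x))  # closer neighbor => higher confidence
    return y, conf

def self_train(labeled, unlabeled, threshold=0.5, rounds=5):
    """Iteratively promote confident pseudo-labels into the labeled set."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(rounds):
        newly = []
        for x in unlabeled:
            y, conf = nn_predict(labeled, x)
            if conf >= threshold:
                newly.append((x, y))
        if not newly:          # no confident predictions left; stop early
            break
        labeled.extend(newly)
        promoted = {p[0] for p in newly}
        unlabeled = [x for x in unlabeled if x not in promoted]
    return labeled

# Toy 1-D data: two labeled anchors, four unlabeled points.
labeled = [(0.0, "a"), (10.0, "b")]
unlabeled = [1.0, 2.0, 8.5, 9.0]
result = dict(self_train(labeled, unlabeled, threshold=0.4))
```

CCP departs from this naive loop precisely where it tends to fail: rather than committing hard to early pseudo-labels (which lets label noise compound across rounds), the paper refines credibility estimates iteratively so that unreliable pseudo-labels can be down-weighted or revised, which is what lets it unify SSL with noisy-label learning.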
