A Theoretical Characterization of Optimal Data Augmentations in Self-Supervised Learning

Subject: cs.LG (Machine Learning)

Authors

Shlomo Libo Feigin, Maximilian Fleissner, Debarghya Ghoshdastidar
Project Resources

Name                     Type     Source
ArXiv Paper              Paper    arXiv
Semantic Scholar Paper   Paper    Semantic Scholar
Abstract

Data augmentations play an important role in the recent success of Self-Supervised Learning (SSL). While augmentations are commonly viewed as encoding invariances into the learned representations, this interpretation overlooks the impact of the pretraining architecture and suggests that SSL would require diverse augmentations that resemble the data in order to work well. However, these assumptions do not align with empirical evidence, motivating further theoretical understanding to guide the principled design of augmentations in new domains. To this end, we use kernel theory to derive analytical expressions for data augmentations that achieve desired target representations after pretraining. We consider two popular non-contrastive losses, VICReg and Barlow Twins, and provide an algorithm to construct such augmentations. Our analysis shows that augmentations need not be similar to the data, nor diverse, to yield useful representations, and that the architecture has a significant impact on the optimal augmentations.
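For concreteness, below is a minimal NumPy sketch of the two non-contrastive losses the abstract refers to, in their standard published forms (Barlow Twins, Zbontar et al., 2021; VICReg, Bardes et al., 2022). The hyperparameter values and the toy two-view data are illustrative assumptions, not taken from this paper, and the sketch does not reproduce the paper's kernel-theoretic analysis of optimal augmentations.

```python
import numpy as np

def barlow_twins_loss(za, zb, lam=5e-3):
    """Barlow Twins loss on two batches of embeddings of shape (n, d).

    Standard formulation; lam is an illustrative redundancy-reduction weight.
    """
    n, d = za.shape
    # Normalize each embedding dimension across the batch (zero mean, unit std).
    za = (za - za.mean(axis=0)) / (za.std(axis=0) + 1e-8)
    zb = (zb - zb.mean(axis=0)) / (zb.std(axis=0) + 1e-8)
    c = za.T @ zb / n                               # (d, d) cross-correlation matrix
    on_diag = np.sum((1.0 - np.diag(c)) ** 2)       # push diagonal toward 1
    off_diag = np.sum(c ** 2) - np.sum(np.diag(c) ** 2)  # push off-diagonal toward 0
    return on_diag + lam * off_diag

def vicreg_loss(za, zb, lam=25.0, mu=25.0, nu=1.0, gamma=1.0):
    """VICReg loss: weighted sum of invariance, variance, and covariance terms."""
    n, d = za.shape
    inv = np.mean((za - zb) ** 2)                   # invariance: MSE between views

    def var_term(z):
        std = np.sqrt(z.var(axis=0) + 1e-4)
        return np.mean(np.maximum(0.0, gamma - std))  # hinge on per-dimension std

    def cov_term(z):
        zc = z - z.mean(axis=0)
        cov = zc.T @ zc / (n - 1)
        return (np.sum(cov ** 2) - np.sum(np.diag(cov) ** 2)) / d  # off-diagonal only

    return (lam * inv
            + mu * (var_term(za) + var_term(zb))
            + nu * (cov_term(za) + cov_term(zb)))

# Toy usage: two noisy "augmented views" of the same batch (hypothetical data).
rng = np.random.default_rng(0)
x = rng.normal(size=(256, 32))
za = x + 0.1 * rng.normal(size=x.shape)
zb = x + 0.1 * rng.normal(size=x.shape)
print(barlow_twins_loss(za, zb), vicreg_loss(za, zb))
```

Both losses reward agreement between the two views while penalizing redundancy across embedding dimensions, which is the property the paper exploits when characterizing which augmentations drive pretraining toward a desired target representation.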
