Unsupervised pretraining leverages large pools of unlabeled data to learn feature representations. However, in many computer vision tasks it requires billion-scale datasets and month-long training runs to surpass its supervised counterpart after fine-tuning. In this study, we propose a novel method, Diffeomorphism Matching (DM), to overcome these challenges. The proposed method combines self-supervised learning and knowledge distillation to learn a mapping that makes the feature space of a student model equivalent to that of a large pretrained teacher model. On the Chest X-ray dataset, our method removes the need to acquire billions of radiographs and reduces pretraining time by 95%. In addition, our pretrained model outperforms other pretrained models by at least 4.2% in F1 score on the CheXpert dataset and by 0.7% in Dice score on the SIIM Pneumothorax dataset.
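The abstract does not spell out the training objective, so the following is only a minimal sketch of how a self-supervised term might be combined with a teacher-feature-matching distillation term in PyTorch. The class name `FeatureMatchingDistiller`, the linear projection head, and the weighting `lambda_kd` are illustrative assumptions, not the authors' exact DM formulation (in particular, a plain linear head is not a diffeomorphism).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureMatchingDistiller(nn.Module):
    """Sketch: align a student's feature space with a frozen teacher's.

    Assumptions: `student` and `teacher` are feature extractors that map
    a batch of images to [batch, dim] embeddings; the projection head
    standing in for the paper's feature-space mapping is hypothetical.
    """

    def __init__(self, student, teacher, student_dim, teacher_dim):
        super().__init__()
        self.student = student
        self.teacher = teacher.eval()  # teacher stays frozen
        for p in self.teacher.parameters():
            p.requires_grad = False
        # Hypothetical mapping from student space into teacher space.
        self.proj = nn.Linear(student_dim, teacher_dim)

    def forward(self, x1, x2, lambda_kd=1.0):
        """x1, x2: two augmented views of the same batch of images."""
        z1 = self.student(x1)
        z2 = self.student(x2)

        # Self-supervised term: a simple view-invariance loss that pulls
        # the two views' embeddings together (negative cosine similarity,
        # with stop-gradient on the target branch).
        ssl_loss = -(
            F.cosine_similarity(z1, z2.detach(), dim=-1).mean()
            + F.cosine_similarity(z2, z1.detach(), dim=-1).mean()
        ) / 2

        # Distillation term: match the mapped student features to the
        # teacher's features for the same inputs.
        with torch.no_grad():
            t1 = self.teacher(x1)
        kd_loss = F.mse_loss(self.proj(z1), t1)

        return ssl_loss + lambda_kd * kd_loss
```

Under this reading, a student trained with the combined loss would then be fine-tuned on downstream tasks such as CheXpert classification or SIIM Pneumothorax segmentation, as the abstract reports.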