Poster
Masking meets Supervision: A Strong Learning Alliance
Byeongho Heo · Taekyung Kim · Sangdoo Yun · Dongyoon Han
Pre-training with randomly masked inputs has emerged as a new trend in self-supervised learning. However, supervised learning still struggles to adopt masking augmentations, primarily due to unstable training. In this paper, we propose a novel way to incorporate masking augmentations, dubbed Masked Sub-model (MaskSub). MaskSub consists of a main-model and a sub-model, the latter being a part of the former. The main-model follows conventional training recipes, while the sub-model receives intensive masking augmentations during training. MaskSub addresses the resulting instability by mitigating its adverse effects through a relaxed loss function similar to a self-distillation loss. Our analysis shows that MaskSub significantly improves performance, with the training loss converging faster than in standard training, which suggests that our method stabilizes the training process. We further validate MaskSub across diverse training scenarios and models, including DeiT-III training, MAE finetuning, CLIP finetuning, BERT training, and hierarchical architectures (ResNet and Swin Transformer). Our results show that MaskSub consistently achieves significant performance gains in all cases. MaskSub provides a practical and effective solution for introducing additional regularization under various training recipes. Our code will be publicly available.
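To make the described mechanism concrete, the sketch below illustrates one possible training step in the spirit of the abstract: a full-input main-model pass with the usual supervised loss, plus a masked sub-model pass (same weights) trained against the main-model's detached predictions as a relaxed, self-distillation-like target. This is a minimal illustration under assumptions, not the authors' released code; `token_mask`, `model.num_patches`, and the specific KL-based loss are hypothetical stand-ins for whatever interface and relaxed loss the paper actually uses.

```python
import torch
import torch.nn.functional as F

def masksub_step(model, images, labels, mask_ratio=0.5):
    """One hypothetical MaskSub-style training step (illustrative sketch)."""
    # Main-model pass: conventional supervised training on the full input.
    logits_main = model(images)
    loss_main = F.cross_entropy(logits_main, labels)

    # Sub-model pass: same weights, but a random subset of patch tokens is dropped.
    # `num_patches` and the `token_mask` argument are assumed for illustration.
    batch_size = images.size(0)
    keep = torch.rand(batch_size, model.num_patches, device=images.device) > mask_ratio
    logits_sub = model(images, token_mask=keep)

    # Relaxed loss resembling self-distillation: the sub-model matches the
    # main-model's detached soft predictions rather than the hard labels.
    target = F.softmax(logits_main.detach(), dim=-1)
    loss_sub = F.kl_div(F.log_softmax(logits_sub, dim=-1), target,
                        reduction="batchmean")

    return loss_main + loss_sub
```

Because the sub-model shares all parameters with the main-model, the extra pass acts purely as additional regularization; no separate network or extra parameters are introduced.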