From Static to Dynamic: Exploring Self-supervised Image-to-Video Representation Transfer Learning
Abstract
Recent studies have made notable progress in video representation learning by transferring image-pretrained models to video tasks. This transfer typically introduces complex temporal processing modules that are fine-tuned on video data. However, fine-tuning heavy modules may compromise inter-video semantic separability, i.e., the essential ability to distinguish objects across videos, whereas reducing the number of tunable parameters hinders intra-video temporal consistency, i.e., the ability to produce stable representations for the same object within a video. This dilemma indicates a potential trade-off between intra-video temporal consistency and inter-video semantic separability during image-to-video transfer. To this end, we propose the Consistency-Separability Trade-off Transfer Learning (Co-Settle) framework, which applies a lightweight projection layer on top of a frozen image-pretrained encoder to adjust the representation space under a temporal cycle-consistency objective and a semantic separability constraint. We further provide theoretical support showing that the optimized projection yields a better trade-off between the two properties under appropriate conditions. Experiments on eight image-pretrained models demonstrate consistent performance improvements across video tasks at multiple levels with only five epochs of self-supervised training. The code is available in the Supplemental Material.
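A minimal sketch of the setup described above, assuming a PyTorch-style workflow. The class `CoSettleHead`, the two loss functions, and names such as `proj_dim`, `tau`, and `lam` are illustrative reconstructions from the abstract, not the authors' released code: the consistency term follows a TCC-style cycle classification formulation, and the separability term is a uniformity-style stand-in for the constraint, whose exact form is not specified here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoSettleHead(nn.Module):
    """Lightweight trainable projection over a frozen image-pretrained encoder."""

    def __init__(self, encoder: nn.Module, feat_dim: int, proj_dim: int = 256):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False                # backbone stays frozen
        self.proj = nn.Linear(feat_dim, proj_dim)  # the only tunable parameters

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, C, H, W); encoder assumed to map images to (N, feat_dim)
        b, t = frames.shape[:2]
        with torch.no_grad():
            feats = self.encoder(frames.flatten(0, 1))  # (B*T, feat_dim)
        z = F.normalize(self.proj(feats), dim=-1)       # projected, L2-normalized
        return z.view(b, t, -1)                         # (B, T, proj_dim)


def cycle_consistency_loss(u, v, tau: float = 0.1) -> torch.Tensor:
    """Intra-video consistency: frame i of view u should soft-match into view v
    and cycle back to frame i (TCC-style classification formulation)."""
    # u, v: (T, D) normalized frame embeddings of two views of the same video
    soft_nn = F.softmax(u @ v.t() / tau, dim=1) @ v    # soft nearest neighbor in v
    logits = soft_nn @ u.t() / tau                     # cycle back toward u
    target = torch.arange(u.size(0), device=u.device)  # should land where it started
    return F.cross_entropy(logits, target)


def separability_loss(z: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Inter-video separability: push video-level embeddings of different
    videos apart (uniformity-style penalty used here as a stand-in)."""
    vid = F.normalize(z.mean(dim=1), dim=-1)  # (B, D) video-level embeddings
    d2 = torch.cdist(vid, vid).pow(2)         # pairwise squared distances
    mask = ~torch.eye(vid.size(0), dtype=torch.bool, device=vid.device)
    return d2[mask].mul(-t).exp().mean().log()


def training_step(model: CoSettleHead, view_a, view_b, lam: float = 1.0):
    # Illustrative training step: both objectives act only on the projection.
    za, zb = model(view_a), model(view_b)  # (B, T, D) each
    cyc = torch.stack(
        [cycle_consistency_loss(za[i], zb[i]) for i in range(za.size(0))]
    ).mean()
    sep = separability_loss(torch.cat([za, zb], dim=0))
    return cyc + lam * sep
```

The design choice mirrored here is that gradients reach only the projection layer, so the frozen encoder's semantics are preserved while the projection mediates the trade-off between the two objectives.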