

MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training

Pavan Kumar Anasosalu Vasu · Hadi Pouransari · Fartash Faghri · Raviteja Vemulapalli · Oncel Tuzel

Arch 4A-E Poster #130
Thu 20 Jun 5 p.m. PDT — 6:30 p.m. PDT


Contrastive pretraining of image-text foundation models, such as CLIP, has demonstrated excellent zero-shot performance and improved robustness on a wide range of downstream tasks. However, the large transformer-based text and image encoders in these models introduce significant memory and latency overhead, posing challenges for deployment on mobile devices. In this work, we introduce MobileCLIP, a new family of efficient image-text models optimized for runtime performance, along with a novel, efficient, and highly effective training approach referred to as multi-modal reinforced training. The proposed training approach leverages knowledge transfer from an image captioning model and an ensemble of strong CLIP encoders to improve the accuracy of efficient models, while avoiding training-time compute overhead by storing the additional knowledge in a reinforced dataset. MobileCLIP sets a new state-of-the-art latency-accuracy tradeoff for zero-shot classification and retrieval tasks on several datasets. Our MobileCLIP-S2 variant is 2.3x faster and more accurate in average zero-shot performance than the previous best ViT-B/16-based CLIP model. We also demonstrate the effectiveness of the proposed multi-modal reinforced training in isolation by training a CLIP model with a standard ViT-B/16 image backbone, achieving a +2.9% average performance improvement over the previous best on the 38 OpenCLIP evaluation benchmarks. Moreover, we show that the proposed approach achieves 10x-1000x improved learning efficiency compared with non-reinforced CLIP training.
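The core idea above can be sketched as a two-term objective: the usual CLIP contrastive loss on the matched image-text pairs, plus a distillation term that matches the student's image-to-text similarity distribution to one produced by stored teacher embeddings from the reinforced dataset. The following is a minimal NumPy sketch under stated assumptions; the function names, the mixing weight `lam`, the temperature, and the random toy embeddings are illustrative and not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2norm(x):
    # normalize embeddings to unit length, as in CLIP
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def clip_contrastive_loss(img, txt, temp=0.07):
    # symmetric cross-entropy against the diagonal (matched pairs)
    logits = l2norm(img) @ l2norm(txt).T / temp
    n = logits.shape[0]
    p_i2t = softmax(logits, axis=1)
    p_t2i = softmax(logits, axis=0)
    diag = np.arange(n)
    return float(-(np.log(p_i2t[diag, diag]).mean()
                   + np.log(p_t2i[diag, diag]).mean()) / 2)

def distill_loss(student_img, student_txt, teacher_img, teacher_txt, temp=0.07):
    # KL divergence between teacher and student image-to-text
    # similarity distributions over the batch
    s = softmax(l2norm(student_img) @ l2norm(student_txt).T / temp, axis=1)
    t = softmax(l2norm(teacher_img) @ l2norm(teacher_txt).T / temp, axis=1)
    return float((t * (np.log(t) - np.log(s))).sum(axis=1).mean())

# toy batch: in reinforced training the teacher embeddings would be
# precomputed once (offline) and loaded from the reinforced dataset,
# so no teacher forward pass is needed during training
n, d = 8, 16
student_img, student_txt = rng.normal(size=(n, d)), rng.normal(size=(n, d))
teacher_img, teacher_txt = rng.normal(size=(n, d)), rng.normal(size=(n, d))

lam = 0.7  # hypothetical mixing weight between the two terms
loss = (1 - lam) * clip_contrastive_loss(student_img, student_txt) \
       + lam * distill_loss(student_img, student_txt, teacher_img, teacher_txt)
print(loss)
```

The key efficiency point is in the comments: because the teacher targets are stored in the dataset rather than recomputed, the ensemble of strong teachers adds storage cost but essentially no training-time compute.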
