PowerCLIP: Powerset Alignment for Fine-Grained Contrastive Pre-Training
Masaki Kawamura ⋅ Nakamasa Inoue ⋅ Rintaro Yanagi ⋅ Hirokatsu Kataoka ⋅ Rio Yokota
Abstract
Contrastive pre-training frameworks such as CLIP have demonstrated impressive zero-shot performance across a range of vision-language tasks. Recent studies have shown that aligning individual text tokens with specific image patches or regions enhances fine-grained compositional understanding. However, it remains challenging to capture compositional semantics spanning multiple image regions. To address this limitation, we propose PowerCLIP, a novel contrastive pre-training framework enhanced by powerset alignment, which exhaustively optimizes region-to-phrase alignments by minimizing a loss defined between powersets of image regions and textual parse trees. As this approach incurs exponential computational cost due to the combinatorial explosion in the number of region subsets, we introduce efficient non-linear aggregators (NLAs) that reduce the complexity from $\mathcal{O}(2^{M})$ to $\mathcal{O}(M)$ with respect to the number of regions $M$, provably approximating the exact loss value with arbitrary precision. Our extensive experiments demonstrate that PowerCLIP outperforms state-of-the-art methods in zero-shot classification and retrieval tasks, underscoring the compositionality and robustness of our approach. Our code will be made publicly available.
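The abstract does not spell out the NLA construction, but the flavor of an exponential-to-linear reduction over a powerset can be illustrated with a standard factorization identity: a log-sum-exp over all $2^{M}$ subsets of additive per-region scores factorizes into a sum of $M$ terms. The sketch below is a minimal illustration of this identity only, not the paper's actual aggregator; the function names and scores are hypothetical.

```python
import itertools
import math

def powerset_logsumexp_naive(scores):
    """Exact log of sum_{S subseteq [M]} exp(sum_{i in S} s_i).
    O(2^M): explicitly enumerates every subset (including the empty set)."""
    total = 0.0
    for r in range(len(scores) + 1):
        for subset in itertools.combinations(scores, r):
            total += math.exp(sum(subset))
    return math.log(total)

def powerset_logsumexp_linear(scores):
    """Same quantity in O(M): the sum over subsets factorizes as
    prod_i (1 + exp(s_i)), so its log is sum_i log(1 + exp(s_i))."""
    return sum(math.log1p(math.exp(s)) for s in scores)

# Hypothetical per-region similarity scores.
scores = [0.3, -1.2, 0.8, 2.0]
print(powerset_logsumexp_naive(scores))   # exact, exponential time
print(powerset_logsumexp_linear(scores))  # identical value, linear time
```

The two functions return the same value because each region is either in or out of a subset, so the exponential sum splits into an independent product over regions. PowerCLIP's NLAs presumably exploit analogous structure to approximate the exact powerset loss at linear cost.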