
Label Propagation for Zero-shot Classification with Vision-Language Models

Vladan Stojnić · Yannis Kalantidis · Giorgos Tolias

Arch 4A-E Poster #358
Fri 21 Jun 10:30 a.m. PDT — noon PDT


Vision-Language Models (VLMs) have demonstrated impressive performance on zero-shot classification, i.e. classification when provided merely with a list of class names. In this paper, we tackle the case of zero-shot classification in the presence of unlabeled data. We leverage the graph structure of the unlabeled data and introduce ZLaP, a method based on label propagation (LP) that utilizes geodesic distances for classification. We tailor LP to graphs containing both text and image features and further propose an efficient method for performing inductive inference based on a dual solution and a sparsification step. We perform extensive experiments to evaluate the effectiveness of our method on 14 common datasets and show that ZLaP outperforms the latest related works. Code:
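The core idea described above can be illustrated with a minimal label-propagation sketch. This is an assumption-laden toy, not the paper's ZLaP implementation: it builds a sparsified kNN similarity graph over both text (class-name) features and unlabeled image features, seeds labels on the text nodes, and diffuses them with the standard normalized propagation update Y ← αSY + (1−α)Y₀. The function name and all parameters (`k`, `alpha`, `iters`) are illustrative choices, not from the paper.

```python
import numpy as np

def label_propagation(image_feats, text_feats, k=5, alpha=0.9, iters=30):
    """Toy label propagation over a joint text+image graph.

    A simplified sketch of the general approach, NOT the paper's exact
    ZLaP algorithm (no geodesic distances or dual/inductive inference).
    Nodes are one text feature per class plus the unlabeled image
    features; labels are seeded on the text nodes and diffused.
    """
    X = np.vstack([text_feats, image_feats]).astype(float)
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-normalize
    n_cls, n = text_feats.shape[0], X.shape[0]

    S = X @ X.T                                      # cosine similarities
    np.fill_diagonal(S, -np.inf)                     # no self-edges

    # Sparsification: keep only the k strongest edges per node.
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argpartition(-S[i], k)[:k]
        W[i, nbrs] = np.clip(S[i, nbrs], 0.0, None)
    W = np.maximum(W, W.T)                           # symmetrize

    # Symmetrically normalized adjacency: D^{-1/2} W D^{-1/2}.
    d = W.sum(axis=1) + 1e-12
    Snorm = W / np.sqrt(d[:, None] * d[None, :])

    # Seed one-hot labels on the text nodes, zeros elsewhere.
    Y0 = np.zeros((n, n_cls))
    Y0[:n_cls] = np.eye(n_cls)

    # Iterative diffusion: Y <- alpha * S * Y + (1 - alpha) * Y0.
    Y = Y0.copy()
    for _ in range(iters):
        Y = alpha * (Snorm @ Y) + (1 - alpha) * Y0

    return Y[n_cls:].argmax(axis=1)                  # image-node predictions
```

On two well-separated synthetic clusters whose class "text" features sit near the cluster centers, the diffused labels recover the correct cluster assignment for every image node.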