

Poster

COSMIC: Clique-Oriented Semantic Multi-space Integration for Robust CLIP Test-Time Adaptation

Fanding Huang · Jingyan Jiang · Qinting Jiang · Li Hebei · Faisal Nadeem Khan · Zhi Wang


Abstract:

Vision-language models (VLMs) face significant challenges in test-time adaptation to novel domains. While cache-based methods show promise by leveraging historical information, they struggle both with caching unreliable feature-label pairs and with indiscriminately using single-class information during querying, which significantly compromises adaptation accuracy. To address these limitations, we propose COSMIC (Clique-Oriented Semantic Multi-space Integration for CLIP), a robust test-time adaptation framework that improves adaptability through multi-granular, cross-modal semantic caching and graph-based querying mechanisms. Our framework introduces two key innovations: the Dual Semantics Graph (DSG) and the Clique Guided Hyper-class (CGH). The Dual Semantics Graph constructs complementary semantic spaces by incorporating textual features, coarse-grained CLIP features, and fine-grained DINOv2 features to capture rich semantic relationships. Building on these dual graphs, the Clique Guided Hyper-class component leverages structured class relationships to improve prediction robustness through correlated class selection. Extensive experiments demonstrate COSMIC's superior performance across multiple benchmarks, with significant improvements over state-of-the-art methods: a 15.81% gain on out-of-distribution tasks and 5.33% on cross-domain generalization with CLIP RN-50.
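The abstract describes the querying pipeline only at a high level, so the snippet below is a minimal illustrative sketch rather than the authors' implementation: it assumes a set of cached class prototypes, builds a similarity graph over them (a stand-in for one space of the Dual Semantics Graph), and scores maximal cliques as hyper-classes when answering a query. All function names, the similarity threshold `tau`, and the clique-scoring rule are hypothetical assumptions.

```python
# Hypothetical sketch of clique-guided querying over cached class prototypes.
# Names and thresholds are illustrative assumptions, not the COSMIC codebase.
import numpy as np
import networkx as nx

def build_semantic_graph(protos: np.ndarray, tau: float = 0.5) -> nx.Graph:
    """Connect cached class prototypes whose cosine similarity exceeds tau."""
    protos = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    sim = protos @ protos.T
    g = nx.Graph()
    g.add_nodes_from(range(len(protos)))
    for i in range(len(protos)):
        for j in range(i + 1, len(protos)):
            if sim[i, j] > tau:
                g.add_edge(i, j)
    return g

def hyper_class_logits(query: np.ndarray, protos: np.ndarray, g: nx.Graph) -> np.ndarray:
    """Score each maximal clique (a 'hyper-class') by the mean similarity of
    the query to its member prototypes, then assign each class the best score
    among the cliques it belongs to."""
    query = query / np.linalg.norm(query)
    protos = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    sims = protos @ query
    logits = np.full(len(protos), -np.inf)
    for clique in nx.find_cliques(g):  # maximal cliques, incl. singletons
        score = sims[list(clique)].mean()
        for c in clique:
            logits[c] = max(logits[c], score)
    return logits

# Toy usage: 5 cached class prototypes in a 4-dimensional feature space.
rng = np.random.default_rng(0)
protos = rng.normal(size=(5, 4))
graph = build_semantic_graph(protos, tau=0.3)
print(hyper_class_logits(rng.normal(size=4), protos, graph))
```

Scoring a query against clique-level groups rather than individual cached entries mirrors the "correlated class selection" idea in the abstract: each class is evaluated through the group of classes it is most tightly related to, instead of in isolation.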
