

A Simple Recipe for Language-guided Domain Generalized Segmentation

Mohammad Fahes · Tuan-Hung Vu · Andrei Bursuc · Patrick Pérez · Raoul de Charette

Arch 4A-E Poster #379
Fri 21 Jun 10:30 a.m. PDT — noon PDT


Generalization to new domains not seen during training is one of the long-standing challenges in deploying neural networks in real-world applications. Existing generalization techniques either require external images for augmentation and/or aim to learn invariant representations by imposing various alignment constraints. Large-scale pretraining has recently shown promising generalization capabilities, along with the potential of binding different modalities. For instance, the advent of vision-language models like CLIP has opened the door for vision models to exploit the textual modality. In this paper, we introduce a simple framework for generalizing semantic segmentation networks by employing language as the source of randomization. Our recipe comprises three key ingredients: (i) preserving the intrinsic robustness of CLIP through minimal fine-tuning, (ii) language-driven local style augmentation, and (iii) randomization by locally mixing the source and augmented styles during training. Extensive experiments show state-of-the-art results on various generalization benchmarks.
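To make ingredient (iii) concrete, the sketch below illustrates one plausible reading of "locally mixing the source and augmented styles": treating style as per-channel feature statistics (AdaIN-style) and swapping them inside randomly chosen local patches. This is a hypothetical NumPy illustration, not the authors' implementation; the function name `local_style_mix`, the patch-mask scheme, and the assumption that the language-driven style arrives as a (mean, std) pair are all illustrative choices.

```python
import numpy as np

def local_style_mix(feat, aug_style, patch=4, p=0.5, eps=1e-5, rng=None):
    """Hypothetical sketch of local style mixing for domain randomization.

    feat:      (C, H, W) source feature map.
    aug_style: (mu, sigma), each of shape (C,), standing in for a style
               predicted from a text embedding (ingredient (ii)).
    patch:     side length of the square regions mixed independently.
    p:         probability that a given patch takes the augmented style.
    """
    rng = np.random.default_rng() if rng is None else rng
    C, H, W = feat.shape
    # Source style = per-channel mean/std (AdaIN-style statistics).
    mu_s = feat.mean(axis=(1, 2), keepdims=True)
    sig_s = feat.std(axis=(1, 2), keepdims=True) + eps
    mu_a, sig_a = (s.reshape(C, 1, 1) for s in aug_style)
    # Restyle the whole map with the augmented statistics...
    restyled = (feat - mu_s) / sig_s * sig_a + mu_a
    # ...then keep the restyled version only inside random local patches.
    coarse = rng.random((H // patch, W // patch)) < p
    mask = np.kron(coarse, np.ones((patch, patch)))[None, :H, :W]
    return np.where(mask.astype(bool), restyled, feat)
```

With `p=1.0` every patch adopts the augmented statistics, so the output's per-channel mean equals the language-driven mean; with `p=0.0` the features pass through unchanged. Intermediate values of `p` yield the spatially mixed styles the abstract describes.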
