Exploring Underwater World Segmentation without Extra Training
Abstract
Accurate segmentation of marine organisms is vital for biodiversity monitoring and ecological assessment, yet existing datasets and models remain largely limited to terrestrial scenes. To bridge this gap, we introduce AquaOV255, the first large-scale, fine-grained underwater segmentation dataset, containing 255 categories and over 20K images that cover diverse marine organisms and man-made objects for open-vocabulary evaluation. Furthermore, we establish the first underwater open-vocabulary segmentation benchmark, UOVSBench, by integrating AquaOV255 with five additional underwater datasets to enable comprehensive cross-domain evaluation. Alongside these benchmarks, we present Earth2Ocean, a training-free open-vocabulary segmentation framework that transfers terrestrial vision–language models (VLMs) to underwater domains without any additional underwater training. Earth2Ocean consists of two core components: a Geometric-guided visual Mask Generator (GMG) that refines visual features via self-similarity geometric priors for local structure perception, and a Category-visual Semantic Alignment (CSA) module that enhances text embeddings through multimodal large language model reasoning and scene-aware template construction. Extensive experiments on the UOVSBench benchmark demonstrate that Earth2Ocean achieves an average improvement of over 6 mIoU while maintaining efficient inference.