

Poster

A Unified Image-Dense Annotation Generation Model for Underwater Scenes

Hongkai Lin · Dingkang Liang · Zhenghao Qi · Xiang Bai


Abstract:

Underwater dense prediction, especially depth estimation and semantic segmentation, is crucial for comprehensively understanding underwater scenes. Nevertheless, high-quality, large-scale underwater datasets with dense annotations remain scarce because of the complex environment and the exorbitant cost of data collection. This paper proposes a unified Text-to-Image and DEnse annotation generation method (TIDE) for underwater scenes. It relies solely on text as input to simultaneously generate realistic underwater images and multiple highly consistent dense annotations. Specifically, we unify text-to-image and text-to-dense-annotation generation within a single model. An Implicit Layout Sharing (ILS) mechanism and a cross-modal interaction method called Time Adaptive Normalization (TAN) are introduced to jointly optimize the consistency between images and dense annotations. We synthesize a large underwater dataset with TIDE to validate the effectiveness of our method on underwater dense prediction tasks. The results demonstrate that our method effectively improves the performance of existing underwater dense prediction models and mitigates the scarcity of densely annotated underwater data. Our method can offer new perspectives on alleviating data-scarcity issues in other fields.
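
To make the cross-modal conditioning idea concrete, here is a minimal PyTorch sketch of what a time-adaptive normalization layer could look like. It assumes TAN resembles AdaLN-style conditioning, where the scale and shift of a normalization layer are predicted from the diffusion timestep embedding together with features from the other modality; the class name, signatures, and design below are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class TimeAdaptiveNorm(nn.Module):
    """Hypothetical sketch of Time Adaptive Normalization (TAN).

    Assumption: scale/shift of a LayerNorm are predicted from the
    diffusion timestep embedding plus pooled features from the other
    branch (image <-> annotation), so both generation branches are
    modulated consistently at each denoising step.
    """
    def __init__(self, dim: int, cond_dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        # Predict per-channel scale and shift from the joint condition.
        self.to_scale_shift = nn.Sequential(
            nn.SiLU(),
            nn.Linear(cond_dim, 2 * dim),
        )

    def forward(self, x: torch.Tensor, t_emb: torch.Tensor,
                cross_feat: torch.Tensor) -> torch.Tensor:
        # x:          (B, N, dim)  tokens of one branch (image or annotation)
        # t_emb:      (B, cond_dim) diffusion timestep embedding
        # cross_feat: (B, cond_dim) pooled features from the other branch
        cond = t_emb + cross_feat
        scale, shift = self.to_scale_shift(cond).chunk(2, dim=-1)
        return self.norm(x) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
```

Under this reading, sharing the timestep-conditioned modulation across both branches is one plausible way a single model could keep the generated image and its dense annotations aligned throughout denoising.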
