Poster

AlignSAM: Aligning Segment Anything Model to Open Context via Reinforcement Learning

Duojun Huang · Xinyu Xiong · Jie Ma · Jichang Li · Zequn Jie · Lin Ma · Guanbin Li


Abstract:

Powered by massive curated training data, the Segment Anything Model (SAM) has demonstrated impressive generalization capabilities in open-world scenarios under the guidance of manual prompts. However, the vanilla SAM is class-agnostic and relies heavily on user-provided prompts to segment objects of interest. Customizing it for diverse tasks is therefore necessary to identify specific targets and avoid suboptimal segmentation performance. In this paper, we propose a novel framework, termed AlignSAM, designed for automatic prompting that aligns SAM to an open context via reinforcement learning. Anchored by an agent, AlignSAM generalizes SAM across diverse downstream tasks while keeping its parameters frozen. Specifically, AlignSAM employs a prompting agent that iteratively refines segmentation predictions by interacting with the foundation model, integrating an additional reinforcement learning network to supply informative prompts. In addition, a semantic recalibration module provides prompt labels, enhancing the agent's proficiency on tasks involving both explicit and implicit semantics. Experiments on a variety of challenging segmentation tasks with existing foundation models demonstrate the superiority of the proposed AlignSAM over state-of-the-art approaches.
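The iterative prompting loop described in the abstract can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the class names (`FrozenSegmenter`, `PromptingAgent`), the grid-based mask, and the recalibration rule are all hypothetical stand-ins for the frozen foundation model, the RL prompting agent, and the semantic recalibration module.

```python
# Toy sketch of an AlignSAM-style loop: an agent proposes prompt points,
# a recalibration step assigns each point a positive/negative label, and
# a frozen segmenter re-predicts the mask from the accumulated prompts.
import random


class FrozenSegmenter:
    """Hypothetical stand-in for a frozen foundation model (e.g. SAM):
    maps a list of (point, label) prompts to a binary mask on a grid."""

    def __init__(self, size=8):
        self.size = size

    def predict(self, prompts):
        mask = [[0] * self.size for _ in range(self.size)]
        for (r, c), label in prompts:
            if label == 1:  # positive prompt: mark a 3x3 neighborhood
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < self.size and 0 <= cc < self.size:
                            mask[rr][cc] = 1
        return mask


class PromptingAgent:
    """Hypothetical stand-in for the RL prompting agent. A trained policy
    would score candidate locations from image/mask features; here we
    simply sample a location at random."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def propose_point(self, mask):
        size = len(mask)
        return (self.rng.randrange(size), self.rng.randrange(size))


def recalibrate_label(point, mask):
    """Hypothetical stand-in for the semantic recalibration module:
    decides whether a proposed point acts as a positive (1) or
    negative (0) prompt. Toy rule: avoid re-prompting covered area."""
    r, c = point
    return 0 if mask[r][c] == 1 else 1


def align_sam_loop(steps=5):
    segmenter, agent, prompts = FrozenSegmenter(), PromptingAgent(), []
    mask = segmenter.predict(prompts)
    for _ in range(steps):
        point = agent.propose_point(mask)
        label = recalibrate_label(point, mask)
        prompts.append((point, label))
        mask = segmenter.predict(prompts)  # refine with updated prompts
    return mask, prompts
```

The key structural point the sketch captures is that the segmenter's parameters never change: all task adaptation happens in the agent's choice of prompts and in the labels assigned to them.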
