Poster
SmartEraser: Remove Anything from Images using Masked-Region Guidance
Longtao Jiang · Zhendong Wang · Jianmin Bao · Wengang Zhou · Dongdong Chen · Lei Shi · Dong Chen · Houqiang Li
Object removal has so far been dominated by the mask-and-inpaint paradigm, where the masked region is excluded from the input, forcing models to rely on the unmasked areas to inpaint the missing region. However, this approach deprives the model of contextual information about the masked area, often resulting in unstable performance. In this work, we introduce SmartEraser, built on a new removal paradigm called Masked-Region Guidance. This paradigm retains the masked region in the input and uses it as guidance for the removal process. It offers several distinct advantages: (a) it guides the model to accurately identify the object to be removed, preventing its regeneration in the output; (b) since the user mask often extends beyond the object itself, it helps preserve the surrounding context in the final result. Leveraging this new paradigm, we present Syn4Removal, a large-scale object removal dataset in which instance segmentation data is used to copy and paste objects onto images as removal targets, with the original images serving as ground truths. Experimental results demonstrate that our model, SmartEraser, significantly outperforms existing methods, achieving superior performance in object removal, especially in complex scenes with intricate compositions. We will release the code, dataset, and models.
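To make the two ideas in the abstract concrete, the sketch below illustrates (1) a Syn4Removal-style training sample built by pasting a segmented object onto a clean image, and (2) a Masked-Region Guidance input that keeps the masked pixels visible rather than blanking them out. The function names, the channel-stacking conditioning format, and the PIL/NumPy pipeline are illustrative assumptions, not the released implementation.

```python
import numpy as np
from PIL import Image

def synthesize_removal_sample(background, instance_rgba, position):
    """Syn4Removal-style sample synthesis (a sketch, not the released pipeline):
    paste a segmented object onto a clean background image. The composite is the
    model input, the pasted object's alpha gives the removal mask, and the
    untouched background serves as the ground-truth removal result."""
    composite = background.copy()
    composite.paste(instance_rgba, position, mask=instance_rgba)  # alpha-composite the object
    mask = Image.new("L", background.size, 0)
    mask.paste(instance_rgba.split()[-1], position)               # removal mask from the alpha band
    return composite, mask, background                            # (input, mask, ground truth)

def masked_region_guided_input(image, mask):
    """Masked-Region Guidance: unlike mask-and-inpaint, the pixels under the mask
    are retained in the conditioning input; here image and mask are simply stacked
    as channels (an assumed conditioning format for illustration)."""
    img = np.asarray(image, dtype=np.float32) / 255.0             # H x W x 3, object still visible
    msk = np.asarray(mask, dtype=np.float32)[..., None] / 255.0   # H x W x 1
    return np.concatenate([img, msk], axis=-1)                    # H x W x 4 conditioning tensor
```

The contrast with mask-and-inpaint is in the second function: a conventional pipeline would zero out `img` wherever `msk` is set before conditioning, whereas here the object remains visible so the model can identify what to remove and avoid regenerating it.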