Poster
AA: Adaptive Transformation Agent for Text-Guided Subject-Position Variable Background Inpainting
Yizhe Tang · Zhimin Sun · Yuzhen Du · Ran Yi · Guangben Lu · Teng Hu · Luying Li · Lizhuang Ma · FangYuan Zou
Abstract:
Image inpainting aims to fill in the missing region of an image. Recently, there has been a surge of interest in foreground-conditioned background inpainting, a sub-task that fills in the background of an image given the foreground subject and an associated text prompt. Existing background inpainting methods typically preserve the subject's original position from the source image strictly, resulting in inconsistencies between the subject and the generated background. To address this challenge, we propose a new task, "Text-Guided Subject-Position Variable Background Inpainting", which aims to dynamically adjust the subject position to achieve a harmonious relationship between the subject and the inpainted background, and we propose the Adaptive Transformation Agent (AA) for this task. Firstly, we design a PosAgent Block that adaptively predicts an appropriate displacement based on given features to achieve a variable subject position. Secondly, we design the Reverse Displacement Transform (RDT) module, which arranges multiple PosAgent blocks in a reverse structure to transform hierarchical feature maps from deep to shallow based on semantic information. Thirdly, we equip AA with a Position Switch Embedding that controls whether the subject's position in the generated image is adaptively predicted or fixed. Extensive comparative experiments validate the effectiveness of our AA approach, which not only demonstrates superior inpainting capability in subject-position variable inpainting, but also maintains good performance on subject-position fixed inpainting.
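To make the three components above concrete, here is a minimal PyTorch-style sketch. All module names, shapes, and wiring (the convolutional displacement predictor, the grid_sample-based feature shift, and the two-entry switch embedding) are assumptions for illustration only; the paper's actual architecture is not specified in this abstract.

```python
# Hedged sketch: PosAgent block, reverse (deep-to-shallow) displacement transform,
# and a Position Switch Embedding. Details are illustrative assumptions.
import torch
import torch.nn as nn


class PosAgentBlock(nn.Module):
    """Predicts a 2D subject displacement from a feature map (assumed design)."""

    def __init__(self, channels: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.GELU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(channels, 2)  # (dx, dy)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        pooled = self.encoder(feat).flatten(1)   # (B, C)
        return torch.tanh(self.head(pooled))     # normalized displacement in [-1, 1]


class ReverseDisplacementTransform(nn.Module):
    """Applies one PosAgent block per feature level, ordered deep -> shallow (assumed)."""

    def __init__(self, channel_list):
        super().__init__()
        self.blocks = nn.ModuleList(PosAgentBlock(c) for c in channel_list)

    def forward(self, feats_deep_to_shallow):
        shifted = []
        for block, feat in zip(self.blocks, feats_deep_to_shallow):
            disp = block(feat)                                   # (B, 2)
            # Translate the feature map by the predicted displacement.
            b = feat.shape[0]
            theta = torch.zeros(b, 2, 3, device=feat.device)
            theta[:, 0, 0] = 1.0
            theta[:, 1, 1] = 1.0
            theta[:, :, 2] = -disp
            grid = nn.functional.affine_grid(theta, feat.shape, align_corners=False)
            shifted.append(nn.functional.grid_sample(feat, grid, align_corners=False))
        return shifted


# Position Switch Embedding: a learned flag choosing between adaptively predicted
# (variable) and fixed subject position; the embedding dimension is an assumption.
position_switch = nn.Embedding(2, 320)  # index 0 = fixed, 1 = variable
```

In this sketch the switch embedding would be injected as an extra conditioning token alongside the text prompt, so a single model can serve both the position-variable and position-fixed settings; how the paper actually conditions on it is not described in the abstract.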