Poster
Dragin3D: Image Editing by Dragging in 3D Space
Weiran Guang · Xiaoguang Gu · Mengqi Huang · Zhendong Mao
Interactive drag-based editing of images is a valuable task that has attracted considerable attention for its precision and controllability. However, existing approaches have primarily focused on manipulating the shape or movement of objects in the 2D plane. We propose extending drag-based editing to 3D space. First, we use the trajectory of two points to represent the rotational trajectory of the object; Gaussian maps of a circle and a square are centered at these two points, respectively, and the distinct shapes ensure that symmetric views produce different object representations. Second, we introduce a lightweight mapping network that embeds object features into the two Gaussian maps, yielding a continuous control condition that guides the model in learning the correspondence between the trajectory and the object. Finally, to overcome a limitation of current 3D object reconstruction datasets, which typically consist of object images with transparent backgrounds, we composite random backgrounds onto them; this improves the model's ability to ignore background interference when editing real images with complex backgrounds. Experiments show that our approach achieves object rotation within the drag-based editing framework and generalizes well to real-world images.
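As a rough illustration of the two-point control signal described above, the sketch below builds a circle-shaped and a square-shaped Gaussian map centered at two trajectory points. The map resolution, the spread `sigma`, and the use of a Chebyshev (max-axis) distance to produce square iso-contours are our own assumptions for illustration, not details confirmed by the paper.

```python
# Hedged sketch (not the authors' code): two-channel Gaussian control maps
# centered at the two trajectory points. Resolution, sigma, and the square
# falloff via Chebyshev distance are illustrative assumptions.
import numpy as np

def gaussian_control_maps(p_circle, p_square, size=64, sigma=4.0):
    """Build two control maps centered at the given (x, y) pixel points.

    The first map is an isotropic (circular) Gaussian; the second uses a
    Chebyshev (max-axis) falloff so its iso-contours are square, keeping the
    two points distinguishable even under symmetric views.
    """
    ys, xs = np.mgrid[0:size, 0:size].astype(np.float32)

    # Circular Gaussian: falloff by Euclidean distance from p_circle.
    d_circ = np.sqrt((xs - p_circle[0]) ** 2 + (ys - p_circle[1]) ** 2)
    circle_map = np.exp(-(d_circ ** 2) / (2 * sigma ** 2))

    # Square-shaped map: falloff by Chebyshev distance from p_square.
    d_sq = np.maximum(np.abs(xs - p_square[0]), np.abs(ys - p_square[1]))
    square_map = np.exp(-(d_sq ** 2) / (2 * sigma ** 2))

    # Stack into a 2-channel condition, e.g. as input to a lightweight
    # mapping network that fuses object features with the trajectory.
    return np.stack([circle_map, square_map], axis=0)

# Example: two points defining the rotation control signal.
cond = gaussian_control_maps(p_circle=(20, 32), p_square=(44, 32))
print(cond.shape)  # (2, 64, 64)
```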