Poster

3DFastEdit: Training-Free Fast and Controllable 3D Editing

Ziya Erkoc · Can Gümeli · Chaoyang Wang · Matthias Nießner · Angela Dai · Peter Wonka · Hsin-Ying Lee · Peiye Zhuang


Abstract:

We propose a training-free approach to 3D editing that edits a single shape and reconstructs a mesh within a few minutes. Leveraging four-view images, a user-guided text prompt, and rough 2D masks, our method produces an edited 3D mesh that aligns with the prompt. To this end, our approach performs synchronized multi-view image editing in 2D. However, the target regions to be edited become ambiguous under projection from 3D to 2D. To ensure precise editing only in the intended regions, we develop a 3D segmentation pipeline that detects the edited areas in 3D space. Additionally, we introduce a merging algorithm to seamlessly integrate the edited 3D regions with the original input. Extensive experiments demonstrate the superiority of our method over previous approaches, enabling fast, high-quality editing while preserving regions not intended for editing.
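The abstract describes a three-stage pipeline: synchronized multi-view 2D editing, 3D segmentation of the edited regions, and merging back into the original shape. The sketch below illustrates only that control flow; every function and data-structure name is a hypothetical stand-in, not the authors' actual API.

```python
from dataclasses import dataclass

# Hypothetical sketch of the pipeline stages described in the abstract.
# All names (EditRequest, edit_multiview, segment_edited_3d,
# merge_with_original) are illustrative assumptions, not the real code.

@dataclass
class EditRequest:
    views: list      # four rendered views of the input shape
    prompt: str      # user-guided text prompt
    masks_2d: list   # rough 2D masks marking regions to edit

def edit_multiview(req: EditRequest) -> list:
    # Stage 1 (hypothetical): synchronized multi-view image editing in 2D,
    # applying the prompt consistently across all four views.
    return [f"edited({v!r}, {req.prompt!r})" for v in req.views]

def segment_edited_3d(edited_views: list, masks_2d: list) -> dict:
    # Stage 2 (hypothetical): lift the ambiguous 2D masks into a 3D
    # segmentation so that only the intended regions are modified.
    return {"region_3d": "mask-lifted-to-3d", "num_views": len(edited_views)}

def merge_with_original(region_3d: dict, original_mesh: str) -> dict:
    # Stage 3 (hypothetical): seamlessly merge the edited 3D region
    # back into the unedited original mesh.
    return {"mesh": original_mesh, "edited_region": region_3d}

def run_pipeline(req: EditRequest, original_mesh: str = "input_mesh") -> dict:
    edited_views = edit_multiview(req)
    region_3d = segment_edited_3d(edited_views, req.masks_2d)
    return merge_with_original(region_3d, original_mesh)
```

A caller would construct an `EditRequest` from the four views, prompt, and masks, then invoke `run_pipeline`; the stubs return placeholder values since the real models are not part of this sketch.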
