SHAP-EDITOR: Instruction-Guided Latent 3D Editing in Seconds

Minghao Chen · Junyu Xie · Iro Laina · Andrea Vedaldi

Arch 4A-E Poster #223
Fri 21 Jun 5 p.m. PDT — 6:30 p.m. PDT


We propose Shap-Editor, a novel feed-forward 3D editing framework. Prior research on editing 3D objects has primarily concentrated on editing individual objects by leveraging off-the-shelf 2D image editing networks through a process called 3D distillation, which transfers knowledge from the 2D network to the 3D asset. Distillation requires at least tens of minutes per asset to attain satisfactory editing results, making it impractical for large-scale or interactive use. In contrast, we ask whether 3D editing can be carried out directly by a feed-forward network, eschewing test-time optimisation. In particular, we hypothesise that this process can be greatly simplified by first encoding 3D objects into a suitable latent space. We validate this hypothesis by building upon the latent space of Shap-E and demonstrate that direct 3D editing in this space is both possible and efficient: a learned feed-forward editor network requires only approximately one second per edit. Our experiments show that Shap-Editor generalises well to both in-distribution and out-of-distribution 3D assets with different prompts, and achieves superior performance compared to methods that carry out test-time optimisation for each edited instance.
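The core idea in the abstract — a single feed-forward pass that maps an object's latent code plus an instruction embedding to an edited latent code — can be illustrated with a toy sketch. Everything below is an assumption for illustration only: the dimensions, the two-layer MLP, and the residual update are stand-ins, not the actual Shap-Editor architecture or the real Shap-E latent format.

```python
import numpy as np

# Hypothetical toy dimensions -- the real Shap-E latent is far larger.
LATENT_DIM = 16   # dimension of the 3D latent code (assumption)
TEXT_DIM = 8      # dimension of the instruction embedding (assumption)
HIDDEN = 32       # hidden width of the toy editor MLP (assumption)

rng = np.random.default_rng(0)

# Random weights standing in for a *trained* editor network.
W1 = rng.normal(size=(LATENT_DIM + TEXT_DIM, HIDDEN)) * 0.1
W2 = rng.normal(size=(HIDDEN, LATENT_DIM)) * 0.1

def edit_latent(latent, instruction_emb):
    """One feed-forward pass: (latent, instruction) -> edited latent.

    No per-asset optimisation loop -- this is the whole edit,
    which is why a feed-forward editor can run in about a second.
    """
    x = np.concatenate([latent, instruction_emb])
    h = np.maximum(x @ W1, 0.0)       # ReLU hidden layer
    return latent + h @ W2            # residual update of the latent

latent = rng.normal(size=LATENT_DIM)       # stand-in for a Shap-E latent
instruction = rng.normal(size=TEXT_DIM)    # stand-in for a text embedding
edited = edit_latent(latent, instruction)
print(edited.shape)  # (16,)
```

The contrast with distillation-based editing is that the loop over thousands of optimisation steps per asset is replaced by this single forward pass; the cost of learning the edit is paid once, at training time.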
