

Poster

Concept Lancet: Representation Decomposition and Transplant for Diffusion-Based Image Editing

Jinqi Luo · Tianjiao Ding · Kwan Ho Ryan Chan · Hancheng Min · Chris Callison-Burch · Rene Vidal


Abstract: Diffusion models are widely used for image editing tasks. Existing editing methods often design a representation manipulation procedure (e.g., Cat → Dog, Sketch → Painting) by curating an edit direction in the text embedding or score space. However, such a procedure faces a key challenge: overestimating the edit strength harms visual consistency while underestimating it fails the editing task. Notably, each source image may require a different editing strength, and it is costly to search for an appropriate strength via trial-and-error. To address this challenge, we propose Concept Lancet (CoLan), a zero-shot plug-and-play framework for principled representation manipulation in diffusion-based image editing. At inference time, we decompose the source input in the latent (text embedding or diffusion score) space as a sparse linear combination of the representations of the collected visual concepts and phrases. This allows us to accurately estimate the presence of concepts in each image, which informs the edit. Based on the editing task (replace, add, or remove), we perform a customized concept transplant process to impose the corresponding editing direction. To sufficiently model the concept space, we curate a conceptual representation dataset, CoLan-150K, which contains diverse descriptions and scenarios of visual concepts and phrases for the latent dictionary. Experiments on multiple diffusion-based image editing baselines show that methods equipped with CoLan achieve state-of-the-art performance in editing effectiveness and consistency preservation.
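The decompose-then-transplant idea described above can be illustrated with a minimal sketch: represent a source embedding as a sparse combination of concept embeddings from a dictionary, then move the estimated mass of the source concept onto the target concept for a replace edit. All names below (concept_dict, transplant_replace, the Lasso-based solver, the random stand-in embeddings) are illustrative assumptions for exposition, not the authors' released code or the exact CoLan procedure.

# Sketch of sparse decomposition over a concept dictionary plus a "replace" transplant.
# The dictionary here is a stand-in for embeddings derived from CoLan-150K (assumption).
import numpy as np
from sklearn.linear_model import Lasso

def decompose(source_emb: np.ndarray, concept_dict: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """Sparse linear decomposition: source_emb ≈ concept_dict @ w with sparse w.
    concept_dict has shape (d, K): K concept embeddings of dimension d."""
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    lasso.fit(concept_dict, source_emb)
    return lasso.coef_  # shape (K,): estimated presence of each concept

def transplant_replace(source_emb, concept_dict, w, src_idx, tgt_idx):
    """Replace edit (e.g., cat -> dog): remove the source concept at its estimated
    strength and add the target concept at the same strength, leaving the rest intact."""
    edited = source_emb.copy()
    edited -= w[src_idx] * concept_dict[:, src_idx]   # remove source concept
    edited += w[src_idx] * concept_dict[:, tgt_idx]   # add target concept with the same strength
    return edited

# Usage with random stand-in embeddings (d = 768, K = 50 concepts).
rng = np.random.default_rng(0)
D = rng.normal(size=(768, 50))
D /= np.linalg.norm(D, axis=0, keepdims=True)        # unit-norm concept vectors
x = D @ (rng.random(50) * (rng.random(50) < 0.1)) + 0.01 * rng.normal(size=768)
w = decompose(x, D)
x_edit = transplant_replace(x, D, w, src_idx=3, tgt_idx=7)

Because the transplant strength is taken from the per-image decomposition rather than a fixed global scalar, the edit adapts to how strongly the source concept is present in each image, which is the motivation stated in the abstract.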
