

Poster

Difference Inversion: Interpolate and Isolate the Difference with Token Consistency for Image Analogy Generation

Hyunsoo Kim · Donghyun Kim · Suhyun Kim


Abstract: How can we generate an image B' that satisfies A : A' :: B : B', given the input images A, A', and B? Recent works have tackled this challenge through approaches like visual in-context learning or visual instruction. However, these methods are typically limited to specific models (e.g., InstructPix2Pix, inpainting models) rather than general diffusion models (e.g., Stable Diffusion, SDXL). This dependency may lead to inherited biases or weaker editing capabilities. In this paper, we propose Difference Inversion, a method that isolates only the difference between A and A' and applies it to B to generate a plausible B'. To address model dependency, it is crucial to structure prompts as a "Full Prompt" suitable for input to Stable Diffusion models, rather than as an "Instruction Prompt". To this end, we accurately extract the difference between A and A' and combine it with the prompt of B, enabling a plug-and-play application of the difference. To extract a precise difference, we first identify it through 1) Delta Interpolation. Additionally, to ensure accurate training, we propose 2) Token Consistency Loss and 3) Zero Initialization of Token Embeddings. Our extensive experiments demonstrate that Difference Inversion outperforms existing baselines both quantitatively and qualitatively, indicating its ability to generate a more feasible B' in a model-agnostic manner.
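The abstract names "Zero Initialization of Token Embeddings" as one of its training techniques but does not give implementation details. The following is a minimal, hypothetical PyTorch sketch of the general idea: a learnable "difference token" embedding is initialized to zero so it contributes nothing to the prompt conditioning until training updates it. The class name, embedding dimension, and the way the token is appended to the prompt embeddings are all assumptions for illustration, not the paper's actual code.

```python
import torch
import torch.nn as nn

class DifferenceToken(nn.Module):
    """Hypothetical sketch of a zero-initialized learnable token embedding.

    Assumption: the extracted "difference" is represented as one extra token
    appended to the text-encoder output before it conditions the diffusion model.
    """

    def __init__(self, embed_dim: int = 768):
        super().__init__()
        # Zero initialization: the token starts neutral, so early training
        # steps are not biased by a random embedding.
        self.embedding = nn.Parameter(torch.zeros(embed_dim))

    def forward(self, prompt_embeds: torch.Tensor) -> torch.Tensor:
        # prompt_embeds: (batch, seq_len, embed_dim)
        # Returns:       (batch, seq_len + 1, embed_dim)
        batch = prompt_embeds.shape[0]
        token = self.embedding.expand(batch, 1, -1)
        return torch.cat([prompt_embeds, token], dim=1)
```

In this sketch the token would be optimized jointly with whatever objective (e.g., the paper's Token Consistency Loss) supervises the difference extraction; the zero start simply gives the optimizer a neutral point of departure.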
