Fix the Noise: Disentangling Source Feature for Controllable Domain Translation

Dongyeun Lee · Jae Young Lee · Doyeon Kim · Jaehyun Choi · Jaejun Yoo · Junmo Kim

West Building Exhibit Halls ABC 179


Recent studies show strong generative performance in domain translation, especially when using transfer learning techniques on an unconditional generator. However, controlling different domain features with a single model remains challenging. Existing methods often require additional models, which is computationally demanding and leads to unsatisfactory visual quality. In addition, they support only a restricted set of control steps, which prevents a smooth transition. In this paper, we propose a new approach to high-quality domain translation with better controllability. The key idea is to preserve source features within a disentangled subspace of the target feature space. This allows our method to smoothly control the degree to which it preserves source features while generating images from an entirely new domain, using only a single model. Our extensive experiments show that the proposed method produces more consistent and realistic images than previous works and maintains precise controllability over different levels of transformation. The code is available at LeeDongYeun/FixNoise.
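The smooth transition described above can be illustrated with a minimal sketch: blending source-preserved features with target-domain features via a single coefficient. This is a hypothetical helper for intuition only, assuming NumPy; it is not the paper's actual implementation, and the names `blend_features`, `f_source`, `f_target`, and `alpha` are illustrative assumptions.

```python
import numpy as np

def blend_features(f_source: np.ndarray, f_target: np.ndarray, alpha: float) -> np.ndarray:
    """Linearly interpolate between source-preserved and target features.

    alpha = 0.0 keeps the source features, alpha = 1.0 yields the full
    target-domain features, and intermediate values give a smooth
    transition. (Hypothetical helper, not the paper's actual API.)
    """
    return (1.0 - alpha) * f_source + alpha * f_target

# Sweeping alpha over [0, 1] traces a smooth path in feature space,
# which is the kind of fine-grained control the abstract describes.
f_src = np.zeros(4)
f_tgt = np.ones(4)
levels = [blend_features(f_src, f_tgt, a) for a in (0.0, 0.5, 1.0)]
```

In the actual method, the disentangled subspace ensures that varying this degree of preservation does not degrade target-domain image quality; the sketch only conveys the interpolation idea.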
