PLACE: Adaptive Layout-Semantic Fusion for Semantic Image Synthesis

Zhengyao Lv · Yuxiang Wei · Wangmeng Zuo · Kwan-Yee K. Wong

Arch 4A-E Poster #433
award Highlight
Wed 19 Jun 5 p.m. PDT — 6:30 p.m. PDT


Recent advancements in large-scale pre-trained text-to-image models have led to remarkable progress in semantic image synthesis. Nevertheless, synthesizing high-quality images with consistent semantics and layout remains a challenge. In this paper, we propose the adaPtive LAyout-semantiC fusion modulE (PLACE) that harnesses pre-trained models to alleviate the aforementioned issues. Specifically, we first employ the layout control map to faithfully represent layouts in the feature space. Subsequently, we combine the layout and semantic features in a timestep-adaptive manner to synthesize images with realistic details. During fine-tuning, we propose the Semantic Alignment (SA) loss to further enhance layout alignment. Additionally, we introduce the Layout-Free Prior Preservation (LFP) loss, which leverages unlabeled data to maintain the priors of pre-trained models, thereby improving the visual quality and semantic consistency of synthesized images. Extensive experiments demonstrate that our approach performs favorably in terms of visual quality, semantic consistency, and layout alignment. The source code and model are available at \href{}{PLACE}.
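The abstract describes combining layout and semantic features "in a timestep-adaptive manner" during diffusion. As a rough, hypothetical illustration of that idea (the weighting schedule, function names, and convex-combination form below are assumptions, not the paper's actual PLACE module), one might write:

```python
import numpy as np

def timestep_weight(t, T=1000):
    # Hypothetical schedule (assumption): weight layout features more
    # heavily at early, high-noise timesteps, where global structure is
    # formed, and semantic features more at late timesteps.
    return t / T

def fuse_features(layout_feat, semantic_feat, t, T=1000):
    """Sketch of timestep-adaptive fusion: a convex combination of the
    layout and semantic feature maps whose weight depends on the
    diffusion timestep t. This is an illustrative stand-in, not the
    paper's actual fusion module."""
    w = timestep_weight(t, T)
    return w * layout_feat + (1.0 - w) * semantic_feat

# Toy feature maps to show how the mixture shifts across timesteps.
layout = np.ones((4, 4))
semantic = np.zeros((4, 4))
early = fuse_features(layout, semantic, t=900)  # layout-dominated
late = fuse_features(layout, semantic, t=100)   # semantics-dominated
print(early[0, 0], late[0, 0])  # 0.9 0.1
```

The intuition behind such a schedule is that early denoising steps establish coarse layout while later steps refine texture and semantics, so the relative influence of each feature stream can usefully vary with the timestep.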
