

Exact Fusion via Feature Distribution Matching for Few-shot Image Generation

Yingbo Zhou · Yutong Ye · Pengyu Zhang · Xian Wei · Mingsong Chen

Arch 4A-E Poster #348
Wed 19 Jun 5 p.m. PDT — 6:30 p.m. PDT

Abstract: Few-shot image generation, an important yet challenging visual task, still suffers from a trade-off between generation quality and diversity. Following the principle of feature-matching learning, existing fusion-based methods usually fuse different features using similarity measurements or attention mechanisms, which may match features inaccurately and lead to artifacts in the texture and structure of generated images. In this paper, we propose an exact $\textbf{F}$usion via $\textbf{F}$eature $\textbf{D}$istribution matching $\textbf{G}$enerative $\textbf{A}$dversarial $\textbf{N}$etwork ($\textbf{F2DGAN}$) for few-shot image generation. The rationale is that feature distribution matching is far more reliable than feature matching for exploring the statistical characteristics of the image feature space when real-world data are limited. To model feature distributions from only a few examples for feature fusion, we design a novel variational feature distribution matching fusion module that performs exact fusion via empirical cumulative distribution functions. Specifically, we employ a variational autoencoder to transform deep image features into distributions and fuse different features exactly by applying histogram matching. Additionally, we formulate two effective losses to guide the matching process so that it better fits our fusion strategy. Extensive experiments against state-of-the-art methods on three public datasets demonstrate the superiority of F2DGAN for few-shot image generation in terms of both generation quality and diversity, as well as its effectiveness for data augmentation in downstream classification tasks.
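The core fusion operation the abstract describes, matching one feature distribution to another through empirical cumulative distribution functions, can be illustrated in isolation. The sketch below is not the paper's implementation (the function name and the rank-based formulation are illustrative assumptions); it shows classic 1-D histogram matching, where each source value is mapped to the reference value occupying the same quantile of its empirical CDF:

```python
import numpy as np

def histogram_match(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Map `source` values onto the empirical distribution of `reference`.

    Illustrative sketch of ECDF-based histogram matching: the value at
    quantile q of `source` is replaced by the value at quantile q of
    `reference`. Both inputs are 1-D feature vectors of equal length.
    """
    # Indices that would sort `source`; these encode its empirical ranks.
    src_order = np.argsort(source)
    # Reference values in ascending order, i.e. its empirical quantiles.
    ref_sorted = np.sort(reference)
    # Place the k-th smallest reference value where the k-th smallest
    # source value sits, so the output has `reference`'s distribution
    # while preserving the rank structure of `source`.
    matched = np.empty_like(source)
    matched[src_order] = ref_sorted
    return matched

# Example: source ranks (3. is largest, 1. is smallest) are preserved,
# but the values are drawn from the reference distribution.
src = np.array([3.0, 1.0, 2.0])
ref = np.array([10.0, 30.0, 20.0])
out = histogram_match(src, ref)  # → [30., 10., 20.]
```

In F2DGAN this kind of matching is applied to deep features whose distributions are modeled by a variational autoencoder; the equal-length, 1-D setting above is only the simplest case of the idea.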
