

Puff-Net: Efficient Style Transfer with Pure Content and Style Feature Fusion Network

Sizhe Zheng · Pan Gao · Peng Zhou · Jie Qin

Arch 4A-E Poster #316
Wed 19 Jun 5 p.m. PDT — 6:30 p.m. PDT


Style transfer is an image generation task: it aims to render an image with the artistic features of a style image while preserving the original structure. Many methods have been proposed for this task, but challenges remain. CNN-based methods struggle to capture global information and long-range dependencies between the input images, and the transformer-based methods proposed to address this, while better at modeling the relationship between the content and style images, require costly hardware and time-consuming inference. To address these issues, we design a novel transformer model that includes only encoders, significantly reducing the computational cost. In addition, we find that images generated by existing style transfer methods may be under-stylized or missing content. To achieve better stylization, we design a content feature extractor and a style feature extractor, so that pure content and style features can be fed into the transformer. Finally, we propose a network model termed Puff-Net, i.e., efficient style transfer with pure content and style feature fusion network. Through qualitative and quantitative experiments, we verify the performance advantages of our model compared to state-of-the-art models in the literature.
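The abstract gives no implementation details, so as a rough illustration only, the following NumPy sketch shows the general shape of an encoder-only fusion step: pre-extracted content tokens attend into pre-extracted style tokens through scaled dot-product attention with a residual connection. All names, shapes, and the single-layer structure are hypothetical assumptions, not the actual Puff-Net architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention(q, k, v):
    # scaled dot-product attention, single head
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def encoder_fusion_layer(content_tokens, style_tokens):
    # one encoder-style layer: content tokens query the style tokens,
    # with a residual connection (feed-forward sublayer omitted)
    return content_tokens + attention(content_tokens, style_tokens, style_tokens)

# hypothetical shapes: 16 tokens per image, embedding dimension 32
content_feats = rng.standard_normal((16, 32))  # output of a content extractor
style_feats = rng.standard_normal((16, 32))    # output of a style extractor

fused = encoder_fusion_layer(content_feats, style_feats)
assert fused.shape == (16, 32)
```

A decoder would be unnecessary in such a design because the content tokens themselves carry the spatial structure; the fused tokens can be projected back to pixels, which is consistent with the paper's claim that dropping the decoder reduces computational cost.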
