Network-Free, Unsupervised Semantic Segmentation With Synthetic Images

Qianli Feng · Raghudeep Gadde · Wentong Liao · Eduard Ramon · Aleix Martinez

West Building Exhibit Halls ABC 286


We derive a method that yields highly accurate semantic segmentation maps without any additional neural network, layers, manually annotated training data, or supervised training. Our method is based on the observation that the correlations among a set of pixels belonging to the same semantic segment do not change when generating synthetic variants of an image with the style-mixing approach of GANs. We show how GAN inversion lets us accurately segment both synthetic and real photos, as well as generate large sets of image/semantic-segmentation-mask pairs for downstream tasks.
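The core observation can be illustrated with a toy sketch (this is an illustrative assumption, not the paper's actual pipeline): treat each pixel as a trajectory of values across style-mixed variants, so pixels of the same semantic segment covary while pixels of different segments do not, and group pixels by trajectory correlation. The 8x8 image, two-segment layout, and seed-based grouping below are all hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: an 8x8 image whose ground-truth semantic
# segments are simply the left and right halves.
H, W, K = 8, 8, 20
labels = np.zeros((H, W), dtype=int)
labels[:, W // 2:] = 1

# Simulate K "style-mixed" variants: style mixing changes the appearance
# of a semantic segment coherently, so all pixels of one segment shift
# together, while different segments shift independently.
variants = np.empty((K, H, W))
for k in range(K):
    for seg in (0, 1):
        variants[k][labels == seg] = rng.uniform(0, 10)
variants += rng.normal(scale=0.05, size=variants.shape)  # pixel noise

# Each pixel is a K-dim trajectory across variants; correlate trajectories.
traj = variants.reshape(K, -1).T            # shape (H*W, K)
corr = np.corrcoef(traj)                    # shape (H*W, H*W)

# Seed-based grouping: seed0 is pixel 0; seed1 is the pixel least
# correlated with it. Assign every pixel to the more-correlated seed.
seed0 = 0
seed1 = int(np.argmin(corr[seed0]))
pred = (corr[:, seed1] > corr[:, seed0]).astype(int).reshape(H, W)

# Recovered segmentation matches ground truth up to label permutation.
acc = max((pred == labels).mean(), (pred != labels).mean())
print(f"segmentation accuracy: {acc:.2f}")
```

In the paper's setting the variants come from style mixing in a GAN's latent space rather than from simulated offsets, and real photos are first mapped into that latent space via GAN inversion, but the correlation-invariance idea is the same.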