Poster
Noise-Consistent Siamese-Diffusion for Medical Image Synthesis and Segmentation
Kunpeng Qiu · Zhiqiang Gao · Zhiying Zhou · Mingjie Sun · Yongxin Guo
Deep learning has revolutionized medical image segmentation, yet its full potential remains limited by the scarcity of annotated datasets. Diffusion models can generate synthetic image-mask pairs to expand such datasets, but they suffer from the same data scarcity they aim to address. Traditional mask-only models often produce low-fidelity images because they inadequately capture morphological characteristics, which can catastrophically undermine the reliability of segmentation models. To enhance morphological fidelity, we propose Siamese-Diffusion, which incorporates both image and mask prior controls during training and switches to mask-only guidance during sampling to preserve diversity and scalability. The model comprises two branches, Mask-Diffusion and Image-Diffusion, and introduces a Noise Consistency Loss between the two diffusion processes, steering the convergence trajectory of Mask-Diffusion toward higher-fidelity local minima in the parameter space. Extensive experiments validate the superiority of our method: with Siamese-Diffusion, SANet achieves mDice and mIoU improvements of 3.6% and 4.4% on the Polyps dataset, while UNet shows mDice and mIoU improvements of 1.52% and 1.64% on the ISIC2018 dataset. Code will be released.
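The abstract does not give the exact form of the Noise Consistency Loss, but the description (a consistency term between the noise predictions of the mask-only and image+mask branches, with the image-conditioned branch acting as the higher-fidelity target) suggests a stop-gradient mean-squared-error term. The sketch below is an assumption of that form in plain NumPy; the function name and shapes are hypothetical.

```python
import numpy as np

def noise_consistency_loss(eps_mask: np.ndarray, eps_image: np.ndarray) -> float:
    """Hypothetical sketch of the Noise Consistency Loss.

    eps_mask:  noise predicted by the mask-only branch (Mask-Diffusion).
    eps_image: noise predicted by the image+mask branch (Image-Diffusion).

    The image-branch prediction is treated as a fixed target (in a real
    autodiff framework it would sit behind a stop-gradient), so the loss
    pulls Mask-Diffusion's predictions toward the higher-fidelity branch.
    """
    target = eps_image  # stop-gradient assumed here in a real implementation
    return float(np.mean((eps_mask - target) ** 2))

# Toy check: identical predictions give zero loss; a uniform offset of 1 gives 1.
rng = np.random.default_rng(0)
eps = rng.standard_normal((2, 4, 8, 8))
print(noise_consistency_loss(eps, eps))        # → 0.0
print(noise_consistency_loss(eps + 1.0, eps))  # → 1.0
```

In training, this term would be added to the standard diffusion denoising objectives of both branches, with only the mask branch kept for mask-guided sampling at test time.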