

Poster

Weakly Supervised Segmentation With Point Annotations for Histopathology Images via Contrast-Based Variational Model

Hongrun Zhang · Liam Burrows · Yanda Meng · Declan Sculthorpe · Abhik Mukherjee · Sarah E. Coupland · Ke Chen · Yalin Zheng

West Building Exhibit Halls ABC 312

Abstract:

Image segmentation is a fundamental task in the field of imaging and vision. Supervised deep learning for segmentation has achieved unparalleled success when sufficient training data with annotated labels are available. However, annotation is known to be expensive to obtain, especially for histopathology images, where the target regions usually exhibit high morphological variation and irregular shapes. Thus, weakly supervised learning with sparse point annotations is a promising way to reduce the annotation workload. In this work, we propose a contrast-based variational model to generate segmentation results, which serve as reliable complementary supervision to train a deep segmentation model for histopathology images. The proposed method considers the common characteristics of target regions in histopathology images and can be trained in an end-to-end manner. It produces segmentations that are more regionally consistent and have smoother boundaries, and it is more robust to unlabeled 'novel' regions. Experiments on two different histology datasets demonstrate its effectiveness and efficiency in comparison to previous models. Code is available at: https://github.com/hrzhang1123/CVMWSSegmentation.
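The following is a minimal sketch (PyTorch assumed) of the kind of training signal the abstract describes: a partial cross-entropy loss on the sparse point annotations combined with dense supervision from the segmentation result of the contrast-based variational model. The names `point_labels`, `pseudo_mask`, `weakly_supervised_loss`, and the weighting `alpha` are illustrative assumptions, not taken from the authors' released code.

```python
import torch
import torch.nn.functional as F

def weakly_supervised_loss(logits, point_labels, pseudo_mask,
                           ignore_index=255, alpha=1.0):
    """Combine sparse point supervision with variational-model supervision.

    logits:       (B, C, H, W) raw network outputs
    point_labels: (B, H, W) class indices at annotated points,
                  ignore_index everywhere else
    pseudo_mask:  (B, H, W) class indices produced by the contrast-based
                  variational model (complementary supervision)
    """
    # Partial cross-entropy: only the annotated point pixels contribute.
    loss_points = F.cross_entropy(logits, point_labels, ignore_index=ignore_index)
    # Dense cross-entropy against the variational model's segmentation result.
    loss_pseudo = F.cross_entropy(logits, pseudo_mask)
    return loss_points + alpha * loss_pseudo
```

In practice the pseudo-mask term gives dense gradients over the whole image while the point term anchors the prediction to the human-verified labels; the balance between the two is an assumed hyperparameter here.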
