Rethinking Box Supervision: Bias-Free Weakly Supervised Medical Segmentation
Abstract
Pixel-level annotations for medical image segmentation are costly and labor-intensive, often requiring expert knowledge. Bounding box labels provide a more scalable alternative but introduce a strong box-shaped bias that hampers segmentation quality. We propose WeakMed, a general-purpose weakly supervised segmentation framework that removes the dependence on pixel-level masks while overcoming the structural limitations of box supervision. WeakMed introduces two lightweight, plug-and-play training components: (1) a Mask-to-Box (M2B) transformation that aligns predicted masks with box annotations to reduce label mismatch and box-induced bias, and (2) a Scale Consistency (SC) loss that enforces multi-scale self-supervision to address the ambiguity and instability of weak labels. Both modules are used only during training and impose no inference overhead. Across 9 segmentation tasks, 10 datasets, and 6 imaging modalities, WeakMed consistently surpasses existing weakly supervised methods and achieves performance competitive with fully supervised baselines. These results demonstrate its practicality as a low-cost yet high-quality solution for medical image segmentation. Code will be released.
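The abstract only names the M2B and SC modules without specifying their form. As a rough intuition for how box-aligned and multi-scale training objectives of this kind can be realized, the following is a minimal NumPy sketch; the function names, the axis-wise max projection, and the L1 formulations are illustrative assumptions, not the paper's actual losses.

```python
import numpy as np

def box_mask(shape, box):
    """Binary mask of an axis-aligned box (x0, y0, x1, y1), end-exclusive.
    (Illustrative helper; not from the paper.)"""
    m = np.zeros(shape, dtype=np.float32)
    x0, y0, x1, y1 = box
    m[y0:y1, x0:x1] = 1.0
    return m

def m2b_loss(pred, box):
    """Sketch of a mask-to-box objective: project the predicted soft mask
    onto each image axis via max and match the box mask's projections,
    so the mask is constrained by the box without forcing a box shape."""
    target = box_mask(pred.shape, box)
    loss_x = np.abs(pred.max(axis=0) - target.max(axis=0)).mean()
    loss_y = np.abs(pred.max(axis=1) - target.max(axis=1)).mean()
    return loss_x + loss_y

def sc_loss(pred, factor=2):
    """Sketch of a scale-consistency objective: the prediction should agree
    with a downsampled-then-upsampled copy of itself (self-supervised)."""
    h, w = pred.shape
    small = pred.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    up = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
    return float(np.abs(pred - up).mean())
```

Under this toy formulation, a prediction that exactly fills its box incurs zero M2B loss, while non-box-shaped masks can still satisfy the projection constraint; the SC term penalizes predictions that change under rescaling.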