D-Convexity: A Unified Differentiable Convex Shape Prior via Quasi-Concavity for Data-driven Image Segmentation
Shengzhe Chen ⋅ Hao Yan
Abstract
Convexity is a fundamental geometric prior that underlies many natural and man-made structures, yet it remains challenging to impose effectively in end-to-end trainable segmentation networks. We revisit convexity from a functional perspective and propose a unified, threshold-free convexity prior based on quasi-concavity of the network's output mask function $u$. Instead of constraining a single binary segmentation, we require all super-level sets of $u$ to be convex, transforming global shape constraints into local, differentiable inequalities on $u$ and its derivatives. From this principle we derive zeroth-, first-, and second-order characterizations, yielding respectively a local midpoint convexification operator, a gradient-based condition linked to supporting hyperplanes, and a sufficient second-order inequality expressed by a quadratic form on the tangent plane of the level sets. The first- and second-order formulations produce a compact convolutional loss that can be applied densely across the image without thresholding. Our quasi-concavity losses integrate seamlessly with modern segmentation networks via the proposed convex gradient projection module (CGPM). They consistently enforce convexity and improve shape regularity across multiple datasets, outperforming networks tailored for retinal segmentation and surpassing prior shape-aware methods. Remarkably, our analysis unifies a wide spectrum of previous convex shape models, from discrete 1–0–1 line constraints and graph-cut convexity formulations to curvature- and signed-distance-Laplacian-based level-set priors, under one continuous, differentiable framework.
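For concreteness, the three characterizations alluded to above correspond to the standard quasi-concavity conditions of convex analysis; the following is a sketch in our own notation, not necessarily the paper's exact discretization:
\[
u\!\Big(\tfrac{x+y}{2}\Big) \;\ge\; \min\{u(x),\,u(y)\} \quad \text{(zeroth order)},
\]
\[
u(y) \ge u(x) \;\Longrightarrow\; \nabla u(x)^{\top}(y-x) \ge 0 \quad \text{(first order)},
\]
\[
v^{\top}\nabla^{2}u(x)\,v \;\le\; 0 \quad \text{for all } v \text{ with } \nabla u(x)^{\top}v = 0 \quad \text{(second order)}.
\]
The first-order condition says that $\nabla u(x)$ supports the super-level set $\{y : u(y) \ge u(x)\}$ at $x$; the second-order condition requires the Hessian to be negative semidefinite on the tangent plane of the level set, and its strict version yields the sufficient inequality the abstract refers to.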
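In two dimensions the tangent direction to a level curve of $u$ is $v = (-u_y, u_x)$, so the second-order quadratic form reduces to $Q = u_y^2 u_{xx} - 2 u_x u_y u_{xy} + u_x^2 u_{yy}$, which can be evaluated densely with fixed finite-difference convolution stencils. Below is a minimal PyTorch sketch of such a dense penalty; the function names and the regularization weight are our illustrative choices and should not be read as the paper's CGPM implementation:

```python
import torch
import torch.nn.functional as F

def _conv(u, k):
    """Convolve a (B, 1, H, W) map with a fixed 3x3 finite-difference stencil."""
    k = torch.tensor(k, dtype=u.dtype, device=u.device).view(1, 1, 3, 3)
    return F.conv2d(u, k, padding=1)

def quasi_concavity_loss(u):
    """Dense second-order convexity penalty on a soft mask u (hypothetical sketch).

    Computes Q = u_y^2 u_xx - 2 u_x u_y u_xy + u_x^2 u_yy, the quadratic form
    of the Hessian along the level-curve tangent, and penalizes its positive
    part so that every super-level set of u is pushed toward convexity.
    """
    ux  = _conv(u, [[0, 0, 0], [-0.5, 0, 0.5], [0, 0, 0]])       # central d/dx
    uy  = _conv(u, [[0, -0.5, 0], [0, 0, 0], [0, 0.5, 0]])       # central d/dy
    uxx = _conv(u, [[0, 0, 0], [1, -2, 1], [0, 0, 0]])           # d^2/dx^2
    uyy = _conv(u, [[0, 1, 0], [0, -2, 0], [0, 1, 0]])           # d^2/dy^2
    uxy = _conv(u, [[0.25, 0, -0.25], [0, 0, 0], [-0.25, 0, 0.25]])  # d^2/dxdy
    q = uy**2 * uxx - 2 * ux * uy * uxy + ux**2 * uyy
    return F.relu(q).mean()

# Usage: add as a weighted regularizer next to the data term, e.g.
# loss = dice_loss(pred, target) + 0.1 * quasi_concavity_loss(torch.sigmoid(logits))
```

Note that the `relu` keeps only violations of $Q \le 0$, and because the condition is imposed on $u$ itself rather than on a binarized mask, no threshold is needed: all super-level sets are regularized simultaneously, consistent with the threshold-free claim in the abstract.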