MonoSAOD: Monocular 3D Object Detection with Sparsely Annotated Labels
Abstract
Monocular 3D object detection has achieved impressive performance on densely annotated datasets, but it struggles when only a fraction of objects are labeled, a consequence of the high cost of 3D annotation. This sparsely annotated setting is common in real-world scenarios where labeling every object is impractical. To address it, we propose a novel framework for sparsely annotated monocular 3D object detection built on two key modules. First, we propose Road-Aware Patch Augmentation (RAPA), which exploits the sparse annotations by pasting segmented object patches onto road regions while preserving 3D geometric consistency. Second, we propose Prototype-Based Filtering (PBF), which generates high-quality pseudo-labels by filtering predictions through prototype similarity and depth uncertainty: PBF maintains global 2D RoI feature prototypes and retains only predictions that are feature-consistent with the learned prototypes and carry reliable depth estimates. Our training strategy combines this geometry-preserving augmentation with prototype-guided pseudo-labeling to achieve robust detection under sparse supervision. Extensive experiments demonstrate the effectiveness of the proposed method. The source code will be made publicly available.
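To make the PBF gating concrete, the following is a minimal sketch of the two filtering criteria described above, under assumed inputs: per-prediction 2D RoI feature vectors, one prototype vector per prediction (its class prototype), and a per-prediction depth uncertainty score. The function name `filter_pseudo_labels` and the threshold values are illustrative, not the paper's actual implementation.

```python
import numpy as np

def filter_pseudo_labels(roi_feats, prototypes, depth_sigmas,
                         sim_thresh=0.7, sigma_thresh=0.5):
    """Sketch of Prototype-Based Filtering (PBF).

    Keep a prediction as a pseudo-label only if (1) its RoI feature is
    cosine-similar to its class prototype and (2) its predicted depth
    uncertainty is low. Thresholds here are hypothetical placeholders.
    """
    # Cosine similarity between each RoI feature and its class prototype.
    feats = roi_feats / np.linalg.norm(roi_feats, axis=1, keepdims=True)
    protos = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = np.sum(feats * protos, axis=1)
    # A prediction must pass both the similarity and the uncertainty gate.
    return (sims >= sim_thresh) & (depth_sigmas <= sigma_thresh)
```

In this sketch, only predictions passing both gates become pseudo-labels; everything else is discarded rather than down-weighted, which is one plausible reading of the filtering described above.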