UniMODE: Unified Monocular 3D Object Detection

Zhuoling Li · Xiaogang Xu · Ser-Nam Lim · Hengshuang Zhao

Arch 4A-E Poster #191
Highlight
Thu 20 Jun 5 p.m. PDT — 6:30 p.m. PDT

Abstract: Realizing unified monocular 3D object detection, covering both indoor and outdoor scenes, holds great importance in applications like robot navigation. However, training models on data from diverse scenarios is challenging because of their significantly different characteristics, e.g., diverse geometry properties and heterogeneous domain distributions. To address these challenges, we build a detector based on the bird's-eye-view (BEV) detection paradigm, whose explicit feature projection helps resolve the geometry-learning ambiguity that arises when training detectors on data from multiple scenarios. We then split the classical BEV detection architecture into two stages and propose an uneven BEV grid design to handle the convergence instability caused by the aforementioned challenges. Moreover, we develop a sparse BEV feature projection strategy to reduce computational cost and a unified domain alignment method to handle heterogeneous domains. Combining these techniques yields a unified detector, UniMODE, which surpasses the previous state-of-the-art on the challenging Omni3D dataset (a large-scale dataset including both indoor and outdoor scenes) by 4.9% $\rm AP_{3D}$, revealing the first successful generalization of a BEV detector to unified 3D object detection.
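The "uneven BEV grid" mentioned above can be illustrated with a minimal sketch: depth bins are made finer near the camera (where image features resolve depth well) and coarser far away. The linear-growth schedule and function name below are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch of an uneven BEV depth grid: cell widths grow
# linearly with index, so near-range cells are finer than far-range ones.
# The growth schedule is an assumption, not UniMODE's exact design.
import numpy as np

def uneven_bev_edges(near: float, far: float, n_cells: int,
                     growth: float = 0.1) -> np.ndarray:
    """Return n_cells + 1 depth edges whose cell widths grow with index."""
    raw = 1.0 + growth * np.arange(n_cells)   # relative cell widths
    widths = raw * (far - near) / raw.sum()   # normalize to span [near, far]
    return near + np.concatenate(([0.0], np.cumsum(widths)))

edges = uneven_bev_edges(near=1.0, far=50.0, n_cells=64)
widths = np.diff(edges)
# widths[0] < widths[-1]: fine resolution near the camera, coarse far away.
```

Such a non-uniform binning keeps the total number of BEV cells (and hence compute) fixed while allocating resolution where monocular depth cues are most reliable.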
