

SeaBird: Segmentation in Bird’s View with Dice Loss Improves Monocular 3D Detection of Large Objects

Abhinav Kumar · Yuliang Guo · Xinyu Huang · Liu Ren · Xiaoming Liu

Arch 4A-E Poster #58
Thu 20 Jun 10:30 a.m. PDT — noon PDT


Monocular 3D detectors achieve remarkable performance on cars and smaller objects. However, their performance drops on larger objects, which can lead to fatal accidents. Some attribute the failures to the scarcity of training data or to the large receptive fields that such objects require. In this paper, we highlight this understudied problem of generalization to large objects and find that modern frontal detectors struggle to generalize to large objects even on balanced datasets. We argue that the cause of failure is the sensitivity of depth regression losses to noise on larger objects. To bridge this gap, we comprehensively investigate regression and dice losses, examining their robustness under varying error levels and object sizes. For a simplified case, we mathematically prove that the dice loss leads to superior noise-robustness and model convergence for large objects compared to regression losses. Leveraging our theoretical insights, we propose SeaBird (Segmentation in Bird’s View) as the first step towards generalizing to large objects. SeaBird effectively integrates BEV segmentation on foreground objects into 3D detection, with the segmentation head trained with the dice loss. SeaBird achieves SoTA results on the KITTI-360 leaderboard and improves existing detectors on the nuScenes leaderboard, particularly for large objects. Our code and models will be publicly available.
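The robustness intuition behind the dice loss can be illustrated with a minimal sketch (this is an assumption-laden toy example, not the paper's implementation): the dice loss normalizes the overlap by the combined object area, so a fixed localization error of a few BEV cells penalizes a large object proportionally less, whereas an unnormalized per-cell loss grows with object size.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft dice loss: 1 - 2|P∩G| / (|P| + |G|)."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def make_mask(size, grid=64):
    """Hypothetical BEV occupancy mask: a size x size square on a grid x grid map."""
    m = np.zeros((grid, grid))
    m[:size, :size] = 1.0
    return m

for size in (4, 16):            # small vs. large object footprint
    gt = make_mask(size)
    pred = np.roll(gt, 2, axis=1)   # prediction shifted by 2 cells (localization noise)
    # Dice loss shrinks for the larger object under the same 2-cell shift,
    # while the summed per-cell (L1) error grows with object size.
    print(size, dice_loss(pred, gt), np.abs(pred - gt).sum())
```

Under this toy setup, the same 2-cell shift yields a dice loss of 0.5 for the 4x4 object but only 0.125 for the 16x16 object, while the summed L1 error goes the other way (16 vs. 64 mislabeled cells), sketching why a size-normalized overlap loss is gentler on large objects.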
