

Poster

Multi-Object Tracking in the Dark

Xinzhe Wang · Kang Ma · Qiankun Liu · Yunhao Zou · Ying Fu


Abstract:

Low-light scenes are prevalent in real-world applications (e.g., autonomous driving, security cameras at night). Multi-object tracking in various practical settings has recently garnered much attention, but multi-object tracking in dark scenes is rarely considered. In this paper, we focus on multi-object tracking in dark scenes. To address the lack of a dedicated dataset, we build a Low-light Multi-Object Tracking (LMOT) dataset. LMOT provides highly aligned low-light video pairs captured by our dual-camera system, together with high-quality multi-object tracking annotations for all videos. We then propose a low-light multi-object tracking method, termed LTrack. We introduce an adaptive low-pass downsample module that enhances the low-frequency components of images while suppressing sensor noise. A degradation suppression learning strategy enables the model to learn information that is invariant under noise disturbance and image quality degradation. These components improve the robustness of multi-object tracking in dark scenes. We conduct a comprehensive analysis of our LMOT dataset and the proposed LTrack. Experimental results demonstrate the superiority of the proposed method and its competitiveness in real nighttime low-light scenes.
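The abstract does not specify the exact design of the adaptive low-pass downsample module or the degradation suppression learning strategy, so the following is only a minimal illustrative sketch of the two underlying ideas: blur-then-downsample to keep low-frequency content while attenuating sensor noise, and a consistency loss between features of a well-lit frame and a degraded counterpart. All names here (LowPassDownsample, degradation_consistency_loss) are hypothetical and a fixed Gaussian filter stands in for whatever adaptive filtering LTrack actually uses.

import torch
import torch.nn as nn
import torch.nn.functional as F


class LowPassDownsample(nn.Module):
    """Blur-then-downsample: keeps low-frequency image content while
    attenuating high-frequency sensor noise before the tracker backbone.
    Hypothetical stand-in for the paper's adaptive low-pass downsample module."""

    def __init__(self, channels: int, stride: int = 2, kernel_size: int = 5):
        super().__init__()
        # Fixed Gaussian kernel as a stand-in for an adaptive (learned) filter.
        coords = torch.arange(kernel_size) - kernel_size // 2
        g = torch.exp(-(coords ** 2) / (2 * 1.5 ** 2))
        kernel2d = torch.outer(g, g)
        kernel2d = kernel2d / kernel2d.sum()
        weight = kernel2d.expand(channels, 1, kernel_size, kernel_size).clone()
        self.register_buffer("weight", weight)
        self.stride = stride
        self.channels = channels
        self.padding = kernel_size // 2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Depthwise low-pass filtering followed by strided sampling.
        return F.conv2d(x, self.weight, stride=self.stride,
                        padding=self.padding, groups=self.channels)


def degradation_consistency_loss(feat_clean: torch.Tensor,
                                 feat_degraded: torch.Tensor) -> torch.Tensor:
    """Encourage features of a well-lit frame and its degraded (dark/noisy)
    counterpart to agree, so the tracker learns degradation-invariant cues.
    Hypothetical proxy for the degradation suppression learning strategy."""
    return F.l1_loss(feat_degraded, feat_clean.detach())


if __name__ == "__main__":
    frames = torch.rand(2, 3, 256, 256)                      # well-lit frames
    noisy = frames * 0.2 + 0.05 * torch.randn_like(frames)   # crude low-light proxy
    lpd = LowPassDownsample(channels=3)
    f_clean, f_noisy = lpd(frames), lpd(noisy)
    loss = degradation_consistency_loss(f_clean, f_noisy)
    print(f_clean.shape, loss.item())

In this sketch the consistency loss is applied directly to the filtered images; in a full tracker it would more plausibly be applied to intermediate backbone features shared between the paired well-lit and low-light videos in LMOT.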
