Probabilistic Discrepancy Learning for Roadside LiDAR Scene Completion
Abstract
We propose PDL, a probabilistic discrepancy learning approach for roadside LiDAR scene completion. Conventional methods focus on object-level completion or on scene completion from the ego-vehicle viewpoint, and therefore struggle to cope with the long-term or total occlusions caused by the fixed viewpoints of roadside sensors. To address this issue, we compensate occluded roadside point clouds by introducing external visual information. Specifically, PDL comprises two main components: probabilistic pose discrepancy minimization and scene discrepancy learning. The former corrects noisy poses produced by vision-based detectors, while the latter employs a diffusion model for robust full-scene completion. Furthermore, we introduce regional and global sampling discrepancy learning losses to achieve robust and efficient training. Extensive experiments on the V2X-Seq and TUMTraf-V2X roadside datasets demonstrate that PDL achieves state-of-the-art performance, with average reductions of 14.5\% in Chamfer Distance (CD) and 6\% in 3D Jensen-Shannon Divergence (JSD) compared to existing methods.