BEA-GS: BEyond RAdiance Supervision in 3DGS for Precise Object Extraction
Abstract
Most Gaussian Splatting techniques that provide a 3D semantic representation of the scene do not optimize the underlying 3D geometry of the scene. This makes object-level editing and asset extraction challenging. Recent methods, such as COBGS, Trace3D, and ObjectGS, acknowledge this limitation and propose approaches that modify the geometry of the scene to represent the underlying semantics. We go a step further and propose a novel solution that provides near-perfect boundaries in object extraction. We do so by introducing two new losses in the optimization: (1) a loss that modifies the geometry of visible Gaussians to respect semantic boundaries, and (2) a loss that modifies the geometry of occluded Gaussians that are revealed once the object is extracted. Our first loss propagates gradients directly through the rasterization, allowing seamless integration within the optimization of the Gaussian parameters. Our second loss also propagates gradients to the Gaussian parameters, but does so without passing through the rasterization. This allows it to modify the geometry of the scene even when little transmittance reaches a Gaussian (partially or fully occluded). Exhaustive comparisons against 12 state-of-the-art methods on four datasets, using six metrics, demonstrate that our approach produces the best overall boundary segmentation to date.
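The distinction between the two losses can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the toy differentiable rasterizer, the tensor names (`opacity`, `labels`, `transmittance`), and the specific loss forms are hypothetical stand-ins, not the paper's implementation. The point is structural: the first loss supervises a rendered quantity, so its gradients flow to Gaussian parameters through the rasterization; the second loss acts on the parameters directly, so even Gaussians that receive almost no transmittance still get a gradient.

```python
# Illustrative sketch only -- all names and loss forms are hypothetical,
# not the method described in the abstract.
import torch

# Toy scene: N Gaussians with a learnable opacity and a fixed per-Gaussian
# semantic label (1 = target object, 0 = background).
N = 8
opacity = torch.rand(N, requires_grad=True)
labels = torch.tensor([1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0])
# Stand-in for the per-Gaussian transmittance computed during alpha
# compositing; low values mean the Gaussian is occluded.
transmittance = torch.tensor([0.9, 0.8, 0.7, 0.6, 0.05, 0.02, 0.5, 0.01])

def toy_rasterize(op):
    # Stand-in for differentiable alpha compositing: a transmittance-
    # weighted average of labels acts as one "rendered" semantic value.
    return (transmittance * op * labels).sum() / (transmittance * op).sum()

# Loss 1: supervises the rendered semantics, so gradients reach the
# Gaussian parameters *through* the rasterization.
target = torch.tensor(1.0)  # hypothetical ground-truth mask value
loss_visible = (toy_rasterize(opacity) - target) ** 2

# Loss 2: acts on the parameters directly (no rasterization), so Gaussians
# receiving almost no transmittance still get a gradient. Here it pushes
# occluded background Gaussians toward zero opacity -- one plausible
# reading of "modifying the geometry of non-visible Gaussians".
occluded = transmittance < 0.1
loss_hidden = (opacity[occluded & (labels == 0)] ** 2).sum()

(loss_visible + loss_hidden).backward()
# opacity.grad now holds gradients for visible and occluded Gaussians alike.
```

Note that without the second term, the occluded Gaussians would receive vanishingly small gradients from the compositing weights alone, which is exactly the failure mode the abstract attributes to radiance-only supervision.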