FluidGaussian: Propagating Simulation-Based Uncertainty Toward Functionally-Intelligent 3D Reconstruction
Abstract
Real objects inhabit a physical world and must behave plausibly during interactions with other physical objects. However, current methods that reconstruct 3D real-world scenes from multi-view images optimize primarily for visual fidelity, i.e., they train with photometric losses and reason about uncertainty in the image or representation space. This appearance-centric view overlooks body contacts and couplings, conflates function-critical regions (e.g., aerodynamic or hydrodynamic surfaces) with ornamentation, and reconstructs such structures suboptimally, even when physical regularizers are added. We consider the question: How can 3D reconstruction become aware of real-world interactions and underlying object function, beyond visual cues? We propose FluidGaussian, a plug-and-play method that tightly couples geometry reconstruction with ubiquitous fluid-structure interactions to assess surface quality at high granularity. (1) We define a simulation-based uncertainty, derived from fluid simulations, that captures physical plausibility. (2) We integrate this uncertainty with next-best-view (NBV) policies to prioritize views that improve both visual and physical fidelity. On NeRF Synthetic (Blender), Mip-NeRF 360, and DrivAerNet++, our method yields up to +8.6% PSNR and a 62.3% reduction in velocity divergence, with PSNR gains of +7.7% on function-critical surfaces.
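As a minimal sketch of how the two components above could fit together (all function names, the divergence-based proxy, and the mixing weight are illustrative assumptions, not the paper's implementation), the snippet below derives a per-point simulation-based uncertainty from the velocity divergence of a sampled fluid field and combines it with a photometric uncertainty to score candidate views for next-best-view selection.

```python
# Hypothetical sketch: combine photometric and simulation-based uncertainty
# to rank candidate views; NOT the paper's actual pipeline.
import numpy as np

def divergence_uncertainty(velocity, dx=1.0):
    """Per-cell |div v| of a sampled 3D velocity field of shape (3, X, Y, Z)."""
    vx, vy, vz = velocity
    div = (np.gradient(vx, dx, axis=0)
           + np.gradient(vy, dx, axis=1)
           + np.gradient(vz, dx, axis=2))
    return np.abs(div)

def view_scores(photometric_unc, sim_unc, visibility, lam=0.5):
    """Score candidate views by the combined uncertainty they can observe.

    photometric_unc: (P,) per-surface-point image-space uncertainty
    sim_unc:         (P,) per-surface-point simulation-based uncertainty
    visibility:      (V, P) 0/1 mask, point p visible from view v
    lam:             assumed mixing weight between the two uncertainties
    """
    combined = (1.0 - lam) * photometric_unc + lam * sim_unc
    return visibility @ combined  # (V,); pick argmax as the next view

# Toy usage with random data, purely for illustration
rng = np.random.default_rng(0)
vel = rng.standard_normal((3, 16, 16, 16))          # fake velocity field
sim_field = divergence_uncertainty(vel)
idx = rng.integers(0, 16, size=(100, 3))            # 100 surface samples
sim_unc = sim_field[idx[:, 0], idx[:, 1], idx[:, 2]]
photo_unc = rng.random(100)
vis = rng.integers(0, 2, size=(8, 100))             # 8 candidate views
best_view = int(np.argmax(view_scores(photo_unc, sim_unc, vis)))
```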