DiffusionHarmonizer: Bridging Neural Reconstruction and Photorealistic Simulation with Online Diffusion Enhancer
Abstract
Simulation is essential to the development and evaluation of autonomous robots such as self-driving vehicles. Neural reconstruction is emerging as a promising solution, as it enables simulating a wide variety of scenarios from real-world data alone in an automated and scalable way. However, while methods such as NeRF and 3D Gaussian Splatting can produce visually compelling results, they often exhibit artifacts, particularly when rendering novel views, and fail to realistically integrate inserted dynamic objects, especially when those objects were captured in different scenes. To overcome these limitations, we introduce DiffusionHarmonizer, an online generative enhancement framework that transforms renderings from such imperfect reconstructions into photorealistic, temporally consistent outputs. At its core is a single-step, temporally-conditioned enhancer converted from a pretrained multi-step image diffusion model, capable of running in online simulators on a single GPU. The key to training it effectively is a custom data curation pipeline that constructs synthetic–real pairs emphasizing appearance harmonization, artifact correction, and lighting realism. Experiments show that DiffusionHarmonizer substantially improves perceptual realism, being preferred by 84.28\% of users over the second-best method in our comparative study. Furthermore, it matches the temporal coherence of state-of-the-art video models while maintaining the inference efficiency of single-step image models, offering a scalable and practical solution for photorealistic simulation in both research and production settings.