

Poster

MirrorVerse: Pushing Diffusion Models to Realistically Reflect the World

Ankit Dhiman · Manan Shah · R. Venkatesh Babu


Abstract:

Diffusion models have become central to various image editing tasks, yet they often fail to fully adhere to physical laws, particularly with effects like shadows, reflections, and occlusions. In this work, we address the challenge of generating photorealistic mirror reflections using diffusion-based generative models. Despite extensive training data, existing diffusion models frequently overlook the nuanced details crucial to authentic mirror reflections. Recent approaches have attempted to resolve this by creating synthetic datasets and framing reflection generation as an inpainting task; however, they struggle to generalize across different object orientations and positions relative to the mirror. Our method overcomes these limitations by introducing key augmentations into the synthetic data pipeline: (1) random object positioning, (2) randomized rotations, and (3) grounding of objects, significantly enhancing generalization across poses and placements. To further address spatial relationships and occlusions in scenes with multiple objects, we implement a strategy to pair objects during dataset generation, resulting in a dataset robust enough to handle these complex scenarios. Since generalizing to real-world scenes remains a challenge, we introduce a three-stage training curriculum for a conditional generative model, aimed at improving real-world performance. We provide extensive qualitative and quantitative evaluations to support our approach, and the code and data will be released for research purposes.
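As a rough illustration of the three data-pipeline augmentations named in the abstract (random positioning, randomized rotation, and grounding), the sketch below samples per-scene placement parameters for a pair of objects. The `PlacementAugmentation` structure, all parameter ranges, and the pairing of two objects per scene are assumptions made for illustration, not the authors' released implementation.

```python
import random
from dataclasses import dataclass


@dataclass
class PlacementAugmentation:
    """Randomized placement parameters for one object in a synthetic scene.
    All ranges below are illustrative assumptions, not values from the paper."""
    x_offset: float      # lateral offset parallel to the mirror plane
    z_distance: float    # distance from the mirror plane
    yaw_degrees: float   # rotation about the vertical axis
    grounded: bool       # whether the object is snapped to the floor plane


def sample_augmentation(rng: random.Random) -> PlacementAugmentation:
    """Sample the three augmentations described in the abstract:
    random positioning, randomized rotation, and grounding."""
    return PlacementAugmentation(
        x_offset=rng.uniform(-0.5, 0.5),
        z_distance=rng.uniform(0.5, 2.0),
        yaw_degrees=rng.uniform(0.0, 360.0),
        grounded=True,  # keep the object on the floor so contact and shadow cues stay consistent
    )


if __name__ == "__main__":
    rng = random.Random(0)
    # Draw placements for a pair of objects, covering the multi-object occlusion case.
    pair = [sample_augmentation(rng) for _ in range(2)]
    for i, aug in enumerate(pair):
        print(f"object {i}: {aug}")
```

In this reading, each sampled placement would drive the renderer that composites the object and its mirror reflection before the result is used to train the conditional model.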
