Poster
GeoAvatar: Geometrically-Consistent Multi-Person Avatar Reconstruction from Sparse Multi-View Videos
SooHyun Lee · SeoYeon Kim · HeeKyung Lee · Won-Sik Cheong · Jooho Lee
Multi-person avatar reconstruction from sparse multi-view videos is challenging. The independent reconstruction of individual avatars often fails to capture the geometric relationships among multiple instances, resulting in inter-penetrations between avatars. While some researchers have resolved this issue using neural volumetric rendering techniques, these approaches suffer from huge computational costs for rendering and training. In this paper, we propose a multi-person avatar reconstruction method that reconstructs 3D avatars while preserving the geometric relations between people. Our 2D Gaussian Splatting (2DGS)-based avatar representation allows us to represent geometrically accurate surfaces of multiple instances that support sharp inside-outside tests. To efficiently influence the occluded instances, we design a differentiable multi-layer alpha blending system compatible with the GS rendering pipeline. We mitigate inter-penetrations among avatars by penalizing segmentation discrepancies and seeing through near-contact regions to reveal penetrating parts. We also utilize monocular priors to enhance quality in less-observed and textureless surfaces. Our proposed method achieves fast reconstruction while maintaining state-of-the-art performance in terms of geometry and rendering quality. We demonstrate the efficiency and effectiveness of our method on a multi-person dataset containing close interactions.
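The abstract mentions a differentiable multi-layer alpha blending system over per-instance Gaussian Splatting renders. The sketch below is a minimal illustration of that general idea, not the authors' implementation: it assumes each instance has already been rasterized into per-pixel color, opacity, and depth maps (the tensor names `colors`, `alphas`, `depths` and the function `composite_layers` are illustrative), and composites the layers front-to-back per pixel.

```python
import torch

def composite_layers(colors, alphas, depths):
    """
    Front-to-back alpha compositing over per-instance layers.

    colors: (K, H, W, 3) per-instance rendered colors (e.g. from a GS rasterizer)
    alphas: (K, H, W)    per-instance accumulated opacity in [0, 1]
    depths: (K, H, W)    per-instance expected depth, used to order layers per pixel

    Returns the blended image (H, W, 3) and per-instance visibility weights
    (K, H, W), which can be compared against instance segmentation masks.
    """
    K, H, W, _ = colors.shape
    # Sort layers per pixel from near to far.
    order = depths.argsort(dim=0)                    # (K, H, W)
    alphas_sorted = torch.gather(alphas, 0, order)   # (K, H, W)
    colors_sorted = torch.gather(
        colors, 0, order.unsqueeze(-1).expand(-1, -1, -1, 3))

    # Transmittance before each layer: product of (1 - alpha) over nearer layers.
    ones = torch.ones(1, H, W, device=alphas.device)
    trans = torch.cumprod(
        torch.cat([ones, 1.0 - alphas_sorted[:-1]], dim=0), dim=0)
    weights_sorted = trans * alphas_sorted           # (K, H, W)

    # Blend colors and scatter the weights back to the original instance order.
    image = (weights_sorted.unsqueeze(-1) * colors_sorted).sum(dim=0)
    weights = torch.zeros_like(weights_sorted)
    weights.scatter_(0, order, weights_sorted)
    return image, weights
```

Because the compositing is differentiable, the returned per-instance visibility weights could, for example, be compared against instance segmentation masks to penalize discrepancies of the kind the abstract describes; the exact loss used by the paper is not specified here.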