
Poster

Learning from Synchronization: Self-Supervised Uncalibrated Multi-View Person Association in Challenging Scenes

Keqi Chen · Vinkle Srivastav · Didier Mutter · Nicolas Padoy


Abstract:

Multi-view person association is a fundamental step towards multi-view analysis of human activities. Although person re-identification features have proven effective, they become unreliable in challenging scenes where people share similar appearances, so cross-view geometric constraints are required for more robust association. However, most existing approaches are either fully supervised, relying on ground-truth identity labels, or require calibrated camera parameters that are hard to obtain. In this work, we investigate the potential of learning from multi-view synchronization and propose Self-MVA, a self-supervised uncalibrated multi-view person association approach that uses no annotations. Specifically, we propose a self-supervised learning framework consisting of an encoder-decoder model and a self-supervised pretext task, cross-view image synchronization, which aims to distinguish whether two images from different views are captured at the same time. The model encodes each person's unified geometric and appearance features for association and decodes the geometric features to predict the 2D positions in the original view. To train the model, we apply Hungarian matching to bridge the gap between instance-wise and image-wise distances, and then utilize synchronization labels for metric learning. To further reduce the solution space, we propose two types of self-supervised linear constraints: multi-view localization and pairwise edge association. Extensive experiments on three challenging public benchmark datasets (WILDTRACK, MVOR, and SOLDIERS) show that our approach achieves state-of-the-art results, surpassing existing unsupervised and fully supervised approaches.
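
The abstract's key mechanism is turning instance-wise feature distances into a single image-wise distance via Hungarian matching, so that synchronization labels (same instant vs. different instants) can supervise metric learning. The sketch below illustrates that idea under stated assumptions; the function names, the Euclidean cost, and the margin-based loss form are illustrative choices, not the paper's exact formulation.

```python
# Minimal sketch, assuming Euclidean instance costs and a contrastive
# margin loss. These are assumptions for illustration, not Self-MVA's
# actual loss or cost definitions.
import numpy as np
from scipy.optimize import linear_sum_assignment


def image_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Image-wise distance between two views.

    feats_a: (m, d) per-person features from view A.
    feats_b: (n, d) per-person features from view B.
    Returns the mean cost of the optimal one-to-one assignment.
    """
    # Pairwise Euclidean distances between all person instances.
    cost = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)  # Hungarian matching
    return float(cost[rows, cols].mean())


def sync_margin_loss(dist: float, synchronized: bool, margin: float = 1.0) -> float:
    """Contrastive-style loss on the image-wise distance.

    Synchronized frame pairs should be close; unsynchronized pairs
    should be pushed at least `margin` apart (an assumed loss form).
    """
    if synchronized:
        return dist
    return max(0.0, margin - dist)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    view_a = rng.normal(size=(4, 16))                       # 4 people in view A
    view_b = view_a[[2, 0, 3, 1]] + 0.05 * rng.normal(size=(4, 16))  # same instant, permuted
    view_c = rng.normal(size=(5, 16))                       # a different instant
    d_pos = image_distance(view_a, view_b)
    d_neg = image_distance(view_a, view_c)
    print(sync_margin_loss(d_pos, True), sync_margin_loss(d_neg, False))
```

In this framing the matching makes the image-wise distance invariant to the unknown cross-view ordering of people, so the only supervision needed is whether the two frames were captured at the same time, which synchronized multi-view footage provides for free.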