Object-Generalized Re-Identification: A Step Towards Universal Instance Perception
Abstract
The object re-identification (ReID) task aims to recognize the same individual object across diverse viewpoints and sensing conditions. Although person and vehicle ReID have achieved remarkable success, most existing methods are built on the assumption that training and testing data come from the same object category. This constraint requires a separate model for each category, which limits scalability and generalization. To address this limitation, we introduce Object-Generalized Re-Identification (OG-ReID), a new paradigm that learns unified identity representations transferable across object categories. Unlike conventional domain generalization, which focuses on appearance variations within a single category, OG-ReID deals with category shifts caused by intrinsic structural differences in identity cues. To this end, we propose the Meta-Generalized Object Re-Identification (MGOR) framework, which treats meta-learning as semantic distributional regularization: the model is exposed to controlled category shifts so that invariance emerges as an equilibrium between semantic diversity and identity discrimination. Extensive evaluations on more than 100 unseen object categories from multiple domains show that MGOR outperforms existing ReID approaches without any target-domain adaptation, advancing toward universal identity perception beyond domain and category boundaries.