
Poster

Person De-reidentification: A Variation-guided Identity Shift Modeling

Yi-Xing Peng · Yu-Ming Tang · Kun-Yu Lin · Qize Yang · Jingke Meng · Xihan Wei · Wei-Shi Zheng


Abstract:

Person re-identification (ReID) aims to associate images of individuals across different camera views despite cross-view variations. Like other surveillance technologies, ReID faces serious privacy challenges, particularly the potential for unauthorized tracking. Although machine unlearning techniques have been developed to address privacy concerns in various tasks (e.g., face recognition), such approaches have not yet been explored in the ReID field. In this work, we pioneer the exploration of the person de-reidentification (De-ReID) problem and present its inherent challenges. In the context of ReID, De-ReID aims to unlearn the knowledge for accurately matching specific persons so that these unlearned persons cannot be re-identified across cameras, thereby guaranteeing their privacy. The primary challenge is to achieve this unlearning without degrading the identity-discriminative feature embeddings, so as to preserve the model's utility. To address this, we formulate a De-ReID framework that utilizes a labeled dataset of unlearned persons for unlearning and an unlabeled dataset of other persons for knowledge preservation. Instead of unlearning based on (pseudo) identity labels, we introduce a variation-guided identity shift mechanism that unlearns the specific persons by fitting the variations in their images, irrespective of their identities, while preserving ReID ability on other persons by overcoming the variations in their images. As a result, the model shifts the unlearned persons into a feature space that is vulnerable to cross-view variations. Extensive experiments on benchmark datasets demonstrate the superiority of our method.
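The abstract gives no equations, so the following is only a minimal PyTorch sketch of how the variation-guided identity shift objective might be instantiated; the function name, margin value, batch layout, and the contrastive-style formulation are all illustrative assumptions, not the authors' actual implementation. The key idea it captures is the asymmetry described above: for unlearned persons the model *fits* cross-view variations (same-person views are pushed apart), while for other persons it *overcomes* them (same-person views are pulled together).

```python
import torch
import torch.nn.functional as F

def variation_guided_loss(feats_unlearn: torch.Tensor,
                          feats_preserve: torch.Tensor,
                          margin: float = 0.3) -> torch.Tensor:
    """Hypothetical sketch of a variation-guided identity shift objective.

    feats_unlearn:  (N, 2, D) cross-view feature pairs of unlearned persons
                    (two camera views of the same identity).
    feats_preserve: (M, 2, D) cross-view feature pairs of other (unlabeled)
                    persons used for knowledge preservation.
    """
    feats_unlearn = F.normalize(feats_unlearn, dim=-1)
    feats_preserve = F.normalize(feats_preserve, dim=-1)

    # Unlearning term: fit the cross-view variations of unlearned persons
    # by driving the cosine similarity of their two views below a margin,
    # making their embeddings sensitive to camera-view changes.
    sim_u = (feats_unlearn[:, 0] * feats_unlearn[:, 1]).sum(dim=-1)   # (N,)
    loss_unlearn = F.relu(sim_u - margin).mean()

    # Preservation term: overcome the variations of other persons by
    # pulling their two views together, keeping the embedding
    # identity-discriminative for everyone else.
    sim_p = (feats_preserve[:, 0] * feats_preserve[:, 1]).sum(dim=-1)  # (M,)
    loss_preserve = (1.0 - sim_p).mean()

    return loss_unlearn + loss_preserve
```

Under this sketch, the unlearned persons drift into a region of feature space where cross-view variations dominate, so they can no longer be matched across cameras, while the preservation term keeps the representation useful for all other identities.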
