FedBPrompt: Federated Domain Generalization Person Re-Identification via Body Distribution Aware Visual Prompts
Abstract
Federated Domain Generalization for Person Re-Identification (FedDG-ReID) aims to learn domain-invariant representations from decentralized data. Although Vision Transformers (ViTs) are widely adopted, their global attention often fails to distinguish pedestrians from highly similar backgrounds or across diverse viewpoints, a challenge further amplified by cross-client distribution shifts in FedDG-ReID. To address this, we propose Federated Body Distribution Aware Visual Prompt (FedBPrompt), which introduces learnable visual prompts to explicitly guide Transformer attention toward pedestrian-centric regions. FedBPrompt employs a Body Distribution Aware Visual Prompts Mechanism (BAPM) that divides prompts into two groups: Holistic Full Body Prompts suppress cross-client background noise, while Body Part Alignment Prompts capture fine-grained details robust to pose and viewpoint variations. To mitigate the high communication cost of large Transformer models, we further design a Prompt-based Fine-Tuning Strategy (PFTS) that freezes the ViT backbone and updates only lightweight prompts, significantly reducing communication overhead while maintaining adaptability. Extensive experiments demonstrate that BAPM effectively enhances feature discrimination and cross-domain generalization, while PFTS achieves notable performance gains within only a few aggregation rounds. Moreover, both BAPM and PFTS can be easily integrated into existing ViT-based FedDG-ReID frameworks, making FedBPrompt a flexible and effective solution for federated person re-identification. The code will be released.
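To make the communication-saving argument behind PFTS concrete, the following back-of-the-envelope sketch compares the number of parameters a client would upload per aggregation round when fine-tuning a full ViT backbone versus updating only prompt tokens. All sizes here (ViT-B/16 backbone of roughly 86M parameters, embedding dimension 768, 20 prompts per layer over 12 layers) are illustrative assumptions, not figures from the paper.

```python
# Illustrative (hypothetical) sizes -- not taken from the paper.
EMBED_DIM = 768               # ViT-Base hidden size
NUM_LAYERS = 12               # transformer depth
PROMPTS_PER_LAYER = 20        # assumed number of prompt tokens per layer
BACKBONE_PARAMS = 86_000_000  # approx. ViT-B/16 parameter count

# Only the prompt tokens are learnable under a PFTS-style scheme;
# the backbone stays frozen and never leaves the client.
prompt_params = EMBED_DIM * NUM_LAYERS * PROMPTS_PER_LAYER

def params_sent_per_round(full_finetune: bool) -> int:
    """Parameters a client uploads in one federated aggregation round."""
    if full_finetune:
        return BACKBONE_PARAMS + prompt_params
    return prompt_params

full = params_sent_per_round(full_finetune=True)
pfts = params_sent_per_round(full_finetune=False)
print(f"full fine-tuning : {full:,} params/round")
print(f"prompts only     : {pfts:,} params/round")
print(f"reduction factor : {full / pfts:.0f}x")
```

Under these assumed sizes, uploading only the prompts shrinks per-round communication by several hundred times, which is the intuition behind freezing the backbone in federated ViT training.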