Cross-domain Dual-stream Feature Disentanglement for Brain Disorder Prediction with Sparsely Labeled PET
Abstract
Positron Emission Tomography (PET) can be used for the early diagnosis of various brain disorders. However, annotating PET scans requires specialized nuclear medicine experts, making accurately annotated PET data extremely scarce. MRI-based cross-modal domain adaptation methods can improve brain disorder classification accuracy with sparsely labeled PET data, but existing methods fail to balance the two core requirements of cross-modal tasks: eliminating domain discrepancy and retaining modality-specific discriminative information. Forced alignment often undermines the core pathological discriminative features of both modalities, making it difficult to meet the collaborative optimization demands of cross-modal brain disorder classification. To address this, we propose a Dual-Stream feature Disentanglement and Alignment (DSDA) framework designed for the collaborative optimization of cross-modal domain adaptation and brain disorder classification. The framework first dynamically evaluates and explicitly decouples the brain regions critical to the classification task from the non-critical regions that preserve brain structural integrity. It then processes the two types of regions differently: topology-weighted feature alignment for non-critical regions and high-confidence feature fusion for critical regions. This differential treatment allows the model to align features effectively while preserving key discriminative information. Extensive experiments on multiple datasets (e.g., ADNI, AIBL, and PPMI) demonstrate the effectiveness of DSDA, which achieves state-of-the-art performance.
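The dual-stream idea described above can be sketched in code. The following is a minimal illustrative example, not the paper's actual implementation: the function name, tensor shapes, the thresholding rule for splitting regions, and the confidence-weighted fusion are all simplifying assumptions made here for clarity.

```python
# Hypothetical sketch of dual-stream disentanglement and alignment.
# All names, shapes, and scoring rules below are illustrative assumptions;
# the DSDA framework's real modules are more elaborate.
import numpy as np

def disentangle_and_align(mri_feats, pet_feats, importance, topo_weights,
                          pet_conf, threshold=0.5):
    """Split brain regions into critical/non-critical streams, then
    align the non-critical stream and fuse the critical stream.

    mri_feats, pet_feats : (R, D) region-wise features from each modality
    importance           : (R,) task-relevance score per brain region
    topo_weights         : (R,) weights reflecting brain-graph topology
    pet_conf             : (R,) per-region confidence in the PET stream
    """
    critical = importance >= threshold   # regions driving the diagnosis
    noncrit = ~critical                  # regions preserving structural integrity

    # Non-critical stream: topology-weighted alignment loss (MRI vs. PET).
    diff = mri_feats[noncrit] - pet_feats[noncrit]
    align_loss = float(np.mean(topo_weights[noncrit, None] * diff ** 2))

    # Critical stream: high-confidence fusion, weighting each modality
    # by how reliable the sparsely labeled PET features are.
    w = pet_conf[critical, None]
    fused = w * pet_feats[critical] + (1.0 - w) * mri_feats[critical]
    return fused, align_loss
```

In a training loop, `align_loss` would be minimized jointly with the classification loss computed on the fused critical-region features, so alignment never overwrites the discriminative regions.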