Poster
Let Samples Speak: Mitigating Spurious Correlation by Exploiting the Clusterness of Samples
WEIWEI LI · Junzhuo Liu · Yuanyuan Ren · Yuchen Zheng · Yahao Liu · Wen Li
Deep learning models are known to often learn features that spuriously correlate with the class label during training but are irrelevant to the prediction task. Existing methods typically address this issue by annotating potential spurious attributes, or by filtering spurious features based on empirical assumptions (e.g., the simplicity of bias). However, these methods may yield unsatisfactory performance due to the intricate and elusive nature of spurious correlations in real-world data. In this paper, we propose a data-oriented approach to mitigating spurious correlation in deep learning models. We observe that samples influenced by spurious features tend to exhibit a dispersed distribution in the learned feature space, which allows us to identify the presence of spurious features. Subsequently, we obtain a bias-invariant representation by neutralizing the spurious features based on a simple grouping strategy. We then learn a feature transformation that eliminates the spurious features by aligning with this bias-invariant representation. Finally, we update the classifier by incorporating the learned feature transformation and obtain an unbiased model. By integrating the aforementioned identifying, neutralizing, eliminating, and updating procedures, we build an effective pipeline for mitigating spurious correlation. Experiments on four image and NLP debiasing benchmarks and one medical dataset demonstrate the effectiveness of our proposed approach, showing an improvement in worst-group accuracy of over 20% compared to standard empirical risk minimization (ERM). Code and checkpoints are available at https://anonymous.4open.science/r/ssc_debiasing-1CC8.
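The identify/neutralize/eliminate steps described above can be illustrated with a minimal sketch. Note that everything here is an assumption for illustration: the helper names (`split_by_dispersion`, `bias_invariant_targets`), the quantile threshold used to flag dispersed samples, the simple averaging of subgroup means, and the least-squares linear map standing in for the learned feature transformation are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature matrix: a fraction of samples carries a (hypothetical)
# spurious feature, modeled as a shift along one feature axis.
n, d = 200, 8
feats = rng.normal(size=(n, d))
labels = rng.integers(0, 2, size=n)
spurious = rng.random(n) < 0.3        # samples influenced by the spurious feature
feats[spurious, 0] += 4.0             # spurious shift along axis 0

# 1) Identify: within each class, flag samples far from the class centroid.
#    Dispersed samples are suspected to carry spurious features.
def split_by_dispersion(feats, labels, q=0.7):
    groups = {}
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        dist = np.linalg.norm(feats[idx] - feats[idx].mean(0), axis=1)
        thresh = np.quantile(dist, q)
        groups[c] = (idx[dist <= thresh], idx[dist > thresh])  # (core, dispersed)
    return groups

# 2) Neutralize: average the core-group and dispersed-group means so the
#    target representation is not dominated by either subgroup.
def bias_invariant_targets(feats, labels, groups):
    targets = np.empty_like(feats)
    for c, (core, disp) in groups.items():
        if len(disp):
            mean = 0.5 * (feats[core].mean(0) + feats[disp].mean(0))
        else:
            mean = feats[core].mean(0)
        targets[np.concatenate([core, disp])] = mean
    return targets

# 3) Eliminate: fit a linear map W aligning features to the targets
#    (a stand-in for the learned feature transformation).
groups = split_by_dispersion(feats, labels)
targets = bias_invariant_targets(feats, labels, groups)
W, *_ = np.linalg.lstsq(feats, targets, rcond=None)
aligned = feats @ W

# After alignment, the gap along the spurious axis should shrink.
gap_before = abs(feats[spurious, 0].mean() - feats[~spurious, 0].mean())
gap_after = abs(aligned[spurious, 0].mean() - aligned[~spurious, 0].mean())
```

In step 4 of the pipeline, the classifier would then be retrained (or fine-tuned) on the transformed features `feats @ W` rather than on the raw features.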