

Poster

Learning Debiased Representations via Conditional Attribute Interpolation

Yi-Kai Zhang · Qi-Wei Wang · De-Chuan Zhan · Han-Jia Ye

West Building Exhibit Halls ABC 332

Abstract:

An image is usually described by more than one attribute, such as “shape” and “color”. When a dataset is biased, i.e., most samples have attributes spuriously correlated with the target label, a Deep Neural Network (DNN) is prone to making predictions based on the “unintended” attribute, especially if it is easier to learn. To improve generalization when training on such a biased dataset, we propose a χ²-model to learn debiased representations. First, we design a χ-shape pattern to match the training dynamics of a DNN and find Intermediate Attribute Samples (IASs), samples near the attribute decision boundaries that indicate how the value of an attribute changes from one extreme to another. Then we rectify the representation with a χ-structured metric learning objective. Conditional interpolation among IASs eliminates the negative effect of peripheral attributes and helps retain intra-class compactness. Experiments show that the χ²-model learns debiased representations effectively and achieves remarkable improvements on various datasets.
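The two ingredients mentioned in the abstract, conditional interpolation among IASs and a metric-learning objective on the resulting features, can be illustrated with a minimal sketch. The function names, the Beta-distributed mixing coefficient, and the contrastive-style pull/push loss below are illustrative assumptions, not the authors' χ²-model implementation or released code.

```python
# Illustrative sketch only (assumptions, not the authors' method):
# feature-space interpolation between class anchors and intermediate
# attribute samples (IASs), followed by a generic metric-learning loss.
import torch
import torch.nn.functional as F


def conditional_interpolate(anchor_feats, ias_feats, alpha=0.5):
    """Mix same-class anchor features with IAS features using a
    mixup-style coefficient lam ~ Beta(alpha, alpha)."""
    lam = torch.distributions.Beta(alpha, alpha).sample((anchor_feats.size(0), 1))
    lam = lam.to(anchor_feats.device)
    return lam * anchor_feats + (1.0 - lam) * ias_feats


def metric_loss(feats, labels, margin=0.5):
    """Contrastive-style objective: pull same-class features together,
    push different-class features apart beyond a margin."""
    dist = torch.cdist(feats, feats)                       # pairwise distances
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    pos = (dist * same).sum() / same.sum().clamp(min=1)
    neg = (F.relu(margin - dist) * (1 - same)).sum() / (1 - same).sum().clamp(min=1)
    return pos + neg


# Toy usage with random tensors standing in for a DNN's embeddings.
anchor_feats = torch.randn(8, 16)          # embeddings of class-anchor samples
ias_feats = torch.randn(8, 16)             # embeddings of hypothetical IASs
labels = torch.randint(0, 2, (8,))
mixed = conditional_interpolate(anchor_feats, ias_feats)
loss = metric_loss(torch.cat([anchor_feats, mixed]), torch.cat([labels, labels]))
```

In this sketch the interpolated features inherit the anchor's class label, so the loss encourages representations near the attribute boundary to stay compact within their class, which is the intuition the abstract describes; the specific loss and mixing scheme here are placeholders.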
