Your Dissimilarities Define You: Complementary Learning Exploiting Class Diversities
Abstract
In this work, we exploit class dissimilarities in a novel way, providing a complementary learning signal beyond correct classification that is not fully utilized in existing learning paradigms. To model these dissimilarities, we introduce the concept of an opposite-class, which consists of everything that is not part of a corresponding class, i.e., all samples from non-target classes as well as samples from unknown classes. By specifying appropriately encoded target distributions over the non-target classes, we explicitly optimize the model's activation distribution across all non-target classes, which strengthens class dissimilarity information and enables better control over the geometry of the learned representations. We analyze the convergence dynamics of the proposed approach, both theoretically and empirically, and show that it naturally pushes the representations towards neural collapse, yielding more discriminative and robust features. Extensive evaluation across multiple classification settings demonstrates consistent improvements of our method on closed-set, open-set, and few-shot classification, as well as domain generalization benchmarks. Our code is available at: (withheld for review, demo in supplementary material).
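To make the target-distribution idea concrete, the following is a minimal sketch, not the authors' implementation: a soft-target cross-entropy loss that places an explicit target distribution over the non-target classes, so their activations are optimized directly rather than only suppressed. The uniform non-target encoding and the hyperparameter `alpha` (mass reserved for the true class) are illustrative assumptions; the paper's actual encoding of class dissimilarities may differ.

```python
import torch
import torch.nn.functional as F


def complementary_targets(labels: torch.Tensor, num_classes: int,
                          alpha: float = 0.9) -> torch.Tensor:
    """Build soft targets: `alpha` on the true class, the remaining
    (1 - alpha) spread over the non-target classes (uniform here as a
    placeholder encoding; an assumption, not the paper's encoding)."""
    targets = torch.full((labels.size(0), num_classes),
                         (1.0 - alpha) / (num_classes - 1))
    targets.scatter_(1, labels.unsqueeze(1), alpha)
    return targets


def complementary_loss(logits: torch.Tensor, labels: torch.Tensor,
                       alpha: float = 0.9) -> torch.Tensor:
    """Cross-entropy against the full target distribution, so the model's
    activation distribution over non-target classes is optimized explicitly."""
    targets = complementary_targets(labels, logits.size(1), alpha)
    log_probs = F.log_softmax(logits, dim=1)
    return -(targets.to(logits.device) * log_probs).sum(dim=1).mean()


# Usage: loss = complementary_loss(model(x), y)
```

With a uniform non-target encoding this reduces to label smoothing; the distinguishing choice described in the abstract is a non-uniform encoding that reflects class dissimilarities.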