Poster
Emphasizing Discriminative Features for Dataset Distillation in Complex Scenarios
Kai Wang · Zekai Li · Zhi-Qi Cheng · Samir Khaki · Ahmad Sajedi · Ramakrishna Vedantam · Konstantinos N. Plataniotis · Alexander G. Hauptmann · Yang You
Dataset distillation (DD) has demonstrated strong performance on simple datasets like CIFAR, MNIST, and TinyImageNet but struggles to achieve similar results in more complex scenarios. In this paper, we propose a novel approach that Emphasizes the Discriminative Features (obtained by Grad-CAM) for dataset distillation, called EDF. Our approach is inspired by a key observation: in simple datasets, high-activation areas typically occupy most of the image, whereas in complex scenarios, these areas are much smaller. Unlike previous methods that treat all pixels equally when synthesizing images, EDF uses Grad-CAM activation maps to enhance high-activation areas. From a supervision perspective, we downplay supervision signals with lower losses, as they contain common patterns. Additionally, to help the DD community better explore complex scenarios, we build the Complex Dataset Distillation (Comp-DD) benchmark by meticulously selecting sixteen subsets, eight easy and eight hard, from ImageNet-1K. Notably, EDF consistently outperforms state-of-the-art (SOTA) results in complex scenarios, such as ImageNet-1K subsets. We hope this work will inspire and encourage more researchers to enhance the practicality and efficacy of DD. Our code and benchmark will be made public.
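To make the two core ideas concrete, the sketch below illustrates (1) weighting synthetic-image updates by a Grad-CAM activation map so that high-activation (discriminative) pixels receive stronger updates, and (2) dropping the lowest-loss supervision signals so that common patterns are downplayed. This is a minimal, assumption-laden sketch rather than the authors' implementation: the names model, feature_layer, matching_losses, lr, and loss_drop_ratio are illustrative placeholders, and the exact loss and update rules in EDF may differ.

```python
# Minimal sketch (not the authors' code) of Grad-CAM-weighted updates for
# synthetic images plus down-weighting of low-loss supervision signals.
import torch
import torch.nn.functional as F

def grad_cam_map(model, feature_layer, images, labels):
    """Return an [N, H, W] Grad-CAM activation map normalized to [0, 1]."""
    images = images.detach().requires_grad_(True)
    feats = {}
    handle = feature_layer.register_forward_hook(
        lambda m, i, o: feats.update(out=o))
    logits = model(images)
    handle.remove()
    # Gradient of the class score w.r.t. the chosen feature map.
    score = logits.gather(1, labels.view(-1, 1)).sum()
    grads = torch.autograd.grad(score, feats["out"])[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)          # channel weights
    cam = F.relu((weights * feats["out"]).sum(dim=1))        # [N, h, w]
    cam = F.interpolate(cam.unsqueeze(1), size=images.shape[-2:],
                        mode="bilinear", align_corners=False).squeeze(1)
    cam = (cam - cam.amin(dim=(1, 2), keepdim=True)) / \
          (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)
    return cam.detach()

def edf_style_step(syn_images, syn_labels, model, feature_layer,
                   matching_losses, lr=0.1, loss_drop_ratio=0.25):
    """One illustrative update of the synthetic images (hypothetical helper)."""
    # 1) Downplay supervision signals with the lowest losses, since they
    #    mostly carry common, non-discriminative patterns.
    losses = torch.stack(matching_losses)                    # [num_signals]
    num_drop = max(1, int(len(losses) * loss_drop_ratio))
    kept = losses.topk(len(losses) - num_drop).values        # keep higher losses
    total_loss = kept.mean()

    # 2) Emphasize discriminative (high-activation) pixels in the update.
    grad = torch.autograd.grad(total_loss, syn_images)[0]
    cam = grad_cam_map(model, feature_layer, syn_images, syn_labels)
    weighted_grad = grad * (1.0 + cam.unsqueeze(1))          # boost hot areas
    with torch.no_grad():
        syn_images -= lr * weighted_grad
    return syn_images
```

In this sketch, matching_losses stands in for whatever per-signal distillation losses the pipeline produces (e.g., trajectory- or gradient-matching terms computed from syn_images with gradients enabled); the (1 + cam) factor is one simple way to bias updates toward high-activation regions.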