Post-training Feature Pruning for Fundus Image Classification
Abstract
Deep neural networks have achieved strong performance in fundus image classification, yet their flattened feature representations are often highly redundant. Such redundancy can lead to poor generalization across imaging devices, reduced interpretability, and inefficient use of model capacity. To address this issue, this study proposes a post-training feature pruning framework, termed greedy feature pruning (GFP), which removes weak or redundant dimensions from the flattened features of trained backbones. GFP employs a greedy build-up process guided by performance metrics on the training set, constrained by a minimum feature-keeping ratio, to identify compact yet discriminative subsets of features. Experiments are conducted on five public fundus datasets covering multiple tasks, including diabetic retinopathy detection (DDR, Messidor-2), glaucoma detection (PAPILA), multi-label classification (ODIR), and multi-class retinal disease classification (RETINA), using EfficientNetV2, ViT, and CoAtNet as backbones. Results show that GFP consistently improves AUROC and AUPRC across datasets while reducing the number of flattened features by up to 96\%. Feature visualizations and quantitative analyses confirm that GFP enhances the compactness and separability of latent features. Moreover, cross-dataset evaluation demonstrates that GFP improves transferability between datasets, indicating better domain robustness. Overall, the proposed GFP framework provides a simple yet effective approach for compressing feature representations and improving both discriminability and generalization in fundus image classification.
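The greedy build-up described in the abstract can be sketched in NumPy; this is a minimal illustration under stated assumptions, not the paper's implementation. In particular, the scoring function here rates a candidate subset by the rank-based training AUROC of an unweighted standardized-mean score, whereas the actual GFP framework evaluates subsets with the trained backbone's classifier and metrics. The names `greedy_feature_pruning` and `min_keep_ratio` are illustrative, not from the paper.

```python
import numpy as np

def train_auroc(scores, labels):
    # Rank-based (Mann-Whitney) AUROC for binary labels in {0, 1}.
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def greedy_feature_pruning(X, y, min_keep_ratio=0.05):
    """Greedy build-up over flattened feature dimensions.

    Repeatedly adds the dimension that most improves the training-set
    AUROC of a simple subset score (here: mean of standardized kept
    columns -- an assumption of this sketch). Stops once no candidate
    improves the metric, provided the minimum keeping ratio is met.
    """
    d = X.shape[1]
    min_keep = max(1, int(np.ceil(min_keep_ratio * d)))
    kept, remaining = [], list(range(d))
    best = -np.inf
    while remaining:
        scores = []
        for j in remaining:
            cols = kept + [j]
            Z = (X[:, cols] - X[:, cols].mean(0)) / (X[:, cols].std(0) + 1e-8)
            scores.append(train_auroc(Z.mean(1), y))
        j_best = int(np.argmax(scores))
        # Stop when no improvement and the keeping-ratio floor is satisfied.
        if scores[j_best] <= best and len(kept) >= min_keep:
            break
        best = max(best, scores[j_best])
        kept.append(remaining.pop(j_best))
    return sorted(kept)

# Toy usage: one informative dimension among 20, the rest pure noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 20))
X[:, 0] += 2.0 * y  # dimension 0 carries the class signal
kept = greedy_feature_pruning(X, y, min_keep_ratio=0.05)
```

On this toy example the procedure retains the informative dimension and discards most of the noise, mirroring the heavy reduction of flattened features reported in the abstract.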