

Neural Dependencies Emerging From Learning Massive Categories

Ruili Feng · Kecheng Zheng · Kai Zhu · Yujun Shen · Jian Zhao · Yukun Huang · Deli Zhao · Jingren Zhou · Michael Jordan · Zheng-Jun Zha

West Building Exhibit Halls ABC 332


This work presents two striking findings about neural networks trained for large-scale image classification. 1) Given a well-trained model, the logits predicted for some category can be directly obtained by linearly combining the predictions of a few other categories, a phenomenon we call neural dependency. 2) Neural dependencies exist not only within a single model but even between two independently learned models, regardless of their architectures. Towards a theoretical analysis of these phenomena, we show that identifying neural dependencies is equivalent to solving the Covariance Lasso (CovLasso) regression problem proposed in this paper. By investigating the properties of the problem's solution, we confirm that neural dependency is guaranteed by a redundant logit covariance matrix, a condition easily met given massive categories, and that neural dependency is sparse, implying that each category relates to only a few others. We further empirically demonstrate the potential of neural dependencies for understanding internal data correlations, generalizing models to unseen categories, and improving model robustness with a dependency-derived regularizer. Code to reproduce the results in this work will be released publicly.
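The sparse linear-combination idea can be illustrated with a small sketch. Below, a dependency is planted among the logit columns of a synthetic logit matrix, and a Lasso regression (solved here by plain coordinate descent; the paper's actual CovLasso formulation and solver may differ) recovers which few categories a target category depends on. All names and constants are illustrative assumptions, not the authors' code.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Minimise (1/2n)||y - Xw||^2 + lam*||w||_1 via coordinate descent."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0) / n  # per-feature curvature terms
    for _ in range(n_iter):
        for j in range(d):
            # Residual with feature j's current contribution added back.
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r / n
            # Soft-thresholding update for the j-th coefficient.
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

rng = np.random.default_rng(0)
# Hypothetical logits: 500 samples over 10 categories.
Z = rng.normal(size=(500, 10))
# Plant a neural dependency: category 0's logits are (almost exactly)
# a sparse linear combination of categories 1 and 2.
Z[:, 0] = 0.6 * Z[:, 1] + 0.4 * Z[:, 2] + 0.01 * rng.normal(size=500)

# Regress category 0's logits on the remaining categories' logits.
w = lasso_cd(Z[:, 1:], Z[:, 0], lam=0.05)
support = np.flatnonzero(np.abs(w) > 1e-3)  # indices into categories 1..9
```

The L1 penalty drives the coefficients of unrelated categories exactly to zero, so `support` recovers only the two planted parent categories, mirroring the paper's claim that each category relates to just a few others.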
