Adaptive Bayesian Early-Exit Networks for Efficient Non-Transferable Learning
Abstract
Non-transferable learning (NTL) aims to enforce usage restrictions by limiting a model's generalization on target-domain data while maintaining its utility on the source domain. Current approaches face three major challenges: (1) low training efficiency, because the backbone network must be retrained; (2) low inference efficiency, because every input must traverse the full network; and (3) a rigid reliance on a single, non-adaptive backbone shared across the source and target domains. This shared setup, which simultaneously tries to maximize source-domain performance and minimize target-domain performance, often introduces optimization conflicts when class categories overlap across the two domains. In this paper, we propose an efficient NTL approach based on a dynamic Early-Exit Network, named ENL-DEE, which leverages Bayesian theory and dynamic neural networks to address these limitations. A custom loss function guides source-domain data to exit at later stages of the network, maximizing model utility, while target-domain data exits earlier with non-semantic features, limiting transferability. ENL-DEE offers three key advantages: (1) it improves training efficiency by optimizing only the parameters of the dynamic exit classifiers, avoiding retraining of the backbone; (2) it improves inference efficiency, since inputs can exit at intermediate classifiers rather than traversing the entire network; and (3) it resolves optimization conflicts by assigning distinct parameter sets to the source and target domains, achieving higher source-domain performance and lower target-domain performance, thereby strengthening NTL. Extensive experiments across diverse datasets and model architectures validate the scalability, efficiency, and effectiveness of our approach.
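To make the mechanism concrete, the following is a minimal PyTorch-style sketch of the setup the abstract describes: a frozen backbone split into stages, a trainable exit classifier per stage, and a depth-weighted loss that rewards source samples for being classifiable at deep exits while pushing target samples toward uninformative (high-entropy) outputs there. All names (`EarlyExitNet`, `ntl_exit_loss`, `predict_with_exits`), the particular loss form, and the plain confidence-threshold routing rule are illustrative assumptions standing in for the paper's actual ENL-DEE objective and Bayesian exit criterion.

```python
# Sketch only: frozen backbone + trainable exit heads, with an assumed
# depth-weighted loss; not the paper's actual ENL-DEE formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitNet(nn.Module):
    """Frozen backbone split into stages, with a trainable exit head per stage."""
    def __init__(self, stages, feat_dims, num_classes):
        super().__init__()
        self.stages = nn.ModuleList(stages)
        for p in self.stages.parameters():  # only the exit heads are trained
            p.requires_grad = False
        self.exits = nn.ModuleList(nn.Linear(d, num_classes) for d in feat_dims)

    def forward(self, x):
        logits = []
        for stage, exit_head in zip(self.stages, self.exits):
            x = stage(x)
            # global-average-pool spatial dims (assumes 4D conv features)
            logits.append(exit_head(x.mean(dim=(2, 3))))
        return logits  # one logit tensor per exit

def ntl_exit_loss(exit_logits, y, is_source, gamma=2.0):
    """Assumed objective: depth-weighted cross-entropy pushes source samples
    toward deep exits; depth-weighted entropy maximization makes target
    outputs non-semantic at deep exits, so they only "succeed" early."""
    K = len(exit_logits)
    loss = 0.0
    for k, logits in enumerate(exit_logits):
        depth = (k + 1) / K  # 1/K (shallowest exit) .. 1.0 (deepest exit)
        ce = F.cross_entropy(logits, y, reduction="none")
        probs = F.softmax(logits, dim=1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
        src = depth ** gamma * ce          # source: be correct when deep
        tgt = depth ** gamma * (-entropy)  # target: be uncertain when deep
        loss = loss + torch.where(is_source, src, tgt).mean()
    return loss / K

@torch.no_grad()
def predict_with_exits(model, x, threshold=0.9):
    """Confidence-thresholded routing (a common early-exit heuristic, used
    here in place of the paper's Bayesian criterion): return the first exit
    whose max softmax probability clears the threshold."""
    feats = x
    for stage, exit_head in zip(model.stages, model.exits):
        feats = stage(feats)
        logits = exit_head(feats.mean(dim=(2, 3)))
        conf, pred = F.softmax(logits, dim=1).max(dim=1)
        if conf.min() >= threshold:  # batch-level check for simplicity
            return pred
    return pred  # fall back to the final exit
```

Under these assumptions, training updates only `model.exits`, which is why the backbone never needs retraining, and inference cost drops whenever confident source inputs leave at an intermediate exit.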