Self-Attention Driven Tensor Representation for High-Order Data Recovery
Abstract
Low-rank tensor representation (LRTR) is an effective tool for compactly modeling high-order data. Although nonlinear LRTR models can better capture the nonlinear dependencies found in real-world data, most existing methods rely on the fixed mappings of multilayer perceptrons (MLPs) or convolutional neural networks (CNNs), which limits their ability to model complex global dependencies. To overcome this limitation, we propose a novel paradigm called Self-Attention Driven Tensor Representation (SADTR), the first framework to model nonlinearity from the perspective of self-attention. Specifically, we design a factor self-representation mechanism that establishes a dynamic global mapping, thereby adaptively capturing both local and non-local nonlinear dependencies. Moreover, we introduce an implicit sparse representation that imposes a sparsity constraint without introducing additional optimization subproblems. As a result, the proposed SADTR achieves a more accurate low-rank representation. Theoretically, we provide a detailed analysis demonstrating the recoverability of SADTR. To validate its effectiveness, we apply SADTR to three representative high-order data recovery tasks; experimental results show that it consistently outperforms state-of-the-art LRTR methods.
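To make the factor self-representation idea concrete, the following PyTorch sketch illustrates how a mode-k factor matrix of an LRTR model could be passed through self-attention, so that every row attends to all others and the mapping becomes a dynamic global one rather than a fixed MLP/CNN. This is our own minimal illustration, not the authors' implementation; the class name `FactorSelfRepresentation`, the residual-plus-normalization design, and all dimensions are assumptions for the sake of the example.

```python
import torch
import torch.nn as nn


class FactorSelfRepresentation(nn.Module):
    """Illustrative sketch (not the paper's code): map a mode-k factor
    matrix U of shape (n_k, r) through multi-head self-attention so each
    row can attend to all other rows, yielding a dynamic global mapping
    instead of the fixed mapping of an MLP or CNN."""

    def __init__(self, rank: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(rank, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(rank)

    def forward(self, factor: torch.Tensor) -> torch.Tensor:
        # factor: (n_k, r); add a batch dimension for attention: (1, n_k, r)
        u = factor.unsqueeze(0)
        # Self-attention: the factor serves as query, key, and value,
        # so every row is re-expressed in terms of all rows (global mixing)
        out, _ = self.attn(u, u, u)
        # Residual connection preserves the original factor information
        return self.norm(u + out).squeeze(0)


# Hypothetical usage: a mode with 64 fibers and tensor rank 8
factor = torch.randn(64, 8)
mapped = FactorSelfRepresentation(rank=8)(factor)
print(mapped.shape)  # torch.Size([64, 8])
```

In such a design, the attention weights are recomputed from the factor itself on every forward pass, which is one plausible reading of the "dynamic global mapping" described in the abstract, in contrast to the input-independent weights of a fixed MLP or CNN layer.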