

Poster

Incomplete Multi-View Multi-label Learning via Disentangled Representation and Label Semantic Embedding

Xu Yan · Jun Yin · Jie Wen


Abstract:

In incomplete multi-view multi-label learning scenarios, it is crucial to exploit the incomplete multi-view data to extract consistent and specific representations from different data sources, and to make full use of the partially missing label information. However, most previous approaches overlook the problem of separating view-shared information from view-specific information. To address this problem, in this paper we propose an approach that separates view-consistent features from view-specific features under the Variational Autoencoder (VAE) framework. Specifically, we first introduce cross-view reconstruction to learn view-consistent features, extracting the information shared across views through unsupervised pre-training. Subsequently, we develop a disentangling module that learns disentangled view-specific features by minimizing an upper bound on the mutual information between the consistent and specific features. Finally, we utilize prior label relevance to guide the learning of label semantic embeddings, aggregating related semantic embeddings and preserving the topology of label relevance in the semantic space. In extensive experiments on several real-world datasets, our model outperforms existing state-of-the-art algorithms, validating its strong adaptability to missing views and labels.
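The abstract does not specify which mutual-information upper bound is minimized; a common choice for this kind of disentanglement is a CLUB-style variational bound (Cheng et al., 2020). The sketch below is an illustrative PyTorch implementation under that assumption, not the authors' code: a variational network q(s|c) is fit to matched (consistent, specific) feature pairs, and the gap between its log-likelihood on matched and shuffled pairs upper-bounds I(c; s). All module names and dimensions are hypothetical.

```python
# Minimal sketch of a CLUB-style mutual information upper bound for
# disentangling view-consistent features c from view-specific features s.
# Assumed, not taken from the paper: the bound choice, network shapes, dims.
import torch
import torch.nn as nn

class MIUpperBound(nn.Module):
    """Estimate E_p(c,s)[log q(s|c)] - E_p(c)p(s)[log q(s|c)], an upper bound on I(c; s)."""
    def __init__(self, dim_c, dim_s, hidden=128):
        super().__init__()
        # Gaussian variational approximation q(s|c) with learned mean and log-variance.
        self.mu = nn.Sequential(nn.Linear(dim_c, hidden), nn.ReLU(), nn.Linear(hidden, dim_s))
        self.logvar = nn.Sequential(nn.Linear(dim_c, hidden), nn.ReLU(),
                                    nn.Linear(hidden, dim_s), nn.Tanh())

    def log_likelihood(self, c, s):
        mu, logvar = self.mu(c), self.logvar(c)
        # Per-sample Gaussian log-density of s under q(s|c), up to a constant.
        return (-0.5 * (s - mu) ** 2 / logvar.exp() - 0.5 * logvar).sum(dim=1)

    def forward(self, c, s):
        # Positive pairs: matched (consistent, specific) features from the same sample.
        positive = self.log_likelihood(c, s)
        # Negative pairs: shuffle s within the batch to approximate the product of marginals.
        negative = self.log_likelihood(c, s[torch.randperm(s.size(0))])
        return (positive - negative).mean()

# Illustrative usage: the estimator itself is trained to maximize log_likelihood
# on matched pairs, while the encoders minimize forward() to push I(c; s) to zero.
c, s = torch.randn(32, 64), torch.randn(32, 64)
mi_est = MIUpperBound(64, 64)
loss_mi = mi_est(c, s)  # add to the encoder loss during the disentangling stage
```

In practice such bounds are optimized adversarially in alternation: a few steps fitting q(s|c) to the current features, then one step updating the encoders against the resulting bound.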
