

Poster

SeqMvRL: A Sequential Fusion Framework for Multi-view Representation Learning

Ren Wang · Haoliang Sun · Yuxiu Lin · Chuanhui Zuo · Yongshun Gong · Yilong Yin · Wenjia Meng


Abstract:

Multi-view representation learning integrates multiple observable views of an entity into a unified representation to facilitate downstream tasks. Current methods predominantly focus on distinguishing compatible components across views, followed by a single-step parallel fusion process. However, this parallel fusion is inherently static: it overlooks potential conflicts among views and compromises representation quality. To address this issue, this paper proposes a novel \textbf{Seq}uential fusion framework for \textbf{M}ulti-\textbf{v}iew \textbf{R}epresentation \textbf{L}earning, termed \textbf{SeqMvRL}. Specifically, we model multi-view fusion as a sequential decision-making problem and construct a pairwise integrator (PI) and a next-view selector (NVS), which serve as the \textit{environment} and \textit{agent} in reinforcement learning, respectively. PI merges the current fused feature with the selected view, while NVS determines which view to fuse next. By adaptively selecting the next optimal view based on the current fusion state, SeqMvRL effectively reduces conflicts and enhances the quality of the unified representation. Additionally, a carefully designed reward function encourages the model to prioritize views that enhance the discriminability of the fused features. Experimental results demonstrate that SeqMvRL outperforms parallel fusion approaches in classification and clustering tasks.
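The sequential fusion loop described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function names (`pairwise_integrator`, `next_view_selector`, `sequential_fusion`) are hypothetical, the views are toy feature vectors, and a simple greedy similarity score stands in for the learned NVS policy and the reward-driven training.

```python
# Minimal sketch of sequential multi-view fusion as a decision process.
# All names and the greedy selection rule are illustrative assumptions,
# not the paper's actual learned components.

def pairwise_integrator(fused, view):
    """PI stand-in: merge the current fused feature with one view
    (here, a simple elementwise mean)."""
    return [(f + v) / 2.0 for f, v in zip(fused, view)]

def next_view_selector(fused, remaining):
    """NVS stand-in: pick the index of the next view to fuse.
    A learned policy would decide this; here we greedily choose the
    remaining view with the largest dot product with the fused state."""
    def score(view):
        return sum(f * v for f, v in zip(fused, view))
    return max(remaining, key=lambda i: score(remaining[i]))

def sequential_fusion(views):
    """Fuse views one at a time, letting the selector order them."""
    fused = views[0]                       # start from an arbitrary first view
    remaining = {i: views[i] for i in range(1, len(views))}
    trajectory = [0]                       # record the fusion order
    while remaining:
        i = next_view_selector(fused, remaining)
        fused = pairwise_integrator(fused, remaining.pop(i))
        trajectory.append(i)
    return fused, trajectory

fused, order = sequential_fusion([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(order)   # fusion order chosen by the greedy selector
print(fused)   # final fused feature vector
```

In the paper, the selector is trained with reinforcement learning against a reward that favors views improving the discriminability of the fused feature, whereas this sketch fixes a hand-written heuristic purely to show the control flow of sequential (rather than single-step parallel) fusion.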
