

Poster

Star with Bilinear Mapping

Zelin Peng · Yu Huang · Zhengqin Xu · Feilong Tang · Ming Hu · Xiaokang Yang · Wei Shen


Abstract:

Contextual modeling is crucial for robust visual representation learning in computer vision. Although Transformers have become a leading architecture for vision tasks due to their attention mechanism, the quadratic complexity of full attention operations presents substantial computational challenges. To address this, we introduce Star with Bilinear Mapping (SBM), a Transformer-like architecture that achieves global contextual modeling with linear complexity. SBM employs a bilinear mapping module (BM) with a low-rank decomposition strategy and star operations (element-wise multiplication) to efficiently capture global contextual information. Our model demonstrates competitive performance on image classification and semantic segmentation tasks, delivering significant computational efficiency gains compared to traditional attention-based models.
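The abstract does not include code, so the following is only a minimal PyTorch sketch of the general idea it describes: combining a low-rank bilinear map with a star (element-wise multiplication) operation to mix global context into each token at linear cost in the number of tokens. The module name, the rank hyperparameter, and the mean-pooled global summary are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class StarBilinearBlock(nn.Module):
    """Illustrative sketch (not the paper's code): a rank-r bilinear map whose
    two branches are fused by element-wise multiplication ("star" operation),
    costing O(N * d * r) instead of the O(N^2 * d) of full self-attention."""

    def __init__(self, dim: int, rank: int = 32):
        super().__init__()
        # Two low-rank linear branches of the bilinear map.
        self.proj_a = nn.Linear(dim, rank, bias=False)
        self.proj_b = nn.Linear(dim, rank, bias=False)
        self.proj_out = nn.Linear(rank, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        a = self.proj_a(x)                     # (B, N, r)
        b = self.proj_b(x)                     # (B, N, r)
        # Global summary of one branch, pooled over tokens (assumed design choice).
        context = b.mean(dim=1, keepdim=True)  # (B, 1, r)
        # "Star" operation: element-wise multiplication broadcasts the global
        # summary to every token, keeping the cost linear in N.
        fused = a * context                    # (B, N, r)
        return x + self.proj_out(fused)        # residual connection


if __name__ == "__main__":
    x = torch.randn(2, 196, 256)               # e.g. 14x14 patch tokens, dim 256
    block = StarBilinearBlock(dim=256, rank=32)
    print(block(x).shape)                       # torch.Size([2, 196, 256])
```

Because the global summary is a single rank-r vector per image, the memory and compute grow linearly with the token count, which is the efficiency property the abstract claims for SBM.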
