Poster
Parallel Sequence Modeling via Generalized Spatial Propagation Network
Hongjun Wang · Wonmin Byeon · Jiarui Xu · Jinwei Gu · Ka Chun Cheung · Jan Kautz · Xiaolong Wang · Kai Han · Sifei Liu
Abstract:
We present the Generalized Spatial Propagation Network (GSPN), a new attention mechanism optimized for vision tasks that inherently captures 2D spatial structures. Existing attention models, including transformers, linear attention, and state-space models like Mamba, process multi-dimensional data as 1D sequences, compromising spatial coherence and efficiency. GSPN overcomes these limitations by operating directly on spatially coherent image data and forming dense pairwise connections through a unique line-scan approach. Central to GSPN is the Stability-Context Condition, which ensures stable, context-aware propagation across 2D sequences and reduces the effective sequence length to √N, significantly enhancing computational efficiency. With learnable, input-dependent weights and no reliance on positional embeddings, GSPN achieves superior spatial fidelity and state-of-the-art performance in vision tasks, including ImageNet classification, class-guided image generation, and text-to-image generation. Notably, GSPN accelerates SD-XL with softmax attention by over 84× when generating 16K images. Code will be released upon publication.
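To make the line-scan idea concrete, the following is a minimal, hypothetical sketch of one top-to-bottom scan in the spirit described above: each row is updated from the previous row through three learned neighbor weights, and the weights are normalized so their magnitudes sum to at most 1 per pixel, a simple stand-in for the Stability-Context Condition. The function name, tri-diagonal connectivity, and gating term `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def line_scan_propagate(x, w, lam):
    """Hypothetical sketch of a top-to-bottom line scan.

    x:   (H, W) input feature map
    w:   (H, W, 3) learned weights to the three upper neighbors
         (assumed tri-diagonal connectivity, for illustration)
    lam: (H, W) input gate

    Weights are normalized so |w| sums to <= 1 at each pixel,
    a simple proxy for the stated stability condition.
    """
    H, W = x.shape
    s = np.abs(w).sum(axis=-1, keepdims=True)
    w = w / np.maximum(s, 1.0)          # enforce stability
    h = np.zeros_like(x)
    h[0] = x[0]
    for i in range(1, H):               # only H = sqrt(N) sequential steps
        prev = np.pad(h[i - 1], 1)      # pad for left/right neighbors
        ctx = (w[i, :, 0] * prev[:-2] +
               w[i, :, 1] * prev[1:-1] +
               w[i, :, 2] * prev[2:])
        h[i] = lam[i] * x[i] + ctx      # gated input plus propagated context
    return h
```

Because each row depends only on the row above it, all W pixels in a row can be computed in parallel, so the sequential depth is H rather than H×W, which is the √N reduction the abstract refers to.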