

SHViT: Single-Head Vision Transformer with Memory Efficient Macro Design

Seokju Yun · Youngmin Ro

Arch 4A-E Poster #91
Wed 19 Jun 5 p.m. PDT — 6:30 p.m. PDT

Abstract: Recently, efficient Vision Transformers have shown great performance with low latency on resource-constrained devices. Conventionally, they use 4$\times$4 patch embeddings and a 4-stage structure at the macro level, while utilizing sophisticated attention with a multi-head configuration at the micro level. This paper aims to address computational redundancy at all design levels in a memory-efficient manner. We discover that using a larger-stride patchify stem not only reduces memory access costs but also achieves competitive performance by leveraging token representations with reduced spatial redundancy from the early stages. Furthermore, our preliminary analyses suggest that attention layers in the early stages can be substituted with convolutions, and that several attention heads in the latter stages are computationally redundant. To handle this, we introduce a single-head attention module that inherently prevents head redundancy and simultaneously boosts accuracy by combining global and local information in parallel. Building upon our solutions, we introduce SHViT, a Single-Head Vision Transformer that obtains a state-of-the-art speed-accuracy tradeoff. For example, on ImageNet-1k, our SHViT-S4 is 3.3$\times$, 8.1$\times$, and 2.4$\times$ faster than MobileViTv2 $\times$1.0 on GPU, CPU, and an iPhone 12 mobile device, respectively, while being 1.3\% more accurate. For object detection and instance segmentation on MS COCO using a Mask R-CNN head, our model achieves performance comparable to FastViT-SA12 while exhibiting 3.8$\times$ and 2.0$\times$ lower backbone latency on GPU and mobile device, respectively.
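
To make the single-head idea concrete, the sketch below shows one plausible way such a module could look in PyTorch: a channel split where a single attention head processes part of the channels (global branch) in parallel with a depthwise convolution over the rest (local branch). This is only an illustrative assumption based on the abstract; the module name `SingleHeadMixer`, the channel-split ratio, and the depthwise-conv local branch are hypothetical and may differ from the authors' actual SHViT implementation.

```python
# Hypothetical sketch of a single-head block mixing a global (attention) branch
# and a local (depthwise-conv) branch in parallel. Not the authors' code.
import torch
import torch.nn as nn


class SingleHeadMixer(nn.Module):
    def __init__(self, dim: int, qk_dim: int = 16, attn_ratio: float = 0.5):
        super().__init__()
        self.attn_dim = int(dim * attn_ratio)   # channels routed to the global branch (assumed split)
        self.local_dim = dim - self.attn_dim    # channels routed to the local branch
        self.qk_dim = qk_dim
        self.scale = qk_dim ** -0.5

        # Global branch: exactly one attention head over the attn_dim channels.
        self.qkv = nn.Conv2d(self.attn_dim, qk_dim * 2 + self.attn_dim, kernel_size=1)
        # Local branch: cheap depthwise convolution over the remaining channels.
        self.local = nn.Conv2d(self.local_dim, self.local_dim, kernel_size=3,
                               padding=1, groups=self.local_dim)
        # Pointwise projection after concatenating both branches.
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x_attn, x_local = torch.split(x, [self.attn_dim, self.local_dim], dim=1)

        # Single-head self-attention over flattened spatial positions.
        q, k, v = torch.split(self.qkv(x_attn),
                              [self.qk_dim, self.qk_dim, self.attn_dim], dim=1)
        q = q.flatten(2).transpose(1, 2)   # (B, HW, qk_dim)
        k = k.flatten(2)                   # (B, qk_dim, HW)
        v = v.flatten(2).transpose(1, 2)   # (B, HW, attn_dim)
        attn = (q @ k) * self.scale
        out_global = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(b, self.attn_dim, h, w)

        out_local = self.local(x_local)
        return self.proj(torch.cat([out_global, out_local], dim=1))


if __name__ == "__main__":
    x = torch.randn(1, 64, 14, 14)
    print(SingleHeadMixer(64)(x).shape)  # torch.Size([1, 64, 14, 14])
```

Because there is only one head, the redundancy among parallel heads is removed by construction, while the channel split keeps the attention cost below that of a full multi-head layer; the exact split and local operator used in SHViT should be taken from the paper and official code.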
