

Poster

Transformers without Normalization

Jiachen Zhu · Xinlei Chen · Kaiming He · Yann LeCun · Zhuang Liu


Abstract: Normalization layers are ubiquitous in modern neural networks and have long been considered essential. In this work, we demonstrate that strong performance can be achieved on Transformers without normalization layers, by using a remarkably simple technique. We introduce Dynamic Tanh (DyT), an element-wise operation DyT(x) = tanh(αx), as a drop-in replacement for normalization layers in Transformers. DyT is inspired by the observation that layer normalization layers often produce tanh-like, S-shaped input-output mappings. By incorporating DyT, Transformers without any normalization layers can match or exceed the performance of their normalized counterparts, mostly without tuning training hyperparameters. We validate the efficacy of Transformers with DyT across diverse settings, ranging from recognition to generation, supervised to self-supervised learning, and computer vision to language models. These findings challenge the conventional understanding that normalization layers are indispensable in modern neural networks, and offer new insights into their role in deep neural networks.
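To make the idea concrete, here is a minimal PyTorch sketch of the element-wise operation DyT(x) = tanh(αx) described in the abstract, with α as a learnable parameter. The class name, initialization value, and parameter shape are assumptions for illustration, not the authors' reference implementation, which may include additional details (e.g., affine parameters) not stated in the abstract.

```python
import torch
import torch.nn as nn

class DyT(nn.Module):
    """Dynamic Tanh sketch: element-wise tanh(alpha * x) with a learnable
    scalar alpha, intended as a drop-in replacement for a normalization layer."""

    def __init__(self, init_alpha: float = 1.0):
        super().__init__()
        # alpha is learnable; the initial value here is an assumption.
        self.alpha = nn.Parameter(torch.tensor(init_alpha))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.alpha * x)

# Hypothetical usage: swap a LayerNorm inside a Transformer block for DyT,
# e.g. use `DyT()` where `nn.LayerNorm(dim)` would otherwise appear.
```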
