Bidirectional Normalizing Flow: From Data to Noise and Back
Yiyang Lu ⋅ Qiao Sun ⋅ Xianbang Wang ⋅ Zhicheng Jiang ⋅ Hanhong Zhao ⋅ Kaiming He
Abstract
Normalizing Flows (NFs) are a principled framework for generative modeling, consisting of a forward process and a reverse process. The forward process maps data to a simple prior distribution, while the reverse process generates samples by inverting this mapping. Traditional approaches focus on designing expressive forward transformations under the strict requirement of explicit invertibility, so that the reverse process can serve as their exact analytic inverse. Recent advances such as TARFlow enhance the forward model with Transformers and autoregressive structures, achieving state-of-the-art generation quality, but at the expense of slow sampling due to autoregressive decoding. In this work, we introduce Bidirectional Normalizing Flow ($\textbf{BiFlow}$), a new framework that removes the need for an exact analytic inverse by learning a flexible, data-driven reverse model to $\textbf{approximate}$ the inverse mapping. This relaxation enables richer architectures and loss formulations while preserving the probabilistic foundation of NFs. BiFlow performs direct generation in a single forward pass (1-NFE), eliminating autoregressive bottlenecks and achieving up to two orders of magnitude faster sampling with improved generation quality. We hope this work encourages rethinking Normalizing Flows as direct, flexible, and efficient generative models.
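The core relaxation described above, replacing the analytic inverse of the forward flow with a separately learned reverse model, can be illustrated with a minimal toy sketch. The example below is not the paper's method; it uses a hypothetical one-dimensional affine "flow" and fits a linear reverse model by regression, purely to show the idea of approximating the inverse mapping and then sampling in one network evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "forward process": an invertible affine map sending data x to
# (approximately) standard noise z. These parameters are hypothetical,
# standing in for a trained flow model.
mu, log_sigma = 3.0, np.log(2.0)

def forward(x):  # data -> noise
    return (x - mu) * np.exp(-log_sigma)

# Data: x ~ N(3, 2^2), so forward(x) ~ N(0, 1).
x = rng.normal(mu, np.exp(log_sigma), size=4096)
z = forward(x)

# The relaxation, sketched: instead of inverting forward() analytically,
# fit a separate reverse model g(z) = a*z + c ~= forward^{-1}(z) by
# minimizing the reconstruction error with plain gradient descent.
a, c = 1.0, 0.0
lr = 0.1
for _ in range(200):
    err = a * z + c - x              # reconstruction residual
    a -= lr * np.mean(err * z)       # gradient of 0.5 * mean(err**2) w.r.t. a
    c -= lr * np.mean(err)           # gradient w.r.t. c

# Sampling: one evaluation of the reverse model (1-NFE) maps fresh noise
# to data space; here it should recover roughly N(3, 2^2).
samples = a * rng.standard_normal(100_000) + c
print(round(samples.mean(), 1), round(samples.std(), 1))  # ~ 3.0 2.0
```

In this toy case the reverse model can match the inverse exactly; the point of the framework is that for expressive, non-analytically-invertible forward models, the learned reverse model only needs to approximate the inverse well enough for high-quality one-pass generation.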