Dynamic Momentum Recalibration in Online Gradient Learning
Abstract
Stochastic Gradient Descent (SGD) and its momentum-driven variants form the backbone of deep learning optimization, yet the dynamics of their gradient behavior remain insufficiently understood. In this work, we reinterpret gradient updates through the lens of signal processing and show that fixed momentum coefficients inherently distort the balance between bias and variance, yielding biased or needlessly noisy parameter updates. To address this, we propose SGDF (SGD with Filter), an optimizer inspired by the principles of optimal linear filtering. SGDF computes an online, time-varying gain that refines the gradient estimate by minimizing its mean-squared error, achieving an optimal trade-off between noise suppression and signal preservation. The approach also extends to adaptive optimizers, improving their generalization. Extensive experiments across diverse architectures and benchmarks demonstrate that SGDF outperforms conventional momentum-based methods and matches or surpasses state-of-the-art optimizers.
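The core idea of the abstract, an online, time-varying gain that filters noisy minibatch gradients by minimizing mean-squared error, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it applies a generic scalar Kalman-style filter to gradients on a toy quadratic problem, and the noise-variance tracking rule, the process-noise constant `q`, and the learning rate are illustrative assumptions.

```python
import numpy as np

def sgdf_step(theta, grad, state, lr=0.1, q=1e-4):
    """One step of SGD with a scalar Kalman-style gradient filter (sketch).

    `state` holds the running gradient estimate g_hat, its error
    variance p, and a running estimate r of minibatch noise variance.
    All constants here are illustrative, not the paper's choices.
    """
    g_hat, p, r = state["g_hat"], state["p"], state["r"]
    innovation = grad - g_hat
    # Track observation-noise variance from the innovation (assumed rule).
    r = 0.9 * r + 0.1 * innovation**2
    # Time-varying gain minimizing mean-squared estimation error.
    k = (p + q) / (p + q + r + 1e-12)
    g_hat = g_hat + k * innovation          # filtered gradient estimate
    p = (1.0 - k) * (p + q)                 # updated error variance
    state.update(g_hat=g_hat, p=p, r=r)
    return theta - lr * g_hat               # descend along the filtered gradient

# Toy problem: minimize f(theta) = 0.5 * theta^2 with noisy gradients.
rng = np.random.default_rng(0)
theta = 5.0
state = {"g_hat": 0.0, "p": 1.0, "r": 1.0}
for _ in range(200):
    noisy_grad = theta + rng.normal(scale=0.5)  # true gradient plus noise
    theta = sgdf_step(theta, noisy_grad, state)
```

Because the gain `k` shrinks as the noise-variance estimate `r` grows, the filter suppresses minibatch noise when gradients are unreliable and passes the signal through when they are not, which is the bias-variance trade-off the abstract describes.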